Mousavi, Hamid and Drefs, Jakob and Hirschberger, Florian and Lücke, Jörg (2023) Generic unsupervised optimization for a latent variable model with exponential family observables. Journal of Machine Learning Research, 24 (285). pp. 1-59. ISSN 1533-7928

Full text (4 MB)
Official URL: https://jmlr.org/papers/v24/22-0359.html

Abstract

Latent variable models (LVMs) represent observed variables by parameterized functions of latent variables. Prominent examples of LVMs for unsupervised learning are probabilistic PCA and probabilistic sparse coding, both of which assume a weighted linear summation of the latents to determine the mean of a Gaussian distribution for the observables. In many cases, however, observables do not follow a Gaussian distribution. For unsupervised learning, LVMs which assume specific non-Gaussian observables (e.g., Bernoulli or Poisson) have therefore been considered. Even for specific choices of distributions, parameter optimization is challenging, and only a few previous contributions have considered LVMs with more generally defined observable distributions. In this contribution, we consider LVMs that are defined for a range of different distributions, i.e., observables can follow any (regular) distribution of the exponential family. Furthermore, the novel class of LVMs presented here is defined for binary latents, and it uses maximization in place of summation to link the latents to observables. In order to derive an optimization procedure, we follow an expectation maximization (EM) approach for maximum likelihood parameter estimation. We then show, as our main result, that a set of very concise parameter update equations can be derived which feature the same functional form for all exponential family distributions. The derived generic optimization can consequently be applied (without further derivations) to different types of metric data (Gaussian and non-Gaussian) as well as to different types of discrete data. Moreover, the derived optimization equations can be combined with a recently suggested variational acceleration which is likewise generically applicable to the LVMs considered here. The combination thus maintains the generic and direct applicability of the derived optimization procedure while, crucially, enabling efficient scalability. We numerically verify our analytical results using different observable distributions and, furthermore, discuss potential applications such as learning of variance structure, noise type estimation, and denoising.
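
As a schematic illustration only (a minimal sketch based on this abstract; the notation, i.e. binary latents \vec{s}, weights W, natural parameter \eta_d, sufficient statistic T(y_d) and log-partition A, is assumed here, and the paper's exact parameterization may differ), the model class can be pictured as binary latents that are linked to each observable through a maximum, with the result setting the parameter of a regular exponential family distribution:

% Schematic generative model (a sketch; notation assumed, not taken from the paper):
% binary latents are linked to each observable through a maximum rather than
% a weighted sum, and the result sets the parameter of a regular exponential
% family distribution.
\begin{align*}
  \vec{s} &\sim p(\vec{s} \mid \vec{\pi}), \qquad \vec{s} \in \{0,1\}^{H}
    && \text{(binary latents)} \\
  \eta_{d}(\vec{s}, W) &= \max_{h} \{\, s_{h}\, W_{dh} \,\}
    && \text{(max in place of summation)} \\
  p(y_{d} \mid \vec{s}, \Theta) &= h(y_{d}) \exp\!\big(\eta_{d}(\vec{s}, W)\, T(y_{d}) - A(\eta_{d}(\vec{s}, W))\big)
    && \text{(exponential family observable)}
\end{align*}

Since every regular exponential family member shares this functional form, EM update equations written in terms of the sufficient statistics T(y_d) can plausibly take the same shape for Gaussian, Bernoulli, Poisson and other choices, which would match the abstract's claim of generic, distribution-independent updates.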

Item Type: Article
Divisions: Faculty of Medicine and Health Sciences > Department of Medical Physics and Acoustics
Date Deposited: 19 Mar 2024 12:59
Last Modified: 19 Mar 2024 12:59
URI: https://oops.uni-oldenburg.de/id/eprint/6354
URN: urn:nbn:de:gbv:715-oops-64356
