BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Aapo Hyvärinen (University of Helsinki)
DTSTART:20200804T163000Z
DTEND:20200804T174500Z
DTSTAMP:20260423T021059Z
UID:IASML/16
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/IASML/16/">N
 onlinear independent component analysis</a>\nby Aapo Hyvärinen (Universit
 y of Helsinki) as part of IAS Seminar Series on Theoretical Machine Learni
 ng\n\n\nAbstract\nUnsupervised learning\, in particular learning general n
 onlinear representations\, is one of the deepest problems in machine learn
 ing. Estimating latent quantities in a generative model provides a princip
 led framework\, and has been successfully used in the linear case\, e.g. w
 ith independent component analysis (ICA) and sparse coding. However\, exte
 nding ICA to the nonlinear case has proven to be extremely difficult: A 
 straightforward extension is unidentifiable\, i.e. it is not possible to 
 recover those latent components that actually generated the data. Here\, we 
 show that this problem can be solved by using additional information eithe
 r in the form of temporal structure or an additional observed variable. We
  start by formulating two generative models in which the data is an arbitr
 ary but invertible nonlinear transformation of time series (components) wh
 ich are statistically independent of each other. Drawing from the theory o
 f linear ICA\, we formulate two distinct classes of temporal structure of 
 the components which enable identification\, i.e. recovery of the original
  independent components. We further generalize the framework to the case w
 here instead of temporal structure\, an additional "auxiliary" variable is
  observed and used by means of conditioning (e.g. audio in addition to vid
 eo). Our methods are closely related to "self-supervised" methods heuristi
 cally proposed in computer vision\, and also provide a theoretical foundat
 ion for such methods in terms of estimating a latent-variable model. Likew
 ise\, we show how variants of deep latent-variable models such as VAEs ca
 n be seen as nonlinear ICA\, and made identifiable by suitable conditionin
 g.\n
LOCATION:https://researchseminars.org/talk/IASML/16/
END:VEVENT
END:VCALENDAR
