BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Sara Magliacane (University of Amsterdam and MIT-IBM Watson AI Lab
 )
DTSTART:20230608T160000Z
DTEND:20230608T170000Z
DTSTAMP:20260423T003231Z
UID:MPML/108
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/MPML/108/">C
 ausal vs causality-inspired representation learning</a>\nby Sara Magliacan
 e (University of Amsterdam and MIT-IBM Watson AI Lab) as part of Mathemati
 cs\, Physics and Machine Learning (IST\, Lisbon)\n\n\nAbstract\n<p>Causal 
 representation learning (CRL) aims at learning causal factors and their ca
 usal relations from high-dimensional observations\, e.g. images. In genera
 l\, this is an ill-posed problem\, but under certain assumptions or with t
 he help of additional information or interventions\, we are able to guaran
 tee that the representations we learn correspond to some true underlying c
 ausal factors up to some equivalence class.<br />\nIn this talk I w
 ill first present CITRIS (<a href="https://proceedings.mlr.press/v162/lipp
 e22a/lippe22a.pdf" rel="noreferrer" target="_blank">https://proceedings.ml
 r.press/v162/lippe22a/lippe22a.pdf</a>)\, a variational autoencoder framew
 ork for causal representation learning from temporal sequences of images\,
  in systems in which we can perform interventions. CITRIS exploits tempora
 lity and observing intervention targets to identify scalar and multidimens
 ional causal factors\, such as 3D rotation angles. In experiments on 3D re
 ndered image sequences\, CITRIS outperforms previous methods on recovering
  the underlying causal variables. Moreover\, using pretrained autoencoders
 \, CITRIS can even generalize to unseen instantiations of causal factors.<
 br />\n<br />\nWhile CRL is an exciting and promising new field of researc
 h\, the assumptions required by CITRIS and other current CRL methods can b
 e difficult to satisfy in many settings. Moreover\, in many practical case
 s learning representations that are not guaranteed to be fully causal\, bu
 t exploit some ideas from causality\, can still be extremely useful. As ex
 amples\, I will describe some of our work on exploiting these "causality-i
 nspired" representations for adapting policies across domains in RL (<a hr
 ef="https://openreview.net/forum?id=8H5bpVwvt5" rel="noreferrer" target="_
 blank">https://openreview.net/forum?id=8H5bpVwvt5</a>) and to nonstationar
 y environments (<a href="https://openreview.net/forum?id=VQ9fogN1q6e" rel=
 "noreferrer" target="_blank">https://openreview.net/forum?id=VQ9fogN1q6e</
 a>)\, and how learning a factored graphical representation (even if not ne
 cessarily causal) can be beneficial in these (and possibly other) settings
 .</p>\n
LOCATION:https://researchseminars.org/talk/MPML/108/
END:VEVENT
END:VCALENDAR
