BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Igor Pruenster (Bocconi University)
DTSTART:20211129T160000Z
DTEND:20211129T164500Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/1
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 1/">Nonparametric priors for partially exchangeable data: dependence struc
 ture and borrowing of information</a>\nby Igor Pruenster (Bocconi Universi
 ty) as part of CMO-Foundations of Objective Bayesian Methodology\n\nAbstr
 act: TBA\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Beatrice Franzolini (Bocconi University\, Italy)
DTSTART:20211129T164500Z
DTEND:20211129T173000Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/2
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 2/">Nonparametric priors with full-range borrowing of information</a>\nby 
 Beatrice Franzolini (Bocconi University\, Italy) as part of CMO-Foundation
 s of Objective Bayesian Methodology\n\n\nAbstract\nWhen data are grouped i
 nto distinct samples\, they typically are homogeneous within and heterogen
 eous across groups. In this case\, the Bayesian paradigm requires a prior 
 law over a collection of distributions. From a modelling point of view\, i
 t is essential to study how this structure reflects on the observables\, e
 specially in nonparametric models. We introduce the notion of hyper-ties a
 nd show that they play the same role as actual ties in the exchangeable se
 tting\, driving the dependence between observations. Using hyper-ties\, we
  can compute correlation between observables and show how its sign depends
  on the joint specification. Finally\, we propose a novel class of depen
 dent nonparametric priors\, which may induce either positive or negative c
 orrelation across samples.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Marta Catalano (University of Warwick\, UK)
DTSTART:20211129T180000Z
DTEND:20211129T184500Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/3
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 3/">A Wasserstein index of dependence for Bayesian nonparametric modeling<
 /a>\nby Marta Catalano (University of Warwick\, UK) as part of CMO-Foundat
 ions of Objective Bayesian Methodology\n\n\nAbstract\nOptimal transport (O
 T) methods and Wasserstein distances are flourishing in many scientific fi
 elds as an effective means for comparing and connecting different random s
 tructures. In this talk we describe the first use of an OT distance betwee
 n Lévy measures with infinite mass to solve a statistical problem. Comple
 x phenomena often yield data from different but related sources\, which ar
 e ideally suited to Bayesian modeling because of its inherent borrowing of
  information. In a nonparametric setting\, this is regulated by the depend
 ence between random measures: we derive a general Wasserstein index for a 
 principled quantification of the dependence\, gaining insight into the model
 s’ deep structure. It also allows for an informed prior elicitation and 
 provides a fair ground for model comparison. Our analysis unravels many ke
 y properties of the OT distance between Lévy measures\, whose interest go
 es beyond Bayesian statistics\, extending to the theory of partial differen
 tial equations and of Lévy processes.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Isadora Antoniano-Villalobos (Ca' Foscari University of Venice)
DTSTART:20211129T184500Z
DTEND:20211129T193000Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/4
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 4/">Bayesian mixture models for the prediction of extreme observations</a>
 \nby Isadora Antoniano-Villalobos (Ca' Foscari University of Venice) as pa
 rt of CMO-Foundations of Objective Bayesian Methodology\n\n\nAbstract\nIn 
 many applications with interest in large or extreme observations\, usual i
 nferential methods may fail to reproduce the tail behaviour of the variabl
 es involved. Recent literature has proposed the use of multivariate extrem
 e value theory to predict an unobserved component of a random vector given
  large observed values of the rest. This is achieved through the estimatio
 n of the angular measure controlling the dependence structure in the tail 
 of the distribution. The idea can be extended and used for prediction of m
 ultiple components at adequately large levels\, provided the model used fo
 r the angular measure is sufficiently flexible to capture complex d
 ependence structures. The use of Bernstein polynomials ensures such flexib
 ility and their interpretation as mixture models allows the use of current
  trans-dimensional MCMC posterior simulation methods for inference.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Julyan Arbel (Inria Grenoble\, France)
DTSTART:20211129T220000Z
DTEND:20211129T224500Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/5
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 5/">Improving MCMC convergence diagnostic with a local version of R-hat</a
 >\nby Julyan Arbel (Inria Grenoble\, France) as part of CMO-Foundations of
  Objective Bayesian Methodology\n\n\nAbstract\nDiagnosing convergence of M
 arkov chain Monte Carlo (MCMC) is crucial in Bayesian analysis. Among the 
 most popular methods\, the potential scale reduction factor (commonly name
 d R-hat) is an indicator that monitors the convergence of all chains to th
 e stationary distribution\, based on a comparison of the between- and with
 in-variance of the chains. Several improvements have been suggested since 
 its introduction by Gelman & Rubin (1992). Here\, we analyse some properti
 es of the theoretical value R associated with R-hat in the case of a localiz
 ed version that focuses on quantiles of the distribution. This leads to pr
 oposing a new indicator\, which is shown to allow both for localizing the 
 MCMC convergence in different quantiles of the distribution\, and at the s
 ame time for handling some convergence issues not detected by other R-hat 
 versions.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Trevor Campbell (University of British Columbia\, Canada)
DTSTART:20211129T224500Z
DTEND:20211129T230000Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/6
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 6/">Parallel Tempering on Optimized Paths</a>\nby Trevor Campbell (Univers
 ity of British Columbia\, Canada) as part of CMO-Foundations of Objective 
 Bayesian Methodology\n\n\nAbstract\nParallel tempering (PT) is a class of 
 Markov chain Monte Carlo algorithms that constructs a path of distribution
 s annealing between a tractable reference and an intractable target\, and 
 then interchanges states along the path to improve mixing in the target. T
 he performance of PT depends on how quickly a sample from the reference di
 stribution makes its way to the target\, which in turn depends on the part
 icular path of annealing distributions. However\, past work on PT has used
  only simple paths constructed from convex combinations of the reference a
 nd target log-densities. In this talk I'll show that this path performs po
 orly in the common setting where the reference and target are nearly mutua
 lly singular. To address this issue\, I'll present an extension of the PT 
 framework to general families of paths\, formulate the choice of path as a
 n optimization problem that admits tractable gradient estimates\, and pres
 ent a flexible new family of spline interpolation paths for use in practic
 e. Theoretical and empirical results will demonstrate that the proposed me
 thodology breaks previously-established upper performance limits for tradi
 tional paths.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:María Fernanda Gil Leyva Villa (Bocconi University)
DTSTART:20211130T000000Z
DTEND:20211130T004500Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/7
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 7/">Gibbs sampling for mixtures in order of appearance: the ordered alloca
 tion sampler</a>\nby María Fernanda Gil Leyva Villa (Bocconi University) 
 as part of CMO-Foundations of Objective Bayesian Methodology\n\n\nAbstract
 \nGibbs sampling methods for mixture models are based on data augmentation
  schemes that account for the unobserved partition in the data. They have 
 been broadly classified into two categories: marginal and conditional samp
 lers. Marginal samplers are termed this way because they integrate out par
 t of the mixing distribution and model directly the partition structure. T
 hey can be used to implement mixture models with a tractable exchangeable 
 partition probability function (EPPF) associated with the mixing distributio
 n. However\, if the EPPF is not available in closed form\, marginal sample
 rs are hard to adapt. In contrast\, conditional samplers rely on allocatio
 n variables that identify each observation with a mixture component. Whil
 e conditional samplers are more broadly applicable and allow direct infere
 nce on the mixing distribution\, they are known to suffer from slow mixing
 . Moreover\, for mixture models with infinitely many components\, some form
  of truncation\, either deterministic or random\, is required. As for mixt
 ures with a random number of components\, the exploration of parameter spa
 ces of different dimensions can also be challenging. We tackle these issue
 s by expressing the mixture components in the random order of appearance i
 n an exchangeable sequence directed by the mixing distribution. We derive 
 a sampler\, called the ordered allocation sampler\, that is straightforwar
 d to implement for mixing distributions with tractable size-biased ordered
  weights. In infinite mixtures\, no form of truncation is necessary. As fo
 r finite mixtures with random dimension\, a simple updating of the number 
 of components is obtained by a blocking argument\, thus easing challenges 
 found in trans-dimensional moves via Metropolis-Hastings steps. Although th
 e ordered allocation sampler is a conditional sampler\, sampling occurs in
  the space of ordered partitions with blocks labelled in the least element
  order. This improves mixing and promotes a consistent labelling of mixtur
 e components throughout iterations.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Anirban Bhattacharya (Texas A&M University)
DTSTART:20211130T004500Z
DTEND:20211130T013000Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/8
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 8/">Coupling-based convergence assessment of some Gibbs samplers for high-
 dimensional Bayesian regression with shrinkage priors</a>\nby Anirban Bhat
 tacharya (Texas A&M University) as part of CMO-Foundations of Objective B
 ayesian Methodology\n\n\nAbstract\nWe consider Markov chain Monte Carlo (M
 CMC) algorithms for Bayesian high-dimensional regression with continuous s
 hrinkage priors. A common challenge with these algorithms is the choice of
  the number of iterations to perform. This is critical when each iteration
  is expensive\, as is the case when dealing with modern data sets\, such a
 s genome-wide association studies with thousands of rows and up to hundred
 s of thousands of columns. We develop coupling techniques tailored to the 
 setting of high-dimensional regression with shrinkage priors\, which enabl
 e practical\, non-asymptotic diagnostics of convergence without relying on
  traceplots or long-run asymptotics. By establishing geometric drift and m
 inorization conditions for the algorithm under consideration\, we prove th
 at the proposed couplings have finite expected meeting time. Focusing on a
  class of shrinkage priors which includes the 'Horseshoe'\, we empirically
  demonstrate the scalability of the proposed couplings. A highlight of our
  findings is that less than 1000 iterations can be enough for a Gibbs samp
 ler to reach stationarity in a regression on 100\,000 covariates. The nume
 rical results also illustrate the impact of the prior on the computational
  efficiency of the coupling\, and suggest the use of priors where the loca
 l precisions are Half-t distributed with degree of freedom larger than one
 . (Joint work with Niloy Biswas\, Pierre Jacob\, and James Johndrow)\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Helen Ogden (University of Southampton\, UK)
DTSTART:20211130T160000Z
DTEND:20211130T164500Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/9
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 9/">Approximate cross validation for mixture models</a>\nby Helen Ogden (U
 niversity of Southampton\, UK) as part of CMO-Foundations of Objective Ba
 yesian Methodology\n\n\nAbstract\nChoosing appropriate priors and hyperpa
 rameters to control the number of components used by a mixture model is o
 ften challenging: it is typically hard to interpret such paramet
 ers directly\, which makes it difficult to use subjective prior knowledge.
  I will focus instead on how to choose these quantities to give a model wi
 th good frequentist properties. In principle\, models could be assessed by
  cross validation\, but in practice direct calculation of a cross validati
 on criterion is computationally expensive and numerically unstable. I will
  discuss methods for approximating cross validation criteria for mixture m
 odels\, which aim to address both of these issues.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alexander Ly (University of Amsterdam/CWI Amsterdam)
DTSTART:20211130T164500Z
DTEND:20211130T173000Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/10
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 10/">Default Bayes Factors for Testing the (In)equality of Several Populat
 ion Variances</a>\nby Alexander Ly (University of Amsterdam/CWI Amsterdam)
  as part of CMO-Foundations of Objective Bayesian Methodology\n\n\nAbstrac
 t\nThe goal of this presentation is to elaborate on the notion of objectiv
 ity in Bayesian tests. Concretely\, I’ll discuss Harold Jeffreys’s deside
 rata for objective Bayes factors that were formalised by Bayarri\, Berger\
 , Forte and García-Donato (2012) within the context of testing the (in)eq
 uality of several population variances. I’ll also put forth the desidera
 tum of across-sample consistency for K-sample problems\, and show that for
  this problem\, such an objective Bayes factor adhering to all these desid
 erata (1) exists\, (2) is easily calculable\, and (3) has good frequentist
  properties. If time allows\, I’ll also discuss the sequential propertie
 s of the resulting Bayes factor.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Luis E. Nieto-Barajas (ITAM Mexico)
DTSTART:20211130T180000Z
DTEND:20211130T184500Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/11
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 11/">Characterizing variation of nonparametric random probability measures
  using the Kullback–Leibler divergence</a>\nby Luis E. Nieto-Barajas (IT
 AM Mexico) as part of CMO-Foundations of Objective Bayesian Methodology\n\
 n\nAbstract\nThis work characterizes the dispersion of some popular random
  probability measures\, including the bootstrap\, the Bayesian bootstrap\,
  and the Pólya tree prior. This dispersion is measured in terms of the va
 riation of the Kullback–Leibler divergence of a random draw from the pro
 cess to that of its baseline centring measure. By providing a quantitative
  expression of this dispersion around the baseline distribution\, our work
  provides insight for comparing different parameterizations of the models 
 and for the setting of prior parameters in applied Bayesian settings. This
  highlights some limitations of the existing canonical choice of parameter
  settings in the Pólya tree process.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chris Holmes (Oxford University)
DTSTART:20211130T184500Z
DTEND:20211130T193000Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/12
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 12/">Predictive Inference: a view towards objectivity</a>\nby Chris Holmes
  (Oxford University) as part of CMO-Foundations of Objective Bayesian Meth
 odology\n\n\nAbstract\nWe revisit the predictive approach to Bayesian stat
 istics\, advocated by Geisser and others\, as a framework to facilitate ob
 jective inference. We explore the predictive viewpoint of Bayesian nonpara
 metric learning as a means to improve robustness in the M-open setting\,
  and we point to future research directions.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Judith Rousseau (University of Oxford)
DTSTART:20211130T220000Z
DTEND:20211130T224500Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/13
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 13/">Using cut posterior in semi parametric inference with applications to
  semiparametric and nonparametric Bayesian inference in hidden Markov mode
 ls</a>\nby Judith Rousseau (University of Oxford) as part of CMO-Foundatio
 ns of Objective Bayesian Methodology\n\n\nAbstract\nWhile the theory of Ba
 yesian approaches in standard nonparametric or high-dimensional models is
  beginning to be well developed\, not so much is known in the context of
  semi-parametric models outside very specific priors and models. We propo
 se in this talk a pseudo-Bayesian approach\, based on the cut posterior\,
  which allows for the construction of a distribution on the whole paramet
 er and is constructed such that the marginal posterior on the parameter
  of interest has optimal properties. We apply this approach to the setup
  of nonparametric hidden Markov models with finite state space and nonpar
 ametric emission distributions. Since the seminal paper of Gassiat et al.
  (2016)\, it is known that in such models the transition matrix $Q$ and t
 he emission distributions $F_1\, \\dots\, F_K$ are identifiable\, up to l
 abel switching. We use a cut posterior to simultaneously estimate $Q$ at
  the rate $\\sqrt{n}$ and the emission distributions at the usual nonpara
 metric rates. To do so\, we first consider a prior $\\pi_1$ on $Q$ and $F
 _1\, \\dots\, F_K$ which leads to a posterior marginal distribution on $Q
 $ which verifies the Bernstein-von Mises property and thus to an estimato
 r of $Q$ which is efficient. We then combine the marginal posterior on $Q
 $ with another posterior distribution on the emission distributions\, fol
 lowing the cut-posterior approach\, to obtain a posterior which also conc
 entrates around the emission distributions at the minimax rates. In addit
 ion\, an important intermediate result of our work is an inversion inequa
 lity which allows us to upper bound the $L_1$ norms between the emission
  densities by the $L_1$ norms between marginal densities of three consecu
 tive observations.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sinead Williamson (University of Texas at Austin)
DTSTART:20211130T224500Z
DTEND:20211130T230000Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/14
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 14/">Posterior normalizing flows</a>\nby Sinead Williamson (University of 
 Texas at Austin) as part of CMO-Foundations of Objective Bayesian Methodol
 ogy\n\n\nAbstract\nNormalizing flows allow us to construct complex probabi
 lity distributions $\\mathbb{P}(X)$ by transforming simpler distributions 
 $\\mathbb{Q}(Z)$\, via a change of variables $X=f_\\theta(Z)$. If we model
  the change-of-variables transformation $f_\\theta$ using an invertible ne
 ural network with an analytically tractable Jacobian\, we can evaluate lik
 elihoods under the resulting distribution $\\mathbb{P}(X)$\, allowing us t
 o perform maximum likelihood density estimation. Such maximum likelihood d
 ensity estimation is likely to overfit\, particularly if the number of obs
 ervations is small. Rather than creating a mapping between a pair of distr
 ibutions\, we use normalizing flows to describe the relationship between t
 wo families of distributions. This allows us to use nonparametric learning
  techniques to learn posterior distributions in a lightweight manner.  (Jo
 int work with Evan Ott)\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michele Guindani (University of California\, USA)
DTSTART:20211201T000000Z
DTEND:20211201T004500Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/15
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 15/">A Common Atom Model for the Bayesian Nonparametric Analysis of Nested
  Data</a>\nby Michele Guindani (University of California\, USA) as part of
  CMO-Foundations of Objective Bayesian Methodology\n\n\nAbstract\nThe use 
 of large datasets for targeted therapeutic interventions requires new ways
  to characterize the heterogeneity observed across subgroups of a specific
  population. In particular\, models for partially exchangeable data are ne
 eded for inference on nested datasets\, where the observations are assumed
  to be organized in different units and some sharing of information is req
 uired to learn distinctive features of the units. In this talk\, we propos
 e a nested Common Atoms Model (CAM) that is particularly suited for the an
 alysis of nested datasets where the distributions of the units are expecte
 d to differ only over a small fraction of the observations sampled from ea
 ch unit. The proposed CAM allows a two-layered clustering at the distribut
 ional and observational level and is amenable to scalable posterior infere
 nce through the use of a computationally efficient nested slice sampler al
 gorithm. We further discuss how to extend the proposed modeling framework 
 to handle discrete measurements\, and we conduct posterior inference on a 
 real microbiome dataset from a diet swap study to investigate how the alte
 rations in intestinal microbiota composition are associated with different
  eating habits. If time allows\, we will also discuss an application to th
 e analysis of time series calcium imaging experiments in awake behaving an
 imals. We further investigate the performance of our model in capturing tr
 ue distributional structures in the population by means of simulation stud
 ies.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Giovanni Rebaudo (University of Texas at Austin)
DTSTART:20211201T004500Z
DTEND:20211201T013000Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/16
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 16/">Graph-Aligned Random Partition Model</a>\nby Giovanni Rebaudo (Univer
 sity of Texas at Austin) as part of CMO-Foundations of Objective Bayesian 
 Methodology\n\n\nAbstract\nBayesian nonparametric mixtures and random part
 ition models are effective tools to perform probabilistic clustering. How
 ever\, standard independent mixture models can be restrictive in some appl
 ications such as inference on cell-lineage due to the biological relations
  of the clusters. The increasing availability of large genomics data and s
 tudies require new statistical tools to perform model-based clustering and
  infer the relationship between the homogeneous subgroups of units. Motiva
 ted by single-cell RNA applications we develop a novel dependent mixture m
 odel to jointly perform cluster analysis and align the cluster on a graph.
  Our flexible graph-aligned random partition model (gRPM) cleverly exploit
 s Gibbs-type priors as building blocks\, allowing us to derive analytical r
 esults on the probability mass function of the random partition. From the 
 pmf of the random partition\, we derive a generalization of the well-known
 Chinese restaurant process and a related efficient MCMC algorithm to perf
 orm Bayesian inference. We perform posterior inference on real single-cell
  RNA data from mice stem cells. We further investigate the performance of 
 our model in capturing underlying clustering structure as well as the unde
 rlying graph by means of a simulation study.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Rossell (Universitat Pompeu Fabra\, Spain)
DTSTART:20211201T160000Z
DTEND:20211201T164500Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/17
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 17/">Confounder importance learning for treatment effect inference</a>\nby
  David Rossell (Universitat Pompeu Fabra\, Spain) as part of CMO-Foundatio
 ns of Objective Bayesian Methodology\n\n\nAbstract\nAn important basic pro
 blem is to estimate the association of a set of covariates of interest (tr
 eatments) while accounting for many potential confounders. It has been sho
 wn that standard high-dimensional Bayesian and penalized likelihood method
 s perform poorly in practice. The sparsity embedded in such methods leads 
 to low power when there are strong correlations between treatments and con
 founders\, or between confounders\, which causes an under-selection (or om
 itted variable) bias. Current solutions encourage the inclusion of confoun
 ders to increase power\, but as we show this can lead to serious over-sele
 ction problems. To address these issues\, we propose an empirical Bayes fr
 amework to learn which confounders should be encouraged (or discouraged)
  to feature in the regression. We develop exact computations and a faster e
 xpectation-propagation strategy for the family of exponential regression m
 odels. We illustrate the applied impact of these issues to study the assoc
 iation between salary and potentially discriminatory factors such as gende
 r\, race and place of birth.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jack Jewson (Universitat Pompeu Fabra\, Spain)
DTSTART:20211201T164500Z
DTEND:20211201T173000Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/18
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 18/">General Bayesian Loss Function Selection and the use of Improper Mode
 ls</a>\nby Jack Jewson (Universitat Pompeu Fabra\, Spain) as part of CMO-F
 oundations of Objective Bayesian Methodology\n\n\nAbstract\nStatisticians 
 often face the choice between using probability models or a paradigm defin
 ed by minimising a loss function.  Both approaches are useful and\, if the
  loss can be re-cast into a proper probability model\, there are many tool
 s to decide which model or loss is more appropriate for the observed data
 \, in the sense of explaining the data’s nature. However\, when the lo
 ss leads to an improper model\, there are no principled ways to guide thi
 s choice. We address this task by combining the Hyvarinen score\, which na
 turally targets infinitesimal relative probabilities\, and general Bayesia
 n updating\, which provides a unifying framework for inference on losses a
 nd models. Specifically\, we propose the H-score\, a general Bayesian select
 ion criterion and prove that it consistently selects the (possibly imprope
 r) model closest to the data-generating truth in Fisher’s divergence. We
  also prove that an associated H-posterior consistently learns optimal h
 yper-parameters featuring in loss functions\, including a challenging temp
 ering parameter in generalised Bayesian inference. As salient examples\, w
 e consider robust regression and non-parametric density estimation where p
 opular loss functions define improper models for the data and hence cannot
  be dealt with using standard model selection tools. These examples illust
 rate advantages in robustness-efficiency tradeoffs and provide a Bayesian 
 implementation for kernel density estimation\, opening a new avenue for Ba
 yesian non-parametrics.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Veronika Rockova (University of Chicago)
DTSTART:20211201T180000Z
DTEND:20211201T184500Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/19
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 19/">Metropolis-Hastings via Classification</a>\nby Veronika Rockova (Univ
 ersity of Chicago) as part of CMO-Foundations of Objective Bayesian Method
 ology\n\n\nAbstract\nThis paper develops a Bayesian computational platform
  at the interface between posterior sampling and optimization in models wh
 ose marginal likelihoods are difficult to evaluate. Inspired by contrastiv
 e learning and Generative Adversarial Networks (GAN)\, we reframe the like
 lihood function estimation problem as a classification problem. Pitting a 
 Generator\, who simulates fake data\, against a Classifier\, who tries to 
 distinguish them from the real data\, one obtains likelihood (ratio) estim
 ators which can be plugged into the Metropolis-Hastings algorithm. The res
 ulting Markov chains generate\, at a steady state\, samples from an approx
 imate posterior whose asymptotic properties we characterize. Drawing upon 
 connections with empirical Bayes and Bayesian mis-specification\, we quant
 ify the convergence rate in terms of the contraction speed of the actual p
 osterior and the convergence rate of the Classifier.  Asymptotic normality
  results are also provided which justify the inferential potential of our 
 approach. We illustrate the usefulness of our approach on examples which h
 ave proved challenging for existing Bayesian likelihood-free approaches.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Rajesh Ranganath (Courant Institute NYU\, USA)
DTSTART:20211201T184500Z
DTEND:20211201T193000Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/20
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 20/">Where did my Bayes Go?</a>\nby Rajesh Ranganath (Courant Institute NY
 U\, USA) as part of CMO-Foundations of Objective Bayesian Methodology\n\n\
 nAbstract\nI've spent time working on Bayesian methods\, especially scalab
 le computation. However\, my recent work has developed algorithms tailored
  to problems in healthcare that do not easily translate to standard Bayesi
 an computation. In this talk\, I will highlight two such methods\, one for
  survival analysis based on multiplayer games and another for building pre
 dictive models in the presence of spurious correlations. At the end\, I'll
  highlight thoughts on how Bayesian analysis might play a role in these pr
 oblems.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Noirrit Chandra (The University of Texas at Austin\, USA)
DTSTART:20211202T160000Z
DTEND:20211202T164500Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/21
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 21/">Bayesian Scalable Precision Factor Analysis for Massive Sparse Gaussi
 an Graphical Models</a>\nby Noirrit Chandra (The University of Texas at Aus
 tin\, USA) as part of CMO-Foundations of Objective Bayesian Methodology\n\
 n\nAbstract\nWe propose a novel approach to estimating the precision matr
 ix of multivariate Gaussian data that relies on decomposing them into a lo
 w-rank and a diagonal component. Such decompositions are very popular for 
 modeling large covariance matrices as they admit a latent factor based rep
 resentation that allows easy inference. The same is however not true for p
 recision matrices due to the lack of computationally convenient representa
 tions which restricts inference to low-to-moderate dimensional problems. W
 e address this remarkable gap in the literature by building on a latent va
 riable representation for such decomposition for precision matrices. The c
 onstruction leads to an efficient Gibbs sampler that scales very well to h
 igh-dimensional problems far beyond the limits of the current state-of-the
 -art. The ability to efficiently explore the full posterior space also all
 ows the model uncertainty to be easily assessed. The decomposition additio
 nally allows us to adapt sparsity-inducing priors to shrink the
  insignificant entries of the precision matrix toward zero\, making the ap
 proach adaptable to high-dimensional small-sample-size sparse settings. Ex
 act zeros in the matrix encoding the underlying conditional independence g
 raph are then determined via a novel posterior false discovery rate contro
 l procedure. A near minimax optimal posterior concentration rate for estim
 ating precision matrices is attained by our method under mild regularity a
 ssumptions.\nWe evaluate the method's empirical performance through synthe
 tic experiments and illustrate its practical utility in data sets from two
  different application domains.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/21/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Daniele Durante (Bocconi University\, Italy)
DTSTART:20211202T164500Z
DTEND:20211202T173000Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/22
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 22/">Advances in Bayesian inference for regression models with binary\, ca
 tegorical and partially-discretized data</a>\nby Daniele Durante (Bocconi 
 University\, Italy) as part of CMO-Foundations of Objective Bayesian Metho
 dology\n\n\nAbstract\nA broad class of models that routinely appear in sev
 eral fields of application can be expressed as partially or fully discreti
 zed Gaussian linear regressions. Besides including the classical Gaussian 
 response setting\, this class crucially encompasses probit\, multinomial p
 robit and tobit models\, among others\, and further includes key extension
 s to dynamic\, skewed and multivariate contexts. The relevance of such rep
 resentations has motivated decades of  research in the Bayesian field. The
  main reason for this active interest is that\, unlike for the Gaussian re
 sponse setting\, the posterior distribution induced by these models does n
 ot apparently belong to a known and tractable class\, under the commonly-a
 ssumed Gaussian priors. This has motivated the development of several alte
 rnative solutions for posterior inference relying either on sampling-based
  strategies or on deterministic approximations\, which\, however\, still e
 xperience scalability\, mixing and accuracy issues\, especially in high di
 mension. The aim of this talk is to review\, unify and extend recent adv
 ances in Bayesian inference and computation for such a class of models. To
  address this goal\, I will prove that the likelihoods induced by all thes
 e formulations crucially share a common analytical structure which implies
  conjugacy with a broad class of distributions\, namely the unified skew-n
 ormals (SUN)\, that generalize multivariate Gaussians to skewed contexts\,
  and include these variables as a special case. This result unifies and ex
 tends recent conjugacy properties for specific models within the class ana
 lyzed\, and opens new avenues for improved posterior inference\, under a b
 roader class of core formulations and prior distributions\, via novel clos
 ed-form expressions\, tractable Monte Carlo methods based on independent a
 nd identically distributed samples from the exact SUN posteriors\, and mor
 e accurate and scalable approximations from variational Bayes and expectat
 ion-propagation. These advantages are illustrated in extensive simulation 
 studies and applications\, and are expected to boost the routine use of th
 ese core Bayesian models\, while providing a novel framework for stud
 ying general theoretical properties and developing future extensions.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/22/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Filippo Ascolani (Bocconi University\, Italy)
DTSTART:20211202T180000Z
DTEND:20211202T184500Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/23
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 23/">Trees of random probability measures and Bayesian nonparametric model
 ling</a>\nby Filippo Ascolani (Bocconi University\, Italy) as part of CMO-
 Foundations of Objective Bayesian Methodology\n\n\nAbstract\nWe introduce 
 a way to generate trees of random probability measures\, where the link be
 tween two nodes is given by a hierarchical procedure: starting from a comm
 on root\, each node of the tree is endowed with a random probability measu
 re\, whose baseline distribution is again random and given by the associat
 ed node in the previous layer. The data can be observed at any node of th
 e tree and different branches may have different lengths: the split mecha
 nism can also be considered random or based on covariates of interest. Wh
 en the branches have the same length and the observations are linked only
  to the leaves\, we recover the well-known family of discrete hierarchica
 l processes. We prove that\, if the distribution at each node is given by
  the no
 rmalization of a completely random measure (NRMI)\, the model is analytica
 lly tractable: conditional on a suitable latent structure\, the posterior 
 is still given by a deep NRMI. Furthermore\, the asymptotic behaviour of t
 he number of clusters is derived\, when either the sample size at a partic
 ular layer diverges or the number of levels grows. Finally\, the extension
  to kernel mixtures is discussed.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/23/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yang Ni (Texas A&M University\, USA)
DTSTART:20211202T184500Z
DTEND:20211202T193000Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/24
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 24/">Individualized Causal Discovery with Latent Trajectory Embedded Bayes
 ian Networks</a>\nby Yang Ni (Texas A&M University\, USA) as part of CMO-F
 oundations of Objective Bayesian Methodology\n\n\nAbstract\nBayesian netwo
 rks have been widely used for generating causal hypotheses from multivaria
 te data. Despite their popularity\, the vast majority of existing causal d
 iscovery approaches make the strong assumption of a (partially) homogeneou
 s sampling scheme. However\, such an assumption can be seriously violate
 d\, causing significant biases when the underlying population is inheren
 tly heter
 ogeneous. To explicitly account for the heterogeneity\, we propose a novel
  Bayesian network model\, termed BN-LTE\, that embeds the heterogeneous da
 ta onto a low-dimensional manifold and builds Bayesian networks conditiona
 l on the embedding. This new framework allows for more precise network inf
 erence by improving the estimation resolution from population level to obs
 ervation level (individualized causal models). Moreover\, while Bayesian n
 etworks are in general not identifiable with purely observational\, cross-
 sectional data due to Markov equivalence\, with the blessing of heterogene
 ity\, we prove that the proposed BN-LTE is uniquely identifiable under com
 mon causal assumptions. Through extensive experiments\, we demonstrate the
  superior performance of BN-LTE in discovering causal relationships as wel
 l as inferring observation-specific gene regulatory networks from observat
 ional data.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/24/
END:VEVENT
BEGIN:VEVENT
SUMMARY:José Antonio Perusquía (University of Kent\, UK)
DTSTART:20211202T220000Z
DTEND:20211202T224500Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/25
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 25/">A Bayesian Approach to Anomaly Detection in Computer Systems: A Revie
 w</a>\nby José Antonio Perusquía (University of Kent\, UK) as part of CM
 O-Foundations of Objective Bayesian Methodology\n\n\nAbstract\nComputer sy
 stems are vast\, complex and dynamic objects that have become crucial in m
 odern life. To ensure their correct performance\, there is a need to effic
 iently detect vulnerabilities and anomalies that could shut them down with
  potentially catastrophic consequences. Nowadays\, there exists a wide ra
 nge of classical and machine learning models used for such an important tas
 k. However\, these approaches lack the flexibility and the inherent probab
 ilistic characterisation of uncertainty that Bayesian statistics offer. Th
 at is why\, in recent years Bayesian anomaly detection models applied spec
 ifically to computer systems have gained considerable attention\, in parti
 cular in the field of cyber security. In this talk\, we centre our attent
 ion on how these models have been used\, the specific challenges and inte
 resting areas of opportunity.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/25/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Katherine Heller (Google Research)
DTSTART:20211202T224500Z
DTEND:20211202T233000Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/26
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 26/">Towards Trustworthy Machine Learning in Medicine and the Role of Unce
 rtainty</a>\nby Katherine Heller (Google Research) as part of CMO-Foundati
 ons of Objective Bayesian Methodology\n\n\nAbstract\nAs ML is increasingly
  used in society\, we need methods that we can rely on with confidence\,
  particularly in the medical domain. In this talk I discuss three pieces
  of work\, the role uncertainty plays in understanding and combating issue
 s with generalization and bias\, and particular mitigations that we can ta
 ke into consideration.\n\n1) Sepsis Watch - I present a Gaussian Process (
 GP) + Recurrent Neural Network (RNN) model for predicting sepsis infection
 s in Emergency Department patients. I will discuss the benefit of uncertai
 nty given by the GP. I will then discuss the social context in introducing
  such a system into a hospital setting.\n\n2) Uncertainty and Electronic H
 ealth Records (EHR) - I will discuss Bayesian RNN models developed for mor
 tality prediction\, and the distinction between population level predictiv
 e performance and individual level predictive performance\, and its implic
 ations for bias.\n\n3) Underspecification and the credibility implications
  of hyperparameter choices in ML models -- I will discuss medical imaging 
 applications and how using the uncertainty of model performance conditione
 d on choice of hyperparameters can help identify situations in which metho
 ds may not generalize well outside the training domain.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/26/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mengyang Gu (University of California Santa Barbara\, USA)
DTSTART:20211203T000000Z
DTEND:20211203T004500Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/27
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 27/">Marginalization of latent variables for correlated data</a>\nby Mengy
 ang Gu (University of California Santa Barbara\, USA) as part of CMO-Found
 ations of Objective Bayesian Methodology\n\n\nAbstract\nWe will discuss ma
 rginalization of latent variables for correlated outcomes\, such as multip
 le time series\, spatio-temporal processes\, and computer simulations. We 
 first review the Kalman filter and its connection to Gaussian processes wi
 th Matern covariance. Then we discuss vector autoregressive models\, line
 ar models of coregionalization\, and their connections to Gaussian proces
 ses wi
 th product covariance. We show that marginalizing correlated latent varia
 bles leads to efficient estimation of model parameters and predictions. A
 s an ex
 ample\, we will introduce generalized probabilistic principal component an
 alysis (GPPCA) to study the latent factor model for multiple correlated ou
 tcomes. Our method generalizes the previous probabilistic formulation of p
 rincipal component analysis (PPCA) by providing the closed-form maximum ma
 rginal likelihood estimator of the factor loadings and other parameters\, 
 where each factor is modeled by a Gaussian process. Lastly\, we will intro
 duce an efficient representation of Gaussian processes with product Matern
  covariance and its applications to emulating massive computer simulations
 . We will present numerical studies of simulated and real data that confir
 m good predictive accuracy and computational efficiency of the proposed ap
 proaches.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/27/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alan Riva-Palacio (IIMAS-UNAM\, Mexico)
DTSTART:20211203T004500Z
DTEND:20211203T013000Z
DTSTAMP:20260422T185331Z
UID:CMO-21w5107/28
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5107/
 28/">Bayesian analysis of vectors of subordinators</a>\nby Alan Riva-Palac
 io (IIMAS-UNAM\, Mexico) as part of CMO-Foundations of Objective Bayesian 
 Methodology\n\n\nAbstract\nNon-decreasing additive processes\, also called
  subordinators\, have many applications throughout mathematical modeling\;
  for instance\, they have been widely used in risk and finance. Well-known
  examples of subordinators are the stable\, gamma and compound Poisson pro
 cesses with positive jumps. An extension to a multivariate setting for st
 udying heterogeneous data\, obtained by considering vectors of subordinat
 ors\, has been studied in a frequentist setting. In this talk we will dis
 cuss the challenges for the Bayesian analysis of models based on such vec
 tors of subordinators.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5107/28/
END:VEVENT
END:VCALENDAR
