BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Sergei Zuyev (Chalmers)
DTSTART:20240417T111500Z
DTEND:20240417T120000Z
DTSTAMP:20260422T155020Z
UID:gbgstats/49
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/gbgstats/49/
 ">Training Bayesian neural networks with measure optimisation algorithms</
 a>\nby Sergei Zuyev (Chalmers) as part of Gothenburg statistics seminar\n\
 nLecture held in MVL14.\n\nAbstract\nAt a high level of abstraction\, a Ba
 yesian neural network (BNN) can be seen as a function of the input data a
 nd their prior probability distribution which yields\, among other output
 s\, their estimated posterior probability distribution. This distributio
 n is the result of optimising a chosen score function that favours those p
 robability distributions which best describe the observed data while taki
 ng the prior distribution into account.\n\nInstead of constrained optimis
 ation over the simplex of probability distributions\, it is typical to ma
 p this simplex into Euclidean space\, for example with the softmax functi
 on or one of its variants\, and then optimise over the whole space withou
 t constraints. It is\, however\, widely acknowledged that such a mapping o
 ften has undesirable consequences for the optimisation and the stability o
 f the algorithms. To counteract this\, a few regularisation procedures ha
 ve been proposed in the literature.\n\nInstead of trying to modify the ma
 pping approach\, we suggest returning to optimisation on the original sim
 plex using recently developed algorithms for constrained optimisation of f
 unctionals of measures. We demonstrate that our algorithms run tens of ti
 mes faster than the standard algorithms involving softmax mapping and lea
 d to exact solutions rather than approximations.\n
LOCATION:https://researchseminars.org/talk/gbgstats/49/
END:VEVENT
END:VCALENDAR
