BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Tim Hoheisel (McGill University)
DTSTART:20221013T223000Z
DTEND:20221013T233000Z
DTSTAMP:20260513T193317Z
UID:SFUOR/1
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/SFUOR/1/">Th
 e Maximum Entropy on the Mean Method for Linear Inverse Problems (and beyo
 nd)</a>\nby Tim Hoheisel (McGill University) as part of PIMS-CORDS SFU Ope
 rations Research Seminar\n\nLecture held in ASB 10908.\n\nAbstract\nThe pr
 inciple of ‘maximum entropy’ states that the probability distribution 
 which best represents the current state of knowledge about a system is the
  one with largest entropy with respect to a given prior (data) distributio
 n. It was first formulated in the context of statistical physics in two se
 minal papers by E. T. Jaynes (Physical Review\, Series II. 1957)\, and thu
 s constitutes an information theoretic manifestation of Occam’s razor. W
 e bring the idea of maximum entropy to bear in the context of linear inver
 se problems in that we solve for the probability measure which is close to
  the (learned or chosen) prior and whose expectation has small residual wi
 th respect to the observation. Duality leads to tractable\, finite-dimensi
 onal (dual) problems. A core tool\, which we then show to be useful beyond
  the linear inverse problem setting\, is the ‘MEMM functional’: it is 
 an infimal projection of the Kullback-Leibler divergence and a linear equ
 ation\, which coincides with Cramér’s function (ubiquitous in the theor
 y of large deviations) in most cases\, and is paired in duality with the c
 umulant generating function of the prior measure. Numerical examples under
 line the efficacy of the presented framework.\n
LOCATION:https://researchseminars.org/talk/SFUOR/1/
END:VEVENT
END:VCALENDAR
