Relaxing the I.I.D. assumption: Adaptive minimax optimal sequential prediction with expert advice
Jeffrey Negrea (University of Toronto)
Abstract: We consider sequential prediction with expert advice when the data are generated stochastically, but the distributions generating the data may vary arbitrarily among some constraint set. We quantify relaxations of the classical I.I.D. assumption in terms of possible constraint sets, with I.I.D. at one extreme and an adversarial mechanism at the other. The Hedge algorithm, long known to be minimax optimal in the adversarial regime, has recently been shown to also be minimax optimal in the I.I.D. setting. We show that Hedge is sub-optimal between these extremes, and present a new algorithm that is adaptively minimax optimal with respect to our relaxations of the I.I.D. assumption, without knowledge of which setting prevails.
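For context, below is a minimal sketch of the classical Hedge (exponential weights) forecaster that the abstract refers to; it is not the speaker's new adaptive algorithm. The function name, interface, and the toy data in the usage example are illustrative assumptions, and losses are assumed to lie in [0, 1].

```python
import numpy as np

def hedge(loss_matrix, eta):
    """Classical Hedge (exponential weights) forecaster.

    loss_matrix: (T, K) array of per-round losses for K experts, in [0, 1].
    eta: learning rate; the standard tuning eta = sqrt(8 ln K / T) yields
         the classical O(sqrt(T ln K)) adversarial regret bound.
    Returns the algorithm's cumulative expected loss and its regret
    relative to the best single expert in hindsight.
    """
    T, K = loss_matrix.shape
    log_w = np.zeros(K)                 # log-weights; uniform initialization
    alg_loss = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()                    # normalized weights over experts
        alg_loss += p @ loss_matrix[t]  # expected loss incurred this round
        log_w -= eta * loss_matrix[t]   # exponential-weights update
    regret = alg_loss - loss_matrix.sum(axis=0).min()
    return alg_loss, regret

# Illustrative usage: 1000 rounds, 10 experts, synthetic i.i.d. losses.
rng = np.random.default_rng(0)
losses = rng.uniform(size=(1000, 10))
print(hedge(losses, eta=np.sqrt(8 * np.log(10) / 1000)))
```

The talk's point is that a learning rate tuned for the adversarial extreme (as above) is not simultaneously optimal across intermediate constraint sets, motivating an adaptive method.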
bioinformatics, game theory, information theory, machine learning, neural and evolutionary computing, classical analysis and ODEs, optimization and control, statistics theory
Audience: researchers in the field
IAS Seminar Series on Theoretical Machine Learning
Series comments: Seminar series focusing on machine learning. Open to all.
Register in advance at forms.gle/KRz8hexzxa5P4USr7 to receive Zoom link and password. Recordings of past seminars can be found at www.ias.edu/video-tags/seminar-theoretical-machine-learning
Organizers: Ke Li*, Sanjeev Arora
*contact for this listing
