BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Yu Bai (Salesforce Research)
DTSTART:20201028T170000Z
DTEND:20201028T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/2
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/2
 /">How Important is the Train-Validation Split in Meta-Learning?</a>\nby Y
 u Bai (Salesforce Research) as part of One World Seminar Series on the  Ma
 thematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ryan Murray (NC State University)
DTSTART:20201021T160000Z
DTEND:20201021T170000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/3
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/3
 /">Consistency of Cheeger cuts: Total Variation\, Isoperimetry\, and Clust
 ering</a>\nby Ryan Murray (NC State University) as part of One World Semin
 ar Series on the  Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jonas Latz (University of Cambridge)
DTSTART:20201104T170000Z
DTEND:20201104T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/4
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/4
 /">Analysis of Stochastic Gradient Descent in Continuous Time</a>\nby Jona
 s Latz (University of Cambridge) as part of One World Seminar Series on th
 e  Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Zhengdao Chen (New York University)
DTSTART:20201111T170000Z
DTEND:20201111T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/5
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/5
 /">A Dynamical Central Limit Theorem for Shallow Neural Networks</a>\nby Z
 hengdao Chen (New York University) as part of One World Seminar Series on 
 the  Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Bamdad Hosseini (Caltech)
DTSTART:20201118T170000Z
DTEND:20201118T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/6
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/6
 /">Conditional Sampling with Monotone GANs: Modifying Generative Models to
  Solve Inverse Problems</a>\nby Bamdad Hosseini (Caltech) as part of One W
 orld Seminar Series on the  Mathematics of Machine Learning\n\nAbstract: T
 BA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Felix Voigtlaender (University of Vienna)
DTSTART:20201125T170000Z
DTEND:20201125T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/7
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/7
 /">Neural network performance for classification problems with boundaries 
 of Barron class</a>\nby Felix Voigtlaender (University of Vienna) as part 
 of One World Seminar Series on the  Mathematics of Machine Learning\n\n\nA
 bstract\nWe study classification problems in which the distances between t
 he different classes are not necessarily positive\, but for which the boun
 daries between the classes are well-behaved. More precisely\, we assume th
 ese boundaries to be locally described by graphs of functions of Barron-cl
 ass. ReLU neural networks can approximate and estimate classification func
 tions of this type with rates independent of the ambient dimension. More f
 ormally\, three-layer networks with $N$ neurons can approximate such funct
 ions with $L^1$-error bounded by $O(N^{-1/2})$. Furthermore\, given $m$ tr
 aining samples from such a function\, and using ReLU networks of a suitabl
 e architecture as the hypothesis space\, any empirical risk minimizer has 
 generalization error bounded by $O(m^{-1/4})$. All implied constants depen
 d only polynomially on the input dimension. We also discuss the optimality
  of these rates. Our results mostly rely on the "Fourier-analytic" Barron 
 spaces that consist of functions with finite first Fourier moment. But sin
 ce several different function spaces have been dubbed "Barron spaces" in 
 the recent literature\, we discuss how these spaces relate to each other. 
 We will see that they differ more than the existing literature suggests.\n
LOCATION:https://researchseminars.org/talk/OneWorldML/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nadia Drenska (University of Minnesota)
DTSTART:20201209T170000Z
DTEND:20201209T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/8
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/8
 /">A PDE Interpretation of Prediction with Expert Advice</a>\nby Nadia Dre
 nska (University of Minnesota) as part of One World Seminar Series on the 
  Mathematics of Machine Learning\n\n\nAbstract\nWe study the problem of pr
 ediction of binary sequences with expert advice in the online setting\, wh
 ich is a classic example of online machine learning. We interpret the bina
 ry sequence as the price history of a stock\, and view the predictor as an
  investor\, which converts the problem into a stock prediction problem. In
  this framework\, an investor\, who predicts the daily movements of a stoc
 k\, and an adversarial market\, who controls the stock\, play against each
  other over N turns. The investor combines the predictions of n ≥ 2 expe
 rts in order to make a decision about how much to invest at each turn\, an
 d aims to minimize their regret with respect to the best-performing expert
  at the end of the game. We consider the problem with history-dependent ex
 perts\, in which each expert uses the previous d days of history of the ma
 rket in making their predictions. The prediction problem is played (in par
 t) over a discrete graph called the d-dimensional de Bruijn graph.\n\nWe f
 ocus on an appropriate continuum limit and using methods from optimal cont
 rol\, graph theory\, and partial differential equations\, we discuss strat
 egies for the investor and the adversarial market. We prove that the value
  function for this game\, rescaled appropriately\, converges as N → ∞ 
 at a rate of O(N^{-1/2}) (for C^4 payoff functions) to the viscosity s
 olution of a nonlinear degenerate parabolic PDE. It can be understood a
 s the Hamilton-Jacobi-Isaacs equation for the two-person game. As a res
 ult\, we are able to deduce asymptotically optimal strategies for the i
 nvestor.\n\nT
 his is joint work with Robert Kohn and Jeff Calder.\n
LOCATION:https://researchseminars.org/talk/OneWorldML/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ziwei Ji (University of Illinois)
DTSTART:20201216T170000Z
DTEND:20201216T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/9
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/9
 /">The dual of the margin: improved analyses and rates for gradient descen
 t’s implicit bias</a>\nby Ziwei Ji (University of Illinois) as part of O
 ne World Seminar Series on the  Mathematics of Machine Learning\n\nAbstrac
 t: TBA\n\nThe implicit bias of gradient descent\, and specifically its mar
 gin maximization properties\, have arisen as a promising explanation for t
 he good generalization of deep networks. The purpose of this talk is to de
 monstrate the effectiveness of a dual problem to smoothed margin maximizat
 ion. Concretely\, this talk will develop this dual\, as well as a variety 
 of consequences in linear and nonlinear settings.\n\nIn the linear case\, 
 this dual perspective firstly will yield fast 1/t rates for margin maximiz
 ation and implicit bias. This is faster than any prior first-order hard-ma
 rgin SVM solver\, which achieves 1/sqrt{t} at best.\n\nSecondly\, the dual
  analysis also allows a characterization of the implicit bias\, even outsi
 de the standard setting of exponentially-tailed losses\; in this sense\, i
 t is gradient descent\, and not a particular loss structure which leads to
  implicit bias.\n\nIn the nonlinear case\, duality will enable the proof o
 f a gradient alignment property: asymptotically\, the parameters and their
 gradients become collinear. Although abstract\, this property in turn impl
 ies various existing and new margin maximization results.\n\nJoint work wi
 th Matus Telgarsky.\n
LOCATION:https://researchseminars.org/talk/OneWorldML/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Carola Bibiane Schönlieb (University of Cambridge)
DTSTART:20210113T170000Z
DTEND:20210113T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/10
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/1
 0/">Machine Learned Regularization for Solving Inverse Problems</a>\nby Ca
 rola Bibiane Schönlieb (University of Cambridge) as part of One World Sem
 inar Series on the  Mathematics of Machine Learning\n\n\nAbstract\nInverse
  problems are about the reconstruction of an unknown physical quantity fro
 m indirect measurements. Most inverse problems of interest are ill-posed a
 nd require appropriate mathematical treatment for recovering meaningful so
 lutions. Regularization is one of the main mechanisms to turn inverse prob
 lems into well-posed ones by adding prior information about the unknown qu
 antity to the problem\, often in the form of assumed regularity of solutio
 ns. Classically\, such regularization approaches are handcrafted. Examples
  include Tikhonov regularization\, the total variation and several sparsit
 y-promoting regularizers such as the L1 norm of Wavelet coefficients of th
 e solution. While such handcrafted approaches deliver mathematically and c
 omputationally robust solutions to inverse problems\, providing a universa
 l approach to their solution\, they are also limited by our ability to mod
 el solution properties and to realise these regularization approaches comp
 utationally.\n\n\n\nRecently\, a new paradigm has been introduced to the r
 egularization of inverse problems\, which derives regularization approache
 s for inverse problems in a data driven way. Here\, regularization is not 
 mathematically modelled in the classical sense\, but modelled by highly ov
 er-parametrised models\, typically deep neural networks\, that are adapted
  to the inverse problems at hand by appropriately selected (and usually pl
 enty of) training data.\n\n\n\nIn this talk\, I will review some machine l
 earning based regularization techniques\, present some work on unsupervise
 d and deeply learned convex regularisers and their application to image re
 construction from tomographic and blurred measurements\, and finish by dis
 cussing some open mathematical problems.\n
LOCATION:https://researchseminars.org/talk/OneWorldML/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Melanie Weber (Princeton University)
DTSTART:20210120T170000Z
DTEND:20210120T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/11
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/1
 1/">Geometric Methods for Machine Learning and Optimization</a>\nby Melani
 e Weber (Princeton University) as part of One World Seminar Series on the 
  Mathematics of Machine Learning\n\n\nAbstract\nMany machine learning appl
 ications involve non-Euclidean data\, such as graphs\, strings or matrices
 . In such cases\, exploiting Riemannian geometry can deliver algorithms th
 at are computationally superior to standard (Euclidean) nonlinear programm
 ing approaches. This observation has resulted in an increasing interest in
  Riemannian methods in the optimization and machine learning community.\n\
 nIn the first part of the talk\, we consider the task of learning a robust
  classifier in hyperbolic space. Such spaces have received a surge of inte
 rest for representing large-scale\, hierarchical data\, due to the fact th
 at they achieve better representation accuracy with lower dimensions. We p
 resent the first theoretical guarantees for the (robust) large-margin lear
 ning problem in hyperbolic space and discuss conditions under which hyperb
 olic methods are guaranteed to surpass the performance of their Euclidean 
 counterparts. In the second part\, we introduce Riemannian Frank-Wolfe (RF
 W) methods for constraint optimization on manifolds. Here\, the goal of th
 e theoretical analysis is two-fold: We first show that RFW converges at a 
 nonasymptotic sublinear rate\, recovering the best-known guarantees for it
 s Euclidean counterpart. Secondly\, we discuss how to implement the method
  efficiently on matrix manifolds. Finally\, we consider applications of RF
 W to the computation of Riemannian centroids and Wasserstein barycenters\,
  which are crucial subroutines in many machine learning methods.\n\nBased 
 on joint work with Suvrit Sra (MIT) and Manzil Zaheer\, Ankit Singh Rawat\
 , Aditya Menon and Sanjiv Kumar (all Google Research).\n
LOCATION:https://researchseminars.org/talk/OneWorldML/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nathaniel Trask
DTSTART:20210127T170000Z
DTEND:20210127T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/12
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/1
 2/">Structure preservation and convergence in scientific machine learning<
 /a>\nby Nathaniel Trask as part of One World Seminar Series on the  Mathem
 atics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Andrea Bertozzi
DTSTART:20210203T170000Z
DTEND:20210203T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/13
DESCRIPTION:by Andrea Bertozzi as part of One World Seminar Series on the 
  Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Andrea Agazzi (Duke University)
DTSTART:20210210T170000Z
DTEND:20210210T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/14
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/1
 4/">Convergence and optimality of single-layer neural networks for reinfor
 cement learning</a>\nby Andrea Agazzi (Duke University) as part of One Wor
 ld Seminar Series on the  Mathematics of Machine Learning\n\nAbstract: TBA
 \n
LOCATION:https://researchseminars.org/talk/OneWorldML/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Frederic Koehler
DTSTART:20210217T170000Z
DTEND:20210217T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/15
DESCRIPTION:by Frederic Koehler as part of One World Seminar Series on the
   Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Bubacarr Bah
DTSTART:20210224T170000Z
DTEND:20210224T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/16
DESCRIPTION:by Bubacarr Bah as part of One World Seminar Series on the  Ma
 thematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nathaniel Trask
DTSTART:20210303T170000Z
DTEND:20210303T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/17
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/1
 7/">Structure preservation and convergence in scientific machine learning<
 /a>\nby Nathaniel Trask as part of One World Seminar Series on the  Mathem
 atics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Boris Hanin
DTSTART:20210310T170000Z
DTEND:20210310T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/18
DESCRIPTION:by Boris Hanin as part of One World Seminar Series on the  Mat
 hematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Rachel Ward
DTSTART:20210317T170000Z
DTEND:20210317T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/19
DESCRIPTION:by Rachel Ward as part of One World Seminar Series on the  Mat
 hematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jeff Calder
DTSTART:20210324T170000Z
DTEND:20210324T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/20
DESCRIPTION:by Jeff Calder as part of One World Seminar Series on the  Mat
 hematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nicolas Garcia Trillos (Wisconsin Madison)
DTSTART:20210505T160000Z
DTEND:20210505T170000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/21
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/2
 1/">Adversarial Classification\, Optimal Transport\, and Geometric Flows</
 a>\nby Nicolas Garcia Trillos (Wisconsin Madison) as part of One World Sem
 inar Series on the  Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/21/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Clarice Poon (University of Bath)
DTSTART:20210519T160000Z
DTEND:20210519T170000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/22
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/2
 2/">Smooth bilevel programming for sparse regularisation</a>\nby Clarice P
 oon (University of Bath) as part of One World Seminar Series on the  Mathe
 matics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/22/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Robert Nowak (University of Wisconsin-Madison)
DTSTART:20211013T160000Z
DTEND:20211013T170000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/23
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/2
 3/">TBC</a>\nby Robert Nowak (University of Wisconsin-Madison) as part of 
 One World Seminar Series on the  Mathematics of Machine Learning\n\nAbstra
 ct: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/23/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Christoph Schwab (ETH Zürich)
DTSTART:20211201T170000Z
DTEND:20211201T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/24
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/2
 4/">Deep Learning in High Dimension: Neural Network Approximation of Analy
 tic Maps of Gaussians</a>\nby Christoph Schwab (ETH Zürich) as part of On
 e World Seminar Series on the  Mathematics of Machine Learning\n\n\nAbstra
 ct\nFor artificial deep neural networks with ReLU activation\,\nwe prove n
 ew expression rate bounds for\nparametric\, analytic functions where\nthe 
 parameter dimension could be infinite.\nApproximation rates are in mean sq
 uare on the unbounded\nparameter range with respect to product gaussian me
 asure.\nApproximation rate bounds are free from the curse of dimensiona
 lity\, and\ndetermined by summability of Wiener-Hermite PC expansion co
 efficients.\nSufficient co
 nditions for summability are quantified holomorphy\non products of strips 
 in the complex domain.\nApplications comprise DNN expression rate bounds o
 f deep-NNs\nfor response surfaces of elliptic PDEs with log-gaussian\nrand
 om field inputs\, and for the posterior densities of the\ncorresponding Ba
 yesian inverse problems.\nVariants of proofs which are constructive are ou
 tlined.\n\n(joint work with Jakob Zech\, University of Heidelberg\, German
 y\,\n and with Dinh Dung and Nguyen Van Kien\, Hanoi\, Vietnam)\n
LOCATION:https://researchseminars.org/talk/OneWorldML/24/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Houman Owhadi (Caltech)
DTSTART:20220420T160000Z
DTEND:20220420T170000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/25
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/2
 5/">Computational Graph Completion</a>\nby Houman Owhadi (Caltech) as part
  of One World Seminar Series on the  Mathematics of Machine Learning\n\nAb
 stract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/25/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Stephan Wäldchen (TU Berlin)
DTSTART:20220427T160000Z
DTEND:20220427T170000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/26
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/2
 6/">Explaining Neural Network Classifiers: Hurdles and Progress</a>\nby St
 ephan Wäldchen (TU Berlin) as part of One World Seminar Series on the  Ma
 thematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/26/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hongyang Zhang
DTSTART:20220504T160000Z
DTEND:20220504T170000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/27
DESCRIPTION:by Hongyang Zhang as part of One World Seminar Series on the  
 Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/27/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Matthew Colbrook (University of Cambridge)
DTSTART:20220511T160000Z
DTEND:20220511T170000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/28
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/2
 8/">Smale’s 18th Problem and the Barriers of Deep Learning</a>\nby Matth
 ew Colbrook (University of Cambridge) as part of One World Seminar Series 
 on the  Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/28/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Denny Wu (University of Toronto)
DTSTART:20220914T160000Z
DTEND:20220914T170000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/29
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/2
 9/">High-dimensional asymptotics of feature learning in the early phase of
  NN training</a>\nby Denny Wu (University of Toronto) as part of One World
  Seminar Series on the  Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/29/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Gal Vardi (Toyota Technological Institute at Chicago)
DTSTART:20220921T160000Z
DTEND:20220921T170000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/30
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/3
 0/">Implications of the implicit bias in neural networks</a>\nby Gal Vardi
  (Toyota Technological Institute at Chicago) as part of One World Seminar 
 Series on the  Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/30/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sophie Langer (University of Twente)
DTSTART:20221012T160000Z
DTEND:20221012T170000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/31
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/3
 1/">Circumventing the curse of dimensionality with deep neural networks</a
 >\nby Sophie Langer (University of Twente) as part of One World Seminar Se
 ries on the  Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/31/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Peter Richtarik (KAUST)
DTSTART:20221005T160000Z
DTEND:20221005T170000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/32
DESCRIPTION:by Peter Richtarik (KAUST) as part of One World Seminar Series
  on the  Mathematics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/32/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Johannes Brandstetter (Microsoft)
DTSTART:20221109T170000Z
DTEND:20221109T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/33
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/3
 3/">Towards a New Generation of Neural PDE Surrogates</a>\nby Johannes Bra
 ndstetter (Microsoft) as part of One World Seminar Series on the  Mathemat
 ics of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/33/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Simone Brugiapaglia (Concordia University)
DTSTART:20221116T170000Z
DTEND:20221116T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/34
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/3
 4/">The Mathematical Foundations of Deep Learning: From Rating Impossibi
 lity to Practical Existence Theorems</a>\nby Simone Brugiapaglia (Concor
 dia U
 niversity) as part of One World Seminar Series on the  Mathematics of Mach
 ine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/34/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Francis Bach (Ecole Normale Superieure)
DTSTART:20221130T170000Z
DTEND:20221130T180000Z
DTSTAMP:20260422T225701Z
UID:OneWorldML/35
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldML/3
 5/">Information Theory Through Kernel Methods</a>\nby Francis Bach (Ecole 
 Normale Superieure) as part of One World Seminar Series on the  Mathematic
 s of Machine Learning\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldML/35/
END:VEVENT
END:VCALENDAR
