BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Michael Freedman (Harvard)
DTSTART:20240910T204500Z
DTEND:20240910T220000Z
DTSTAMP:20260423T052456Z
UID:MathPic/130
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/MathPic/130/
 ">What can ML learn from the proof of the Kolmogorov-Arnold theorem</a>\nb
 y Michael Freedman (Harvard) as part of Mathematical Picture Language Semi
 nar\n\nLecture held in Jefferson 356 and Zoom https://harvard.zoom.us/j/77
 9283357?pwd=MitXVm1pYUlJVzZqT3lwV2pCT1ZUQT09.\n\nAbstract\nThe Kolmogorov-
 Arnold representation theorem shows that even very shallow\, non-linear ne
 ural nets can express general continuous multivariate functions. I will be
 gin by giving a proof.  The theorem has often been regarded as "irrelevant
 " to machine learning because of the unrealistic precision required in its
 representation of real numbers. I agree with this criticism but will pres
 ent another path to ML-relevance: not of the statement but of the proof.\
 n\nPasscode: 657361\n
LOCATION:https://researchseminars.org/talk/MathPic/130/
END:VEVENT
END:VCALENDAR
