BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Hariharan Narayanan (Tata Institute of Fundamental Research)
DTSTART:20210224T160000Z
DTEND:20210224T170000Z
DTSTAMP:20260414T173948Z
UID:BUGeom/17
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/BUGeom/17/">
 Testing the manifold hypothesis and fitting a manifold of large reach to n
 oisy data</a>\nby Hariharan Narayanan (Tata Institute of Fundamental Rese
 arch) as part of Boston University Geometry/Physics Seminar\n\nLecture hel
 d in Zoom meeting ID: 974 5641 9902.\n\nAbstract\nThe hypothesis that high
  dimensional data tend to lie in the vicinity of a low dimensional manifol
 d is the basis of manifold learning.\nWe will discuss joint work with C
 harles Fefferman and Sanjoy Mitter on testing the manifold hypothesis. We 
 will outline an algorithm (with accompanying complexity guarantees) for fi
 tting a manifold to an unknown probability distribution supported in a sep
 arable Hilbert space\, only using i.i.d. samples from that distribution.\nW
 e also give a solution based on joint work with Charles Fefferman\, Sergei
  Ivanov and Matti Lassas to the following question from manifold learning.
 \nSuppose data belonging to a high dimensional Euclidean space is sampled 
 independently and identically at random from a measure supported on a d di
 mensional twice differentiable embedded manifold M\, and corrupted by Gaus
 sian noise with small standard deviation sigma. How can we produce a manif
 old M_o whose Hausdorff distance to M is small and whose reach (normal inj
 ectivity radius) is not much smaller than the reach of M? We show how to p
 roduce a manifold within O(sigma^2) of M in Hausdorff distance\, whose rea
 ch is smaller than that of M by a factor of no more than O(d^6).\n
LOCATION:https://researchseminars.org/talk/BUGeom/17/
END:VEVENT
END:VCALENDAR
