BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Nick Watters (MIT)
DTSTART:20251017T153000Z
DTEND:20251017T163000Z
DTSTAMP:20260422T171804Z
UID:CompMath/20
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CompMath/20/
 ">How can we understand the neural basis of thought?</a>\nby Nick Watters 
 (MIT) as part of Relatorium seminar\n\n\nAbstract\nNeuroscience is undergo
 ing a technological revolution\, a “Moore’s Law” for neural recordin
 g that is allowing us to measure the activity of the brain at ever-increas
 ing resolution. However\, simply recording neural activity does not tell u
 s how the brain works. To understand how the brain works\, we must constru
 ct models that connect neural activity to interpretable principles of thou
 ght. This modeling becomes increasingly important as we tackle more abstra
 ct\, cognitive types of thought that arise from the coordinated activity o
 f large populations of neurons. In this talk\, I’ll discuss approaches t
 o modeling such large-scale neural activity. I’ll focus primarily on one
  cognitive domain: our ability to predict the kinematics of moving objects
 . We use this ability regularly in daily life\, from catching a ball to cr
 ossing a busy street. I’ll present neural data recorded from subjects pr
 edicting the kinematics of moving objects\, introduce a modeling paradigm 
 for interpreting this data\, and discuss the implications of the neural la
 tent variables this modeling effort reveals. I’ll conclude by sharing an
  optimistic outlook on the future of systems neuroscience and speculation 
 about potential implications for artificial intelligence.\n\nSpeaker bio: 
 Nick Watters is a postdoctoral associate at MIT studying the neural basis 
 of cognition and motor control in the Jazayeri lab\, where he was a PhD st
 udent beforehand. Prior to joining MIT\, he worked at Google DeepMind as a
  research engineer\, studying unsupervised visual structure-learning and s
 ample-efficient reinforcement learning. Prior to joining DeepMind\, he was
  an undergraduate at Harvard studying math\, computer science\, and neurob
 iology.\n\nModerator: This talk is moderated by Ted Theodosopoulos. Ted is
  a mathematician who\, after working for years in academia and industry\, 
 transitioned to teaching at the pre-college level sixteen years ago\, the 
 last eight at Nueva\, where he teaches math and economics. Ted’s researc
 h background is in the area of interacting stochastic systems\, with parti
 cular applications in biology and economics.\n
LOCATION:https://researchseminars.org/talk/CompMath/20/
END:VEVENT
END:VCALENDAR
