BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Audrey Durand (IID\, Université Laval\, Canada)
DTSTART:20220707T160000Z
DTEND:20220707T170000Z
DTSTAMP:20260423T003253Z
UID:MPML/82
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/MPML/82/">In
 teractive learning for Neurosciences - Between Simulation and Reality</a>\
 nby Audrey Durand (IID\, Université Laval\, Canada) as part of Mathematic
 s\, Physics and Machine Learning (IST\, Lisbon)\n\n\nAbstract\nLearning a 
 behaviour to conduct a given task can be achieved by interacting with
 the environment. This is the crux of reinforcement learning (RL)\, where a
 n (automated) agent learns to solve a problem through an iterative trial-a
 nd-error process. More specifically\, an RL agent can interact with the en
 vironment and learn from these interactions by observing feedback on the g
 oal task. Therefore\, these methods typically require the ability to inter
 vene on the environment and make (possibly a very large number of) mistak
 es. Although this can be a limiting factor in some applications\, simple R
 L settings\, such as bandit settings\, can still host a variety of problem
 s for interactively learning behaviours. In other situations\, simulation 
 might be the key.\n\nIn this talk\, we will show that RL can be used to fo
 rmulate and tackle data acquisition (imaging) problems in neurosciences. W
 e will see how bandit methods can be used to optimize super-resolution ima
 ging by learning on real devices through an actual empirical process. We w
 ill also see how simulation can be leveraged to learn more sequential deci
 sion-making strategies. These applications highlight the potential of RL t
 o support expert users on difficult tasks and enable new discoveries.\n
LOCATION:https://researchseminars.org/talk/MPML/82/
END:VEVENT
END:VCALENDAR
