BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Valentino Delle Rose (Centro Nacional de Inteligencia Artificial\,
  Chile)
DTSTART:20230221T140000Z
DTEND:20230221T150000Z
DTSTAMP:20260423T005733Z
UID:CTA/90
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CTA/90/">Com
 putable PAC learning</a>\nby Valentino Delle Rose (Centro Nacional de Inte
 ligencia Artificial\, Chile) as part of Computability theory and applicati
 ons\n\n\nAbstract\nPAC (probably approximately correct) learning is a fram
 ework for the theoretical analysis of machine learning\, proposed by Valia
 nt in 1984. Intuitively\, we consider a certain class of hypotheses: a lea
 rner for this class receives a finite number of random samples of an unkno
 wn objective function and\, with high probability\, outputs a function whi
 ch approximates the objective function no worse than any hypothesis in the
  given class.\nThe fundamental theorem of statistical learning character
 izes the existence of a learner for a certain class of hypotheses by a c
 ombinatorial property of the class itself\, namely the finiteness of its s
 o-called VC dimension (Vapnik and Chervonenkis\, 1971\; Blumer et al.\, 1
 989).\nHowever\, this theory treats learners simply as functions\, witho
 ut taking any effectiveness aspects into account: what happens\, then\, w
 hen we focus only on learners which can be implemented by some algorithm
 ic procedure?\nTo investigate this question\, Agarwal et al. (2020) intr
 oduced computable PAC (CPAC) learning\, where both the learner and the f
 unctions it outputs are required to be computable. Their key observation i
 s that\, in this new framework\, the finiteness of the VC dimension no l
 onger suffices to ensure the existence of a computable learner. Moreo
 ver\, this computable setting is affected by certain aspects which make n
 o difference in classical PAC learning: for example\, whether the learne
 r is required to be proper (i.e. its range must be contained in the clas
 s of hypotheses to be learned) or allowed to be improper (i.e. it can ou
 tput any function)\, or whether the number of samples the learner needs t
 o actually learn the class is bounded by a computable function.\nBut whi
 ch of these aspects actually lead to different versions of computable le
 arnability? In this talk\, we will provide a landscape of the field\, ba
 sed on previous work by Agarwal et al. (2020) and Sterkenburg (2022)\, a
 nd on recent joint work with Alexander Kozachinskiy\, Cristóbal Rojas an
 d Tomasz Steifer (Pontificia Universidad Católica de Chile).\n
LOCATION:https://researchseminars.org/talk/CTA/90/
END:VEVENT
END:VCALENDAR
