BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Ramyaa (New Mexico Tech)
DTSTART:20220818T180000Z
DTEND:20220818T190000Z
DTSTAMP:20260423T021310Z
UID:OLS/95
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OLS/95/">Adv
 ances in Differentiable Program Learning</a>\nby Ramyaa (New Mexico Tech) 
 as part of Online logic seminar\n\n\nAbstract\nInductive Logic Programming
  (ILP) is a subfield of Artificial Intelligence that learns Logic Programs
  for a concept from positive and negative examples of the concept.\nLearning
  Logic Programs allows for interpretability\, can benefit from background kn
 owledge\, and requires small training sets. However\, traditional ILP tec
 hniques are not noise-tolerant\, and do not scale well to large/high-dimen
 sional domains. In recent years\, there have been several attempts to use 
 differentiable representations of logic programs and learn them using gradi
 ent-descent-based techniques. This talk introduces these attempts\, and o
 ur efforts at extending them to learn logic programs with negations and hi
 gher-order logic programs.\n\nIn both cases\, considerable care is needed 
 from a theoretical standpoint. Negation should be restricted to avoid para
 doxical scenarios. We learned logic programs with stratified negation (in 
 the style of Datalog). Anti-unification (i.e.\, generalization) of arbitra
 ry higher-order terms is not unique. We learned second-order logic programs
  that are generalizations of first-order programs.\n
LOCATION:https://researchseminars.org/talk/OLS/95/
END:VEVENT
END:VCALENDAR
