BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Le Quoc Tung (ENS Lyon)
DTSTART:20230726T090000Z
DTEND:20230726T100000Z
DTSTAMP:20260423T035414Z
UID:CompAlg/22
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CompAlg/22/"
 >Algorithmic and theoretical aspects of sparse deep neural networks</a>\n
 by Le Quoc Tung (ENS Lyon) as part of Machine Learning Seminar\n\n\n
 Abstract\nSparse deep neural networks offer a compelling practical
 opportunity to reduce the costs of training\, inference and storage\,
 which are growing exponentially in state-of-the-art deep learning. In
 this presentation\, we will introduce an approach to studying sparse
 deep neural networks through the lens of a related problem: sparse
 matrix factorization\, i.e.\, the problem of approximating a (dense)
 matrix by the product of (multiple) sparse factors. In particular\, we
 identify and investigate in detail some theoretical and algorithmic
 aspects of a variant of sparse matrix factorization named fixed support
 matrix factorization (FSMF)\, in which the set of non-zero entries of
 the sparse factors is known. Several fundamental questions about sparse
 deep neural networks\, such as the existence of optimal solutions to
 the training problem or topological properties of its function space\,
 can be addressed using the results on FSMF. In addition\, by applying
 these results\, we also study butterfly parametrization\, an approach
 that replaces (large) weight matrices with products of extremely sparse
 and structured ones in sparse deep neural networks.\n
LOCATION:https://researchseminars.org/talk/CompAlg/22/
END:VEVENT
END:VCALENDAR
