BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Alexandre d'Aspremont (ENS\, CNRS)
DTSTART:20200501T150000Z
DTEND:20200501T160000Z
DTSTAMP:20260423T004657Z
UID:sss/3
DESCRIPTION:Title: Naive feature selection: Sparsity in naive Bayes (http
 s://researchseminars.org/talk/sss/3/)\nby Alexandre d'Aspremont
  (ENS\, CNRS) as part of Stochastics and Statistics Seminar Series\n\n\nAb
 stract\nDue to its linear complexity\, naive Bayes classification remains 
 an attractive supervised learning method\, especially in very large-scale 
 settings. We propose a sparse version of naive Bayes\, which can be used f
 or feature selection. This leads to a combinatorial maximum-likelihood pro
 blem\, for which we provide an exact solution in the case of binary data\,
  or a bound in the multinomial case. We prove that our bound becomes tight
  as the marginal contribution of additional features decreases. Both binar
 y and multinomial sparse models are solvable in time almost linear in prob
 lem size\, representing a very small extra relative cost compared to the c
 lassical naive Bayes. Numerical experiments on text data show that the nai
 ve Bayes feature selection method is as statistically effective as state-o
 f-the-art feature selection methods such as recursive feature elimination\
 , l1-penalized logistic regression and LASSO\, while being orders of magni
 tude faster. For a large data set\, with more than 1.6 million trai
 ning points and about 12 million features\, and with a non-optimized CPU i
 mplementation\, our sparse naive Bayes model can be trained in less than 1
 5 seconds. Authors: A. Askari\, A. d'Aspremont\, L. El Ghaoui.\n
LOCATION:https://researchseminars.org/talk/sss/3/
END:VEVENT
END:VCALENDAR
