Naive feature selection: Sparsity in naive Bayes
Alexandre d'Aspremont (École Normale Supérieure Paris (ENS))
Abstract: Due to its linear complexity, naive Bayes classification remains an attractive supervised learning method, especially in very large-scale settings. We propose a sparse version of naive Bayes, which can be used for feature selection. This leads to a combinatorial maximum-likelihood problem, for which we provide an exact solution in the case of binary data, or a bound in the multinomial case. We prove that our bound becomes tight as the marginal contribution of additional features decreases. Both binary and multinomial sparse models are solvable in time almost linear in the problem size, representing a very small extra cost relative to classical naive Bayes. Numerical experiments on text data show that the naive Bayes feature selection method is as statistically effective as state-of-the-art feature selection methods such as recursive feature elimination, l1-penalized logistic regression and LASSO, while being orders of magnitude faster. On a large data set with more than 1.6 million training points and about 12 million features, and with a non-optimized CPU implementation, our sparse naive Bayes model can be trained in less than 15 seconds.
The talk is based on joint work with Armin Askari and Laurent El Ghaoui; the paper is available at arxiv.org/abs/1905.09884.
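For intuition only, here is a minimal sketch (not taken from the talk or paper; all function and variable names are hypothetical) of how sparse naive Bayes feature selection could look for binary data: under a Bernoulli naive Bayes model, the log-likelihood decomposes across features, so each feature can be scored by the likelihood gain from giving it class-conditional parameters rather than a single parameter shared across classes, and the k highest-scoring features are kept.

```python
import numpy as np

def bernoulli_ll(counts, totals):
    # Maximized Bernoulli log-likelihood for `counts` ones out of
    # `totals` trials, evaluated at the MLE p = counts / totals
    # (clipped away from 0 and 1 for numerical safety).
    p = np.clip(counts / totals, 1e-12, 1 - 1e-12)
    return counts * np.log(p) + (totals - counts) * np.log1p(-p)

def sparse_nb_select(X, y, k):
    # Hypothetical sketch: score each binary feature by the gain in
    # naive Bayes log-likelihood from class-conditional parameters
    # versus one pooled parameter, then keep the k largest gains.
    X0, X1 = X[y == 0], X[y == 1]
    n0, n1 = X0.shape[0], X1.shape[0]
    c0, c1 = X0.sum(axis=0), X1.sum(axis=0)
    gain = (bernoulli_ll(c0, n0) + bernoulli_ll(c1, n1)
            - bernoulli_ll(c0 + c1, n0 + n1))
    return np.argsort(gain)[-k:][::-1]  # indices of the top-k features

# Example: 10,000 points, 1,000 binary features, keep the best 50.
rng = np.random.default_rng(0)
X = (rng.random((10_000, 1_000)) < 0.1).astype(float)
y = rng.integers(0, 2, size=10_000)
selected = sparse_nb_select(X, y, k=50)
```

Because this surrogate objective decomposes feature by feature, taking the top k is exact for it, which is one way to see why an almost-linear-time exact solution is plausible in the binary case; the actual formulation and guarantees are those of the paper linked above.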
optimization and control
Audience: researchers in the topic
Comments: The address and password of the Zoom room of the seminar are sent by e-mail via the seminar's mailing list one day before each talk.
One World Optimization seminar
Series comments: Online seminar on optimization and related areas.
Organizers: Sorin-Mihai Grad (contact for this listing), Radu Ioan Boț, Shoham Sabach, Mathias Staudigl