On the fundamental role of sparsity in machine learning
Anna Golubeva (MIT)
Abstract: Sparsity and neural-network pruning have become indispensable tools in applied machine learning for alleviating the computational demands of ever-larger models. While the number of empirical works in this field has exploded in recent years, producing a variety of pruning techniques, finding sparse solutions at initialization remains a challenge. Moreover, a theoretical understanding of the very existence of sparse solutions in neural networks is lacking. In this talk, I will discuss the most interesting open questions in this field and present some of our recent work combining theoretical and experimental approaches to tackle them.
Topics: HEP - phenomenology, HEP - theory, mathematical physics
Audience: researchers in the topic
Series comments: Weekly research seminar of the NHETC at Rutgers University
Livestream link is available on the webpage.
Organizers: Christina Pettola*, Sung Hak Lim, Vivek Saxena*, Erica DiPaola*
*contact for this listing