Leveraging implicit bias to improve efficiencies in training and fine-tuning ML models

Michael Munn (Google)

14-Nov-2024, 13:30-14:30

Abstract: In classical statistical learning theory, the bias-variance tradeoff describes the relationship between the complexity of a model and the accuracy of its predictions on new data. In short, simpler models are preferable to more complex ones, and in practice we employ many techniques to control model complexity. However, how best to measure the complexity of modern machine learning models remains an open question. In this talk, we will discuss the notion of geometric complexity and present some of our previous research that aims to address this fundamental problem. We will also discuss current and future work that leverages this insight to devise strategies for more efficient model pre-training and fine-tuning.
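The abstract does not define geometric complexity, but in the speaker's earlier published work (Dherin, Munn, Rosca, Barrett, NeurIPS 2022) it is the mean squared norm of the gradient of the model's output with respect to its inputs, averaged over the dataset. A minimal sketch under that assumption, with an illustrative helper name and a toy linear model (all hypothetical, not code from the talk):

```python
import numpy as np

def geometric_complexity(grad_fn, X):
    """Mean squared input-gradient norm over a dataset X of shape (n_samples, n_features).

    Assumes the NeurIPS 2022 definition; grad_fn(x) returns the gradient of the
    (scalar) model output with respect to the input x.
    """
    return float(np.mean([np.sum(grad_fn(x) ** 2) for x in X]))

# Toy check: for a scalar linear model f(x) = w . x + b, the input gradient is
# the constant vector w, so the geometric complexity reduces to ||w||^2.
w = np.array([3.0, 4.0])
grad_fn = lambda x: w  # gradient of f with respect to x is w everywhere

X = np.random.default_rng(0).normal(size=(100, 2))
gc = geometric_complexity(grad_fn, X)
print(gc)  # ||w||^2 = 25.0 for this linear model
```

For nonlinear networks the input gradient varies with x (e.g. via automatic differentiation), and the same average acts as a discrete Dirichlet energy of the learned function.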

Mathematics

Audience: researchers in the topic area


IUT Mathematics Research Seminars (IMRS)

Series comments: All researchers are welcome!

Organizer: Sajjad Lakzian*
*contact for this listing
