Deeply Uncertain: (how) can we make deep learning tools trustworthy for scientific measurements?

Brian Nord (Fermilab)

04-May-2021, 17:00-18:00

Abstract: Artificial Intelligence (AI) --- including machine learning and deep learning --- refers to a set of techniques that rely primarily on the data itself for the construction of a quantitative model. AI has been in development for about three quarters of a century, but there has been a recent resurgence in research and applications. This current (third) wave of AI progress is marked by extraordinary results --- for example, in image analysis, language translation, and machine automation. Despite the modest definition of AI, its potential to disrupt technologies, economies, society, and even science is often presented as unmatched in modern times. However, along with the promise of AI, there are significant challenges to overcome to reach a degree of reliability that is on par with more traditional modeling methods. In particular, uncertainty quantification metrics derived from deep neural networks are yet to be made physically interpretable. For example, when one uses a convolutional neural network to measure values from an image (e.g., regression for galaxy properties), the error estimates do not necessarily match those from an MCMC likelihood fit. In this presentation, I will discuss the landscape of uncertainty quantification in deep learning, as well as some computational experiments in a physical context that demonstrate a mismatch between errors derived directly from deep learning methods and those derived through traditional error propagation. Before we can apply deep learning tools confidently for the direct measurement of physical properties, we’ll need statistically robust error estimation methods.
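As a rough illustration of the kind of comparison described in the abstract (not the speaker's actual experiments), the sketch below contrasts two uncertainty estimates for a single amplitude parameter measured from a noisy data vector: the exact likelihood error, which an MCMC fit would also recover, and the spread of Monte Carlo dropout predictions from a small neural-network regressor. The forward model, network architecture, and hyperparameters are all illustrative assumptions.

```python
# Minimal sketch: likelihood error vs. MC-dropout "error" on one parameter.
# Everything here (toy model, architecture, training settings) is assumed
# for illustration and is not taken from the talk.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

# Toy forward model: d = A * template + Gaussian noise
n_pix, sigma_noise = 32, 0.5
template = np.sin(np.linspace(0.0, np.pi, n_pix))

def simulate(amplitude, n_samples):
    """Draw noisy data vectors for the given amplitudes."""
    noise = rng.normal(0.0, sigma_noise, size=(n_samples, n_pix))
    return amplitude[:, None] * template[None, :] + noise

# (a) Exact likelihood error on A: sigma_A = sigma_noise / sqrt(sum f_i^2)
sigma_A_exact = sigma_noise / np.sqrt(np.sum(template ** 2))

# (b) Train a dropout MLP to regress A directly from the data vector
A_train = rng.uniform(0.0, 2.0, size=20000)
X_train = simulate(A_train, A_train.size).astype(np.float32)

model = nn.Sequential(
    nn.Linear(n_pix, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
X_t = torch.from_numpy(X_train)
y_t = torch.from_numpy(A_train.astype(np.float32))[:, None]

for epoch in range(200):          # quick, illustrative full-batch training
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X_t), y_t)
    loss.backward()
    opt.step()

# MC-dropout "uncertainty": keep dropout active at test time and take the
# spread of repeated predictions for one noisy realization with A_true = 1.
x_test = torch.from_numpy(simulate(np.array([1.0]), 1).astype(np.float32))
model.train()                     # keeps dropout on during prediction
with torch.no_grad():
    preds = torch.cat([model(x_test) for _ in range(500)]).numpy()

print(f"likelihood error on A : {sigma_A_exact:.3f}")
print(f"MC-dropout spread on A: {preds.std():.3f}")
```

In general the two printed numbers need not agree; quantifying and closing that gap is the calibration problem the abstract refers to.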

HEP - phenomenology, HEP - theory, mathematical physics

Audience: researchers in the topic


NHETC Seminar

Series comments: Weekly research seminar of the NHETC at Rutgers University

Livestream link is available on the webpage.

Organizers: Christina Pettola*, Sung Hak Lim, Vivek Saxena*, Erica DiPaola*
*contact for this listing
