Deep Learning (Part 1)

Jakub Malinowski (Dioscuri Centre in Topological Data Analysis)

Mon Mar 2, 11:30-13:30
Lecture held in Room 1 at the IMPAS and in Room 1.14 at the Institute of Informatics (University of Gdańsk).

Abstract: This first session introduces the fundamental concepts and motivations behind deep learning. We begin with a discussion of why and when deep learning can outperform traditional statistical methods - especially for large, high-dimensional data. Next, we explore the architecture of neural networks, from simple single-layer networks to multilayer (deep) networks. Key learning mechanisms - including backpropagation, regularization, and stochastic gradient descent (SGD) - will be explained intuitively, with mathematical detail where appropriate. We will also review practical considerations (e.g., network tuning, overfitting, capacity control), providing Python code examples that illustrate how deep networks are defined and trained in practice.
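To give a flavour of the kind of Python example the session promises, here is a minimal sketch (not the speaker's material) of the mechanisms named in the abstract: a two-layer network trained on XOR with hand-written backpropagation and a plain gradient-descent update, using only NumPy. The architecture (8 tanh hidden units, sigmoid output) and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: tiny MLP on XOR with manual backprop and
# gradient descent (full-batch for simplicity; all choices are assumptions).
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(2000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    # backward pass (chain rule = backpropagation)
    dz2 = (p - y) / len(X)           # gradient at output pre-activation
    dW2 = h.T @ dz2; db2 = dz2.sum(0)
    dh = dz2 @ W2.T
    dz1 = dh * (1 - h ** 2)          # tanh derivative
    dW1 = X.T @ dz1; db1 = dz1.sum(0)
    # gradient-descent parameter update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

preds = (p > 0.5).astype(float)
```

In a stochastic variant, each update would use a random mini-batch of the data rather than the full set; frameworks such as PyTorch or TensorFlow automate the backward pass shown here.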

Computer science, Mathematics

Audience: general audience


Basic Notions and Applied Topology Seminar

Organizer: Julian Brüggemann
Curator: John Rick*
*contact for this listing
