BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Henry Adams (Colorado State University)
DTSTART:20210825T133000Z
DTEND:20210825T143000Z
DTSTAMP:20260422T212929Z
UID:Danger2021/1
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/Danger2021/1
 /">Applied topology: from global to local</a>\nby Henry Adams (Colorado St
 ate University) as part of DANGER: Data\, Numbers\, and Geometry\n\n\nAbst
 ract\nThrough the use of examples\, I will explain one way in which applie
 d topology has evolved since the birth of persistent homology in the early
  2000s. The first applications of topology to data emphasized the global s
 hape of a dataset\, such as the three-circle model for 3 x 3 pixel patches
  from natural images\, or the configuration space of the cyclo-octane mole
 cule\, which is a sphere with a Klein bottle attached via two circles of s
 ingularity. More recently\, persistent homology is being used to measure t
 he local geometry of data. How do you vectorize geometry for use in machin
 e learning problems? Persistent homology and its vectorizations\, includi
 ng persistence landscapes and persistence images\, provide popular techni
 ques for incorporating geometry into machine learning. I will survey
  applications arising from machine learning tasks in agent-based modeling\
 , shape recognition\, materials science\, and biology.\n
LOCATION:https://researchseminars.org/talk/Danger2021/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sonja Petrovic (Illinois Institute of Technology)
DTSTART:20210825T143000Z
DTEND:20210825T153000Z
DTSTAMP:20260422T212929Z
UID:Danger2021/2
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/Danger2021/2
 /">Learning in commutative algebra & models for random algebraic structure
 s</a>\nby Sonja Petrovic (Illinois Institute of Technology) as part of DAN
 GER: Data\, Numbers\, and Geometry\n\n\nAbstract\nA commutative algebraist
 's interest in randomness has many facets\, of which this talk highlights 
 two. Namely\, we will discuss 1) how to use basic statistics and learning 
 for improving Buchberger's algorithm and 2) how to generate samples of ide
 als in a `controlled' way. The two topics\, based on joint work with vario
 us collaborators and students\, form a two-step process in learning on alg
 ebraic structures\, designed with the aim of avoiding the 'danger zone' of
  blind machine learning over uninteresting distributions. \nFor learning\,
  we show that a multiple linear regression model built from a set of easy-
 to-compute ideal generator statistics can predict the number of polynomial
  additions somewhat well\, better than an uninformed model\, and better th
 an regression models built on some intuitive commutative algebra invariant
 s that are more difficult to compute. We also train a simple recursive neu
 ral network that outperforms these linear models. Our work serves as a pro
 of of concept\, demonstrating that predicting the number of polynomial add
 itions in Buchberger's algorithm is a feasible problem from the point of v
 iew of machine learning.\nAs a first example of sampling\, we present rand
 om monomial ideals\, using which we prove theorems about the probability d
 istributions\, expectations and thresholds for events involving monomial i
 deals with given Hilbert function\, Krull dimension\, first graded Betti n
 umbers\, and present several experimentally-backed conjectures about regul
 arity\, projective dimension\, strong genericity\, and Cohen-Macaulayness 
 of random monomial ideals. The models for monomial ideals can be used as a
  basis for generating other types of algebraic objects and for proving th
 e existence of desired properties.\n
LOCATION:https://researchseminars.org/talk/Danger2021/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Siu-Cheong Lau (Boston University)
DTSTART:20210825T160000Z
DTEND:20210825T170000Z
DTSTAMP:20260422T212929Z
UID:Danger2021/3
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/Danger2021/3
 /">Deep learning over the moduli space of quiver representations</a>\nby S
 iu-Cheong Lau (Boston University) as part of DANGER: Data\, Numbers\, and 
 Geometry\n\n\nAbstract\nIt is interesting to observe that neural network
 s in machine learning have a basic setup similar to that of quiver repres
 entation theory. In this talk\, I will build an algebro-geometric formula
 tion of a comp
 uting machine\, which is well-defined over the moduli space of representat
 ions.  I will also explain a uniformization between spherical\, Euclidean 
 and hyperbolic moduli of framed quiver representations\, and construct a l
 earning algorithm over these moduli spaces.\n
LOCATION:https://researchseminars.org/talk/Danger2021/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Riccardo Finotello (CEA Paris-Saclay)
DTSTART:20210826T120000Z
DTEND:20210826T130000Z
DTSTAMP:20260422T212929Z
UID:Danger2021/4
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/Danger2021/4
 /">Algebraic geometry and computer vision: inception neural network for Ca
 labi-Yau manifolds</a>\nby Riccardo Finotello (CEA Paris-Saclay) as part o
 f DANGER: Data\, Numbers\, and Geometry\n\n\nAbstract\nComputing topologic
 al properties of Calabi-Yau manifolds is\, in general\, a challenging math
 ematical task: traditional methods lead to complicated algorithms\, withou
 t expressions in closed form in most cases. At the same time\, recent year
 s have witnessed the rising use of deep learning as a method for explorati
 on of large sets of data\, to learn their patterns and properties. This i
 s especially interesting when it comes to unraveling complicated geometri
 cal structures\, a central issue both in mathematics and theoretical
  physics\, as well as in the development of trustworthy AI methods. Motiva
 ted by their distinguished role in string theory for the study of compacti
 fications\, we compute the Hodge numbers of Complete Intersection Calabi-Y
 au (CICY) manifolds using deep neural networks. Specifically\, we introduc
 e new regression architectures\, inspired by Google's Inception network an
 d multi-task learning\, which combine theoretical knowledge of the inputs
  with recent advancements in AI. This shows the potential of deep lea
 rning to learn from geometrical data\, and it proves the versatility of ar
 chitectures developed in different contexts\, which may therefore find the
 ir way in theoretical physics and mathematics for exploration and inferenc
 e.\n
LOCATION:https://researchseminars.org/talk/Danger2021/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Kyu-Hwan Lee (University of Connecticut)
DTSTART:20210826T133000Z
DTEND:20210826T143000Z
DTSTAMP:20260422T212929Z
UID:Danger2021/5
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/Danger2021/5
 /">Applications of machine learning to data from number theory</a>\nby Kyu
 -Hwan Lee (University of Connecticut) as part of DANGER: Data\, Numbers\, 
 and Geometry\n\n\nAbstract\nIn this talk\, we apply machine learning techn
 iques to various data from the L-functions and modular forms database (LMF
 DB) and show that a machine can be trained to distinguish objects in numbe
 r theory according to their standard invariants. The applications in this 
 talk will include class numbers of quadratic number fields\, ranks of elli
 ptic curves\, and Sato-Tate groups of genus 2 curves. This is joint work w
 ith Yang-Hui He and Thomas Oliver.\n
LOCATION:https://researchseminars.org/talk/Danger2021/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Roozbeh Yousefadeh (Yale University)
DTSTART:20210826T143000Z
DTEND:20210826T153000Z
DTSTAMP:20260422T212929Z
UID:Danger2021/6
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/Danger2021/6
 /">Deep learning generalization\, extrapolation\, over-parameterization an
 d decision boundaries</a>\nby Roozbeh Yousefadeh (Yale University) as part
  of DANGER: Data\, Numbers\, and Geometry\n\n\nAbstract\nDeep neural netwo
 rks have achieved great success\, most notably in learning to classify ima
 ges. Yet\, the phenomenon of learning images is not well understood\, and 
 generalization of deep networks is considered a mystery. Recent studies ha
 ve explained the generalization of deep networks within the framework of i
 nterpolation. In this talk\, we will see that the task of classifying imag
 es requires extrapolation capability\, and interpolation by itself is not
  adequate to understand the functional task of deep networks. We study im
 age cl
 assification datasets in the pixel space\, the internal representations of
  images learned throughout the layers of trained networks\, and also in th
 e low-dimensional feature space that one can derive using wavelets/shearle
 ts. We show that in all these spaces\, image classification remains an ext
 rapolation task to a moderate (yet considerable) degree outside the convex
  hull of the training set. From the mathematical perspective\, a deep lear
 ning
  image classifier is a function that partitions its domain and assigns a c
 lass to each partition. Partitions are defined by decision boundaries and 
 so is the model. Therefore\, the extensions of decision boundaries outside
  the convex hull of the training set are crucial in a model's generalizat
 ion. Fr
 om this perspective\, over-parameterization is a necessary condition for t
 he ability to control the extensions of decision boundaries\, a novel way 
 of explaining why deep networks need to be over-parameterized. I will also
  present a homotopy algorithm for computing points on the decision boundar
 ies of deep networks\, and finally\, I will explain how we can leverage th
 e decision boundaries to audit and debug ML models used in social applicat
 ions.\n
LOCATION:https://researchseminars.org/talk/Danger2021/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ruriko Yoshida (Naval Postgraduate School)
DTSTART:20210826T160000Z
DTEND:20210826T170000Z
DTSTAMP:20260422T212929Z
UID:Danger2021/7
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/Danger2021/7
 /">Tree topologies along a tropical line segment</a>\nby Ruriko Yoshida (N
 aval Postgraduate School) as part of DANGER: Data\, Numbers\, and Geometry
 \n\n\nAbstract\nTropical geometry with the max-plus algebra has been appli
 ed to statistical learning models over the spaces of phylogenetic trees be
 cause geometry with the tropical metric over tree spaces has some nice pro
 perties such as convexity in terms of the tropical metric.  One of the cha
 llenges in applications of tropical geometry to tree spaces is the difficu
 lty interpreting outcomes of statistical models with the tropical metric. 
  This talk focuses on combinatorics of tree topologies along a tropical li
 ne segment\, an intrinsic geodesic with the tropical metric\, between two 
 phylogenetic trees over the tree space\, and we show some properties of a
  tropical line segment between two trees. Specifically\, we show that the
  probability that a tropical line segment between two randomly chosen tre
 es passes through the origin (the star tree) is zero\, and we also show t
 hat if two given trees differ by only one nearest neighbor interchange (N
 NI) move\, then the tree topology of a tree in the tropical line segment
  between them is the same as the tree topology of one of the two given tr
 ees\, with possibly zero branch lengths.\n
LOCATION:https://researchseminars.org/talk/Danger2021/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Minhyong Kim (Warwick)
DTSTART:20210825T121500Z
DTEND:20210825T131500Z
DTSTAMP:20260422T212929Z
UID:Danger2021/8
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/Danger2021/8
 /">How hard is it to learn a mathematical structure?</a>\nby Minhyong Kim 
 (Warwick) as part of DANGER: Data\, Numbers\, and Geometry\n\nAbstract: TB
 A\n
LOCATION:https://researchseminars.org/talk/Danger2021/8/
END:VEVENT
END:VCALENDAR
