BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Andrea Bertozzi (UCLA)
DTSTART:20200423T183000Z
DTEND:20200423T193000Z
DTSTAMP:20260422T225721Z
UID:OneWorldMINDS/1
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldMIND
 S/1/">Epidemic modeling – basics and challenges</a>\nby Andrea Bertozzi 
 (UCLA) as part of One World MINDS seminar\n\n\nAbstract\nI will review bas
 ics of epidemic modeling including exponential growth\, compartmental model
 s and self-exciting point process models.  I will illustrate how such mode
 ls have been used in the past for previous pandemics and what the challeng
 es are for forecasting the current COVID-19 pandemic.  I will show some ex
 amples of fitting models to data from US states and what one can do with t
 hose results.  Overall\, model prediction has a degree of uncertainty\, es
 pecially with early-time data and many unknowns.\n
LOCATION:https://researchseminars.org/talk/OneWorldMINDS/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Thomas Strohmer (UC Davis)
DTSTART:20200430T183000Z
DTEND:20200430T193000Z
DTSTAMP:20260422T225721Z
UID:OneWorldMINDS/2
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldMIND
 S/2/">Pandemics\, Privacy\, and Paradoxes - Why We need a new paradigm for
  data science and AI</a>\nby Thomas Strohmer (UC Davis) as part of One Wor
 ld MINDS seminar\n\n\nAbstract\nPioneered by giant internet corporations a
 nd powered by machine learning\, a new economic system is emerging that pu
 shes for relentless data capture and analysis\, usually without users' con
 sent. Surveillance capitalism pursues the exploitation and control of huma
 n nature\, thereby threatening our social fabric. To counter these develop
 ments\, we need to rethink the role of data science and artificial intelli
 gence. We must urgently develop a new paradigm of what data is. This urgen
 cy is aggravated by the current pandemic\, which amplifies fundamental par
 adoxes underlying data science and AI. I will argue that the key lies in u
 nderstanding the trialectic nature of data\, the careful balance of which 
 will be key to tackling the aforementioned disturbing developments\, while
  still reaping the benefits of data science and AI. Based on this trialect
 ic nature\, I will draw consequences for the role of mathematics in data s
 cience and indicate how mathematicians can directly contribute to a more j
 ust digital revolution.\n
LOCATION:https://researchseminars.org/talk/OneWorldMINDS/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Anna Gilbert (University of Michigan)
DTSTART:20200507T183000Z
DTEND:20200507T193000Z
DTSTAMP:20260422T225721Z
UID:OneWorldMINDS/3
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldMIND
 S/3/">Metric representations: Algorithms and geometry</a>\nby Anna Gilbert
  (University of Michigan) as part of One World MINDS seminar\n\n\nAbstract
 \nGiven a set of distances amongst points\, determining what metric repres
 entation is most “consistent” with the input distances or the metric t
 hat best captures the relevant geometric features of the data is a key ste
 p in many machine learning algorithms. In this talk\, we focus on three specif
 ic metric constrained problems\, a class of optimization problems with met
 ric constraints: metric nearness (Brickell et al. (2008))\, weighted corre
 lation clustering on general graphs (Bansal et al. (2004))\, and metric le
 arning (Bellet et al. (2013)\; Davis et al. (2007)).\n\nBecause of the lar
 ge number of constraints in these problems\, however\, these and other res
 earchers have been forced to restrict either the kinds of metrics learned 
 or the size of the problem that can be solved. We provide an algorithm\, P
 ROJECT AND FORGET\, that uses Bregman projections with cutting planes to
  solve metric constrained problems with many (possibly exponentially many)
  inequality constraints. We also prove that our algorithm converges to the glob
 al optimal solution. Additionally\, we show that the optimality error deca
 ys asymptotically at an exponential rate. We show that using our method we
  can solve large problem instances of three types of metric constrained pr
 oblems\, outperforming all state-of-the-art methods with respect to CPU t
 imes and problem sizes.\n\nFinally\, we discuss the adaptation of PROJECT 
 AND FORGET to specific types of metric constraints\, namely tree and hyper
 bolic metrics.\n
LOCATION:https://researchseminars.org/talk/OneWorldMINDS/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ilya Razenshteyn (Microsoft Research)
DTSTART:20200514T183000Z
DTEND:20200514T193000Z
DTSTAMP:20260422T225721Z
UID:OneWorldMINDS/4
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldMIND
 S/4/">Scalable Nearest Neighbor Search for Optimal Transport</a>\nby Ilya 
 Razenshteyn (Microsoft Research) as part of One World MINDS seminar\n\n\nA
 bstract\nThe Optimal Transport (aka Wasserstein) distance is an increasing
 ly popular similarity measure for structured data domains\, such as images
  or text documents. This creates a need for fast nearest neighbor se
 arch with respect to this distance\, a problem that poses a substantial co
 mputational bottleneck for various tasks on massive datasets. In this talk
 \, I will discuss fast tree-based approximation algorithms for searching n
 earest neighbors with respect to the Wasserstein-1 distance. I will start 
 by describing a standard tree-based technique\, known as QuadTree\, whic
 h has been previously shown to obtain good results. Then I'll introduce a 
 variant of this algorithm\, called FlowTree\, and show that it achieves be
 tter accuracy\, both in theory and in practice. In particular\, the accura
 cy of FlowTree is in line with previous high-accuracy methods\, while its 
  running time is much faster.\n\nThe talk is based on joint work with Art
 urs Backurs\, Yihe Dong\, Piotr Indyk\, and Tal Wagner. The paper can be fou
 nd at https://arxiv.org/abs/1910.04126 and the code at https://github.c
 om/ilyaraz/ot_estimators\n
LOCATION:https://researchseminars.org/talk/OneWorldMINDS/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Daniel Spielman (Yale)
DTSTART:20200521T183000Z
DTEND:20200521T193000Z
DTSTAMP:20260422T225721Z
UID:OneWorldMINDS/5
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldMIND
 S/5/">Balancing covariates in randomized experiments using the Gram–Schm
 idt walk</a>\nby Daniel Spielman (Yale) as part of One World MINDS seminar
 \n\n\nAbstract\nIn randomized experiments\, such as medical trials\, we
  randomly assign the treatment\, such as a drug or a placebo\, that each ex
 perimental subject receives. Randomization can help us accurately estimate
  the difference in treatment effects with high probability. We also know t
 hat we want the two groups to be similar: ideally the two groups would be 
 similar in every statistic we can measure beforehand. Recent advances in a
 lgorithmic discrepancy theory allow us to divide subjects into groups with
  similar statistics.\n\nBy exploiting the recent Gram-Schmidt Walk algorit
 hm of Bansal\, Dadush\, Garg\, and Lovett\, we can obtain random assignmen
 ts of low discrepancy. These allow us to obtain more accurate estimates of
  treatment effects when the information we measure about the subjects is p
 redictive\, while also bounding the worst-case behavior when it is not.\n\
 nWe will explain the experimental design problem we address\, the Gram-Sch
 midt walk algorithm\, and the major ideas behind our analyses. This is joi
 nt work with Chris Harshaw\, Fredrik Sävje\, and Peng Zhang.\n
LOCATION:https://researchseminars.org/talk/OneWorldMINDS/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ronald Coifman (Yale)
DTSTART:20200528T183000Z
DTEND:20200528T193000Z
DTSTAMP:20260422T225721Z
UID:OneWorldMINDS/6
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldMIND
 S/6/">The Analytic Geometries of Data</a>\nby Ronald Coifman (Yale) as par
 t of One World MINDS seminar\n\n\nAbstract\nWe will describe methodologies
  to build data geometries designed to simultaneously analyze and process d
 atabases.  The different geometries or affinity metrics arise naturally a
 s we learn to contextualize and conceptualize\, i.e.\, to relate data regi
 ons and data features (which we extend to data tensors).  Moreover\, we ge
 nerate tensorial multiscale structures.\n\nWe will indicate connections to ana
 lysis by deep nets and describe applications to modeling observations of d
 ynamical systems\, from stochastic molecular dynamics to calcium imaging o
 f brain activity.\n
LOCATION:https://researchseminars.org/talk/OneWorldMINDS/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ben Adcock (Simon Fraser University)
DTSTART:20200604T183000Z
DTEND:20200604T193000Z
DTSTAMP:20260422T225721Z
UID:OneWorldMINDS/7
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldMIND
 S/7/">The troublesome kernel: instabilities in deep learning for inverse p
 roblems</a>\nby Ben Adcock (Simon Fraser University) as part of One World 
 MINDS seminar\n\n\nAbstract\nDue to their stunning success in traditional 
 machine learning applications such as classification\, techniques based on
  deep learning have recently begun to be actively investigated for problem
 s in computational science and engineering. One of the key areas at the fo
 refront of this trend is inverse problems\, and specifically\, inverse pro
 blems in imaging. The last few years have witnessed the emergence of many 
 neural network-based algorithms for important imaging modalities such as M
 RI and X-ray CT. These claim to achieve competitive\, and sometimes even s
 uperior\, performance to current state-of-the-art techniques.\n\nHowever\,
  there is a problem. Techniques based on deep learning are typically unsta
 ble. For example\, small perturbations in the data can lead to a myriad of
  artifacts in the recovered images. Such artifacts can be hard to dismiss
  as obviously unphysical\, meaning that this phenomenon has potentially ser
 ious consequences for the safe deployment of deep learning in practice. In
  this talk\, I will first showcase the instability phenomenon empirically 
 in a range of examples. I will then focus on its mathematical underpinning
 s\, the consequences of these insights when it comes to potential remedies
 \, and the future possibilities for computing genuinely stable neural netw
 orks for inverse problems in imaging.\n\nThis is joint work with Vegard An
 tun\, Nina M. Gottschling\, Anders C. Hansen\, Clarice Poon\, and Francesc
 o Renna\n\nPapers:\n\nhttps://www.pnas.org/content/early/2020/05/08/190737
 7117\n\nhttps://arxiv.org/abs/2001.01258\n
LOCATION:https://researchseminars.org/talk/OneWorldMINDS/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jelani Nelson (UC Berkeley)
DTSTART:20200611T183000Z
DTEND:20200611T193000Z
DTSTAMP:20260422T225721Z
UID:OneWorldMINDS/8
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldMIND
 S/8/">terminal dimensionality reduction in Euclidean space</a>\nby Jelani 
 Nelson (UC Berkeley) as part of One World MINDS seminar\n\n\nAbstract\nThe
  Johnson-Lindenstrauss lemma states that for any subset $X$ of $R^d$ wit
 h $|X| = n$ and for any $\\epsilon > 0$\, there exists a map $f:X\\to R^m$
  for $m = O(\\log n / \\epsilon^2)$ such that: for all $x \\in X$\, for all $y \\i
 n X$\, $(1-\\epsilon)|x - y|_2 \\le |f(x) - f(y)|_2 \\le (1+\\epsilon)|x -
  y|_2$. We show that this statement can be strengthened. In particular\, t
 he above claim holds true even if "for all $y \\in X$" is replaced with "f
 or all $y \\in R^d$". Joint work with Shyam Narayanan.\n
LOCATION:https://researchseminars.org/talk/OneWorldMINDS/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Gitta Kutyniok (TU Berlin)
DTSTART:20200618T183000Z
DTEND:20200618T193000Z
DTSTAMP:20260422T225721Z
UID:OneWorldMINDS/9
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldMIND
 S/9/">Understanding Deep Neural Networks: From Generalization to Interpret
 ability</a>\nby Gitta Kutyniok (TU Berlin) as part of One World MINDS semi
 nar\n\n\nAbstract\nDeep neural networks have recently seen an impressive c
 omeback with applications both in the public sector and the sciences. Howe
 ver\, despite their outstanding success\, a comprehensive theoretical foun
 dation of deep neural networks is still missing.\n\nFor deriving a theoret
 ical understanding of deep neural networks\, one main goal is to analyze t
 heir generalization ability\, i.e.\, their performance on unseen data sets.
  In the case of graph convolutional neural networks\, which are today heav
 ily used\, for instance\, in recommender systems\, the generalization capa
 bility to signals on graphs unseen in the training set\, typically coined
  transferability\, had not been rigorously analyzed. In this talk\, we will
  prove that spectral graph convolutional neural networks are indeed transfe
 rable\, thereby also debunking a common misconception about this type of g
 raph convolutional neural networks.\n\nIf such theoretical approaches fail
  or if one is just given a trained neural network without knowledge of how
  it was trained\, interpretability approaches become necessary. Those aim 
 to "break open the black box" in the sense of identifying those features f
 rom the input that are most relevant for the observed output. Aiming to
  derive a theoretically founded approach to this problem\, we introduced a
  novel approach based on rate-distortion theory coined Rate-Distortion Exp
 lanation (RDE)\, which not only provides state-of-the-art explanations\, b
 ut in addition allows first theoretical insights into the complexity of su
 ch problems. In this talk we will discuss this approach and show that it a
 lso gives a precise mathematical meaning to the previously vague term of r
 elevant parts of the input.\n
LOCATION:https://researchseminars.org/talk/OneWorldMINDS/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Richard Baraniuk (Rice University)
DTSTART:20200625T183000Z
DTEND:20200625T193000Z
DTSTAMP:20260422T225721Z
UID:OneWorldMINDS/10
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldMIND
 S/10/">Affine spline insights into deep learning</a>\nby Richard Baraniuk 
 (Rice University) as part of One World MINDS seminar\n\n\nAbstract\nWe bui
 ld a rigorous bridge between deep networks (DNs) and approximation theory 
 via spline functions and operators. Our key result is that a large class o
 f DNs can be written as a composition of max-affine spline operators (MASO
 s)\, which provide a powerful portal through which to view and analyze the
 ir inner workings. For instance\, conditioned on the input signal\, the ou
 tput of a MASO DN can be written as a simple affine transformation of the 
 input. This implies that a DN constructs a set of signal-dependent\, class
 -specific templates against which the signal is compared via a simple inne
 r product\; we explore the links to the classical theory of optimal classi
 fication via matched filters and the effects of data memorization. Going f
 urther\, we propose a simple penalty term that can be added to the cost fu
 nction of any DN learning algorithm to force the templates to be orthogona
 l to each other\; this leads to significantly improved classification pe
 rformance and reduced overfitting with no change to the DN architecture. T
 he spline partition of the input signal space that is implicitly induced b
 y a MASO directly links DNs to the theory of vector quantization (VQ) and 
 K-means clustering\, which opens up a new geometric avenue to study how DNs
  organize signals in a hierarchical fashion. To validate the utility of the
  VQ interpretation\, we develop and validate a new distance metric for sig
 nals and images that quantifies the difference between their VQ encodings.
 \n
LOCATION:https://researchseminars.org/talk/OneWorldMINDS/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Stéphane Mallat (École Normale Supérieure)
DTSTART:20200702T183000Z
DTEND:20200702T193000Z
DTSTAMP:20260422T225721Z
UID:OneWorldMINDS/11
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldMIND
 S/11/">Beyond sparsity: Non-linear harmonic analysis with phase for deep n
 etworks</a>\nby Stéphane Mallat (École Normale Supérieure) as part of On
 e World MINDS seminar\n\n\nAbstract\nUnderstanding the properties of deep 
 neural networks is not just about applying standard harmonic analysis tool
 s with a bit of optimization. It is shaking our understanding of non-linea
 r harmonic analysis and opening new horizons. By considering image generati
 on and classification problems of different complexities\, I will show that
  sparsity is not always the answer and that phase plays an important role i
 n capturing key structures\, including symmetries\, within multiscale repre
 sentations. This talk will raise more questions than answers.\n
LOCATION:https://researchseminars.org/talk/OneWorldMINDS/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Holger Rauhut (RWTH Aachen University)
DTSTART:20200709T183000Z
DTEND:20200709T193000Z
DTSTAMP:20260422T225721Z
UID:OneWorldMINDS/12
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldMIND
 S/12/">Convergence of gradient flows for learning deep linear neural netwo
 rks</a>\nby Holger Rauhut (RWTH Aachen University) as part of One World MI
 NDS seminar\n\n\nAbstract\nLearning neural networks amounts to minimizing 
 a loss function over given training data. Often gradient descent algorithm
 s are used for this task\, but their convergence properties are not yet we
 ll-understood. In order to make progress\, we consider the simplified sett
 ing of linear networks optimized via gradient flows. We show that such a g
 radient flow\, defined with respect to the layers (factors)\, can be reint
 erpreted as a Riemannian gradient flow on the manifold of rank-$r$ matrices in cer
 tain cases. The gradient flow always converges to a critical point of the 
 underlying loss functional and\, for almost all initializations\, it conve
 rges to a global minimum on the manifold of rank-$k$ matrices for some $k$
 .\n
LOCATION:https://researchseminars.org/talk/OneWorldMINDS/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Caroline Uhler (MIT)
DTSTART:20200716T183000Z
DTEND:20200716T193000Z
DTSTAMP:20260422T225721Z
UID:OneWorldMINDS/13
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldMIND
 S/13/">Multi-domain data integration: from observations to mechanistic ins
 ights</a>\nby Caroline Uhler (MIT) as part of One World MINDS seminar\n\n\
 nAbstract\nMassive data collection holds the promise of a better understan
 ding of complex phenomena and ultimately\, of better decisions. An excitin
 g opportunity in this regard stems from the growing availability of pertur
 bation / intervention data (manufacturing\, advertisement\, education\, ge
 nomics\, etc.). In order to obtain mechanistic insights from such data\, a
  major challenge is the integration of different data modalities (video\, 
 audio\, interventional\, observational\, etc.). Using genomics and in part
 icular the problem of identifying drugs for repurposing against COVID-
 19 as an example\, I will first discuss our recent work on coupling autoen
 coders in the latent space to integrate and translate between data of very
  different modalities such as sequencing and imaging. I will then present 
 a framework for integrating observational and interventional data for caus
 al structure discovery and characterize the causal relationships that are 
 identifiable from such data. We end with a theoretical analysis of autoencod
 ers linking overparameterization to memorization. In particular\, I will c
 haracterize the implicit bias of overparameterized autoencoders and show t
 hat such networks trained using standard optimization methods implement as
 sociative memory. Collectively\, our results have major implications for p
 lanning and learning from interventions in various application domains.\n
LOCATION:https://researchseminars.org/talk/OneWorldMINDS/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tamara Kolda (Sandia National Laboratories)
DTSTART:20200723T183000Z
DTEND:20200723T193000Z
DTSTAMP:20260422T225721Z
UID:OneWorldMINDS/14
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldMIND
 S/14/">Practical Leverage-Based Sampling for Low-Rank Tensor Decomposition
 </a>\nby Tamara Kolda (Sandia National Laboratories) as part of One World 
 MINDS seminar\n\n\nAbstract\nConventional algorithms for finding low-rank 
 canonical polyadic (CP) tensor decompositions are unwieldy for large spars
 e tensors. The CP decomposition can be computed by solving a sequence of o
 verdetermined least squares problems with special Khatri-Rao structure. In this wo
 rk\, we present an application of randomized algorithms to fitting the CP 
 decomposition of sparse tensors\, solving a significantly smaller sampled 
 least squares problem at each iteration with probabilistic guarantees on t
 he approximation errors. Prior work has shown that sketching is effective 
 in the dense case\, but the prior approach cannot be applied to the sparse
  case because a fast Johnson-Lindenstrauss transform (e.g.\, using a fast 
 Fourier transform) must be applied in each mode\, causing the sparse tenso
 r to become dense. Instead\, we perform sketching through leverage score s
 ampling\, crucially relying on the fact that the structure of the Khatri-R
 ao product allows sampling from overestimates of the leverage scores witho
 ut forming the full product or the corresponding probabilities. Naive appl
 ication of leverage score sampling is ineffective because we often have ca
 ses where a few scores are quite large\, so we propose a novel hybrid of d
 eterministic and random leverage-score sampling which consistently yields 
 improved fits. Numerical results on real-world large-scale tensors show th
 e method is significantly faster than competing methods without sacrificin
 g accuracy. This is joint work with Brett Larsen\, Stanford University.\n
LOCATION:https://researchseminars.org/talk/OneWorldMINDS/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mary Wootters (Stanford University)
DTSTART:20200730T183000Z
DTEND:20200730T193000Z
DTSTAMP:20260422T225721Z
UID:OneWorldMINDS/15
DESCRIPTION:by Mary Wootters (Stanford University) as part of One World MI
 NDS seminar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldMINDS/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Tselil Schramm (MIT)
DTSTART:20200806T183000Z
DTEND:20200806T193000Z
DTSTAMP:20260422T225721Z
UID:OneWorldMINDS/16
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldMIND
 S/16/">Reconciling Statistical Queries and the Low Degree Likelihood Ratio
 </a>\nby Tselil Schramm (MIT) as part of One World MINDS seminar\n\n\nAbst
 ract\nIn many high-dimensional statistics problems\, we observe informatio
 n-computation tradeoffs: given access to more data\, statistical estimatio
 n and inference tasks require fewer computational resources. Though this p
 henomenon is ubiquitous\, we lack rigorous evidence that it is inherent. I
 n the current day\, to prove that a statistical estimation task is computa
 tionally intractable\, researchers must prove lower bounds against each ty
 pe of algorithm\, one by one\, resulting in a "proliferation of lower boun
 ds". We scientists dream of a more general theory which unifies these lowe
 r bounds and explains computational intractability in an algorithm-indepen
 dent way.\n\nIn this talk\, I will make one small step towards realizing t
 his dream. I will demonstrate general conditions under which two popular f
 rameworks yield the same information-computation tradeoffs for high-dimens
 ional hypothesis testing: the first being statistical queries in the "SDA"
  framework\, and the second being hypothesis testing with low-degree hypot
 hesis tests\, also known as the low-degree likelihood ratio. Our equivalen
 ce theorems capture numerous well-studied high-dimensional learning proble
 ms: sparse PCA\, tensor PCA\, community detection\, planted clique\, and m
 ore.\n\nBased on joint work with Matthew Brennan\, Guy Bresler\, Samuel B.
  Hopkins and Jerry Li.\n
LOCATION:https://researchseminars.org/talk/OneWorldMINDS/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Dustin Mixon (Ohio State)
DTSTART:20200813T183000Z
DTEND:20200813T193000Z
DTSTAMP:20260422T225721Z
UID:OneWorldMINDS/17
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldMIND
 S/17/">Ingredients matter: Quick and easy recipes for estimating clusters\
 , manifolds\, and epidemics</a>\nby Dustin Mixon (Ohio State) as part of O
 ne World MINDS seminar\n\n\nAbstract\nData science resembles the culinary 
 arts in the sense that better ingredients allow for better results. We con
 sider three instances of this phenomenon. First\, we estimate clusters in 
 graphs\, and we find that more signal allows for faster estimation. Here\,
  "signal" refers to having more edges within planted communities than acro
 ss communities. Next\, in the context of manifolds\, we find that an infor
 mative prior allows for estimates of lower error. In particular\, we apply
  the prior that the unknown manifold enjoys a large\, unknown symmetry gro
 up. Finally\, we consider the problem of estimating parameters in epidemio
 logical models\, where we find that a certain diversity of data allows one
  to design estimation algorithms with provable guarantees. In this case\, 
 data diversity refers to certain combinatorial features of the social netw
 ork. Joint work with Jameson Cahill\, Charles Clum\, Hans Parshall\, and K
 aiying Xie.\n
LOCATION:https://researchseminars.org/talk/OneWorldMINDS/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Helmut Bölcskei (ETH Zürich)
DTSTART:20200820T183000Z
DTEND:20200820T193000Z
DTSTAMP:20260422T225721Z
UID:OneWorldMINDS/18
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/OneWorldMIND
 S/18/">Fundamental limits of learning in deep neural networks</a>\nby Helm
 ut Bölcskei (ETH Zürich) as part of One World MINDS seminar\n\n\nAbstrac
 t\nWe develop a theory that allows us to characterize the fundamental limits
  of learning in deep neural networks. Concretely\, we consider Kolmogorov-o
 ptimal approximation through deep neural networks with the guiding theme b
 eing a relation between the epsilon-entropy of the hypothesis class to be 
 learned and the complexity of the approximating network in terms of connec
 tivity and memory requirements for storing the network topology and the qu
 antized weights and biases. The theory we develop educes remarkable univer
 sality properties of deep networks. Specifically\, deep networks can Kolmo
 gorov-optimally learn essentially any hypothesis class. In addition\, we f
 ind that deep networks provide exponential approximation accuracy—i.e.\,
  the approximation error decays exponentially in the number of non-zero we
 ights in the network—of widely different functions including the multipl
 ication operation\, polynomials\, sinusoidal functions\, general smooth fu
 nctions\, and even one-dimensional oscillatory textures and fractal functi
 ons such as the Weierstrass function\, for both of which no methods achievi
 ng exponential approximation accuracy were previously known. We also show that i
 n the approximation of sufficiently smooth functions\, finite-width deep net
 works require strictly smaller connectivity than finite-depth wide network
 s. We conclude with an outlook on the further role our theory could play.\
 n
LOCATION:https://researchseminars.org/talk/OneWorldMINDS/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nir Sochen (University of Tel Aviv)
DTSTART:20200827T183000Z
DTEND:20200827T193000Z
DTSTAMP:20260422T225721Z
UID:OneWorldMINDS/19
DESCRIPTION:by Nir Sochen (University of Tel Aviv) as part of One World MI
 NDS seminar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldMINDS/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Daniel Potts (TU Chemnitz)
DTSTART:20200903T183000Z
DTEND:20200903T193000Z
DTSTAMP:20260422T225721Z
UID:OneWorldMINDS/20
DESCRIPTION:by Daniel Potts (TU Chemnitz) as part of One World MINDS semin
 ar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldMINDS/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Rima Alaifari
DTSTART:20200910T183000Z
DTEND:20200910T193000Z
DTSTAMP:20260422T225721Z
UID:OneWorldMINDS/21
DESCRIPTION:by Rima Alaifari as part of One World MINDS seminar\n\nAbstract
 : TBA\n
LOCATION:https://researchseminars.org/talk/OneWorldMINDS/21/
END:VEVENT
END:VCALENDAR
