BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Juergen Jost (MPI MIS)
DTSTART:20211025T140000Z
DTEND:20211025T144500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/1
DESCRIPTION:by Juergen Jost (MPI MIS) as part of CMO-Bound-Geometry & Lear
 ning from Data\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Anna Seigal (Harvard University)
DTSTART:20211025T150000Z
DTEND:20211025T154500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/2
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5239/
 2/">Groups and symmetries in Gaussian graphical models</a>\nby Anna Seigal
  (Harvard University) as part of CMO-Bound-Geometry & Learning from Data\n
 \n\nAbstract\nWe can use groups and symmetries to define new statistical m
 odels\, and to investigate them. In this talk\, I will discuss two familie
 s of multivariate Gaussian models:\n1. RDAG models: graphical models on di
 rected graphs with coloured vertices and edges\,\n2. Gaussian group models
 : multivariate Gaussian models that are parametrised by a group.\nI will f
 ocus on maximum likelihood estimation\, an optimisation problem to obtain 
 parameters in the model that best fit observed data. For RDAG models and G
 aussian group models\, the existence of the maximum likelihood estimate re
 lates to linear algebra conditions and to stability notions from invariant
  theory. This talk is based on joint work with Carlos Améndola\, Kathlén
  Kohn\, Visu Makam\, and Philipp Reichenbach.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Shantanu Joshi (UCLA)
DTSTART:20211025T160000Z
DTEND:20211025T164500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/3
DESCRIPTION:by Shantanu Joshi (UCLA) as part of CMO-Bound-Geometry & Learn
 ing from Data\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nancy Arana-Daniel (Universidad de Guadalajara)
DTSTART:20211025T180000Z
DTEND:20211025T184500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/4
DESCRIPTION:by Nancy Arana-Daniel (Universidad de Guadalajara) as part of 
 CMO-Bound-Geometry & Learning from Data\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Benjamin Sanchez-Lengeling (Google Research)
DTSTART:20211025T190000Z
DTEND:20211025T194500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/5
DESCRIPTION:by Benjamin Sanchez-Lengeling (Google Research) as part of CMO
 -Bound-Geometry & Learning from Data\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sophie Achard (CNRS University of Grenoble)
DTSTART:20211026T140000Z
DTEND:20211026T144500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/6
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5239/
 6/">Learning from brain data</a>\nby Sophie Achard (CNRS University of Gre
 noble) as part of CMO-Bound-Geometry & Learning from Data\n\n\nAbstract\nN
 oninvasive neuroimaging of the functioning brain is providing\nvery pro
 mising data sets to study the complex organisation of brain\nareas. It
  is not only possible to identify responses of brain areas to a\ncognit
 ive stimulus but also to model the interactions between brain\nareas.
   The hu
 man brain can be modelled as a network or graph where\nbrain areas are nod
 es of the graph and interactions of pairs are the\nedges of the graph. The
  brain connectivity network is small-world with\na combination of segrega
 tion and integration characteristics. In this\ntalk\, I will present recen
 t advances to understand and compare brain\ndata using learning approaches
 . A particular focus on the reliability of\nthe methods will be given. Fin
 ally\, examples on various pathologies will\nhighlight the possible altera
 tions and resilience of the brain network.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nihat Ay (TUHH)
DTSTART:20211026T150000Z
DTEND:20211026T154500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/7
DESCRIPTION:by Nihat Ay (TUHH) as part of CMO-Bound-Geometry & Learning fr
 om Data\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Pratik Chaudhari (University of Pennsylvania)
DTSTART:20211026T160000Z
DTEND:20211026T164500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/8
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5239/
 8/">(Towards the) Foundations of Small Data</a>\nby Pratik Chaudhari (Univ
 ersity of Pennsylvania) as part of CMO-Bound-Geometry & Learning from Data
 \n\n\nAbstract\nThe relevant limit for machine learning is not N → infin
 ity but\ninstead N → 0. The human visual system is proof that it is poss
 ible to\nlearn categories with extremely few samples. This talk will discu
 ss\nsteps towards building such systems and it is structured in three\npar
 ts. The first part will discuss algorithms to adapt representations\nof de
 ep networks to new categories with few labeled data. The second\npart will
  discuss when such adaptation works well and while doing so\,\nit will dev
 elop a method to compute the information-theoretically\noptimal distance b
 etween two learning tasks. The third part will\ndiscuss tools to learn tas
 ks that are "far away" from each other and\nwill point to new methods for 
 multi-task and continual learning.\n\nThis talk will discuss results from 
 the following papers.\n1. An Information-Geometric Distance on the Space o
 f Tasks. Yansong\nGao\, Pratik Chaudhari. ICML 2021. https://arxiv.org/abs
 /2011.00613.\nCode: https://github.com/Yansongga/An-Information-Geometric-
 Distance-on-the-Space-of-Tasks\n2. Boosting a Model Zoo for Multi-Task and
  Continual Learning. Rahul\nRamesh\, Pratik Chaudhari. https://arxiv.org/a
 bs/2106.03027. Code:\nhttps://github.com/rahul13ramesh/MultitTask_ModelZoo
 \n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ruriko Yoshida (Naval Postgraduate School)
DTSTART:20211026T180000Z
DTEND:20211026T184500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/9
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5239/
 9/">Tree Topologies along a Tropical Line Segment</a>\nby Ruriko Yoshi
 da (Naval Postgraduate School) as part of CMO-Bound-Geometry & Learning f
 rom Data\n\n\nAbstract\nTropical geometry with the max-plus algebra has be
 en applied to statistical learning models over tree spaces because geometr
 y with the tropical metric over tree spaces has some nice properties such 
 as convexity in terms of the tropical metric.  One of the challenges in ap
 plications of tropical geometry to tree spaces is the difficulty interpret
 ing outcomes of statistical models with the tropical metric.  We focus on 
 combinatorics of tree topologies along a tropical line segment\, an intrin
 sic geodesic with the tropical metric\, between two phylogenetic trees ove
 r the tree space and we show some properties of a tropical line segment be
 tween two trees.  Specifically\, we show that the probability of a trop
 ical line segment between two randomly chosen trees going through the o
 rigin (the star tree) is zero if the number of leaves is greater than f
 our\, and we also show that if two given trees differ by only one neare
 st neighbor interchange (NNI) move\, then the tree topology of a tree i
 n the tropical line segment between them is the same as the tree topolo
 gy of one of the given two trees\, with possible zero branch lengths.
   This is joint work with Shelby Cox.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jun Zhang (University of Michigan)
DTSTART:20211026T190000Z
DTEND:20211026T194500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/10
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5239/
 10/">Information Geometry: A Tutorial</a>\nby Jun Zhang (University of Mic
 higan) as part of CMO-Bound-Geometry & Learning from Data\n\n\nAbstract\n
 Information Geometry is the differential geometric study of the set of all
  probability distributions on a given sample space\, modeled as a differen
 tiable manifold where each point represents one probability distribution w
 ith its parameter serving as local coordinates. Such manifold is equipped 
 with a natural Riemannian metric (Fisher-Rao metric) and a family of affin
 e connections (alpha-connections) that define parallel transport of score 
 functions as tangent vectors. Starting from the motivating example of the 
 family of univariate normal distributions on a continuous support and of th
 e probability simplex as a family on discrete support\, I will explain how
  divergence functions (or contrast functions) measuring directed distance 
 on a manifold\, e.g.\, Kullback-Leibler divergence\, Bregman divergence\, 
 f-divergence\, etc. are tied to Legendre duality and convex analysis\, and
 how they in turn generate the underlying dualistic geometry of what i
 s known as the “statistical manifold”. The case of maximum entropy (or
  minimum divergence) inference will be highlighted\, since it is linked to
  the exponential family and the dually-flat (Hessian) geometric structure\
 , the simplest and the most well-understood example of information geometr
 y. If time permits\, I will introduce new developments including the state-
 of-the-art understanding of deformation models\, in which generalized entr
 opy (for instance\, Tsallis entropy\, Renyi entropy\, phi-entropy) replace
 s Shannon entropy and deformed divergence replaces KL and Bregman divergen
 ces. Deformed exponential families reveal “escort statistics” and a “g
 auge freedom” that are buried in the standard exponential family. Thi
 s tutorial attempts to give a gentle introduction to information geometry 
 to a non-geometric audience.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yalbi Itzel Balderas-Martinez (Instituto Nacional de Enfermedades 
 Respiratorias)
DTSTART:20211026T213000Z
DTEND:20211026T223000Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/11
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5239/
 11/">Panel: AI & Public Institutions</a>\nby Yalbi Itzel Balderas-Martinez
  (Instituto Nacional de Enfermedades Respiratorias) as part of CMO-Bound-G
 eometry & Learning from Data\n\n\nAbstract\nA conversation with public act
 ors and stakeholders\, with a focus on AI use cases in Mexican Public In
 stitutions (Government\, Science Planning\, and Healthcare). With Dr. Ed
 uardo Ulises Moya\, Dra. Paola Villareal\, and Dra. Yalbi Itzel Balderas M
 artinez.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Maks Ovsjanikov (LIX Ecole Polytechnique)
DTSTART:20211027T140000Z
DTEND:20211027T144500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/12
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5239/
 12/">Efficient learning on curved surfaces via diffusion</a>\nby Maks Ovsj
 anikov (LIX Ecole Polytechnique) as part of CMO-Bound-Geometry & Learning 
 from Data\n\n\nAbstract\nIn this talk I will describe several approaches f
 or learning on curved surfaces\, represented as point clouds or triangle m
 eshes. I will first give a brief overview of geodesic convolutional neural
  networks (GCNNs) and their variants and then present a recent approach th
 at replaces this paradigm with an efficient framework that is based on dif
 fusion. The key property of this approach is that it avoids potentially
  error-prone and costly operations\, such as local patch discretization
 \, replacing them with robust and efficient building blocks that are ba
 sed on learned diffusion
  and gradient computation. I will then show several applications\, ranging
  from RNA surface segmentation to non-rigid shape correspondence\, while h
 ighlighting the invariance of this technique to sampling and triangle mesh
  structure.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Xavier Pennec (Université Côte d'Azur and INRIA)
DTSTART:20211027T150000Z
DTEND:20211027T154500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/13
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5239/
 13/">Curvature effects in Geometric statistics: empirical Fréchet mean an
 d parallel transport accuracy</a>\nby Xavier Pennec (Université Côte d'A
 zur and INRIA) as part of CMO-Bound-Geometry & Learning from Data\n\n\nAbs
 tract\nTwo fundamental tools for statistics on objects living in non-linea
 r manifolds are the Fréchet mean and the parallel transport. We present i
 n this talk new results based on Gavrilov's tensorial series expansions
  that allow us to quantify the accuracy of these two fundamental tools
  and to put forward the impact of the manifold curvature.\n\nA central
  limit theorem f
 or the empirical Fréchet mean was established in Riemannian manifolds by 
 Bhattacharya & Patrangenaru in 2005. We propose an asymptotic development 
 valid in Riemannian and affine cases which better explains the role of t
 he curvature in the concentration of the empirical Fréchet mean towards
  the population mean with a finite number of samples. We also establish
  a new n
 on-asymptotic (small sample) expansion in high concentration conditions wh
 ich shows a statistical bias on the empirical mean in the direction of the
  average gradient of the curvature. These curvature effects become importa
 nt with large curvature and can drastically modify the estimation of the m
 ean. They could partly explain the phenomenon of sticky means recently o
 bserved in stratified spaces with negative curvature\, and smeary m
 eans in positive curvature.\n\nParallel transport is a second major tool\,
  for instance to transport longitudinal deformation trajectories from ea
 ch individual towards a template brain shape before performing group-wis
 e statistics in longitudinal analyses. More generally\, parallel transpo
 rt should be the natural geometric formulation for domain adaptation in
  mach
 ine learning in non-linear spaces. In previous works\, we have built on th
 e Schild's ladder principle to engineer a more symmetric discrete parallel
  transport scheme based on iterated geodesic parallelograms\, called pole 
 ladder. This scheme is surprisingly exact in only one step on symmetric sp
 aces\, which makes it quite interesting for many applications involving si
 mple symmetric manifolds. For general manifolds\, Schild's and pole ladder
 s were thought to be of first order with respect to the number of steps\, 
 similarly to other schemes based on Jacobi fields. However\, the literatur
 e was lacking a real convergence performance analysis when the scheme is i
 terated. We show that pole ladder naturally converges with quadratic speed
 \, and that Schild's ladder can be modified to perform identically even wh
 en geodesics are approximated by numerical schemes. This contrasts with Ja
 cobi fields approximations that are bound to linear convergence. The extra
  computational cost of ladder methods is thus easily compensated by a dras
 tic reduction of the number of steps needed to achieve the required accur
 acy.\n\n\n* Xavier Pennec. Curvature effects on the empirical mean in Riem
 annian and affine Manifolds: a non-asymptotic high concentration expansion
  in the small-sample regime. Note: Working paper or preprint\, June 2019. 
 arXiv:1906.07418\n* Nicolas Guigui and Xavier Pennec. Numerical Accurac
 y of Ladder Schemes for Parallel Transport on Manifolds. Foundations of
  Computational Mathematics\, June 2021. arXiv:2007.07585\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Chris Connell (Indiana University Bloomington)
DTSTART:20211027T160000Z
DTEND:20211027T164500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/14
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5239/
 14/">Tensor decomposition based network embedding algorithms for predictio
 n tasks on dynamic networks.</a>\nby Chris Connell (Indiana University Blo
 omington) as part of CMO-Bound-Geometry & Learning from Data\n\n\nAbstract
 \nClassical network embeddings create a low dimensional representation of 
 the learned relationships between features across nodes. Such embeddings a
 re important for tasks such as link prediction and node classification. We
  consider low dimensional embeddings of “dynamic networks” -- a family
  of time varying networks where there exist both temporal and spatial link
  relationships between nodes. We present novel embedding methods for a dyn
 amic network based on higher order tensor decompositions of tensorial rep
 resentations of the dynamic network. Our embeddings are analogous to certa
 in classical spectral embedding methods for static networks. We demonstrat
 e the effectiveness of our approach by comparing our algorithms' performan
 ce on the link prediction task against an array of current baseline method
 s across three distinct real-world dynamic networks. Finally\, we provide 
 a mathematical rationale for this effectiveness in the regime of small inc
 remental changes. This is joint work with Yang Wang.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nina Miolane (UC Santa Barbara)
DTSTART:20211027T180000Z
DTEND:20211027T184500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/15
DESCRIPTION:by Nina Miolane (UC Santa Barbara) as part of CMO-Bound-Geomet
 ry & Learning from Data\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Katy Craig (University of California Santa Barbara)
DTSTART:20211027T190000Z
DTEND:20211027T194500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/16
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5239/
 16/">A Blob Method for Diffusion and Applications to Sampling and Two Laye
 r Neural Networks</a>\nby Katy Craig (University of California Santa Barba
 ra) as part of CMO-Bound-Geometry & Learning from Data\n\n\nAbstract\nGive
 n a desired target distribution and an initial guess of that distribution\
 , composed of finitely many samples\, what is the best way to evolve the l
 ocations of the samples so that they accurately represent the desired dist
 ribution? A classical solution to this problem is to allow the samples to 
 evolve according to Langevin dynamics\, a stochastic particle method for t
 he Fokker-Planck equation. In today’s talk\, I will contrast this classi
 cal approach with a deterministic particle method corresponding to the por
 ous medium equation. This method corresponds exactly to the mean-field dyn
 amics of training a two layer neural network for a radial basis function a
 ctivation function. We prove that\, as the number of samples increases and
  the variance of the radial basis function goes to zero\, the particle met
 hod converges to a bounded entropy solution of the porous medium equation.
  As a consequence\, we obtain both a novel method for sampling probability
  distributions as well as insight into the training dynamics of two layer 
 neural networks in the mean field regime.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alex Cloninger (University of California San Diego)
DTSTART:20211028T140000Z
DTEND:20211028T144500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/17
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5239/
 17/">Learning with Optimal Transport</a>\nby Alex Cloninger (University of
  California San Diego) as part of CMO-Bound-Geometry & Learning from Data\
 n\n\nAbstract\nDiscriminating between distributions is an important proble
 m in a number of scientific fields. This motivated the introduction of Lin
 ear Optimal Transportation (LOT)\, which has a number of benefits when it 
 comes to speed of computation and to determining classification boundaries
 . We characterize a number of settings in which the LOT embeds families of
  distributions into a space in which they are linearly separable. This is 
 true in arbitrary dimensions\, and for families of distributions generated
  through a variety of actions on a fixed distribution.  We also establish 
 results on discrete spaces using Entropically Regularized Optimal Transpor
 t\, and present results about active learning with a small number of lab
 els in the space of LOT embeddings.  This is joint work with Caroline Moos
 mueller (UCSD).\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ron Kimmel (Technion-Israel Institute of Technology)
DTSTART:20211028T150000Z
DTEND:20211028T154500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/18
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5239/
 18/">On Geometry and Learning</a>\nby Ron Kimmel (Technion-Israel Institut
 e of Technology) as part of CMO-Bound-Geometry & Learning from Data\n\n\nA
 bstract\nGeometry means understanding in the sense that it involves findin
 g the most basic invariants or Ockham’s razor explanation for a given ph
 enomenon. At the other end\, modern Machine Learning has little to do with
  explanation or interpretation of solutions to a given problem.\nI’ll tr
 y to give some examples about the relation between learning and geometry\,
  focusing on learning geometry\, starting with the most basic notion of pl
 anar shape invariants\, efficient distance computation on surfaces\,  and 
 treating surfaces as metric spaces within a deep learning framework. I wil
 l introduce some links between these two seemingly orthogonal philosophica
 l directions.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:David Alvarez Melis (Microsoft Research\, MIT\, Harvard)
DTSTART:20211028T180000Z
DTEND:20211028T184500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/19
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5239/
 19/">Principled Data Manipulation with Optimal Transport</a>\nby David Alv
 arez Melis (Microsoft Research\, MIT\, Harvard) as part of CMO-Bound-Geome
 try & Learning from Data\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Elizabeth Gross (University of Hawaii at Manoa)
DTSTART:20211028T190000Z
DTEND:20211028T194500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/20
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5239/
 20/">Learning phylogenetic networks using invariants</a>\nby Elizabeth Gro
 ss (University of Hawaii at Manoa) as part of CMO-Bound-Geometry & Learnin
 g from Data\n\n\nAbstract\nPhylogenetic networks provide a means of descri
 bing the evolutionary history of sets of species believed to have undergon
 e hybridization or gene flow during the course of their evolution. The mut
 ation process for a set of such species can be modeled as a Markov process
 on a phylogenetic network. Previous work has shown that site-pattern pr
 obability distributions from a Jukes-Cantor phylogenetic network model mus
 t satisfy certain algebraic invariants. As a corollary\, aspects of the ph
 ylogenetic network are theoretically identifiable from site-pattern freque
 ncies. In practice\, because of the probabilistic nature of sequence evolu
 tion\, the phylogenetic network invariants will rarely be satisfied\, even
  for data generated under the model. Thus\, using network invariants for i
 nferring phylogenetic networks requires some means of interpreting the res
 iduals\, or deviations from zero\, when observed site-pattern frequencies 
 are substituted into the invariants. In this work\, we propose a machine l
 earning algorithm utilizing invariants to infer small\, level-one phylogen
 etic networks. Given a data set\, the algorithm is trained on model data t
 o learn the patterns of residuals corresponding to different network struc
 tures to classify the network that produced the data.  This is joint work 
 with Travis Barton\, Colby Long\, and Joseph Rusinko.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Eliza O'Reilly (Caltech)
DTSTART:20211028T204500Z
DTEND:20211028T213000Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/21
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5239/
 21/">Random Tessellation Features and Forests</a>\nby Eliza O'Reilly (Calt
 ech) as part of CMO-Bound-Geometry & Learning from Data\n\n\nAbstract\nThe
  Mondrian process in machine learning is a recursive partition of space wi
 th random axis-aligned cuts used to build random forests and Laplace kerne
 l approximations.  The construction allows for efficient online algorithms
 \, but the restriction to axis-aligned cuts does not capture dependencies 
 between features. By viewing the Mondrian as a special case of the stable 
 under iteration (STIT) process in stochastic geometry\, we resolve open que
 stions about the generalization of cut directions. We utilize the theory o
 f stationary random tessellations to show that STIT processes approximate 
 a large class of stationary kernels and STIT forests achieve minimax rates
  for Lipschitz functions (forests and trees) and C^2 functions (forests on
 ly). This work opens many new questions at the novel intersection of stoch
 astic geometry and machine learning. Based on joint work with Ngoc Tran.\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/21/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ilke Demir (Intel)
DTSTART:20211028T213000Z
DTEND:20211028T221500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/22
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/CMO-21w5239/
 22/">Panel: AI & Industry</a>\nby Ilke Demir (Intel) as part of CMO-Bound
 -Geometry & Learning from Data\n\n\nAbstract\nA conversation with several 
 actors and researchers about their roles in AI & Industry\, with Ilke Demi
 r (Intel)\, Juan Carlos Catana (HP Labs Mx) and David Alvarez Melis (Micro
 soft Research).\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/22/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nina Otter (Queen Mary University London)
DTSTART:20211029T150000Z
DTEND:20211029T154500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/23
DESCRIPTION:by Nina Otter (Queen Mary University London) as part of CMO-Bo
 und-Geometry & Learning from Data\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/23/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Facundo Memoli (The Ohio State University)
DTSTART:20211029T160000Z
DTEND:20211029T164500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/24
DESCRIPTION:by Facundo Memoli (The Ohio State University) as part of CMO-B
 ound-Geometry & Learning from Data\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/24/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Soledad Villar (Johns Hopkins University)
DTSTART:20211029T180000Z
DTEND:20211029T184500Z
DTSTAMP:20260422T185417Z
UID:CMO-21w5239/25
DESCRIPTION:by Soledad Villar (Johns Hopkins University) as part of CMO-Bo
 und-Geometry & Learning from Data\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/CMO-21w5239/25/
END:VEVENT
END:VCALENDAR
