BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Jinglai Li (University of Birmingham)
DTSTART;VALUE=DATE-TIME:20200707T120000Z
DTEND;VALUE=DATE-TIME:20200707T130000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/1
DESCRIPTION:Title: Ma
ximum conditional entropy Hamiltonian Monte Carlo sampler\nby Jinglai
Li (University of Birmingham) as part of Data Science and Computational St
atistics Seminar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/DSCSS/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jinming Duan (University of Birmingham)
DTSTART;VALUE=DATE-TIME:20200714T130000Z
DTEND;VALUE=DATE-TIME:20200714T140000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/2
DESCRIPTION:Title: Ca
rdiac Magnetic Resonance Image Segmentation with Anatomical Knowledge\
nby Jinming Duan (University of Birmingham) as part of Data Science and Co
mputational Statistics Seminar\n\n\nAbstract\nThis talk focuses on segment
ation of cardiac magnetic resonance (CMR) images from both healthy and pat
hological subjects. Specifically\, we will propose three different approac
hes that explicitly consider geometry (anatomy) information of the heart.\
n\nFirst\, we introduce a novel deep level set method\, which explicitly c
onsiders the image features learned from a deep neural network. To this en
d\, we estimate joint probability maps over both region and edge locations
in CMR images using a fully convolutional network. Due to the distinct mo
rphology of pulmonary hypertension (PH) hearts\, these probability maps ca
n then be incorporated in a single nested level set optimisation framework
to achieve multi-region segmentation with high efficiency. We show result
s on CMR cine images and demonstrate that the proposed method leads to sub
stantial improvements for CMR image segmentation in PH patients.\n\nSecond
\, we propose a multi-task deep learning approach with atlas propagation t
o develop a shape-refined bi-ventricular segmentation pipeline for short-a
xis CMR volumetric images. The pipeline combines the computational advanta
ge of 2.5D FCNs and the capability of addressing 3D spatial consi
stency without compromising segmentation accuracy. A refinement step is in
troduced for overcoming image artefacts (e.g.\, due to different breath-ho
ld positions and large slice thickness)\, which preclude the creation of a
natomically meaningful 3D cardiac shapes. Extensive numerical experiments
on the two large datasets show that our method is robust and capable of pr
oducing accurate\, high-resolution\, and anatomically smooth bi-ventricula
r 3D models\, despite the presence of artefacts in input CMR volumes.\n\nL
astly\, accelerating the CMR acquisition is essential. However\, reconstru
cting high-quality images from accelerated CMR acquisition is a nontrivial
problem. As such\, I will show how deep neural networks can be developed
to bypass the usual image reconstruction stage. The method applies shape p
rior knowledge through an auto-encoder. Owing to this prior knowledge\, we i
mprove both the CMR acquisition time and segmentation accuracy.\n
LOCATION:https://researchseminars.org/talk/DSCSS/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Wei Zhang (Zuse Institute Berlin)
DTSTART;VALUE=DATE-TIME:20200721T120000Z
DTEND;VALUE=DATE-TIME:20200721T130000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/3
DESCRIPTION:Title: Re
cent developments of Monte Carlo sampling strategies for probability distr
ibutions on submanifolds\nby Wei Zhang (Zuse Institute Berlin) as part
of Data Science and Computational Statistics Seminar\n\n\nAbstract\nMonte
Carlo sampling for probability distributions on submanifolds is involved
in many applications in molecular dynamics\, statistical mechanics and Bay
esian computation. In this talk\, I will discuss two types of Monte Ca
rlo schemes that are developed in recent years. The first type of schemes
is based on the ergodicity of stochastic differential equations (SDEs) on
submanifolds and is asymptotically unbiased as the step-size vanishes. The
second type of schemes consists of Markov chain Monte Carlo (MCMC) algori
thms that are unbiased when finite step-sizes are used. I will discuss the
role of projections onto submanifolds\, as well as the necessity of the s
o-called "reversibility check" step in MCMC schemes on submanifolds\, whic
h was first pointed out by Goodman\, Holmes-Cerfon and Zappa. During the talk
\, I will illustrate both types of schemes with some numerical examples.\n
LOCATION:https://researchseminars.org/talk/DSCSS/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Long Tran-Thanh (University of Warwick)
DTSTART;VALUE=DATE-TIME:20200728T120000Z
DTEND;VALUE=DATE-TIME:20200728T130000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/4
DESCRIPTION:Title: On
COPs\, Bandits\, and AI for Good\nby Long Tran-Thanh (University of W
arwick) as part of Data Science and Computational Statistics Seminar\n\n\n
Abstract\nIf you have a question about this talk\, please contact Hong Duo
ng.\n\nIn recent years there has been an increasing interest in applyi
ng techniques from artificial intelligence (AI) to tackle societal and env
ironmental challenges\, ranging from climate change and natural disasters\
, to food safety and disease spread. These efforts are typically known und
er the name AI for Good. While much of the research work in this area has been f
ocusing on designing machine learning algorithms to learn new insights/pre
dict future events from previously collected data\, there is another domai
n where AI has been found to be useful\, namely: resource allocation and d
ecision making. In particular\, a key step in addressing societal/environm
ental challenges is to efficiently allocate a set of scarce resources to m
itigate the problem(s). For example\, in the case of wildfire\, a decision
maker has to adaptively and sequentially allocate a limited number of fir
efighting units to stop the spread of the fire as soon as possible. Anothe
r example comes from the problem of housing management for people in need\
, where a limited number of housing units have to be allocated to applican
ts in an online manner over time.\n\nWhile sequential resource allocation pro
blems can often be cast as (online) combinatorial optimisation problems (COPs)
\, they can differ from the standard COPs when the decision maker has to p
erform under uncertainty (e.g.\, the value of the action is not known in a
dvance\, or future events are unknown at the decision making stage). In th
e presence of such uncertainty\, a popular tool from the decision making l
iterature\, called multi-armed bandits\, comes in handy. In this talk\, I
will demonstrate how to efficiently combine COPs with bandit models to tac
kle some AI for Good problems. In particular\, I first show how to combine
knapsack models with combinatorial bandits to efficiently allocate firefi
ghting units and drones to mitigate wildfires. In the second part of the t
alk\, I will demonstrate how interval scheduling\, paired up with blocking
bandits\, can be a useful approach as a housing assignment method for peo
ple in need.\n\nShort bio of the speaker:\n\nLong is a Hungarian-Vietnames
e computer scientist at the University of Warwick\, UK\, where he is curre
ntly an Associate Professor. He obtained his PhD in Computer Science from
Southampton in 2012\, under the supervision of Nick Jennings and Alex Roge
rs. Long has been doing active research in a number of key areas of Artifi
cial Intelligence and multi-agent systems\, mainly focusing on multi-armed
bandits\, game theory\, and incentive engineering\, and their application
s to crowdsourcing\, human-agent learning\, and AI for Good. He has publis
hed more than 60 papers at top AI conferences (AAAI\, AAMAS\, ECAI\, IJCAI\, Neu
rIPS\, UAI) and journals (JAAMAS\, AIJ)\, and has received a number of nation
al/international awards\, such as:\n\n(i) BCS/CPHC Best Computer Science PhD Di
ssertation Award (2012/13) – Honourable Mention\; (ii) ECCAI/EurAI Best Arti
ficial Intelligence Dissertation Award (2012/13) – Honourable Mention\; (ii
i) AAAI Outstanding Paper Award (2012) – Honourable Mention (out of more th
an 1000 submissions)\; (iv) ECAI Best Student Paper Award (2012) – Runner-U
p (out of more than 600 submissions)\; and (v) IJCAI 2019 Early Career Spotl
ight Talk – invited.\n\nLong currently se
rves as a member (2018-2024) of the board of directors of IFAAMAS\, the Inte
rnational Federation for Autonomous Agents and Multiagent Systems\, the main i
nternational governing body of a major sub-field of the AI community
. He is also the local chair of the AAMAS 2021 conference\, which will be
held in London\, UK.\n
LOCATION:https://researchseminars.org/talk/DSCSS/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Xin Tong (National University of Singapore)
DTSTART;VALUE=DATE-TIME:20200804T130000Z
DTEND;VALUE=DATE-TIME:20200804T140000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/5
DESCRIPTION:Title: Ca
n algorithms collaborate? The replica exchange method\nby Xin Tong (Na
tional University of Singapore) as part of Data Science and Computational
Statistics Seminar\n\n\nAbstract\nGradient descent (GD) is known to conver
ge quickly for convex objective functions\, but it can be trapped at local
minima. On the other hand\, Langevin dynamics (LD) can explore the state
space and find global minima\, but in order to give accurate estimates\, L
D needs to run with a small discretization step size and weak stochastic f
orce\, which in general slow down its convergence. This paper shows that t
hese two algorithms can "collaborate" through a simple exchange mechani
sm\, in which they swap their current positions if LD yields a lower objec
tive function. This idea can be seen as the singular limit of the replica-
exchange technique from the sampling literature. We show that this new alg
orithm converges to the global minimum linearly with high probability\, as
suming the objective function is strongly convex in a neighborhood of the
unique global minimum. By replacing gradients with stochastic gradients\,
and adding a proper threshold to the exchange mechanism\, our algorithm ca
n also be used in online settings. We further verify our theoretical resul
ts through some numerical experiments\, and observe superior performance o
f the proposed algorithm over running GD or LD alone.\n
LOCATION:https://researchseminars.org/talk/DSCSS/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Matthias Sachs (Duke University)
DTSTART;VALUE=DATE-TIME:20200811T120000Z
DTEND;VALUE=DATE-TIME:20200811T130000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/6
DESCRIPTION:Title: No
n-reversible Markov chain Monte Carlo for sampling of districting maps
\nby Matthias Sachs (Duke University) as part of Data Science and Computat
ional Statistics Seminar\n\n\nAbstract\nFollowing the 2010 census\, exc
essive Gerrymandering (i.e.\, the design of electoral districting maps in
such a way that outcomes are tilted in favor of a certain political power/
party) has become an increasingly prevalent practice in several US states.
Recent approaches to quantify the degree of such partisan districting use
a random ensemble of districting plans which are drawn from a prescribed
probability distribution that adheres to certain non-partisan criteria. In
this talk I will discuss the construction of non-reversible Markov chain
Monte-Carlo (MCMC) methods for sampling of such districting plans as insta
nces of what we term the Mixed skewed Metropolis-Hastings algorithm (MSMH)
—a novel construction of non-reversible Markov chains which relies on a
generalization of what is commonly known as skew detailed balance.\n
LOCATION:https://researchseminars.org/talk/DSCSS/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yunwen Lei (University of Kaiserslautern)
DTSTART;VALUE=DATE-TIME:20200818T120000Z
DTEND;VALUE=DATE-TIME:20200818T130000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/7
DESCRIPTION:Title: St
atistical Learning by Stochastic Gradient Descent\nby Yunwen Lei (Univ
ersity of Kaiserslautern) as part of Data Science and Computational Statis
tics Seminar\n\n\nAbstract\nStochastic gradient descent (SGD) has become t
he workhorse behind many machine learning problems. Optimization and estim
ation errors are two competing factors responsible for the prediction beha
vior of SGD. In this talk\, we report our generalization analysis of SGD by c
onsidering the optimization and estimation errors simultaneously. We remov
e some restrictive assumptions in the literature and significantly improv
e the existing generalization bounds. Our results help to understand how t
o stop SGD early to obtain the best generalization performance.\n
LOCATION:https://researchseminars.org/talk/DSCSS/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Andrew Duncan (Imperial College London)
DTSTART;VALUE=DATE-TIME:20200825T120000Z
DTEND;VALUE=DATE-TIME:20200825T130000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/8
DESCRIPTION:Title: On
the geometry of Stein variational gradient descent\nby Andrew Duncan
(Imperial College London) as part of Data Science and Computational Statis
tics Seminar\n\n\nAbstract\nBayesian inference problems require sampling o
r approximating high-dimensional probability distributions. The focus of t
his talk is on the recently introduced Stein variational gradient descent
methodology\, a class of algorithms that rely on iterated steepest descent
steps with respect to a reproducing kernel Hilbert space norm. This const
ruction leads to interacting particle systems\, the mean-field limit of wh
ich is a gradient flow on the space of probability distributions equipped
with a certain geometrical structure. We leverage this viewpoint to shed s
ome light on the convergence properties of the algorithm\, in particular a
ddressing the problem of choosing a suitable positive definite kernel func
tion. Our analysis leads us to consider certain singular kernels with a
djusted tails. This is joint work with N. Nüsken (U. of Potsdam) and L. Sz
pruch (U. Edinburgh).\n
LOCATION:https://researchseminars.org/talk/DSCSS/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nikolas Nüsken (University of Potsdam)
DTSTART;VALUE=DATE-TIME:20200901T120000Z
DTEND;VALUE=DATE-TIME:20200901T130000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/9
DESCRIPTION:Title: So
lving high-dimensional Hamilton-Jacobi-Bellman PDEs using neural networks:
perspectives from the theory of controlled diffusions and measures on pat
h space\nby Nikolas Nüsken (University of Potsdam) as part of Data Sc
ience and Computational Statistics Seminar\n\n\nAbstract\nThe first part o
f this presentation will review connections between problems in the optima
l control of diffusion processes\, Hamilton-Jacobi-Bellman equations and f
orward-backward SDEs\, having in mind applications in rare event simulatio
n and stochastic filtering. The second part will explain a recent approach
based on divergences between probability measures on path space and varia
tional inference that can be used to construct appropriate loss functions
in a machine learning framework. This is joint work with Lorenz Richter.\n
LOCATION:https://researchseminars.org/talk/DSCSS/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:The Anh Han (Teesside University)
DTSTART;VALUE=DATE-TIME:20201027T150000Z
DTEND;VALUE=DATE-TIME:20201027T160000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/10
DESCRIPTION:Title: T
o Regulate or Not: A Social Dynamics Analysis of an Idealised Artificial I
ntelligence Race\nby The Anh Han (Teesside University) as part of Data
Science and Computational Statistics Seminar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/DSCSS/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Panayiota Touloupou (University of Birmingham)
DTSTART;VALUE=DATE-TIME:20201110T150000Z
DTEND;VALUE=DATE-TIME:20201110T160000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/11
DESCRIPTION:Title: S
calable inference for epidemic models with individual level data.\nby
Panayiota Touloupou (University of Birmingham) as part of Data Science and
Computational Statistics Seminar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/DSCSS/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Wil Ward (University of Sheffield)
DTSTART;VALUE=DATE-TIME:20201124T150000Z
DTEND;VALUE=DATE-TIME:20201124T160000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/12
DESCRIPTION:Title: G
aussian process techniques for non-linear multidimensional dynamical sys
tems\nby Wil Ward (University of Sheffield) as part of Data Science an
d Computational Statistics Seminar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/DSCSS/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Dennis Sun (California Polytechnic State University)
DTSTART;VALUE=DATE-TIME:20201208T150000Z
DTEND;VALUE=DATE-TIME:20201208T160000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/13
DESCRIPTION:by Dennis Sun (California Polytechnic State University) as par
t of Data Science and Computational Statistics Seminar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/DSCSS/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Boumediene Hamzi (Imperial College London)
DTSTART;VALUE=DATE-TIME:20201217T150000Z
DTEND;VALUE=DATE-TIME:20201217T160000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/14
DESCRIPTION:Title: M
achine Learning and Dynamical Systems meet in Reproducing Kernel Hilbert S
paces\nby Boumediene Hamzi (Imperial College London) as part of Data S
cience and Computational Statistics Seminar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/DSCSS/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Lequan Yu (Stanford University)
DTSTART;VALUE=DATE-TIME:20210202T160000Z
DTEND;VALUE=DATE-TIME:20210202T170000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/15
DESCRIPTION:Title: M
edical Image Analysis with Data-efficient Learning\nby Lequan Yu (Stan
ford University) as part of Data Science and Computational Statistics Semi
nar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/DSCSS/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Daniel Sanz-Alonso (University of Chicago)
DTSTART;VALUE=DATE-TIME:20210216T150000Z
DTEND;VALUE=DATE-TIME:20210216T160000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/16
DESCRIPTION:by Daniel Sanz-Alonso (University of Chicago) as part of Data
Science and Computational Statistics Seminar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/DSCSS/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Manfred Opper (University of Birmingham)
DTSTART;VALUE=DATE-TIME:20210302T150000Z
DTEND;VALUE=DATE-TIME:20210302T160000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/17
DESCRIPTION:by Manfred Opper (University of Birmingham) as part of Data Sc
ience and Computational Statistics Seminar\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/DSCSS/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Oanh Nguyen (University of Illinois at Urbana-Champaign)
DTSTART;VALUE=DATE-TIME:20210316T150000Z
DTEND;VALUE=DATE-TIME:20210316T160000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/18
DESCRIPTION:Title: R
oots of random functions\nby Oanh Nguyen (University of Illinois at Ur
bana-Champaign) as part of Data Science and Computational Statistics Semin
ar\n\n\nAbstract\nRandom functions are linear combinations of deterministi
c functions using independent random coefficients. Several important examp
les are the Kac polynomial\, Weyl polynomial\, and random orthogonal polyn
omials. Random functions appear naturally in physics and approximation the
ory and remain mysterious despite decades of intensive research. We will p
resent our approaches via the local universality method to study questions
about the roots. As one of the applications\, we prove that the number of
real roots of a wide class of random polynomials satisfies the Central Li
mit Theorem.\n
LOCATION:https://researchseminars.org/talk/DSCSS/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Franca Hoffmann (University of Bonn)
DTSTART;VALUE=DATE-TIME:20210427T120000Z
DTEND;VALUE=DATE-TIME:20210427T130000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/19
DESCRIPTION:Title: K
alman-Wasserstein Gradient Flows\nby Franca Hoffmann (University of Bo
nn) as part of Data Science and Computational Statistics Seminar\n\n\nAbst
ract\nWe study a class of interacting particle systems that may be used fo
r optimization. By considering the mean-field limit one obtains a nonlinea
r Fokker-Planck equation. This equation exhibits a gradient structure in p
robability space\, based on a modified Wasserstein distance which reflects
particle correlations: the Kalman-Wasserstein metric. This setting gives
rise to a methodology for calibrating and quantifying uncertainty for para
meters appearing in complex computer models which are expensive to run\, a
nd cannot readily be differentiated. This is achieved by connecting the in
teracting particle system to ensemble Kalman methods for inverse problems.
This is joint work with Alfredo Garbuno-Inigo (Caltech)\, Wuchen Li (UCLA
) and Andrew Stuart (Caltech).\n
LOCATION:https://researchseminars.org/talk/DSCSS/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Furqan Aziz (University of Birmingham)
DTSTART;VALUE=DATE-TIME:20210525T140000Z
DTEND;VALUE=DATE-TIME:20210525T150000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/20
DESCRIPTION:Title: B
acktrackless walks on a graph\nby Furqan Aziz (University of Birmingha
m) as part of Data Science and Computational Statistics Seminar\n\n\nAbstr
act\nThe aim of this talk is to explore the use and applications of backtr
ackless walks on a graph. We will discuss how the backtrackless walks and
the coefficients of the reciprocal of the Ihara zeta function\, which are
related to the frequencies of prime cycles in the graph\, can be used to i
mplement graph kernels. We will further present explicit methods for compu
ting the eigensystem of the edge-based Laplacian of a graph. This reveals
a connection between the eigenfunctions of the edge-based Laplacian and bo
th the classical random walk and the backtrackless random walk on a graph.
The definition of edge-based Laplacian allows us to define and implement
more complex partial differential equations on graphs such as the second o
rder wave equation.\n
LOCATION:https://researchseminars.org/talk/DSCSS/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Aretha Teckentrup (University of Edinburgh)
DTSTART;VALUE=DATE-TIME:20210608T140000Z
DTEND;VALUE=DATE-TIME:20210608T150000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/21
DESCRIPTION:Title: C
onvergence\, Robustness and Flexibility of Gaussian Process Regression
\nby Aretha Teckentrup (University of Edinburgh) as part of Data Science a
nd Computational Statistics Seminar\n\n\nAbstract\nWe are interested in th
e task of estimating an unknown function from a set of point evaluations.
In this context\, Gaussian process regression is often used as a Bayesian
inference procedure. However\, hyper-parameters appearing in the mean and
covariance structure of the Gaussian process prior\, such as smoothness of
the function and typical length scales\, are often unknown and learnt fro
m the data\, along with the posterior mean and covariance.\n\nIn the first
part of the talk\, we will study the robustness of Gaussian process regre
ssion with respect to mis-specification of the hyper-parameters\, and prov
ide a convergence analysis of the method applied to a fixed\, unknown func
tion of interest [1].\n\nIn the second part of the talk\, we discuss deep
Gaussian processes as a class of flexible non-stationary prior distributio
ns [2].\n\n[1] A.L. Teckentrup. Convergence of Gaussian process regression
with estimated hyper-parameters and applications in Bayesian inverse prob
lems. SIAM/ASA Journal on Uncertainty Quantification\, 8(4)\, p. 1310-1337
\, 2020.\n\n[2] M.M. Dunlop\, M.A. Girolami\, A.M. Stuart\, A.L. Teckentru
p. How deep are deep Gaussian processes? Journal of Machine Learning Resea
rch\, 19(54)\, 1-46\, 2018.\n
LOCATION:https://researchseminars.org/talk/DSCSS/21/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yulong Lu (University of Massachusetts)
DTSTART;VALUE=DATE-TIME:20210629T140000Z
DTEND;VALUE=DATE-TIME:20210629T150000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/22
DESCRIPTION:Title: A
priori generalization error analysis of neural network methods for solvin
g high dimensional elliptic PDEs\nby Yulong Lu (University of Massachu
setts) as part of Data Science and Computational Statistics Seminar\n\n\nA
bstract\nNeural network-based machine learning methods\, most notably deep l
earning\, have achieved extraordinary successes in numerous fields. Despit
e the rapid development of learning algorithms based on neural networks\, t
heir mathematical analysis is far from complete. In particular\, it remain
s a big mystery why neural network-based machine learning methods work ext
remely well for solving high dimensional problems.\n\nI
n this talk\, we will demonstrate the power of neural network methods for
solving high dimensional elliptic PDEs. Specifically\, we will discuss an
a priori generalization error analysis of the Deep Ritz Method for solving
two classes of high dimensional Schrödinger problems: the stationary Sch
rödinger equation and the ground state of the Schrödinger operator. Assumin
g the exact solution or the ground state lies in a low-complexity function
space called spectral Barron space\, we show that the convergence rate of
the generalization error is independent of dimension. We also develop a n
ew regularity theory for the PDEs under consideration on the spectral Barro
n space. This can be viewed as an analog of the classical Sobolev regularit
y theory for PDEs.\n
LOCATION:https://researchseminars.org/talk/DSCSS/22/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Neil Chada (King Abdullah University of Science and Technology)
DTSTART;VALUE=DATE-TIME:20210706T140000Z
DTEND;VALUE=DATE-TIME:20210706T150000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/23
DESCRIPTION:Title: U
nbiased Inference for Discretely observed Hidden Markov Model Diffusions
\nby Neil Chada (King Abdullah University of Science and Technology) as par
t of Data Science and Computational Statistics Seminar\n\n\nAbstract\nW
e develop a Bayesian inference method for diffusions observed discretely a
nd with noise\, which is free of discretisation bias. Unlike existing unbi
ased inference methods\, our method does not rely on exact simulation tech
niques. Instead\, our method uses standard time-discretised approximations o
f diffusions\, such as the Euler-Maruyama scheme. Our approach is based on p
article marginal Metropolis-Hastings\, a particle filter\, randomi
sed multilevel Monte Carlo\, and importance sampling type correction of ap
proximate Markov chain Monte Carlo. The resulting estimator leads to infer
ence without a bias from the time-discretisation as the number of Markov c
hain iterations increases. We give convergence results and recommend alloc
ations for algorithm inputs. Our method admits a straightforward paralleli
sation\, and can be computationally efficient. The user-friendly approach
is illustrated in three examples\, where the underlying diffusion is an Or
nstein-Uhlenbeck process\, a geometric Brownian motion\, and a 2d non-re
versible Langevin equation.\n
LOCATION:https://researchseminars.org/talk/DSCSS/23/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Allen Hart (University of Bath)
DTSTART;VALUE=DATE-TIME:20210720T140000Z
DTEND;VALUE=DATE-TIME:20210720T150000Z
DTSTAMP;VALUE=DATE-TIME:20240329T054926Z
UID:DSCSS/24
DESCRIPTION:Title: E
cho state networks applied to market making problems\nby Allen Hart (U
niversity of Bath) as part of Data Science and Computational Statistics Se
minar\n\n\nAbstract\nIn this talk\, we discuss how a special type of recur
rent neural network called an Echo State Network (ESN) can be applied to s
upervised learning problems involving time series. We train the ESN using
linear regression\, and despite the training process being entirely linear
\, the ESN retains the universal approximation property.\n\nWe discuss bri
efly how an ESN can be used to solve supervised learning problems\, before
moving onto the more complex problem of reinforcement learning. We demons
trate the theory by applying the ESN to a simple market making problem tha
t appears in mathematical finance.\n
LOCATION:https://researchseminars.org/talk/DSCSS/24/
END:VEVENT
END:VCALENDAR