BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Jinglai Li (University of Birmingham)
DTSTART;VALUE=DATE-TIME:20200707T120000Z
DTEND;VALUE=DATE-TIME:20200707T130000Z
DTSTAMP;VALUE=DATE-TIME:20200812T030334Z
UID:DSCSS/1
DESCRIPTION:Title: Maximum conditional entropy Hamiltonian Monte Carlo sam
pler\nby Jinglai Li (University of Birmingham) as part of Data Science and
Computational Statistics Seminar\n\nAbstract: TBA\n
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jinming Duan (University of Birmingham)
DTSTART;VALUE=DATE-TIME:20200714T130000Z
DTEND;VALUE=DATE-TIME:20200714T140000Z
DTSTAMP;VALUE=DATE-TIME:20200812T030334Z
UID:DSCSS/2
DESCRIPTION:Title: Cardiac Magnetic Resonance Image Segmentation with Anat
omical Knowledge\nby Jinming Duan (University of Birmingham) as part of Da
ta Science and Computational Statistics Seminar\n\n\nAbstract\nThis talk f
ocuses on segmentation of cardiac magnetic resonance (CMR) images from bot
h healthy and pathological subjects. Specifically\, we will propose three
different approaches that explicitly consider geometry (anatomy) informati
on of the heart.\n\nFirst\, we introduce a novel deep level set method\, w
hich explicitly considers the image features learned from a deep neural ne
twork. To this end\, we estimate joint probability maps over both region a
nd edge locations in CMR images using a fully convolutional network. Due t
o the distinct morphology of pulmonary hypertension (PH) hearts\, these pr
obability maps can then be incorporated in a single nested level set optim
isation framework to achieve multi-region segmentation with high efficienc
y. We show results on CMR cine images and demonstrate that the proposed me
thod leads to substantial improvements for CMR image segmentation in PH pa
tients.\n\nSecond\, we propose a multi-task deep learning approach with at
las propagation to develop a shape-refined bi-ventricular segmentation pip
eline for short-axis CMR volumetric images. The pipeline combines the comp
utational advantage of 2.5D FCN networks and the capability of addressing
3D spatial consistency without compromising segmentation accuracy. A refi
nement step is introduced to overcome image artefacts (e.g.\, due to di
fferent breath-hold positions and large slice thickness)\, which preclude
the creation of anatomically meaningful 3D cardiac shapes. Extensive numer
ical experiments on the two large datasets show that our method is robust
and capable of producing accurate\, high-resolution\, and anatomically smo
oth bi-ventricular 3D models\, despite the presence of artefacts in input
CMR volumes.\n\nLastly\, accelerating the CMR acquisition is essential. Ho
wever\, reconstructing high-quality images from accelerated CMR acquisitio
n is a nontrivial problem. As such\, I will show how deep neural networks
can be developed to bypass the usual image reconstruction stage. The metho
d applies shape prior knowledge through an auto-encoder. With this prior
knowledge\, we improve both the CMR acquisition time and segmentation acc
uracy.\n
END:VEVENT
BEGIN:VEVENT
SUMMARY:Wei Zhang (Zuse Institute Berlin)
DTSTART;VALUE=DATE-TIME:20200721T120000Z
DTEND;VALUE=DATE-TIME:20200721T130000Z
DTSTAMP;VALUE=DATE-TIME:20200812T030334Z
UID:DSCSS/3
DESCRIPTION:Title: Recent developments of Monte Carlo sampling strategies
for probability distributions on submanifolds\nby Wei Zhang (Zuse Institut
e Berlin) as part of Data Science and Computational Statistics Seminar\n\n
\nAbstract\nMonte Carlo sampling for probability distributions on submanif
olds arises in many applications in molecular dynamics\, statistical
mechanics and Bayesian computation. In this talk\, I will discuss two
types of Monte Carlo schemes that have been developed in recent years. The first
type of schemes is based on the ergodicity of stochastic differential equ
ations (SDEs) on submanifolds and is asymptotically unbiased as the step-s
ize vanishes. The second type of schemes consists of Markov chain Monte Ca
rlo (MCMC) algorithms that are unbiased when finite step-sizes are used. I
will discuss the role of projections onto submanifolds\, as well as the n
ecessity of the so-called "reversibility check" step in MCMC schemes on s
ubmanifolds\, which was first pointed out by Goodman\, Holmes-Cerfon and Zappa
. During the talk\, I will illustrate both types of schemes with some nume
rical examples.\n
END:VEVENT
BEGIN:VEVENT
SUMMARY:Long Tran-Thanh (University of Warwick)
DTSTART;VALUE=DATE-TIME:20200728T120000Z
DTEND;VALUE=DATE-TIME:20200728T130000Z
DTSTAMP;VALUE=DATE-TIME:20200812T030334Z
UID:DSCSS/4
DESCRIPTION:Title: On COPs\, Bandits\, and AI for Good\nby Long Tran-Thanh
(University of Warwick) as part of Data Science and Computational Statist
ics Seminar\n\n\nAbstract\nIf you have a question about this talk\, please
contact Hong Duong.\n\nIn recent years\, there has been an increasing i
nterest in applying techniques from artificial intelligence (AI) to tackle
societal and environmental challenges\, ranging from climate change and n
atural disasters\, to food safety and disease spread. These efforts are ty
pically known under the name AI for Good. While much research in this
area has focused on designing machine learning algorithms to learn
new insights or predict future events from previously collected data\, there
is another domain where AI has been found to be useful\, namely resource
allocation and decision making. In particular\, a key step in addressing
societal/environmental challenges is to efficiently allocate a set of scar
ce resources to mitigate the problem(s). For example\, in the case of wild
fire\, a decision maker has to adaptively and sequentially allocate a limi
ted number of firefighting units to stop the spread of the fire as soon as
possible. Another example comes from the problem of housing management fo
r people in need\, where a limited number of housing units have to be allo
cated to applicants in an online manner over time.\n\nWhile sequential res
ource allocation can often be cast as (online) combinatorial optimisatio
n problems (COPs)\, these problems can differ from standard COPs when the decisi
on maker has to perform under uncertainty (e.g.\, the value of the action
is not known in advance\, or future events are unknown at the decision mak
ing stage). In the presence of such uncertainty\, a popular tool from the
decision making literature\, called multi-armed bandits\, comes in handy.
In this talk\, I will demonstrate how to efficiently combine COPs with ban
dit models to tackle some AI for Good problems. In particular\, I will first sh
ow how to combine knapsack models with combinatorial bandits to efficientl
y allocate firefighting units and drones to mitigate wildfires. In the sec
ond part of the talk\, I will demonstrate how interval scheduling\, paired
up with blocking bandits\, can be a useful approach as a housing assignme
nt method for people in need.\n\nShort bio of the speaker:\n\nLong is a Hu
ngarian-Vietnamese computer scientist at the University of Warwick\, UK\,
where he is currently an Associate Professor. He obtained his PhD in Compu
ter Science from Southampton in 2012\, under the supervision of Nick Jenni
ngs and Alex Rogers. Long has been doing active research in a number of ke
y areas of Artificial Intelligence and multi-agent systems\, mainly focusi
ng on multi-armed bandits\, game theory\, and incentive engineering\, and
their applications to crowdsourcing\, human-agent learning\, and AI for Go
od. He has published more than 60 papers at top AI conferences (AAAI\, AAM
AS\, ECAI\, IJCAI\, NeurIPS\, UAI) and journals (JAAMAS\, AIJ)\, and h
as received a number of national/international awards\, such as:\n\n(i) B
CS/CPHC Best Computer Science PhD Dissertation Award (2012/13) – Honour
able Mention\; (ii) ECCAI/EurAI Best Artificial Intelligence Dissertation
Award (2012/13) – Honourable Mention\; (iii) AAAI Outstanding Paper Awa
rd (2012) – Honourable Mention (out of more than 1000 submissions)\;
(iv) ECAI Best Student Paper Award (2012) – Runner-Up (out of more than 60
0 submissions)\; and (v) IJCAI 2019 Early Career Spotlight Talk – invited\n\n
Long currently serves as a board member (2018-2024) of the IFAAMAS Directo
ry Board\, the main international governing body of the International Fede
ration for Autonomous Agents and Multiagent Systems\, a major sub-field of
the AI community. He is also the local chair of the AAMAS 2021 conference
\, which will be held in London\, UK.\n
END:VEVENT
BEGIN:VEVENT
SUMMARY:Xin Tong (National University of Singapore)
DTSTART;VALUE=DATE-TIME:20200804T130000Z
DTEND;VALUE=DATE-TIME:20200804T140000Z
DTSTAMP;VALUE=DATE-TIME:20200812T030334Z
UID:DSCSS/5
DESCRIPTION:Title: Can algorithms collaborate? The replica exchange method
\nby Xin Tong (National University of Singapore) as part of Data Science a
nd Computational Statistics Seminar\n\n\nAbstract\nGradient descent (GD) i
s known to converge quickly for convex objective functions\, but it can be
trapped at local minima. On the other hand\, Langevin dynamics (LD) can e
xplore the state space and find global minima\, but in order to give accur
ate estimates\, LD needs to run with a small discretization step size and
weak stochastic force\, which in general slow down its convergence. This p
aper shows that these two algorithms can "collaborate" through a simple
exchange mechanism\, in which they swap their current positions if LD yie
lds a lower objective function. This idea can be seen as the singular limi
t of the replica-exchange technique from the sampling literature. We show
that this new algorithm converges to the global minimum linearly with high
probability\, assuming the objective function is strongly convex in a nei
ghborhood of the unique global minimum. By replacing gradients with stocha
stic gradients\, and adding a proper threshold to the exchange mechanism\,
our algorithm can also be used in online settings. We further verify our
theoretical results through some numerical experiments\, and observe super
ior performance of the proposed algorithm over running GD or LD alone.\n
END:VEVENT
BEGIN:VEVENT
SUMMARY:Matthias Sachs (Duke University)
DTSTART;VALUE=DATE-TIME:20200811T120000Z
DTEND;VALUE=DATE-TIME:20200811T130000Z
DTSTAMP;VALUE=DATE-TIME:20200812T030334Z
UID:DSCSS/6
DESCRIPTION:Title: Non-reversible Markov chain Monte Carlo for sampling of
districting maps\nby Matthias Sachs (Duke University) as part of Data Sci
ence and Computational Statistics Seminar\n\nAbstract\nFollowing th
e 2010 census\, excessive gerrymandering (i.e.\, the design of electoral dis
tricting maps in such a way that outcomes are tilted in favor of a certain
political power/party) has become an increasingly prevalent practice in s
everal US states. Recent approaches to quantify the degree of such partisa
n districting use a random ensemble of districting plans which are drawn f
rom a prescribed probability distribution that adheres to certain non-part
isan criteria. In this talk\, I will discuss the construction of non-reversi
ble Markov chain Monte Carlo (MCMC) methods for sampling such districti
ng plans as instances of what we term the Mixed skewed Metropolis-Hastings
algorithm (MSMH)—a novel construction of non-reversible Markov chains w
hich relies on a generalization of what is commonly known as skew detailed
balance.\n
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yunwen Lei (University of Kaiserslautern)
DTSTART;VALUE=DATE-TIME:20200818T120000Z
DTEND;VALUE=DATE-TIME:20200818T130000Z
DTSTAMP;VALUE=DATE-TIME:20200812T030334Z
UID:DSCSS/7
DESCRIPTION:Title: Statistical Learning by Stochastic Gradient Descent\nby
Yunwen Lei (University of Kaiserslautern) as part of Data Science and Com
putational Statistics Seminar\n\n\nAbstract\nStochastic gradient descent (
SGD) has become the workhorse behind many machine learning problems. Optim
ization and estimation errors are two competing factors responsible fo
r the prediction behavior of SGD. In this talk\, we report our generaliza
tion analysis of SGD by considering simultaneously the optimization and es
timation errors. We remove some restrictive assumptions in the literature
and significantly improve the existing generalization bounds. Our results
help to understand how to stop SGD early to achieve the best generalization perf
ormance.\n
END:VEVENT
BEGIN:VEVENT
SUMMARY:Andrew Duncan (Imperial College London)
DTSTART;VALUE=DATE-TIME:20200825T120000Z
DTEND;VALUE=DATE-TIME:20200825T130000Z
DTSTAMP;VALUE=DATE-TIME:20200812T030334Z
UID:DSCSS/8
DESCRIPTION:Title: On the geometry of Stein variational gradient descent\n
by Andrew Duncan (Imperial College London) as part of Data Science and Com
putational Statistics Seminar\n\n\nAbstract\nBayesian inference problems r
equire sampling or approximating high-dimensional probability distribution
s. The focus of this talk is on the recently introduced Stein variational
gradient descent methodology\, a class of algorithms that rely on iterated
steepest descent steps with respect to a reproducing kernel Hilbert space
norm. This construction leads to interacting particle systems\, the mean-
field limit of which is a gradient flow on the space of probability distri
butions equipped with a certain geometrical structure. We leverage this vi
ewpoint to shed some light on the convergence properties of the algorithm\
, in particular addressing the problem of choosing a suitable positive def
inite kernel function. Our analysis leads us to consider certain singul
ar kernels with adjusted tails. This is joint work with N. Nusken (U. of P
otsdam) and L. Szpruch (U. Edinburgh).\n
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nikolas Nüsken (University of Potsdam)
DTSTART;VALUE=DATE-TIME:20200901T120000Z
DTEND;VALUE=DATE-TIME:20200901T130000Z
DTSTAMP;VALUE=DATE-TIME:20200812T030334Z
UID:DSCSS/9
DESCRIPTION:Title: Solving high-dimensional Hamilton-Jacobi-Bellman PDEs u
sing neural networks: perspectives from the theory of controlled diffusion
s and measures on path space\nby Nikolas Nüsken (University of Potsdam) a
s part of Data Science and Computational Statistics Seminar\n\n\nAbstract\
nThe first part of this presentation will review connections between probl
ems in the optimal control of diffusion processes\, Hamilton-Jacobi-Bellma
n equations and forward-backward SDEs\, with applications in rar
e event simulation and stochastic filtering in mind. The second part will explain
a recent approach based on divergences between probability measures on pat
h space and variational inference that can be used to construct appropriat
e loss functions in a machine learning framework. This is joint work with
Lorenz Richter.\n
END:VEVENT
END:VCALENDAR