BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:János Pach (Alfréd Rényi Institute\, Budapest\, Hungary and MIPT\
, Moscow\, Russia)
DTSTART;VALUE=DATE-TIME:20201010T120000Z
DTEND;VALUE=DATE-TIME:20201010T124000Z
DTSTAMP;VALUE=DATE-TIME:20240329T064826Z
UID:MLC_Workshop_Moscow/1
DESCRIPTION:Title: Bounded VC-dimension and Ramsey-type questions\nby János
Pach (Alfréd Rényi Institute\, Budapest\, Hungary and MIPT\, Moscow\, Russ
ia) as part of Machine Learning and Combinatorics Workshop\n\n\nAbstract\n
There is a growing body of results in Ramsey theory which provide better b
ounds or stronger conclusions under the additional assumption of bounded V
C-dimension. After giving a short survey of results of this type\, we prov
e the following theorem which settles a special case of the famous Schur-E
rdős problem. There exists a constant $c=c(d)$ such that any $m$-coloring
of the edges of a complete graph with at least $2^{cm}$ vertices\, in whic
h the family of neighborhoods of the vertices in each color has VC-dimensi
on at most $d$\, contains a monochromatic triangle. This result is be
st possible up to the value of $c$.\n\nJoint work with Jacob Fox and Andre
w Suk.\n
LOCATION:https://researchseminars.org/talk/MLC_Workshop_Moscow/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Omri Ben-Eliezer (Harvard University\, US)
DTSTART;VALUE=DATE-TIME:20201010T124500Z
DTEND;VALUE=DATE-TIME:20201010T132500Z
DTSTAMP;VALUE=DATE-TIME:20240329T064826Z
UID:MLC_Workshop_Moscow/2
DESCRIPTION:Title: Adversarially Robust Streaming Algorithms\nby Omri Ben-Eli
ezer (Harvard University\, US) as part of Machine Learning and Combinatori
cs Workshop\n\n\nAbstract\nStreaming algorithms are traditionally divided
into two categories: deterministic algorithms and randomized ones. These t
wo types of algorithms exhibit a tradeoff between robustness and efficienc
y: on one hand\, deterministic algorithms always return a correct output\,
but for many important streaming problems\, efficient deterministic algor
ithms do not exist. On the other hand\, efficient randomized algorithms ar
e known for a very wide range of problems\, but their correctness typicall
y relies on the assumption that the stream is fixed in advance\, which is
often unrealistic in practice.\n\nIn this talk\, I will focus on a midd
le ground that combines both robustness and efficiency: adversarially robu
st algorithms\, whose output is correct with high probability even when th
e stream elements are adaptively chosen as a function of previous outputs.
This regime has received surprisingly little attention until recently\, a
nd many intriguing problems are still open. In particular\, I shall discus
s the robustness of random sampling\, mention its close connections to com
binatorial online learning\, and if time permits\, present generic techniq
ues to transform “standard” randomized streaming algorithms for variou
s problems into robust ones. \n\nBased on joint works with Noga Alon\, Yuv
al Dagan\, Rajesh Jayaram\, Shay Moran\, Moni Naor\, David Woodruff\, and
Eylon Yogev.\n
LOCATION:https://researchseminars.org/talk/MLC_Workshop_Moscow/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Wolfgang Mulzer (Freie Universität Berlin\, Germany)
DTSTART;VALUE=DATE-TIME:20201010T133000Z
DTEND;VALUE=DATE-TIME:20201010T141000Z
DTSTAMP;VALUE=DATE-TIME:20240329T064826Z
UID:MLC_Workshop_Moscow/3
DESCRIPTION:Title: Asymmetric Convex Intersection Testing (ACIT)\nby Wolfgang
Mulzer (Freie Universität Berlin\, Germany) as part of Machine Learning
and Combinatorics Workshop\n\n\nAbstract\nLet $P$ be a set of $n$ points a
nd $H$ a set of $m$ halfspaces in $d$ dimensions. We denote by $ch(P)$ the
polytope obtained by taking the convex hull of $P$\, and by $fh(H)$ the p
olytope obtained by taking the intersection of the halfspaces in $H$. Our
goal is to decide whether $ch(P)$ and $fh(H)$ are disjoint. Even though
ACIT is a natural variant of classic LP-type pr
oblems that have been studied at length in the literature\, and despite it
s applications in the analysis of high-dimensional data sets\, it appears
that the problem has not been studied before.\n\nWe discuss how known appr
oaches can be used to attack the ACIT problem\, and we provide a very simp
le strategy that leads to a deterministic algorithm\, linear in $n$ and $m
$\, whose running time depends reasonably on the dimension $d$.\n\nBased o
n joint work with Luis Barba.\n
LOCATION:https://researchseminars.org/talk/MLC_Workshop_Moscow/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Open problem session
DTSTART;VALUE=DATE-TIME:20201010T150000Z
DTEND;VALUE=DATE-TIME:20201010T160000Z
DTSTAMP;VALUE=DATE-TIME:20240329T064826Z
UID:MLC_Workshop_Moscow/4
DESCRIPTION:Title: Open problem session\nby Open problem session as part of
Machine Learning and Combinatorics Workshop\n\n\nAbstract\nDuring the open
problem session\, all participants are welcome to propose an open problem
related to the topic of the workshop.\n
LOCATION:https://researchseminars.org/talk/MLC_Workshop_Moscow/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Shay Moran (Technion\, Haifa\, Israel)
DTSTART;VALUE=DATE-TIME:20201010T160000Z
DTEND;VALUE=DATE-TIME:20201010T164000Z
DTSTAMP;VALUE=DATE-TIME:20240329T064826Z
UID:MLC_Workshop_Moscow/5
DESCRIPTION:Title: On the expressiveness of comparison queries\nby Shay Moran
(Technion\, Haifa\, Israel) as part of Machine Learning and Combinatorics
Workshop\n\n\nAbstract\nComparisons are a classical and well studied algo
rithmic tool that is used in a variety of contexts and applications. We wi
ll discuss two manifestations of the expressiveness of these queries in ma
chine learning and algorithms (a more detailed overview is given below). B
oth manifestations are based on the notion of “inference dimension” wh
ich can be viewed as another instance of the fruitful link between machine
learning and discrete mathematics – a link dating back to the discovery
of the VC dimension. \n\n1. Active classification with comparison queries
[Kane\, Lovett\, Moran\, Zhang\, FOCS ’17] — Active learning is a mo
del for semi-supervised learning that captures situations in which unlabel
ed data is abundant and manually labelling it is expensive. We consider an
extension of active learning in which the learning algorithm may ask the
annotator to compare the distances of two examples from the boundary of th
eir label-class. For example\, in a recommendation system application (say
for restaurants)\, the annotator may be asked whether she liked or dislik
ed a specific restaurant (a label query)\; or which of two restaurants sh
e liked more (a comparison query). We prove that the use of compar
ison queries leads to an exponential improvement in the query complexity o
f some well studied problems. Specifically\, for the class of half-spaces\
, we show that under natural assumptions\, such as large margin or bounded
bit-description of the input examples\, it is possible to reveal all the
labels of a sample of size $n$ using approximately $O(\log n)$ queries.\n\n
2. Nearly optimal linear decision trees for $k$-SUM and related problems [
Kane\, Lovett\, Moran\, JACM ‘19] — We use the above framework to cons
truct linear decision trees for a variety of decision problems in combinat
orics and discrete geometry. For example\, for any $k$\, we construct linear
decision trees that solve the $k$-SUM problem on $n$ elements using $O(n
k \\log n)$ linear queries. Moreover\, the queries we use are comparison q
ueries\, which compare the sums of two $k$-subsets\; when viewed as linear
queries\, comparison queries are $2k$-sparse and have only $\{-1\,+1\}$
coefficients. We give similar constructions for sorting sumsets $A+B$ and
for solving the SUBSET-SUM problem\, both with optimal number of queries\
, up to poly-logarithmic terms.\n
LOCATION:https://researchseminars.org/talk/MLC_Workshop_Moscow/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Shachar Lovett (University of California San Diego\, US)
DTSTART;VALUE=DATE-TIME:20201010T164500Z
DTEND;VALUE=DATE-TIME:20201010T172500Z
DTSTAMP;VALUE=DATE-TIME:20240329T064826Z
UID:MLC_Workshop_Moscow/6
DESCRIPTION:Title: Point location with near-optimal bounds\nby Shachar Lovett
(University of California San Diego\, US) as part of Machine Learning and
Combinatorics Workshop\n\n\nAbstract\nThe point location problem is a cla
ssical problem in discrete and computational geometry. Given a known parti
tion of $\\mathbb R^d$ by $n$ hyperplanes into cells\, and an unknown poin
t\, find as efficiently as possible the cell in which the point lies. Acce
ss to the unknown point is done indirectly\, via linear queries with respe
ct to hyperplanes.\n\nThis problem is equivalent to another well-studied p
roblem\, this time in machine learning: active learning of linear classifi
ers. Here\, there are $n$ known points in $\\mathbb R^d$\, which are label
ed by an unknown hyperplane. The goal is to find\, as efficiently as poss
ible\, the labeling of all $n$ points. Similarly\, access to the unknown h
yperplane is done indirectly\, in the form of label queries.\n\nFor both p
roblems\, there is a simple information-theoretic lower bound of $\\Omega(
d \\log n)$ on the number of queries needed. Despite 40 years of work\, th
e best known upper bounds had a polynomial dependence on the dimension $d$
. In this work\, we achieve for the first time a near-linear dependence on
the dimension. The proof combines tools from geometry\, analysis and mach
ine learning.\n\nJoint work with Max Hopkins\, Daniel Kane and Gaurav Maha
jan.\n\nPaper will appear in FOCS 2020.\n
LOCATION:https://researchseminars.org/talk/MLC_Workshop_Moscow/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Amir Yehudayoff (Technion\, Haifa\, Israel)
DTSTART;VALUE=DATE-TIME:20201011T120000Z
DTEND;VALUE=DATE-TIME:20201011T124000Z
DTSTAMP;VALUE=DATE-TIME:20240329T064826Z
UID:MLC_Workshop_Moscow/7
DESCRIPTION:Title: Trichotomy of rates in supervised learning\nby Amir Yehuda
yoff (Technion\, Haifa\, Israel) as part of Machine Learning and Combinato
rics Workshop\n\n\nAbstract\nWe show that in supervised learning there are
only three learning rates possible: exponential\, linear and arbitrarily
slow. \n\nJoint work with Bousquet\, Hanneke\, Moran\, and van Handel.\n
LOCATION:https://researchseminars.org/talk/MLC_Workshop_Moscow/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Roi Livni (Tel Aviv University\, Israel)
DTSTART;VALUE=DATE-TIME:20201011T124500Z
DTEND;VALUE=DATE-TIME:20201011T132500Z
DTSTAMP;VALUE=DATE-TIME:20240329T064826Z
UID:MLC_Workshop_Moscow/8
DESCRIPTION:Title: Graph-Based Discrimination\nby Roi Livni (Tel Aviv Univers
ity\, Israel) as part of Machine Learning and Combinatorics Workshop\n\n\n
Abstract\nA basic question in learning theory is to identify if two distri
butions are identical when we have access only to examples sampled from th
e distributions. This basic task arises in the context of learning\, but
also in the context of Generative Adversarial Networks (GANs)\, where a di
scriminator is trained to distinguish between a real-life distribution and
a synthetic distribution. Classically\, we use a hypothesis class $H$ and c
laim that the two distributions are distinct if for some hypothesis in $H$
the expected value on the two distributions is (significantly) different.
Our starting point is the following fundamental question: “is having the
hypothesis depend on more than a single random example beneficial?” To
address this question we introduce $k$-ary discriminators\, which can be
modeled by a family of (hyper)graphs. Each hypergraph is used to test whe
ther the distributions are distinct by estimating the probability of a
(hyper)edge. We study the expressiveness of such graph-based discriminato
rs and compare them to the classical setting of learning\, which is $k=1$.
We show a separation in the expressiveness of $(k+1)$-ary vs. $k$-ary gra
ph-based discriminators and introduce a combinatorial measure\, called graph-VC
dimension\, and show that it controls the sample complexity.\n
LOCATION:https://researchseminars.org/talk/MLC_Workshop_Moscow/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Andrey Kupavskii (MIPT\, Moscow\, Russia and CNRS\, Grenoble\, Fra
nce)
DTSTART;VALUE=DATE-TIME:20201011T133000Z
DTEND;VALUE=DATE-TIME:20201011T141000Z
DTSTAMP;VALUE=DATE-TIME:20240329T064826Z
UID:MLC_Workshop_Moscow/9
DESCRIPTION:Title: VC-dimension of polytopes\nby Andrey Kupavskii (MIPT\, Mos
cow\, Russia and CNRS\, Grenoble\, France) as part of Machine Learning and
Combinatorics Workshop\n\n\nAbstract\nWe discuss recent progress on the q
uestion of the VC-dimension of $d$-dimensional polytopes with $k$ facets
/ $k$ vertices. For the former class\, we determine the order of growth of
the VC-dimension\, which turns out to be superlinear in $k$ (joint with C
sicos and Mustafa). For the latter class\, we show the first polynomial up
per bound on the VC-dimension.\n
LOCATION:https://researchseminars.org/talk/MLC_Workshop_Moscow/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Steve Hanneke (Toyota Technological Institute at Chicago\, US)
DTSTART;VALUE=DATE-TIME:20201011T150000Z
DTEND;VALUE=DATE-TIME:20201011T154000Z
DTSTAMP;VALUE=DATE-TIME:20240329T064826Z
UID:MLC_Workshop_Moscow/10
DESCRIPTION:Title: Proper Learning\, Helly Number\, and an Optimal SVM Bound
\nby Steve Hanneke (Toyota Technological Institute at Chicago\, US) as par
t of Machine Learning and Combinatorics Workshop\n\n\nAbstract\nThe classi
cal PAC sample complexity bounds are stated for any Empirical Risk Minimiz
er (ERM) and contain an extra logarithmic factor $\\log(1/\\varepsilon)$ w
hich is known to be necessary for ERM in general. It has been recently sho
wn by Hanneke (2016) that the optimal sample complexity of PAC learning fo
r any VC class $C$ does not include this $\\log$ factor and is achieved by
a particular improper learning algorithm\, which outputs a specific major
ity-vote of hypotheses in $C$. This leaves the question of when this bound
can be achieved by proper learning algorithms\, which are restricted to a
lways output a hypothesis from $C$.\n\nIn this work we aim to characterize
the classes for which the optimal sample complexity can be achieved by a
proper learning algorithm. We identify that these classes can be character
ized by the dual Helly number\, which is a combinatorial parameter that ar
ises in discrete geometry and abstract convexity. In particular\, under ge
neral conditions on $C$\, we show that the dual Helly number is bounded if
and only if there is a proper learner that obtains the optimal dependence
on $\\varepsilon$.\n\nAs further implications of our techniques we resolv
e a long-standing open problem posed by Vapnik and Chervonenkis (1974) on
the performance of the Support Vector Machine by proving that the sample c
omplexity of SVM in the realizable case is $\\Theta((n/\\varepsilon)+(1/\\
varepsilon)\log(1/\delta))$\, where $n$ is the dimension. This gives the f
irst optimal PAC bound for Halfspaces achieved by a proper learning algori
thm\, and moreover is computationally efficient.\n
LOCATION:https://researchseminars.org/talk/MLC_Workshop_Moscow/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Hunter Chase (University of Illinois at Chicago\, US)
DTSTART;VALUE=DATE-TIME:20201011T154500Z
DTEND;VALUE=DATE-TIME:20201011T162500Z
DTSTAMP;VALUE=DATE-TIME:20240329T064826Z
UID:MLC_Workshop_Moscow/11
DESCRIPTION:Title: Query learning\, Littlestone dimension\, and consistency dime
nsion\nby Hunter Chase (University of Illinois at Chicago\, US) as
part of Machine Learning and Combinatorics Workshop\n\n\nAbstract\nWe use
Littlestone dimension coupled with consistency dimension to characterize e
fficient learning by equivalence and membership queries. We also briefly d
escribe how these combinatorial measures connect machine learning with mod
el theory\, where many examples\, for query learning and other kinds of le
arning\, can be found.\n
LOCATION:https://researchseminars.org/talk/MLC_Workshop_Moscow/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jacob Fox (Stanford University\, US)
DTSTART;VALUE=DATE-TIME:20201011T163000Z
DTEND;VALUE=DATE-TIME:20201011T171000Z
DTSTAMP;VALUE=DATE-TIME:20240329T064826Z
UID:MLC_Workshop_Moscow/12
DESCRIPTION:Title: Bounded VC-dimension and Extremal Graph Theory\nby Jacob
Fox (Stanford University\, US) as part of Machine Learning and Combinatori
cs Workshop\n\n\nAbstract\nWe survey a variety of tools and results on ext
remal problems for graphs of bounded VC-dimension. This is based on joint
works with János Pach and Andrew Suk.\n
LOCATION:https://researchseminars.org/talk/MLC_Workshop_Moscow/12/
END:VEVENT
END:VCALENDAR