BEGIN:VCALENDAR
VERSION:2.0
PRODID:researchseminars.org
CALSCALE:GREGORIAN
X-WR-CALNAME:researchseminars.org
BEGIN:VEVENT
SUMMARY:Huacheng Yu (Princeton University)
DTSTART:20200422T170000Z
DTEND:20200422T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/1
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/1/">
 Nearly Optimal Static Las Vegas Succinct Dictionary</a>\nby Huacheng Yu (P
 rinceton University) as part of TCS+\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/TCSPlus/1/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sepideh Mahabadi (TTIC)
DTSTART:20200429T170000Z
DTEND:20200429T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/2
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/2/">
 Non-Adaptive Adaptive Sampling in Turnstile Streams</a>\nby Sepideh Mahaba
 di (TTIC) as part of TCS+\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/TCSPlus/2/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nathan Klein (University of Washington)
DTSTART:20200506T170000Z
DTEND:20200506T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/3
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/3/">
 An improved approximation algorithm for TSP in the half integral case</a>\
 nby Nathan Klein (University of Washington) as part of TCS+\n\nAbstract: T
 BA\n
LOCATION:https://researchseminars.org/talk/TCSPlus/3/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Mark Bun (Boston University)
DTSTART:20200520T170000Z
DTEND:20200520T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/4
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/4/">
 An Equivalence between Private Classification and Online Predictability</a
 >\nby Mark Bun (Boston University) as part of TCS+\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/TCSPlus/4/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Rahul Ilango (MIT)
DTSTART:20200527T170000Z
DTEND:20200527T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/5
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/5/">
 Is it (NP) hard to distinguish order from chaos?</a>\nby Rahul Ilango (MI
 T) as part of TCS+\n\n\nAbstract\nThe Minimum Circuit Size Problem (MCSP) ro
 ughly asks what the "complexity" of a given string is. Informally\, one ca
 n think of this as determining the degree of "computational order" a strin
 g has.\n\nIn the past several years\, there has been a resurgence of inter
 est in MCSP. A series of exciting results have begun unraveling what loo
 ks to be a fascinating story. This story already reveals deep connectio
 ns between MCSP and a growing list of fields\, including cryptography\, le
 arning theory\, structural complexity theory\, average-case complexity\, an
 d circuit complexity. As an example\, Santhanam recently proved a conditio
 nal equivalence between the complexity of MCSP and the existence of one-wa
 y functions.\n\nThis talk is split into two parts. The first part is a bro
 ad introduction to MCSP\, answering the following questions: What is this p
 roblem? Why is it interesting? What do we know so far\, and where might th
 e story go next? The second part discusses recent joint work with Bruno Lo
 ff and Igor Oliveira showing that the "multi-output version" of MCSP is NP
 -hard.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/5/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Michael P. Kim (Stanford University)
DTSTART:20200603T170000Z
DTEND:20200603T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/6
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/6/">
 Learning from outcomes: evidence-based rankings</a>\nby Michael P. Kim (St
 anford University) as part of TCS+\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/TCSPlus/6/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sahil Singla (Princeton University)
DTSTART:20200513T170000Z
DTEND:20200513T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/7
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/7/">
 Online vector balancing and geometric discrepancy</a>\nby Sahil Singla (Pr
 inceton University) as part of TCS+\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/TCSPlus/7/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Clifford Stein (Columbia University)
DTSTART:20200618T170000Z
DTEND:20200618T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/8
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/8/">
 Parallel approximate undirected shortest paths via low hop emulators</a>\n
 by Clifford Stein (Columbia University) as part of TCS+\n\n\nAbstract\nAlth
 ough sequential algorithms with (nearly) optimal running time for finding 
 shortest paths in undirected graphs with non-negative edge weights have be
 en known for several decades\, near-optimal parallel algorithms have turne
 d out to be a much tougher challenge. In this talk\, we present a $(1+\\va
 repsilon)$-approximate parallel algorithm for computing shortest paths in 
 undirected graphs\, achieving polylog depth and near-linear work. All prio
 r $(1+\\varepsilon)$-approximation algorithms with polylog depth perform a
 t least super
 linear work. Improving this long-standing upper bound obtained by Cohen (S
 TOC'94) has been open for 25 years.\n\nOur algorithm uses several new tool
 s. Prior work uses hopsets to introduce shortcuts in the graph. We introdu
 ce a new notion that we call low hop emulators.  We also introduce compres
 sible preconditioners\, which we use in conjunction with Sherman's framewor
 k (SODA '17) for the uncapacitated minimum cost flow problem.\n\nJoint wor
 k with Alex Andoni and Peilin Zhong.\n\nRescheduled to 06/18/2020 (origina
 lly scheduled for 06/10/2020).\n
LOCATION:https://researchseminars.org/talk/TCSPlus/8/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Richard Peng (Georgia Tech)
DTSTART:20200917T170000Z
DTEND:20200917T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/9
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/9/">
 Solving Sparse Linear Systems Faster than Matrix Multiplication</a>\nby Ri
 chard Peng (Georgia Tech) as part of TCS+\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/TCSPlus/9/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Fotis Iliopoulos (IAS)
DTSTART:20200923T170000Z
DTEND:20200923T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/10
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/10/"
 >Stochastic Local Search and the Lovasz Local Lemma</a>\nby Fotis Iliopoul
 os (IAS) as part of TCS+\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/TCSPlus/10/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Alex Wein (NYU)
DTSTART:20200930T170000Z
DTEND:20200930T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/11
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/11/"
 >Low-Degree Hardness of Random Optimization Problems</a>\nby Alex Wein (NY
 U) as part of TCS+\n\n\nAbstract\nIn high-dimensional statistical problems
  (including planted clique\, sparse PCA\, community detection\, etc.)\, th
 e class of "low-degree polynomial algorithms" captures many leading algori
 thmic paradigms such as spectral methods\, approximate message passing\, a
 nd local algorithms on sparse graphs. As such\, lower bounds against low-d
 egree algorithms constitute concrete evidence for average-case hardness of
  statistical problems. This method has been widely successful at explainin
 g and predicting statistical-to-computational gaps in these settings.\n\nW
 hile prior work has understood the power of low-degree algorithms for prob
 lems with a "planted" signal\, we consider here the setting of "random opt
 imization problems" (with no planted signal)\, including the problem of fi
 nding a large independent set in a random graph\, as well as the problem o
 f optimizing the Hamiltonian of mean-field spin glass models. I will defin
 e low-degree algorithms in this setting\, argue that they capture the best
  known algorithms\, and explain new proof techniques for giving lower boun
 ds against low-degree algorithms in this setting. The proof involves a var
 iant of the so-called "overlap gap property"\, which is a structural prope
 rty of the solution space.\n\nBased on joint work with David Gamarnik and 
 Aukosh Jagannath\, available at: https://arxiv.org/abs/2004.12063\n
LOCATION:https://researchseminars.org/talk/TCSPlus/11/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Susanna F. de Rezende (Mathematical Institute of the Czech Academy
  of Sciences)
DTSTART:20201007T170000Z
DTEND:20201007T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/12
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/12/"
 >Lifting with Simple Gadgets and Applications to Circuit and Proof Complex
 ity</a>\nby Susanna F. de Rezende (Mathematical Institute of the Czech Aca
 demy of Sciences) as part of TCS+\n\n\nAbstract\nLifting theorems in compl
 exity theory are a method of transferring lower bounds in a weak computati
 onal model into lower bounds for a more powerful computational model\, via
  function composition. There has been an explosion of lifting theorems in 
 the last ten years\, essentially reducing communication lower bounds to qu
 ery complexity lower bounds. These theorems only hold for composition with
 very specific "gadgets" such as indexing and inner product.\n\nIn this t
 alk\, we will present a generalization of the theorem lifting Nullstellens
 atz degree to monotone span program size by Pitassi and Robere (2018) so t
 hat it works for any gadget with high enough rank\, in particular\, for us
 eful gadgets such as equality and greater-than. We will then explain how t
 o apply our generalized theorem to solve three open problems:\n\n- We pres
 ent the first result that demonstrates a separation in proof power for cut
 ting planes with unbounded versus polynomially bounded coefficients. Speci
 fically\, we exhibit CNF formulas that can be refuted in quadratic length 
 and constant line space in cutting planes with unbounded coefficients\, bu
 t for which there are no refutations in subexponential length and subpolyn
 omial line space if coefficients are restricted to be of polynomial magnit
 ude.\n\n- We give the strongest separation to-date between monotone Boolea
 n formulas and monotone Boolean circuits. Namely\, we show that the classi
 cal GEN problem\, which has polynomial-size monotone Boolean circuits\, re
 quires monotone Boolean formulas of size $2^{\\Omega(n / \\text{polylog}(n
 ))}$.\n\n- We give the first explicit separation between monotone Boolean 
 formulas and monotone real formulas. Namely\, we give an explicit family o
 f functions that can be computed with monotone real formulas of nearly lin
 ear size but require monotone Boolean formulas of exponential size. Previo
 usly only a non-explicit separation was known.\n\nThis talk is based on jo
 int work with Or Meir\, Jakob Nordström\, Toniann Pitassi\, Robert Robere
 \, and Marc Vinyals.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/12/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jayadev Acharya (Cornell)
DTSTART:20201014T170000Z
DTEND:20201014T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/13
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/13/"
 >Distributed Statistical Inference under Local Information Constraints</a>
 \nby Jayadev Acharya (Cornell) as part of TCS+\n\n\nAbstract\nWe consider 
 statistical inference tasks in a distributed setting where access to data 
 samples is subjected to strict "local constraints\," through a unified fra
 mework that captures communication limitations and (local) privacy constra
 ints as special cases. We study estimation (learning) and goodness-of-fit 
 (testing) for both discrete and high-dimensional distributions. Our goal i
 s to understand how the sample complexity increases under the information 
 constraints.\n\nIn this talk we will provide an overview of this field and
  a sample of some of our results. We will discuss the role of (public) ran
 domness and interactivity in information-constrained inference\, and make
  a case for thinking about randomness and interactivity as resources.\n\nT
 he work is part of a long-term ongoing collaboration with Clément Canonne
  (IBM Research) and Himanshu Tyagi (IISc)\, and includes works done with C
 ody Freitag (Cornell)\, Yanjun Han (Stanford)\, Yuhan Liu (Cornell)\, and 
 Ziteng Sun (Cornell).\n
LOCATION:https://researchseminars.org/talk/TCSPlus/13/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Aayush Jain (UCLA)
DTSTART:20201021T170000Z
DTEND:20201021T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/14
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/14/"
 >Indistinguishability Obfuscation from Well-Founded Assumptions</a>\nby Aa
 yush Jain (UCLA) as part of TCS+\n\nAbstract: TBA\n
LOCATION:https://researchseminars.org/talk/TCSPlus/14/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Omar Montasser (TTIC)
DTSTART:20201028T170000Z
DTEND:20201028T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/15
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/15/"
 >Adversarially Robust Learnability: Characterization and Reductions</a>\nb
 y Omar Montasser (TTIC) as part of TCS+\n\n\nAbstract\nWe study the questi
 on of learning an adversarially robust predictor from uncorrupted samples.
  We show that any VC class H is robustly PAC learnable\, but we also show 
 that such learning must sometimes be improper (i.e. use predictors from ou
 tside the class)\, as some VC classes are not robustly properly learnable.
   In particular\, the popular robust empirical risk minimization approach 
 (also known as adversarial training)\, which is proper\, cannot robustly l
 earn all VC classes.  After establishing learnability\, we turn to ask whe
 ther having a tractable non-robust learning algorithm is sufficient for tr
 actable robust learnability and give a reduction algorithm for robustly le
 arning any hypothesis class H using a non-robust PAC learner for H\, with 
 nearly-optimal oracle complexity.\n\nThis is based on joint work with Stev
 e Hanneke and Nati Srebro\, available at https://arxiv.org/abs/1902.04217 
 and https://arxiv.org/abs/2010.12039.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/15/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Shalev Ben-David (U Waterloo)
DTSTART:20201104T180000Z
DTEND:20201104T190000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/16
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/16/"
 >Forecasting Algorithms\, Minimax Theorems\, and Randomized Lower Bounds</
 a>\nby Shalev Ben-David (U Waterloo) as part of TCS+\n\n\nAbstract\nI will
  present a new approach to randomized lower bounds\, particularly in the s
 etting where we wish to give a fine-grained analysis of randomized algorit
 hms that achieve small bias. The approach is as follows: instead of consid
 ering ordinary randomized algorithms which give an output in {0\,1} and ma
 y err\, we switch models to look at "forecasting" randomized algorithms wh
 ich output a confidence in [0\,1] for whether they think the answer is 1. 
 When scored by a proper scoring rule\, the performance of the best forecas
 ting algorithm is closely related to the bias of the best (ordinary) rando
 mized algorithm\, but is more amenable to analysis.\n\nAs an application\,
  I'll present a new minimax theorem for randomized algorithms\, which can 
 be viewed as a strengthening of Yao's minimax theorem. Yao's minimax theor
 em guarantees the existence of a hard distribution for a function f such t
 hat solving f against this distribution (to a desired error level) is as h
 ard as solving f in the worst case (to that same error level). However\, t
 he hard distribution provided by Yao's theorem depends on the chosen error
  level. Our minimax theorem removes this dependence\, giving a distributio
 n which certifies the hardness of f against all bias levels at once. In re
 cent work\, we used this minimax theorem to give a tight composition theor
 em for randomized query complexity.\n\nBased on joint work with Eric Blais
 .\n
LOCATION:https://researchseminars.org/talk/TCSPlus/16/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Shuai Shao (University of Wisconsin-Madison)
DTSTART:20201111T180000Z
DTEND:20201111T190000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/17
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/17/"
 >A Dichotomy for Real Boolean Holant Problems</a>\nby Shuai Shao (Universi
 ty of Wisconsin-Madison) as part of TCS+\n\n\nAbstract\nIn this talk\, we p
 resent a complexity dichotomy for Holant problems on the boolea
 n domain with arbitrary sets of real-valued constraint functions. These co
 nstraint functions need not be symmetric nor do we assume any auxiliary fu
 nctions as in previous results. It is proved that for every set F of real-
 valued constraint functions\, Holant(F) is either P-time computable or #P-
 hard. The classification has an explicit criterion. This is a culmination 
 of much research on a decade-long classification program for Holant proble
 ms\, and it uses previous results and techniques from many researchers.  H
 owever\, as it turned out\, the journey to the present theorem has been ar
 duous. Some particularly intriguing concrete functions f6\, f8 and their a
 ssociated families with extraordinary closure properties related to Bell s
 tates in quantum information theory play an important role in this proof. 
 \n\nBased on joint work with Jin-Yi Cai.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/17/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Sumegha Garg (Harvard)
DTSTART:20201125T180000Z
DTEND:20201125T190000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/18
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/18/"
 >The Coin Problem with Applications to Data Streams</a>\nby Sumegha Garg (
 Harvard) as part of TCS+\n\n\nAbstract\nConsider the problem of computing 
 the majority of a stream of $n$ i.i.d. uniformly random bits. This problem
 \, known as the coin problem\, is central to a number of counting problems
  in different data stream models. We show that any streaming algorithm for
  solving this problem with large constant advantage (over the uniform dist
 ribution) must use $\\Omega(\\log n)$ bits of space. Previously\, it was k
 nown that computing the majority on every input with a constant probabilit
 y takes $\\Omega(\\log n)$ space. We extend our lower bound to proving tig
 ht lower bounds for solving multiple\, randomly interleaved copies of the 
 coin problem\, as well as for solving the OR of multiple copies of a varia
 nt of the coin problem. Our proofs involve new measures of information com
 plexity that are well-suited for data streams.\n\nWe use these lower bound
 s to obtain a number of new results for data streams. In each case there i
 s an underlying $d$-dimensional vector x with additive updates to its coor
 dinates given in a stream of length $m$. The input streams arising from ou
 r coin lower bound have nice distributional properties\, and consequently 
 for many problems for which we only had lower bounds in general turnstile 
 streams\, we now obtain the same lower bounds in more natural models\, suc
 h as the bounded deletion model\, in which $\\|x\\|_2$ never drops by a co
 nstant fraction of what it was earlier\, or in the random order model\, in
  which the updates are ordered randomly.\n\nBased on joint work with Mark 
 Braverman and David P. Woodruff.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/18/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Yang Liu (Stanford University)
DTSTART:20201202T180000Z
DTEND:20201202T190000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/19
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/19/"
 >Faster Algorithms for Unit Maximum Flow</a>\nby Yang Liu (Stanford Univer
 sity) as part of TCS+\n\n\nAbstract\nThe maximum flow problem is one of th
 e most well-studied problems in combinatorial optimization\, encompassing 
 a broad range of cut\, matching\, and scheduling problems. Here we present
  a recent line of work obtaining provably faster algorithms for solving th
 e maximum flow problem using interior point methods. In particular\, we sh
 ow how to solve the maximum flow problem in m-edge unit capacity graphs in
  time almost $m^{4/3}$\, improving over the breakthrough $m^{10/7}$ time a
 lgorithm of Mądry.\n\nThis is based on joint work with Aaron Sidford (htt
 ps://arxiv.org/abs/1910.14276 and https://arxiv.org/abs/2003.08929).\n
LOCATION:https://researchseminars.org/talk/TCSPlus/19/
END:VEVENT
BEGIN:VEVENT
SUMMARY:William Hoza (UT Austin)
DTSTART:20210217T180000Z
DTEND:20210217T190000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/20
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/20/"
 >Fooling Constant-Depth Threshold Circuits</a>\nby William Hoza (UT Austin
 ) as part of TCS+\n\n\nAbstract\nWe present the first non-trivial pseudora
 ndom generator (PRG) for linear threshold (LTF) circuits of arbitrary cons
 tant depth and super-linear size. This PRG fools circuits with depth $d$ a
 nd $n^{1 + \\delta}$ wires\, where $\\delta = \\exp(-O(d))$\, using seed l
 ength $O(n^{1 - \\delta})$ and with error $\\exp(-n^{\\delta})$. This tigh
 tly matches the best known lower bounds for this circuit class. As a conse
 quence of our result\, all the known hardness for LTF circuits has now eff
 ectively been translated into pseudorandomness. This brings the extensive 
 effort in the last decade to construct PRGs and deterministic circuit-anal
 ysis algorithms for this class to the point where any subsequent improveme
 nt would yield breakthrough lower bounds.\n\nA key ingredient in our const
 ruction is a pseudorandom restriction procedure that has tiny failure prob
 ability\, but simplifies the function to a non-natural "hybrid computation
 al model" that combines decision trees and LTFs. As part of our proof we a
 lso construct an "extremely low-error" PRG for the class of functions comp
 utable by an arbitrary function of $s$ linear threshold functions that can h
 andle even the extreme setting of parameters $s = n/\\mathrm{polylog}(n)$ 
 and $\\epsilon = \\exp(-n/\\mathrm{polylog}(n))$.\n\nJoint work with Pooya
  Hatami\, Avishay Tal\, and Roei Tell.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/20/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Steve Hanneke (TTIC)
DTSTART:20210303T180000Z
DTEND:20210303T190000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/21
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/21/"
 >A Theory of Universal Learning</a>\nby Steve Hanneke (TTIC) as part of TC
 S+\n\n\nAbstract\nHow quickly can a given class of concepts be learned fro
 m examples? It is common to measure the performance of a supervised machin
 e learning algorithm by plotting its "learning curve"\, that is\, the deca
 y of the error rate as a function of the number of training examples. Howe
 ver\, the classical theoretical framework for understanding learnability\,
  the PAC model of Vapnik-Chervonenkis and Valiant\, does not explain the b
 ehavior of learning curves: the distribution-free PAC model of learning ca
 n only bound the upper envelope of the learning curves over all possible d
 ata distributions. This does not match the practice of machine learning\, 
 where the data source is typically fixed in any given scenario\, while the
  learner may choose the number of training examples on the basis of factor
 s such as computational resources and desired accuracy.\n\nIn this work\, 
 we study an alternative learning model that better captures such practical
  aspects of machine learning\, but still gives rise to a complete theory o
 f the learnable in the spirit of the PAC model. More precisely\, we consid
 er the problem of universal learning\, which aims to understand the perfor
 mance of learning algorithms on every data distribution\, but without requ
 iring uniformity over the distribution. The main result of this work is a 
 remarkable trichotomy: there are only three possible rates of universal le
 arning. More precisely\, we show that the learning curves of any given con
 cept class decay at either an exponential\, linear\, or arbitrarily slow r
 ate. Moreover\, each of these cases is completely characterized by approp
 riate combinatorial parameters\, and we exhibit optimal learning algorithm
 s that achieve the best possible rate in each case.\n\nJoint work with Oli
 vier Bousquet\, Shay Moran\, Ramon van Handel\, and Amir Yehudayoff.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/21/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Avishay Tal (UC Berkeley)
DTSTART:20210317T170000Z
DTEND:20210317T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/22
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/22/"
 >Junta Distance Approximation with Sub-Exponential Queries</a>\nby Avishay
 Tal (UC Berkeley) as part of TCS+\n\n\nAbstract\nJoint work with Vishnu I
 yer and Michael Whitmeyer\n\nA Boolean function $f:{0\,1}^n \\to {0\,1}$ i
 s called a k-junta if it depends only on k out of the n input bits. Junta 
 testing is the task of distinguishing between k-juntas and functions that 
 are far from k-juntas. A long line of work settled the query complexity of
  testing k-juntas\, which is O(k log(k)) [Blais\, STOC 2009\; Saglam\, FOC
 S 2018]. Suppose\, however\, that f is not a perfect k-junta but rather co
 rrelated with a k-junta. How well can we estimate this correlation? This q
 uestion was asked by De\, Mossel\, and Neeman [FOCS 2019]\, who gave an al
 gorithm for the task making ~exp(k) queries. We present an improved algori
 thm that makes ~exp(sqrt{k}) many queries.\n\nAlong the way\, we also giv
 e an algorithm\, making poly(k) queries\, that provides "implicit oracle a
 ccess" to the underlying k-junta. Our techniques are Fourier analytical an
 d introduce the notion of "normalized influences" that might be of indepen
 dent interest.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/22/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jasper Lee (Brown University)
DTSTART:20210331T170000Z
DTEND:20210331T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/23
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/23/"
 >Optimal Sub-Gaussian Mean Estimation in $\\mathbb{R}$</a>\nby Jasper Lee 
 (Brown University) as part of TCS+\n\n\nAbstract\nWe revisit and settle a 
 fundamental problem in statistics: given access to independent samples fro
 m a 1D random variable (with finite but unknown mean and variance)\, what 
 is the best way to estimate the mean in the high probability regime\, in t
 erms of error convergence with respect to sample size? The conventional wi
 sdom is to use the empirical mean as our estimate. However\, it is known t
 hat the empirical mean can in fact have exponentially sub-optimal converge
 nce for certain heavy-tailed distributions. On the other hand\, the median
 -of-means estimator (invented and reinvented in various literature) does h
 ave sub-Gaussian convergence for all finite-variance distributions\, albei
 t only in the big-O sense with a sub-optimal multiplicative constant. The 
 natural remaining question then\, is whether it is possible to bridge the 
 gap\, and have an estimator that has optimal convergence with the right co
 nstant for all finite-variance distributions.\n\nIn this talk\, we answer 
 the question affirmatively by giving an estimator that converges with the 
 optimal constant inside the big-O\, up to a 1+o(1) multiplicative factor. 
 The estimator is also easy to compute. The convergence analysis involves d
 eriving tail bounds using linear and convex-concave programming dualities\
 , which may be of independent interest.\n\nBased on joint work with Paul V
 aliant.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/23/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Santhoshini Velusamy (Harvard University)
DTSTART:20210512T170000Z
DTEND:20210512T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/24
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/24/"
 >Classification of the approximability of all finite Max-CSPs in the dynam
 ic streaming setting</a>\nby Santhoshini Velusamy (Harvard University) as 
 part of TCS+\n\n\nAbstract\nA maximum constraint satisfaction problem\, Ma
 x-CSP(F)\, is specified by a finite family of constraints F\, where each c
 onstraint is of arity k. An instance of the problem on n variables is give
 n by m applications of constraints from F to length-k subsequences of the 
 n variables\, and the goal is to find an assignment to the n variables tha
 t satisfies the maximum number of constraints. The class of Max-CSP(F) inc
 ludes optimization problems such as Max-CUT\, Max-DICUT\, Max-3SAT\, Max-q
 -Coloring\, Unique Games\, etc.\n\nIn this talk\, I will present our recen
 t dichotomy theorem on the approximability of Max-CSP(F) for every finite 
 family F\, in the single-pass dynamic streaming setting. In this setting\,
  at each time step\, a constraint is either added to or deleted from the s
 tream. In the end\, the streaming algorithm must estimate the maximum numb
 er of constraints that can be satisfied using space that is only polylogar
 ithmic in n. No background in streaming algorithms or constraint satisfact
 ion problems will be needed to enjoy this talk!\n\nThe talk will be based 
 on this paper (https://eccc.weizmann.ac.il/report/2021/011/)\, and this pa
 per (https://eccc.weizmann.ac.il/report/2021/063/) with Chi-Ning Chou\, Al
 exander Golovnev\, and Madhu Sudan.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/24/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Andrea Lincoln (UC Berkeley)
DTSTART:20210414T170000Z
DTEND:20210414T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/25
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/25/"
 >New Techniques for Proving Fine-Grained Average-Case Hardness</a>\nby And
 rea Lincoln (UC Berkeley) as part of TCS+\n\n\nAbstract\nIn this talk I wi
 ll cover a new technique for worst-case to average-case reductions. There 
 are two primary concepts introduced in this talk: "factored" problems and 
 a framework for worst-case to average-case fine-grained (WCtoACFG) self re
 ductions.\n\nWe will define new versions of OV\, kSUM and zero-k-clique th
 at are both worst-case and average-case fine-grained hard assuming the cor
 e hypotheses of fine-grained complexity. We then use these as a basis for 
 fine-grained hardness and average-case hardness of other problems. Our har
 d factored problems are also simple enough that we can reduce them to many
  other problems\, e.g. to edit distance\, k-LCS and versions of Max-Flow. 
 We further consider counting variants of the factored problems and give WC
 toACFG reductions for them for a natural distribution.\n\nTo show hardness
  for these factored problems we formalize the framework of [Boix-Adsera et
  al. 2019] that was used to give a WCtoACFG reduction for counting k-cliq
 ues. We define an explicit property of problems such that if a problem has
  that property one can use the framework on the problem to get a WCtoACFG 
 self reduction.\nIn total these factored problems and the framework togeth
 er give tight fine-grained average-case hardness for various problems incl
 uding the counting variant of regular expression matching.\n\nBased on joi
 nt work with Mina Dalirrooyfard and Virginia Vassilevska Williams.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/25/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Ronen Eldan (Weizmann Institute)
DTSTART:20210428T170000Z
DTEND:20210428T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/26
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/26/"
 >Localization\, stochastic localization\, and Chen's recent breakthrough o
 n the Kannan-Lovasz-Simonovits conjecture</a>\nby Ronen Eldan (Weizmann In
 stitute) as part of TCS+\n\n\nAbstract\nThe Kannan-Lovasz-Simonovits (
 KLS) conjecture considers the following isoperimetric problem on high-dime
 nsional convex bodies: Given a convex body $K$\, consider the optimal way 
 to partition it into two pieces of equal volume so as to minimize their in
 terface. Is it true that up to a universal constant\, the minimal partitio
 n is attained via a hyperplane cut? Roughly speaking\, this question can b
 e thought of as asking "to what extent is a convex set a good expander"?\n
 \nIn analogy to expander graphs\, such lower bounds on the capacity would 
 imply bounds on mixing times of Markov chains associated with the convex s
 et\, and so this question has direct implications on the complexity of man
 y computational problems on convex sets. Moreover\, it was shown that a po
 sitive answer would imply Bourgain's slicing conjecture.  \n\nVery recentl
 y\, Yuansi Chen obtained a striking breakthrough\, nearly solving this con
 jecture. In this talk\, we will overview some of the central ideas used in
  the proof. We will start with the classical concept of "localization" (a 
 very useful tool to prove concentration inequalities) and its extension\, 
 stochastic localization - the main technique used in the proof.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/26/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Kira Goldner (Columbia University)
DTSTART:20210526T170000Z
DTEND:20210526T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/27
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/27/"
 >An Overview of Using Mechanism Design for Social Good</a>\nby Kira Goldne
 r (Columbia University) as part of TCS+\n\n\nAbstract\nIn order to accurat
 ely predict an algorithm's outcome and quality when it interacts with part
 icipants who have a stake in the outcome\, we must design it to be robust 
 to strategic manipulation.  This is the subject of algorithmic mechanism d
 esign\, which borrows ideas from game theory and economics to design robus
 t algorithms.  In this talk\, I will show how results from the theoretical
  foundations of algorithmic mechanism design can be used to solve problems
  of societal concern.  \n\nI will overview recent work in this area in man
 y different applications — housing\, labor markets\, carbon license allo
 cations\, health insurance markets\, and more — as well as discuss open 
 problems and directions ripe for tools from both mechanism design and gene
 ral TCS.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/27/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Pravesh Kothari and Ankur Moitra (CMU and MIT)
DTSTART:20210609T170000Z
DTEND:20210609T183000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/28
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/28/"
 >Robustly Learning Mixtures of Gaussians</a>\nby Pravesh Kothari and Ankur
  Moitra (CMU and MIT) as part of TCS+\n\n\nAbstract\nFor a while now the p
 roblem of robustly learning a high-dimensional mixture of Gaussians has ha
 d a target on its back. The first works in algorithmic robust statistics g
 ave provably robust algorithms for learning a single Gaussian. Since then 
 there has been steady progress\, including algorithms for robustly learnin
 g mixtures of spherical Gaussians\, mixtures of Gaussians under separation
  conditions\, and arbitrary mixtures of two Gaussians. In this talk we wil
 l discuss two recent works that essentially resolve the general problem. T
 here are important differences in their techniques\, setup\, and overall q
 uantitative guarantees\, which we will discuss.\n\nThe talk will cover the
  following independent works:\n- Liu\, Moitra\, "Settling the Robust Learn
 ability of Mixtures of Gaussians"\n- Bakshi\, Diakonikolas\, Jia\, Kane\, 
 Kothari\, Vempala\, "Robustly Learning Mixtures of $k$ Arbitrary Gaussians
 "\n
LOCATION:https://researchseminars.org/talk/TCSPlus/28/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Audra McMillan (Apple)
DTSTART:20210929T170000Z
DTEND:20210929T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/29
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/29/"
 >Hiding among the clones: a simple and nearly optimal analysis of privacy 
 amplification by shuffling</a>\nby Audra McMillan (Apple) as part of TCS+\
 n\n\nAbstract\nDifferential privacy (DP) is a model of privacy-preserving 
 machine learning that has garnered significant interest in recent years du
 e to its rigorous privacy guarantees. An algorithm is differentially priva
 te if the output is stable under small changes in the input database. Whil
 e D
 P has been adopted in a variety of applications\, most applications of DP 
 in industry actually satisfy a stronger notion called local differential p
 rivacy. In local differential privacy data subjects perturb their data bef
 ore it reaches the data analyst. While this requires less trust\, it come
 s at a substantial cost to accuracy. Recent work of Erlingsson\, Feldman
 \, Mir
 onov\, Raghunathan\, Talwar\, and Thakurta [EFMRTT19] demonstrated that ra
 ndom shuffling amplifies differential privacy guarantees of locally random
 ized data. Such amplification implies substantially stronger privacy guara
 ntees for systems in which data is contributed anonymously [BEMMRLRKTS17] 
 and has led to significant interest in the shuffle model of privacy [CSUZZ
 19\, EFMRTT19]. In this talk\, we will discuss a new result on privacy amp
 lification by shuffling\, which achieves the asymptotically optimal depend
 ence in the local privacy parameter. Our result is based on a new proof st
 rategy which is simpler than previous approaches\, and extends to a sligh
 tly weaker notion known as approximate differential privacy with nearly t
 he same guarantees.\n\nBased on joint work with Vitaly Feldman and Kunal T
 al
 war.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/29/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Nutan Limaye (IT University of Copenhagen)
DTSTART:20211013T170000Z
DTEND:20211013T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/30
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/30/"
 >Superpolynomial Lower Bounds Against Low-Depth Algebraic Circuits</a>\nby
  Nutan Limaye (IT University of Copenhagen) as part of TCS+\n\n\nAbstract\
 nEvery multivariate polynomial P(X) can be written as a sum of monomials\,
  i.e. a sum of products of variables and field constants. In general\, the
  size of such an expression is the number of monomials that have a non-ze
 ro coefficient in P.\n\nWhat happens if we add another layer of complexity
 \, and consider sums of products of sums (of variables and field constant
 s) expressions? Now\, it becomes unclear how to prove that a given polyno
 mial P(X) does not have small expressions. In this work\, we solve exactl
 y this problem.\n\nMore precisely\, we prove that certain explicit polyno
 mials have no polynomial-sized "Sigma-Pi-Sigma" (sums of products of sums
 ) representations. We can also show similar results for Sigma-Pi-Sigma-Pi
 \, Sigma-Pi-Sigma-Pi-Sigma and so on for all "constant-depth" expressions
 .\n\nThe talk is based on a joint work of Nutan Limaye\, Srikanth Sriniva
 san\, and Sébastien Tavenas.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/30/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Shravas Rao (Northwestern University)
DTSTART:20211027T170000Z
DTEND:20211027T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/31
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/31/"
 >Degree vs. Approximate Degree and Quantum Implications of Huang's Sensiti
 vity Theorem</a>\nby Shravas Rao (Northwestern University) as part of TCS+
 \n\n\nAbstract\nBased on the recent breakthrough of Huang (2019)\, we show
  that for any total Boolean function $f$:\n\n- The degree of $f$ is at m
 ost quadratic in the approximate degree of $f$. This is optimal as witness
 ed by the OR function.\n\n- The deterministic query complexity of $f$ is a
 t most quartic in the quantum query complexity of $f$. This matches the kn
 own separation (up to log factors) due to Ambainis\, Balodis\, Belovs\, Le
 e\, Santha\, and Smotrovs (2017).\n\nWe apply these results to resolve the
  quantum analogue of the Aanderaa--Karp--Rosenberg conjecture. We show tha
 t if $f$ is a nontrivial monotone graph property of an $n$-vertex graph spec
 ified by its adjacency matrix\, then $Q(f)=\\Omega(n)$\, which is also opt
 imal. We also show that the approximate degree of any read-once formula on
  $n$ variables is $\\Theta(\\sqrt{n})$.\n\nBased on joint work with Scott Aa
 ronson\, Shalev Ben-David\, Robin Kothari\, and Avishay Tal.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/31/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Kuikui Liu (University of Washington)
DTSTART:20211110T180000Z
DTEND:20211110T190000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/32
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/32/"
 >Spectral Independence: A New Tool to Analyze Markov Chains</a>\nby Kuikui
  Liu (University of Washington) as part of TCS+\n\n\nAbstract\nMarkov chai
 n Monte Carlo is a widely used class of algorithms for sampling from high-
 dimensional probability distributions\, both in theory and in practice. Wh
 ile simple to implement\, analyzing the rate of convergence to stationarit
 y\, i.e. the "mixing time"\, remains a challenging problem in many setting
 s. We introduce a new technique to bound mixing times called "spectral ind
 ependence"\, which says that certain pairwise correlation matrices all hav
 e bounded spectral norm. This surprisingly powerful technique originates i
 n the emerging study of high-dimensional expanders\, and has allowed us to
  "unify" nearly all existing approaches to approximate counting and sampli
 ng by building new connections with other areas\, including statistical ph
 ysics\, geometry of polynomials\, functional analysis\, and more. Through 
 these connections\, several long-standing open problems have recently been
  answered\, including counting bases of matroids and optimal mixing of the
  Glauber dynamics/Gibbs sampler up to the algorithmic phase transition thr
 eshold. \n\nBased on several joint works with Dorna Abdolazimi\, Nima Anar
 i\, Zongchen Chen\, Shayan Oveis Gharan\, Eric Vigoda\, Cynthia Vinzant\, 
 and June Vuong.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/32/
END:VEVENT
BEGIN:VEVENT
SUMMARY:William Kuszmaul (MIT)
DTSTART:20211201T180000Z
DTEND:20211201T190000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/33
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/33/"
 >Linear Probing Revisited: Tombstones Mark the Demise of Primary Clusterin
 g</a>\nby William Kuszmaul (MIT) as part of TCS+\n\n\nAbstract\nThe linear
 -probing hash table is one of the oldest and most widely used data structu
 res in computer science. However\, linear probing also famously comes with
  a major drawback: as soon as the hash table reaches a high memory utiliza
 tion\, elements within the hash table begin to cluster together\, causing 
 insertions to become slow. This phenomenon\, now known as "primary cluster
 ing"\, was first captured by Donald Knuth in 1963\; at a load factor of $1
  - 1/x$\, the expected time per insertion becomes $\\Theta(x^2)$\, rather 
 than the more desirable $\\Theta(x)$.\n\nWe show that there is more to the
  story than the classic analysis would seem to suggest. It turns out that 
 small design decisions in how deletions are implemented have dramatic effe
 cts on the asymptotic performance of insertions. If these design decisions
  are made correctly\, then even a hash table that is continuously at a loa
 d factor $1 - \\Theta(1/x)$ can achieve average insertion time $\\tilde{O}
 (x)$. A key insight is that the tombstones left behind by deletions cause 
 a surprisingly strong "anti-clustering" effect\, and that when insertions 
 and deletions are one-for-one\, the anti-clustering effects of deletions a
 ctually overpower the clustering effects of insertions.\n\nWe also present
  a new variant of linear probing\, which we call "graveyard hashing"\, tha
 t completely eliminates primary clustering on any sequence of operations. 
 If\, when an operation is performed\, the current load factor is $1 - 1/x$
  for some $x$\, then the expected cost of the operation is $O(x)$. One cor
 ollary is that\, in the external-memory model with a data block size of $B
 $\, graveyard hashing offers the following remarkable guarantee: at any lo
 ad factor $1-1/x$ satisfying $x = o(B)$\, graveyard hashing achieves $1 + 
 o(1)$ expected block transfers per operation. Past external-memory hash ta
 bles have only been able to offer a $1+o(1)$ guarantee when the block size
  $B$ is at least $\\Omega(x^2)$.\n\nBased on joint work with Michael A. Be
 nder and Bradley C. Kuszmaul. To appear in FOCS 2021.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/33/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Guy Blanc (Stanford University)
DTSTART:20211208T180000Z
DTEND:20211208T190000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/34
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/34/"
 >Properly learning decision trees in almost polynomial time</a>\nby Guy Bl
 anc (Stanford University) as part of TCS+\n\n\nAbstract\nWe give an $n^{O(
 \\log\\log n)}$-time membership query algorithm for properly and agnostica
 lly learning decision trees under the uniform distribution over $\\{-1\,1\
 \}^n$. Even in the realizable setting\, the previous fastest runtime was $
 n^{O(\\log n)}$\, a consequence of a classic algorithm of Ehrenfeucht and 
 Haussler.  \n\nOur algorithm shares similarities with practical heuristics
  for learning decision trees\, which we augment with additional ideas to c
 ircumvent known lower bounds against these heuristics. To analyze our algo
 rithm\, we prove a new structural result for decision trees that strengthe
 ns a theorem of O'Donnell\, Saks\, Schramm\, and Servedio. While the OSSS 
 theorem says that every decision tree has an influential variable\, we sho
 w how every decision tree can be "pruned"  so that every variable in the r
 esulting tree is influential.\n\nJoint work with Jane Lange\, Mingda Qiao\
 , and Li-Yang Tan. To appear in FOCS 2021.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/34/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Merav Parter (Weizmann Institute of Science)
DTSTART:20220223T180000Z
DTEND:20220223T190000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/35
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/35/"
 >New Diameter Reducing Shortcuts: Breaking the $O(\\sqrt{n})$ Barrier</a>\
 nby Merav Parter (Weizmann Institute of Science) as part of TCS+\n\n\nAbst
 ract\nFor an $n$-vertex digraph $G=(V\,E)$\, a \\emph{shortcut set} is a (
 small) subset of edges $H$ taken from the transitive closure of $G$ that\,
 when added to $G$\, guarantees that the diameter of $G \\cup H$ is small. S
 hortcut sets\, introduced by Thorup in 1993\, have a wide range of applica
 tions in algorithm design\, especially in the context of parallel\, distri
 buted and dynamic computation on directed graphs. A folklore result in thi
 s context shows that every $n$-vertex digraph admits a shortcut set of lin
 ear size (i.e.\, of $O(n)$ edges) that reduces the diameter to $\\widetild
 e{O}(\\sqrt{n})$. Despite extensive research over the years\, the question
  of whether one can reduce the diameter to $o(\\sqrt{n})$ with $\\widetild
 e{O}(n)$ shortcut edges has been left open.\n\nIn this talk\, I will prese
 nt the first improved diameter-sparsity tradeoff for this problem\, breaki
 ng the $\\sqrt{n}$ diameter barrier. Specifically\, we show an $O(n^{\\ome
 ga})$-time randomized algorithm for computing a linear shortcut set that r
 educes the diameter of the digraph to $\\widetilde{O}(n^{1/3})$. We also e
 xtend our algorithms to provide improved $(\\beta\,\\epsilon)$ hopsets for
  $n$-vertex weighted directed graphs.\n\nJoint work with Shimon Kogan.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/35/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Roei Tell (Institute for Advanced Study (IAS))
DTSTART:20220309T180000Z
DTEND:20220309T190000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/36
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/36/"
 >Hardness vs Randomness\, Revised: Uniform\, Non-Black-Box\, and Instance-
 Wise</a>\nby Roei Tell (Institute for Advanced Study (IAS)) as part of TCS
 +\n\n\nAbstract\nIn this talk I'll show how to revise the classical hardne
 ss-vs-randomness framework\, so that it can work in a non-black-box fashio
 n. Specifically\, we will construct derandomization algorithms that don't 
 rely on classical PRGs\, and instead "extract pseudorandomness" from the g
 iven input on which we want to simulate the probabilistic machine.\n\nUsin
 g a non-black-box approach allows us to deduce stronger conclusions (or al
 ternatively rely on weaker hypotheses)\, compared to classical approaches.
  In one instantiation of the new framework\, we reveal a close connection 
 between the promiseBPP = promiseP conjecture and a new type of uniform low
 er bound. In another instantiation\, we simulate probabilistic algorithms
  with essentially no observable runtime overhead\, under plausible hypothe
 ses.\n\nBased on a joint work with Lijie Chen.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/36/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Shuichi Hirahara (NII (Japan))
DTSTART:20220323T220000Z
DTEND:20220323T230000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/37
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/37/"
 >Excluding PH Pessiland</a>\nby Shuichi Hirahara (NII (Japan)) as part of 
 TCS+\n\n\nAbstract\nPessiland is the worst of Impagliazzo's five possible 
 worlds: it is a world where NP is hard on average and pseudorandom generat
 ors do not exist. Excluding Pessiland (i.e.\, showing the equivalence betw
 een average-case hardness of NP and the existence of pseudorandom generato
 rs) is one of the most important open questions in theoretical computer sc
 ience.\n\nIn this talk\, we propose to consider PH (Polynomial Hierarchy) 
 variants of Impagliazzo's five possible worlds. Our main result unconditio
 nally rules out PH variants of Pessiland. I will also mention recent
  progress on excluding PH Heuristica: average-case hardness of PH follows 
 from exponential worst-case hardness of PH.\n\nBased on joint works with R
 ahul Santhanam.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/37/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Jessica Sorrell (UCSD)
DTSTART:20220406T170000Z
DTEND:20220406T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/38
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/38/"
 >Reproducibility in Learning</a>\nby Jessica Sorrell (UCSD) as part of TCS
 +\n\n\nAbstract\nReproducibility is vital to ensuring scientific conclusio
 ns are reliable\, but failures of reproducibility have been a major issue 
 in nearly all scientific areas of study in recent decades. A key issue und
 erlying the reproducibility crisis is the explosion of methods for data ge
 neration\, screening\, testing\, and analysis\, where\, crucially\, only t
 he combinations producing the most significant results are reported. Such 
 practices (also known as p-hacking\, data dredging\, and researcher degree
 s of freedom) can lead to erroneous findings that appear to be significant
 \, but that don’t hold up when other researchers attempt to replicate th
 em.  \n\nIn this talk\, we introduce a new notion of reproducibility for r
 andomized algorithms. This notion ensures that with high probability\, an 
 algorithm returns exactly the same output when run with two samples from t
 he same distribution. Despite the exceedingly strong demand of reproducibi
 lity\, there are efficient reproducible algorithms for several fundamental
  problems in statistics and learning. We show that any statistical query a
 lgorithm can be made reproducible with a modest increase in sample complex
 ity\, and we use this to construct reproducible algorithms for finding app
 roximate heavy-hitters and medians. Using these ideas\, we give the first 
 reproducible algorithm for learning halfspaces via a reproducible weak lea
 rner and a reproducible boosting algorithm. We initiate the study of lower
  bounds and inherent tradeoffs for reproducible algorithms\, giving nearly
  tight sample complexity upper and lower bounds for reproducible versus no
 n-reproducible SQ algorithms. Finally\, we discuss connections to other we
 ll-studied notions of algorithmic stability\, such as differential privacy
 .\n\nJoint work with Russell Impagliazzo (UCSD)\, Rex Lei (UCSD)\, and Ton
 iann Pitassi (Columbia University)\, to appear in STOC 2022.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/38/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Rasmus Kyng (ETH Zürich)
DTSTART:20220420T170000Z
DTEND:20220420T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/39
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/39/"
 >Almost-Linear Time Algorithms for Maximum Flow and More</a>\nby Rasmus Ky
 ng (ETH Zürich) as part of TCS+\n\n\nAbstract\nWe give the first almost-l
 inear time algorithm for computing exact maximum flows and minimum-cost fl
 ows on directed graphs. By well-known reductions\, this implies almost-lin
 ear time algorithms for several problems including bipartite matching\, op
 timal transport\, and undirected vertex connectivity.\n\nOur algorithm use
 s a new Interior Point Method (IPM) that builds the optimal flow as a sequ
 ence of an almost-linear number of approximate undirected minimum-ratio cy
 cles\, each of which is computed and processed very efficiently using a ne
 w dynamic data structure.\n\nOur framework extends to give an almost-linea
 r time algorithm for computing flows that minimize general edge-separable 
 convex functions to high accuracy. This gives the first almost-linear time
  algorithm for several problems including entropy-regularized optimal tran
 sport\, matrix scaling\, p-norm flows\, and isotonic regression.\n\nJoint 
 work with Li Chen\, Yang Liu\, Richard Peng\, Maximilian Probst Gutenberg\
 , and Sushant Sachdeva.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/39/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Thatchaphol Saranurak (University of Michigan)
DTSTART:20220518T170000Z
DTEND:20220518T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/40
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/40/"
 >All-pairs minimum cuts in nearly quadratic time: a tutorial</a>\nby Thatc
 haphol Saranurak (University of Michigan) as part of TCS+\n\n\nAbstract\nW
 e recently showed an algorithm for computing all-pairs minimum cuts (or\, 
 more precisely\, the Gomory-Hu tree) in $\\tilde{O}(n^2)$ time in weighted
  graphs and even almost-linear time in unweighted graphs. For weighted gra
 phs\, this is the first improvement over the 60-year-old algorithm by Gomo
 ry and Hu. Thus\, surprisingly\, computing all-pairs minimum cuts seems to
  be strictly easier than computing all-pairs shortest paths\, which is con
 jectured to require $n^{3-o(1)}$ time.\n\nI will give a tutorial on the te
 chniques behind our new result\, one of which is called "isolating cuts". 
 Then\, I will survey recent progress in fast minimum cut algorithms and di
 scuss open problems.\n\nJoint work with Amir Abboud\, Robert Krauthgamer\,
  Jason Li\, Debmalya Panigrahi\, and Ohad Trabelsi.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/40/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Vera Traub (ETH Zürich)
DTSTART:20220504T170000Z
DTEND:20220504T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/41
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/41/"
 >Recent Developments in Graph Augmentation Problems</a>\nby Vera Traub (ET
 H Zürich) as part of TCS+\n\n\nAbstract\nAugmentation problems are a fund
 amental class of network design problems. They ask about the cheapest way 
 to increase the (edge-)connectivity of a graph by adding edges among a giv
 en set of options. One of the most elementary and intensely studied augmen
 tation problems is the (Weighted) Tree Augmentation Problem. Here\, a span
 ning tree has to be augmented into a 2-edge-connected graph.\n\nClassic te
 chniques for network design yield 2-approximation algorithms for a wide cl
 ass of augmentation problems. For the Unweighted Tree Augmentation Problem
 \, better-than-2 approximations have been known for more than 20 years. Ho
 wever\, the first better-than-2 approximations have been found only recent
 ly for
  the more general Unweighted Connectivity Augmentation Problem and Weighte
 d Tree Augmentation Problem. In this talk we will discuss these recent adv
 ances.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/41/
END:VEVENT
BEGIN:VEVENT
SUMMARY:Inbal Talgam-Cohen (Technion – Israel Institute of Technology)
DTSTART:20220601T170000Z
DTEND:20220601T180000Z
DTSTAMP:20260422T225636Z
UID:TCSPlus/42
DESCRIPTION:Title: <a href="https://researchseminars.org/talk/TCSPlus/42/"
 >Algorithmic contract theory</a>\nby Inbal Talgam-Cohen (Technion – Isra
 el Institute of Technology) as part of TCS+\n\n\nAbstract\nAlgorithms incr
 easingly interact with strategic\, self-interested players\; their design 
 must take player incentives into account or risk being "gamed" and failing
  miserably. The algorithmic game theory literature traditionally focused o
 n "mechanisms" - algorithms that incentivize players to truthfully report 
 the input. In this talk we shift focus from mechanisms to "contracts"\, wh
 ich are concerned with the algorithm's output and players' incentives to c
 arry it out. The goal of this talk is to describe where we're at in formin
 g a new algorithmic theory of contract design.\n\nI will demonstrate how a
 lgorithmic approaches – in particular the approach of beyond worst-case 
 analysis – can shed light on a classic economic puzzle regarding simple 
 contracts. Within the realm of incentives and learning\, I will discuss ho
 w classifiers induce incentives and show a formal relation to contracts.\n
 \nBased on joint works with Tal Alon\, Magdalen Dobson\, Paul Duetting\, R
 on Lavi\, Ariel Procaccia\, Tim Roughgarden\, Elisheva Shamash and Jamie T
 ucker-Foltz.\n
LOCATION:https://researchseminars.org/talk/TCSPlus/42/
END:VEVENT
END:VCALENDAR
