October 14, 2016 - October 15, 2016

Workshop on Local Algorithms

Location: Cambridge, MA, USA

Day 1 | Friday, October 14

  • Speaker: Christian Borgs

    TBA

  • Speaker: Devavrat Shah

    In this talk, we shall discuss the Latent Space Model in the context of various modern applications: recommendation systems, crowd-sourcing, and demand prediction in retail. We’ll briefly discuss the connection between the popular “collaborative filtering” heuristic and a first-order Taylor expansion in the context of this model. Time permitting, we’ll discuss desiderata of local algorithms in the context of these models. (A toy collaborative filtering estimator is sketched below.)

    Based on joint works with Christina Lee, Dogyoon Song, Jehangir Muhammad and Yehua Li.
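
    To make the collaborative filtering heuristic mentioned above concrete, here is a toy user-user estimator that predicts a missing entry as a similarity-weighted average of other users' ratings on the same item; the data, similarity measure, and weighting are illustrative assumptions, not the latent space algorithm from the talk.

        # Toy user-user collaborative filtering; missing entries are None.
        import math

        ratings = [
            [5, 4, None, 1],
            [4, 5, 2, None],
            [1, None, 5, 4],
        ]

        def similarity(u, v):
            # Cosine similarity over the items rated by both users.
            common = [(a, b) for a, b in zip(u, v) if a is not None and b is not None]
            if not common:
                return 0.0
            dot = sum(a * b for a, b in common)
            nu = math.sqrt(sum(a * a for a, _ in common))
            nv = math.sqrt(sum(b * b for _, b in common))
            return dot / (nu * nv) if nu and nv else 0.0

        def predict(user, item):
            # Similarity-weighted average of the other users' ratings on this item.
            num = den = 0.0
            for other in range(len(ratings)):
                if other == user or ratings[other][item] is None:
                    continue
                w = similarity(ratings[user], ratings[other])
                num += w * ratings[other][item]
                den += abs(w)
            return num / den if den else None

        print(predict(0, 2))   # estimate user 0's missing rating for item 2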

  • Speaker: Asu Ozdaglar

    We focus on the problem of minimizing the sum of smooth component functions (where the sum is strongly convex) and a non-smooth convex function, which arises in regularized empirical risk minimization in machine learning and in distributed constrained optimization in wireless sensor networks and smart grids. We consider solving this problem using the proximal incremental aggregated gradient (PIAG) method, which at each iteration moves along an aggregated gradient (formed by incrementally updating gradients of component functions according to a deterministic order) and takes a proximal step with respect to the non-smooth function. While the convergence properties of this method with randomized orders (in updating gradients of component functions) have been investigated, this work, to the best of our knowledge, is the first study that establishes the convergence rate properties of the PIAG method for any deterministic order. In particular, we show that the PIAG algorithm is globally convergent with a linear rate provided that the step size is sufficiently small. We explicitly identify the rate of convergence and the corresponding step size that achieves it. Our results improve upon the best known condition number dependence of the convergence rate of incremental aggregated gradient methods used for minimizing a sum of smooth functions. (A minimal sketch of the PIAG iteration appears below.)

    This is joint work with Denizcan Vanli and Mert Gurbuzbalaban.
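
    The following is a minimal sketch of the PIAG iteration described above, assuming quadratic components $f_i(x) = \frac{1}{2}(a_i^T x - b_i)^2$ and an l1 regularizer (whose proximal operator is soft-thresholding); the toy instance and step size are illustrative, not the parameters analyzed in the talk.

        import numpy as np

        def soft_threshold(v, t):
            # Proximal operator of t * ||.||_1.
            return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

        def piag(A, b, lam, eta, iters):
            m, d = A.shape
            x = np.zeros(d)
            # grads[i] = gradient of f_i at the point where it was last refreshed.
            grads = [A[i] * (A[i] @ x - b[i]) for i in range(m)]
            agg = sum(grads)                                   # aggregated gradient
            for k in range(iters):
                i = k % m                                      # deterministic (cyclic) order
                new_grad = A[i] * (A[i] @ x - b[i])
                agg += new_grad - grads[i]                     # incremental update of the aggregate
                grads[i] = new_grad
                x = soft_threshold(x - eta * agg, eta * lam)   # proximal step
            return x

        rng = np.random.default_rng(0)
        A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
        print(piag(A, b, lam=0.1, eta=0.01, iters=2000))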

  • Speaker: Dana Ron

    In this talk I will present the notion of a Partition Oracle, as defined by Hassidim, Kelner, Nguyen, and Onak. I will describe several partition oracles for various families of graphs, and show how such oracles can be applied to solve a variety of approximation problems in sublinear time.

  • Speaker: Shai Vardi

    Local Computation Algorithms (LCAs) form a computational model aimed at problem instances with huge inputs and outputs. For graph problems, the input graph is accessed using probes, where a probe specifies the ID of a vertex and receives in reply the IDs of its neighbors. Given a local query (e.g., “is a certain vertex in the vertex cover of the input graph?”), an LCA should compute the corresponding local output (e.g., “yes” or “no”) while making only a small number of probes. The requirement is that all local outputs together form a single global solution (e.g., a legal vertex cover).

    In this talk, I will discuss the possibilities and limitations of LCAs that are required to work on graphs that may have arbitrarily large degrees (possibly linear in $n$) but are limited to a number of probes that is significantly smaller than the minimal degree. I will present negative and positive results: probe complexity lower bounds for approximating a minimum vertex cover and for finding a maximal matching, and an algorithm for finding an approximate maximum matching on high-girth regular graphs that uses a constant number of probes. (A toy LCA in the probe model is sketched below.)
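
    As a toy illustration of the probe model, the sketch below answers “is vertex v in the greedy maximal independent set defined by a shared random ranking?” by probing only vertices of smaller rank; it is a standard textbook-style LCA on a hypothetical graph, not one of the algorithms from the talk.

        import random

        graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}   # hypothetical input
        rank = {v: random.random() for v in graph}             # shared random ranking
        probes = 0

        def probe(v):
            # One probe: reveal the IDs of v's neighbors.
            global probes
            probes += 1
            return graph[v]

        def in_mis(v):
            # v joins the greedy MIS iff no lower-ranked neighbor joins it.
            return all(not in_mis(u) for u in probe(v) if rank[u] < rank[v])

        answers = {v: in_mis(v) for v in graph}
        print(answers, "probes used:", probes)   # the local answers form one consistent MIS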

  • Speaker: Asaf Shapira

    About 10 years ago, I proved with N. Alon that every hereditary graph property can be tested with a constant number of queries. Unfortunately, the constants involved were enormous. A natural question is therefore whether, for natural graph properties, one can get a polynomial bound.

    Our first result in this work is a very simple sufficient combinatorial criterion which guarantees that a graph property can be tested with a polynomial number of queries. This allows us to recover (almost) all previous results in the area, as well as obtain some new ones.

    Our second main result is a very simple sufficient combinatorial criterion which guarantees that a graph property cannot be tested with a polynomial number of queries. This allows us to recover (almost) all previous results in the area, as well as obtain some new ones.

    Joint work with L. Gishboliner.

  • Speaker: Reut Levi

    Constructing a spanning tree of a graph is one of the most basic tasks in graph theory. We consider this problem in the setting of local algorithms: one wants to quickly determine whether a given edge $e$ is in a specific spanning tree, without computing the whole spanning tree, but rather by inspecting the local neighborhood of $e$. The challenge is to maintain consistency, that is, to answer queries about different edges according to the same spanning tree. Since it is known that this problem cannot be solved without essentially viewing all of the graph, we consider the relaxed version of finding a spanning subgraph with $(1+\eps)n$ edges instead of $n-1$ edges (where $n$ is the number of vertices and $\eps$ is a given approximation/sparsity parameter). It is known that this relaxed problem requires inspecting $\Omega(\sqrt{n})$ edges in general graphs (for any constant $\eps$), which motivates the study of natural restricted families of graphs. One such family is the family of graphs with an excluded minor (which in particular includes planar graphs). For this family there is an algorithm that achieves constant success probability and inspects $(d/\eps)^{\poly(h)\log(1/\eps)}$ edges (for each edge it is queried on), where $d$ is the maximum degree in the graph and $h$ is the size of the excluded minor. The distances between pairs of vertices in the spanning subgraph $G'$ are at most a factor of $\poly(d, 1/\eps, h)$ larger than in $G$.

    In this work, we show that for an input graph that is $H$-minor free for any $H$ of size $h$, this task can be performed by inspecting only $\poly(d, 1/\eps, h)$ edges in $G$. The distances between pairs of vertices in the spanning subgraph $G'$ are at most a factor of $\tilde{O}(h\log(d)/\eps)$ larger than in $G$. Furthermore, the error probability of the new algorithm is significantly improved to $\Theta(1/n)$.

    This algorithm can also be easily adapted to yield an efficient algorithm for the distributed (message passing) setting.

    Joint work with Dana Ron and Ronitt Rubinfeld.

  • Speaker: Emmanuel Abbe

    I will attempt to describe a framework in which variables need to be recovered from their local noisy interactions, and for which a single phase transition takes place between the computationally solvable and information-theoretically unsolvable regimes. The idea is based on amplifying local thresholds to global ones. Our case study will be exact recovery in the stochastic block model. We will also discuss how this breaks down for the detection problem, with the emergence of computational gaps.

  • Speaker: David Gamarnik

    The talk will discuss algorithms for finding solutions of randomly generated constraint satisfaction problems, such as the random K-SAT problem or the problem of finding a largest independent set of a graph. We establish a fundamental barrier on the power of local algorithms to solve such problems, despite some conjectures put forward in the past. In particular, we refute a conjecture regarding the power of so-called i.i.d. factors to find nearly largest independent sets in random regular graphs. Similarly, we show that a broad class of sequential local algorithms is incapable of finding satisfying assignments for a random NAE-K-SAT problem above a certain asymptotic threshold, below which even simple algorithms succeed with high probability. Our negative results exploit the fascinating geometry of feasible solutions of random constraint satisfaction problems, which was first predicted heuristically by physicists and has now been confirmed by rigorous methods. According to this picture, the solution space exhibits a clustering property whereby the feasible solutions tend to cluster according to the underlying Hamming distance. We show that success of local algorithms would imply violation of such a clustering property, thus leading to a contradiction.

    Joint work with Madhu Sudan, Microsoft Research.

  • Speaker: Madhu Sudan

    I will survey the concept of locality in coding theory, including the notions of locally decodable codes, locally testable codes, local reconstruction codes, etc. Several of these notions are accompanied by strikingly strong constructions, and a few have even turned out to be valuable (measured in $$$s).

  • Speaker: Yash Deshpande

    The hidden clique problem asks us to find a large clique, i.e., a completely connected subgraph, hidden in an otherwise random Erdos-Renyi graph. This problem was originally posed in theoretical computer science as an average-case version of the maximum clique problem. More recently, it has seen intense interest from a variety of communities as a statistical estimation problem in which computationally efficient algorithms are far from achieving the known statistically optimal thresholds. I will discuss local algorithms in the context of the hidden clique problem and its generalizations, which form the basis of the best-known algorithms for this setting.

    This is joint work with Andrea Montanari.

  • Speaker: Elchanan Mossel

    I will give a few different perspectives on local algorithms in sparse graphs coming from inference and optimization on random graphs, Bayesian economics on sparse graphs and deep learning.

  • Speaker: Zeyuan Allen-Zhu

    In this talk, I will discuss how to use optimization to obtain sublinear algorithms, as well as an interpolation between sublinear-time and linear-time algorithms using optimization insights and acceleration techniques.

  • Speaker: Lorenzo Orecchia

    Local clustering algorithms find a good approximation to a local sparse cut around a seed node while running in nearly-linear time in the size of the output. The main idea behind many of these algorithms is spectral in nature: the algorithms simulate a random walk starting at the seed node. The target local cut limits the mixing, allowing the walk both to approximate the conductance of the cut and to maintain the locality of the algorithm. In this talk, I will consider an optimization interpretation of these algorithms. Besides giving us a more principled framework in which to think about random walks for local clustering, the optimization view will also allow us to discuss non-spectral analogues of these algorithms and introduce local maximum flow computations. I will show how this novel local primitive can be combined with random walks to yield improved guarantees for local clustering. Finally, I will discuss a number of open questions about localizing SDP-based global approximation algorithms for sparsest cut using local random walks and local maximum flows. (A sketch of a standard random-walk-based local clustering primitive follows below.)
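
    As a concrete example of the spectral approach described above, here is the approximate personalized PageRank “push” routine of Andersen, Chung, and Lang, followed by the usual degree-normalized sweep; the graph and parameters are toy values, and this is the standard random-walk primitive rather than the flow-augmented algorithm from the talk.

        from collections import defaultdict

        graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
                 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}

        def approximate_ppr(seed, alpha=0.15, eps=1e-4):
            p, r = defaultdict(float), defaultdict(float)
            r[seed] = 1.0
            queue = [seed]
            while queue:
                u = queue.pop()
                if r[u] < eps * len(graph[u]):
                    continue
                # Push: keep an alpha fraction at u, spread half of the rest lazily.
                p[u] += alpha * r[u]
                share = (1 - alpha) * r[u] / (2 * len(graph[u]))
                r[u] = (1 - alpha) * r[u] / 2
                queue.append(u)
                for v in graph[u]:
                    r[v] += share
                    queue.append(v)
            return p

        ppr = approximate_ppr(seed=0)
        # Sweep: prefixes of this degree-normalized order are candidate local cuts.
        print(sorted(ppr, key=lambda v: ppr[v] / len(graph[v]), reverse=True))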

  • Speaker: Sofya Raskhodnikova

    Property testers form an important class of sublinear algorithms. In the standard property testing model, an algorithm accesses the input function via an oracle that returns function values at all queried domain points. In many realistic situations, the oracle may be unable to reveal the function values at some domain points due to privacy concerns, or when some of the values get erased by mistake or by an adversary.

    We initiate a study of property testers that are resilient to the presence of adversarially erased function values. An alpha-erasure-resilient epsilon-tester for a property P is given parameters alpha, epsilon in (0,1), along with oracle access to a function f such that at most an alpha fraction of the function values have been erased. The tester does not know whether a point is erased unless it queries that point. The tester has to accept with high probability if there is a way to assign values to the erased points such that the resulting function satisfies P. It has to reject with high probability if, for all assignments of values to the erased points, the resulting function has to be changed in at least an epsilon-fraction of the nonerased domain points to satisfy P.

    We design erasure-resilient property testers for a large class of properties. For some properties, it is possible to obtain erasure-resilient testers by using standard testers as a black box. But there are more challenging properties for which all known testers rely on querying a specific point; if this point is erased, all these testers break. We give efficient erasure-resilient testers for several classes of such properties, including monotonicity, the Lipschitz property, and convexity. Finally, we describe a property that can be epsilon-tested with O(1/epsilon) queries in the standard model, whereas testing it in the erasure-resilient model requires a number of queries polynomial in the input size. (A small illustration of the erasure-resilient access model appears below.)

    Joint work with Kashyap Dixit, Abhradeep Thakurta, and Nithin Varma.
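
    The sketch below only illustrates the access model defined above: the oracle returns None for erased points, and the tester discovers an erasure only when it queries that point. The pair-sampling check is for illustration and is not one of the query-optimal testers from the talk; the input and erasure set are made up.

        import random

        def make_oracle(values, erased):
            return lambda i: None if i in erased else values[i]

        def sample_monotonicity_check(oracle, n, trials=200):
            # Reject only on evidence among non-erased points: i < j with f(i) > f(j).
            for _ in range(trials):
                i, j = sorted(random.sample(range(n), 2))
                fi, fj = oracle(i), oracle(j)
                if fi is not None and fj is not None and fi > fj:
                    return "reject"
            return "accept"

        f = [0, 1, 1, 9, 2, 5, 6, 7, 8, 9]       # hypothetical input; f[3] violates monotonicity
        oracle = make_oracle(f, erased={3})      # ...but that point is erased
        print(sample_monotonicity_check(oracle, len(f)))   # accept: the erased value can be fixed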

Day 2 | Saturday, October 15

  • Speaker: Mohsen Ghaffari

    TBA

  • Speaker: Moti Medina

    This work bridges the gap between distributed and centralised models of computing in the context of sublinear-time graph algorithms.  A priori, typical centralised models of computing  (e.g., parallel decision trees or centralised local algorithms) seem to be much more powerful than distributed message-passing algorithms: centralised algorithms can directly probe any part of the input, while in distributed algorithms nodes can only communicate with their immediate neighbours. We show that for a large class of graph problems, this extra freedom does not help centralised algorithms at all: efficient stateless deterministic centralised local algorithms can be simulated with efficient distributed message-passing algorithms. In particular, this enables us to transfer existing lower bound results from distributed algorithms to centralised local algorithms.

    Joint work with Mika Goos, Juho Hirvonen, Reut Levi, and Jukka Suomela.

  • Speaker: Pierre Fraigniaud

    Locally finding a solution to symmetry-breaking tasks such as vertex-coloring, edge-coloring, maximal matching, maximal independent set, etc., is a long-standing challenge in distributed network computing. More recently, it has also become a challenge in the framework of centralized local computation. We introduce conflict coloring as a general symmetry-breaking task that includes all the aforementioned tasks as specific instantiations; in particular, conflict coloring includes all locally checkable labeling tasks from [Naor & Stockmeyer, STOC 1993]. Conflict coloring is characterized by two parameters $l$ and $d$, where the former measures the amount of freedom given to the nodes for selecting their colors, and the latter measures the number of constraints to which the colors of adjacent nodes are subject. We show that, in the standard LOCAL model for distributed network computing, if $l/d > \Delta$, then conflict coloring can be solved in $\tilde{O}(\sqrt{\Delta}) + \log^* n$ rounds in $n$-node graphs with maximum degree $\Delta$, where $\tilde{O}$ ignores polylog factors in $\Delta$. The dependency in $n$ is optimal, as a consequence of the $\Omega(\log^* n)$ lower bound by [Linial, SIAM J. Comp. 1992] for $(\Delta+1)$-coloring. An important special case of our result is a significant improvement over the best known algorithm for distributed $(\Delta+1)$-coloring due to [Barenboim, PODC 2015], which required $\tilde{O}(\Delta^{3/4}) + \log^* n$ rounds. Improvements for other variants of coloring, including $(\Delta+1)$-list-coloring, $(2\Delta-1)$-edge-coloring, coloring with forbidden color distances, etc., also follow from our general result on conflict coloring. Likewise, in the framework of centralized local computation algorithms (LCAs), our general result yields an LCA which requires a smaller number of probes than the previously best known algorithm for vertex-coloring, and works for a wide range of coloring problems.

    Joint work with Marc Heinrich and Adrian Kosowski.

  • Speaker: Jukka Suomela

    In this talk I will show that there are settings in which the design of local distributed algorithms can be automated. In essence, we can synthesize asymptotically optimal local algorithms for solving nontrivial graph problems.

    I will focus on LCL problems (locally checkable labellings). These are problems in which valid solutions can be detected by checking the constant-radius neighborhoods of all nodes. Examples of such problems include maximal independent sets, maximal matchings, vertex coloring, and edge coloring. (A toy local checker of this kind is sketched below.)

    We will study LCL problems on 2-dimensional grids (more precisely, toroidal n x n grids with consistent orientations and unique identifiers). In this setting, each LCL problem has a complexity of precisely O(1), Theta(log^* n), or Theta(n) rounds. There are no problems with an “intermediate complexity” of e.g. Theta(log n); all problems are either local or global.

    The bad news is that it is undecidable to tell if a given LCL problem is local or global. However, we show that this is the only obstacle for algorithm synthesis: if we are told (or make a lucky guess) that the problem is local, then we can use a computer program to find an O(log^* n)-round algorithm for solving it! Furthermore, this is not merely a theoretical construction: we have successfully used computers to design local algorithms for numerous LCL problems, many of which do not have any human-designed algorithms. Perhaps the most interesting case is a computer-designed local algorithm for 4-coloring, complemented with a human-designed proof that shows that 3-coloring cannot be solved locally.

    This is joint work with Sebastian Brandt, Orr Fischer, Juho Hirvonen, Janne H. Korhonen, Tuomo Lempiäinen, Christopher Purcell, Joel Rybicki, Patric Östergård, and Przemysław Uznański.
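
    A toy checker for what “locally checkable” means in the abstract above: a global labeling is valid iff every node accepts after inspecting only its constant-radius neighborhood. The LCL here is proper vertex coloring (a radius-1 check); the graph and labeling are made-up examples, not from the talk.

        graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
        coloring = {0: "red", 1: "green", 2: "blue", 3: "red"}

        def node_accepts(v):
            # Radius-1 check: v's color must differ from all of its neighbors' colors.
            return all(coloring[v] != coloring[u] for u in graph[v])

        print(all(node_accepts(v) for v in graph))   # True: the labeling is valid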

  • Speaker: Artur Czumaj

    I’ll survey some of the recent results in graph property testing that rely on the random walk approach, and then I’ll briefly discuss some of the challenges of this approach.

  • Speaker: Christian Sohler

    The spectrum of a graph $G=(V,E)$ with adjacency matrix $A$ consists of the eigenvalues of the normalized Laplacian $L = I - D^{-1/2} A D^{-1/2}$. We study the problem of approximating the spectrum $\lambda = (\lambda_1,\dots,\lambda_{|V|})$, $0 \le \lambda_1 \le \dots \le \lambda_{|V|} \le 2$, of $G$ when the algorithm can query random vertices of $G$ and, for a given vertex, a random neighbor in $O(1)$ time. We present a sublinear time algorithm that computes a succinct representation of an approximation $\widetilde \lambda = (\widetilde \lambda_1,\dots,\widetilde \lambda_{|V|})$, $0 \le \widetilde\lambda_1 \le \dots \le \widetilde \lambda_{|V|} \le 2$, such that $\|\widetilde \lambda - \lambda\|_1 \le \epsilon n$. Our algorithm has query complexity and running time $\exp(O(1/\epsilon))$, independent of $|V|$.

    We then study the implications of our algorithm for property testing in the bounded-degree graph model.

    Joint work with David Cohen-Steiner and Gregory Valiant.

    • Maryam Aliakbarpour
    • Alkida Balliu: Local Distributed Verification
    • Clement Canonne
    • Laurent Feuilloley
    • Meiram Murzabulatov: Tolerant Testers of Image Properties
    • Ali Vakilian
    • Nithin Varma: Erasure-resilient property testing
    • Venkata Gandikota: Local Testing for Membership in Lattices

  • Speaker: Amin Karbasi

    Can we summarize multi-category data based on user preferences in a scalable manner? Many utility functions used for data summarization satisfy submodularity, a natural diminishing returns property. We cast personalized data summarization as an instance of a general submodular maximization problem subject to multiple constraints. We develop the first practical and FAst coNsTrained submOdular Maximization algorithm, FANTOM, with strong theoretical guarantees. FANTOM maximizes a submodular function (not necessarily monotone) subject to the intersection of a p-system and l knapsack constraints. It achieves a (p+1)(2p+2l+1)/p approximation guarantee with only O(nrp log(n)) query complexity (n and r indicate the size of the ground set and the size of the largest feasible solution, respectively). We then show how we can use FANTOM for personalized data summarization. In particular, a p-system can model different aspects of data, such as categories or time stamps, from which the users choose. In addition, knapsacks encode users’ constraints, including budget or time. In our set of experiments, we consider several concrete applications: movie recommendation over 11K movies, personalized image summarization with 10K images, and revenue maximization on the YouTube social network with 5000 communities. We observe that FANTOM consistently provides the highest utility against all the baselines. (A greedy baseline for the underlying submodular objective is sketched below.)
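
    To make the diminishing-returns objective concrete, here is a plain greedy baseline for monotone submodular maximization under a cardinality constraint; FANTOM itself handles non-monotone objectives under p-system and knapsack constraints and is not reproduced here. The coverage utility and ground set below are hypothetical.

        ground_set = {
            "movie_a": {"action", "comedy"},
            "movie_b": {"drama"},
            "movie_c": {"action", "drama", "romance"},
            "movie_d": {"comedy"},
        }

        def coverage(selected):
            # Set coverage is a classic monotone submodular utility.
            return len(set().union(*(ground_set[m] for m in selected))) if selected else 0

        def greedy(k):
            chosen = []
            for _ in range(k):
                gains = {m: coverage(chosen + [m]) - coverage(chosen)
                         for m in ground_set if m not in chosen}
                best = max(gains, key=gains.get)
                if gains[best] <= 0:
                    break
                chosen.append(best)
            return chosen

        print(greedy(k=2))   # picks 'movie_c' first, then an item covering a new genre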

  • Speaker: Adi Rosen

    We consider the problem of giving a user access to a graph which is sampled according to a certain probability distribution, and are interested in the time, space, and randomness complexities of such samplers. We are specifically interested in the case when the distribution is defined using a sequentially-evolving graph model.

    In the standard approach, the whole graph is sampled according to the probability distribution, then stored, and later the user can issue certain queries, e.g., asking for the adjacency list of a given node. This, however, may require prohibitive amounts of time, space, and random bits, especially when the graph is large and only a few queries are actually issued. Instead, we propose to generate the graph on the fly, in response to queries, while respecting the probability distribution defined by the sequential model.

    We focus on the Barabasi-Albert Preferential Attachment model (BA-graphs), and give on-the-fly generation algorithms with the following properties. Let n denote the number of nodes in the underlying sampled graph; with probability 1-1/poly(n), each query is answered in polylog(n) time, and for each query both the increase in space and the number of random bits used are polylog(n). The answers that the user gets to its queries are (with probability 1) distributed according to the probability distribution which is the projection, onto the queried nodes, of the original distribution on graphs. (The classical sequential BA process that must be emulated is sketched below.)

    Joint work with Guy Even, Reut Levi, and Moti Medina.
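
    For reference, the sketch below is the classical sequential Barabasi-Albert process (with one attachment per new node) whose distribution the on-the-fly generator must emulate; the talk's algorithm instead answers adjacency queries lazily, without materializing the whole graph.

        import random

        def barabasi_albert(n):
            edges = [(0, 1)]                  # start from a single edge
            endpoints = [0, 1]                # each vertex appears once per unit of degree
            for v in range(2, n):
                u = random.choice(endpoints)  # degree-proportional (preferential) choice
                edges.append((v, u))
                endpoints.extend([v, u])
            return edges

        print(barabasi_albert(8))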

  • Speaker: Greg Valiant

    How can the availability of a very small amount of high-quality or trusted data enable the efficient extraction of the information contained in a large but less-trusted dataset? We investigate this question via the introduction of a new model for studying robust estimation, learning, and optimization, which we term the “semi-verified” model. In this model, we assume a large dataset of which a subset consists of points drawn from the distribution of interest, and make no assumptions on the remaining points: they could be well-behaved, extremely biased, or even adversarially generated as a function of the remaining points. Additionally, we assume a much smaller (typically constant-sized) set of “verified” or trusted data points that have been drawn from the distribution of interest. We present several results in this model, for a large class of mathematically clean and practically relevant robust estimation and learning tasks. These results include both local and non-local algorithms that achieve information-theoretic limits for two different parameter regimes.

    The talk is based on joint works with Jacob Steinhardt and Moses Charikar, and with Michela Meister.

  • Speaker: Huy L. Nguyen

    In the turnstile L_p heavy hitters problem with parameter eps, one must maintain a high-dimensional vector x in R^n subject to updates of the form update(i, Delta), which increases x_i by Delta, where i is in [n] and Delta is an integer. Upon receiving a query, one must report every “heavy hitter” i in [n] with |x_i| >= eps*|x|_p as part of a list L, which is a subset of [n] of size O(1/\eps^p), i.e. proportional to the maximum possible number of heavy hitters. In fact we solve the stronger tail version, in which L should include every i such that |x_i| >= \eps |x_{proj{1/eps^p}}|_p and |x_i| > 0, where x_{proj{k}} denotes the vector obtained by zeroing out the largest k entries of x in magnitude.

    We provide a new algorithm, which in the most general turnstile model achieves optimal O(\eps^{-p}\log n) space, O(\log n) update time, and fast O(\eps^{-p}\poly(\log n)) query time, providing correctness whp. Our main innovation is an efficient reduction from the heavy hitters problem to a clustering problem in which each heavy hitter is encoded as some form of noisy spectral cluster in a much bigger graph, and the goal is to identify every cluster. Since every heavy hitter must be found, correctness requires that every cluster be found. We thus need a “cluster-preserving clustering” algorithm, which partitions the graph into clusters with the promise of not destroying any original cluster. To do this, we first apply standard spectral graph partitioning, and then we use some novel combinatorial techniques to modify the cuts obtained so as to make sure that the original clusters are sufficiently preserved. Our cluster-preserving clustering may be of broader interest, well beyond heavy hitters. (A textbook sketch-based baseline is included below for contrast.)

    Joint work with Kasper Green Larsen, Jelani Nelson, and Mikkel Thorup.
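
    For contrast with the result above, here is a textbook CountSketch baseline for turnstile point queries: it supports update(i, delta) and estimates individual coordinates, but finding the heavy hitters then requires a brute-force scan over all n coordinates, which is exactly the query cost the talk's algorithm avoids. Parameters and the toy stream are illustrative.

        import random
        from statistics import median

        class CountSketch:
            def __init__(self, rows=5, width=64, seed=0):
                rng = random.Random(seed)
                self.salts = [(rng.getrandbits(32), rng.getrandbits(32)) for _ in range(rows)]
                self.table = [[0] * width for _ in range(rows)]
                self.width = width

            def _bucket_sign(self, r, i):
                a, b = self.salts[r]
                return hash((a, i)) % self.width, 1 if hash((b, i)) % 2 else -1

            def update(self, i, delta):
                for r in range(len(self.table)):
                    bucket, sign = self._bucket_sign(r, i)
                    self.table[r][bucket] += sign * delta

            def estimate(self, i):
                # Median over rows is robust to a few colliding heavy coordinates.
                return median(sign * self.table[r][bucket]
                              for r in range(len(self.table))
                              for bucket, sign in [self._bucket_sign(r, i)])

        n = 1000
        cs = CountSketch()
        for i in range(n):
            cs.update(i, random.randint(-2, 2))   # background noise
        cs.update(7, 500)                         # one planted heavy hitter
        print([i for i in range(n) if abs(cs.estimate(i)) > 100])   # likely just [7]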

  • Speaker: Jason Morton

    Traditionally categorical data analysis (e.g. generalized linear models) works with simple, flat datasets akin to a single table in a database with no missing data. What if we have a distributed database with many partial local tables, and data is streaming in too fast for the local tables ever to be globally consistent? I’ll discuss how some hints from physics (e.g. quantum nonlocality) could help define sensible models in this scenario.

  • Speaker: Elena Grigorescu

    In this work we initiate the study of Boolean k-monotone functions in the property testing model. A k-monotone function defined over a finite poset domain D alternates between the values 0 and 1 at most k times on any ascending chain in D. Hence, k-monotone functions are natural generalizations of the classical monotone functions, which are the 1-monotone functions. This work is motivated by the recent interest in k-monotone functions in the context of circuit complexity and learning theory, and by the central role that monotonicity testing plays in the context of property testing. (A small example of counting such alternations appears below.)

    I will present our results on testing k-monotonicity on the hypercube and hypergrid domains, outline some connections with other problems from the literature, and discuss a few important open questions.

    Joint work with Clement Canonne, Siyao Guo, Akash Kumar, Karl Wimmer.
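
    A small helper to make the definition above concrete: it counts how many times a Boolean function changes value along one given ascending chain; a k-monotone function never changes value more than k times on any such chain. The function and chain are toy examples, not from the talk.

        def alternations(f, chain):
            values = [f(x) for x in chain]
            return sum(1 for a, b in zip(values, values[1:]) if a != b)

        # f on the hypercube {0,1}^3, evaluated along one ascending chain.
        f = lambda x: int(sum(x) in (1, 3))
        chain = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]
        print(alternations(f, chain))   # 3 changes on this chain, so f is not 2-monotone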

  • Speaker: Kevin Matulef

    We present HyperHeadTail, a new streaming algorithm for estimating the degree distribution of a graph from a stream of edges using very little space. Our algorithm builds upon the HeadTail algorithm of Simpson, Comandur, and McGregor (2015), but extends it to handle repeat edges and sliding time windows, making it suitable for application to real-world streams such as those generated by network traffic or other communications networks. We implement HyperHeadTail, and demonstrate its utility on both synthetic and real-world data sets, showing that it effectively captures the degree distribution, with space requirements equal to only a small fraction of the graph.

  • Speaker: Jared Saia

    How can we compute over a noisy channel? Say Alice and Bob want to run a distributed algorithm, but can only communicate over a channel where some number of bits may be flipped.  What is the smallest number of bits they must send in order to obtain the correct output, with high probability? This general problem is called Interactive Communication (IC).

    Several results in IC take a protocol requiring L bits of noise-free communication, and make it robust over a channel with adversarial noise. In a recent result, Haeupler described an algorithm that sends a number of bits that is conjectured to be near optimal. However, his algorithm critically requires a priori knowledge of the number of bits that will be flipped by the adversary.

    In this talk, we describe an algorithm requiring no such knowledge. To achieve this result, our algorithm must make local decisions about communication redundancy, without access to global information about past and future noise rates. If an adversary flips T bits, our algorithm sends L + soft-O(T + √(LT)) bits in expectation and succeeds with high probability in L. Assuming a conjectured lower bound by Haeupler, our result is optimal up to logarithmic factors.

  • Speaker: Robert Tarjan

    The disjoint set union problem is a classical problem in data structures with a simple and efficient sequential solution that has a notoriously complicated analysis. One application is to find strongly connected components in huge, implicitly defined graphs arising in model checking. In this application, the use of multiprocessors has the potential to produce significant speedups. We explore this possibility. We devise and analyze concurrent versions of standard sequential algorithms that use single and double compare-and-swap primitives for synchronization, making them wait-free. We obtain work bounds that grow logarithmically with the number of processors, suggesting the possibility of significant speedup in practice. This is joint work with Siddhartha Jayanti, an undergraduate at Princeton. (The sequential structure being parallelized is sketched below.)
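
    For reference, here is the standard sequential disjoint set union structure with union by rank and path compression; the talk's contribution is concurrent variants in which these pointer updates are performed with single and double compare-and-swap operations.

        class DisjointSets:
            def __init__(self, n):
                self.parent = list(range(n))
                self.rank = [0] * n

            def find(self, x):
                root = x
                while self.parent[root] != root:
                    root = self.parent[root]
                while self.parent[x] != root:          # path compression
                    self.parent[x], x = root, self.parent[x]
                return root

            def union(self, x, y):
                rx, ry = self.find(x), self.find(y)
                if rx == ry:
                    return False
                if self.rank[rx] < self.rank[ry]:      # union by rank
                    rx, ry = ry, rx
                self.parent[ry] = rx                   # attach the shallower tree
                if self.rank[rx] == self.rank[ry]:
                    self.rank[rx] += 1
                return True

        dsu = DisjointSets(6)
        dsu.union(0, 1); dsu.union(1, 2); dsu.union(4, 5)
        print(dsu.find(2) == dsu.find(0), dsu.find(3) == dsu.find(4))   # True False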

  • Speaker: Krzysztof Onak

    The last decade has seen the unprecedented success of massively parallel and distributed systems such as MapReduce, Spark, and Hadoop. Their computation consists of a small number of rounds in which machines first process local data and then exchange information. After introducing the computational model in more detail, I will discuss how its complexity relates to the notion of locality for graph problems. On the one hand, some standard local algorithms can be simulated efficiently, taking additional advantage of the global data exchange. On the other hand, graph exploration seems to be hindered similarly to the streaming model. Many interesting questions remain open in this relatively young area of research.