Rutgers/DIMACS Theory of Computing Seminar
Fall 2014

 

Place:          CoRE 301 (NOTE THE ROOM FOR THIS SEMESTER) 
Time:          Wednesdays 11:00 am -- 12:00 Noon
Organizers: Swastik Kopparty and Shubhangi Saraf


Directions

For those arriving by train to New Brunswick station, the best way to get to the seminar room is by Rutgers bus. The directions are available by clicking here.
For those driving in, the best strategy is to pre-arrange to pick up a parking tag from one of the organizers upon arrival, and to park in Lot 64. For directions to Lot 64, click on this link.

Links

·      The Rutgers discrete math seminar.

·      Other Math and CS seminars at Rutgers.

·      Theory of Computing seminars from previous semesters.

 

Schedule

September 10

Anand Sarwate

Rutgers

Active Learning with Asymmetric Costs

 

In some learning problems, unlabeled data is freely available, but labeling is costly since it must be done by an expert. Active learning approaches have been shown in many cases to be quite effective in reducing the labeling cost. Active learning algorithms sequentially select the points to be labeled so as to explore the hypothesis space more efficiently. In some applications the cost of labeling may depend on the label value; a positive label may result in a reward, for example. We study an extreme version of this problem, which we call auditing: for binary labels, querying a point with a positive label costs nothing, while querying a point with a negative label costs one unit. In this talk I will describe this new problem, show how the new cost structure leads to algorithms and asymptotic costs that differ from those of standard active learning for specific hypothesis classes, and present a competitive approach for the general case. I will close with some ongoing work on applying active learning to problems in linear classification and subspace learning.
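As a toy illustration of this cost model (a sketch of my own with a hypothetical audit_threshold helper, not an algorithm from the talk), consider auditing one-dimensional threshold classifiers: querying points in decreasing order makes every positive answer free, and the first negative answer both stops the search and pins down the threshold, for a total cost of one.

```python
# Toy sketch: auditing 1-D thresholds h_t(x) = +1 iff x >= t.
# Positive labels are free; each negative label costs one unit.

def audit_threshold(points, label):
    """points: unlabeled sample; label(x): the +1/-1 oracle."""
    cost = 0
    last_positive = None
    for x in sorted(points, reverse=True):   # most-likely-positive first
        if label(x) == 1:                    # free query
            last_positive = x
        else:                                # pay one unit, then stop
            cost += 1
            return (x, last_positive), cost  # threshold lies in (x, last_positive]
    return (None, last_positive), cost       # every point was positive

interval, cost = audit_threshold([0.1, 0.7, 0.4, 0.9],
                                 lambda x: 1 if x >= 0.5 else -1)
print(interval, cost)  # ((0.4, 0.7), 1): threshold in (0.4, 0.7], total cost 1
```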

 

Much of this work is with Sivan Sabato (Ben Gurion U.) and Nati Srebro (TTI-Chicago/Technion).

September 17

Omri Weinstein

Princeton University

Approximating the best Nash Equilibrium in n^{o(log n)}-time breaks the Exponential Time Hypothesis

 

The celebrated PPAD-hardness result for finding an exact Nash equilibrium in a two-player game initiated a quest for finding *approximate* Nash equilibria efficiently; this remains one of the major open questions in algorithmic game theory.

 

We study the computational complexity of finding an \eps-approximate Nash equilibrium with good social welfare. Hazan and Krauthgamer, and subsequent improvements, showed that finding an \eps-approximate Nash equilibrium with good social welfare in a two-player game, as well as in many variants of this problem, is at least as hard as finding a planted clique of size O(log n) in the random graph G(n,1/2).

 

We show that any polynomial-time algorithm that finds an \eps-approximate Nash equilibrium with good social welfare would refute the (worst-case) Exponential Time Hypothesis of Impagliazzo and Paturi. Specifically, it would imply a 2^{O(\sqrt{n})}-time algorithm for SAT.
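For orientation, recall the standard statement of the hypothesis:

\[
\mathrm{ETH}: \quad \text{3-SAT on } n \text{ variables admits no } 2^{o(n)}\text{-time algorithm.}
\]

Since 2^{O(\sqrt{n})} = 2^{o(n)}, a 2^{O(\sqrt{n})}-time algorithm for SAT would in particular solve 3-SAT too quickly, contradicting ETH.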

 

Our lower bound matches the quasi-polynomial-time algorithm of Lipton, Markakis and Mehta for solving the problem. Our key tool is a reduction from the PCP machinery to finding a Nash equilibrium via free games, the framework introduced in the recent work of Aaronson, Impagliazzo and Moshkovitz. Techniques developed in the process may be useful for replacing planted-clique hardness with ETH-hardness in other applications.

 

Joint work with Mark Braverman and Young Kun Ko.

September 24

 

NO SEMINAR

(mass absences)

October 1

Sanjeev Arora

Princeton University

Adventures in Linear Algebra++ and unsupervised learning

 

Many problems in unsupervised learning are NP-complete. To design efficient algorithms with provable guarantees, one must move away from worst-case complexity and make specific assumptions about the input instances. The talk will survey some successes we and others have had in designing such provable algorithms for nonnegative matrix factorization, topic modeling, deep learning, hidden Markov models, etc. The techniques used in these works can be seen as new variants of classical linear algebra: what we think of as Linear Algebra++.
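As one concrete example of such an instance assumption, here is a minimal sketch (my illustration, with a hypothetical find_anchors helper and toy data; it is not the talk's algorithm) of nonnegative matrix factorization under a separability assumption: every row of the data matrix is a convex combination of a few "anchor" rows that themselves appear in the matrix, so the anchors are exactly the vertices of the convex hull of the rows and can be identified with a small linear program per row.

```python
# Sketch: separable NMF via anchor-row identification (illustrative only).
import numpy as np
from scipy.optimize import linprog

def find_anchors(M, tol=1e-8):
    """Return indices of rows of M that are NOT convex combinations of
    the remaining rows; under separability these are the anchor rows."""
    n = M.shape[0]
    anchors = []
    for i in range(n):
        others = np.delete(M, i, axis=0)
        # Feasibility LP: is there x >= 0 with sum(x) = 1 and others^T x = M[i]?
        A_eq = np.vstack([others.T, np.ones(n - 1)])
        b_eq = np.append(M[i], 1.0)
        res = linprog(np.zeros(n - 1), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
        if not res.success or np.linalg.norm(A_eq @ res.x - b_eq) > tol:
            anchors.append(i)  # row i is a vertex of the hull: an anchor
    return anchors

# Tiny example: rows 0 and 1 are anchors; row 2 is their midpoint.
M = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
print(find_anchors(M))  # -> [0, 1]
```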

October 8

Amir Shpilka

Technion

Reed-Muller codes with respect to random errors and erasures

 

In TCS we usually study error-correcting codes with respect to the Hamming metric, i.e., we study their behaviour with respect to worst-case errors. However, in coding theory a more common model is that of random errors, where Shannon's results show a much better tradeoff between rate and decoding radius.

 

We consider the behaviour of Reed-Muller codes in the Shannon model of random errors. In particular, we show that RM codes of either low or high degree (n^{1/2} or n - n^{1/2}, respectively) can, with high probability, decode from a 1-r fraction of random erasures (where r is the rate). In other words, for this range of parameters RM codes achieve capacity for the binary erasure channel (BEC). This result matches experimental observations that RM codes can achieve capacity for the BEC, similarly to polar codes. We also show that RM codes can handle many more random errors than the minimum distance would suggest, i.e., roughly n^{d/2} errors for codes of degree n-d (where the minimum distance is only 2^d).
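For context, the relevant capacity fact: the binary erasure channel with erasure probability p has capacity

\[
\mathrm{cap}\big(\mathrm{BEC}(p)\big) = 1 - p,
\]

so a code of rate r achieves capacity for the BEC exactly when it decodes, with high probability, from a random erasure fraction approaching 1 - r.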

 

We show that the questions regarding the behaviour of Reed-Muller codes with respect to random errors are tightly connected to the following question: given a random set of vectors in {0,1}^n, what is the probability that their d'th tensor products are linearly independent? We obtain our results by answering this question for a certain range of parameters.

 

Based on joint work with Emmanuel Abbe and Avi Wigderson.

October 15

Gil Cohen

Weizmann Institute

Bi-Lipschitz Bijection between the Boolean Cube and the Hamming Ball

 

We construct a bi-Lipschitz bijection from the Boolean cube to the Hamming ball of equal volume. More precisely, we show that for all even n, there exists an explicit bijection f from the n-dimensional Boolean cube to the Hamming ball of equal volume embedded in the (n+1)-dimensional Boolean cube, such that for all x and y it holds that dist(x,y) / 5 <= dist(f(x),f(y)) <= 4 dist(x,y), where dist(,) denotes the Hamming distance.

 

This result gives a strong negative answer to an open problem of Lovett and Viola [CC 2012], who raised the question in the context of sampling distributions in low-level complexity classes. The conceptual implication is that the problem of proving lower bounds in the context of sampling distributions requires ideas beyond the sensitivity-based structural results of Boppana [IPL 97].

 

No prior knowledge is assumed.

Joint work with Itai Benjamini and Igor Shinkar.

October 22

Bhaskar DasGupta

UIC

Complexity of Measuring Global Stability of Banking Networks

 

Threats to the stability of a financial system may severely affect the functioning of the entire economy, and thus considerable emphasis is placed on analyzing the causes and effects of such threats. The financial crises of the current and past decade have shown that one important cause of instability in global markets is financial contagion: the process in which the failures of a few individual banks propagate through the "web of banking dependencies" to affect a significant part of the entire financial system. Motivated by such observations, we consider the problem of defining and evaluating the stability of both homogeneous and heterogeneous banking networks against the propagation of synchronous idiosyncratic shocks given to a subset of banks. We formalize the homogeneous banking network model of Nier et al. and its corresponding heterogeneous version, formalize the synchronous shock propagation procedures, define two appropriate stability measures, and investigate the computational complexity of evaluating these measures for various network topologies and parameters of interest. Time permitting, we will also discuss an empirical evaluation of our global stability measure for several classes of banking networks, and some interesting implications of our evaluations.

October 29

Jop Briët

NYU

Locally decodable codes and Banach-space geometry

(Based on joint work with Assaf Naor and Oded Regev)

 

Locally decodable codes (LDCs) are error correcting codes that allow any symbol of an encoded message to be retrieved by querying only a small number of randomly selected codeword symbols, even if the codeword is partially corrupted. A central open problem is to determine the smallest possible codeword length of such codes when the query complexity is a fixed constant. This talk is about a link between this problem and basic properties of certain Banach spaces, a link that has implications in two directions. In one direction it gives a short alternative proof of the known exponential lower bound on the length of binary 2-query LDCs due to Kerenidis and de Wolf ('04). In the other direction it shows that the known existence of sub-exponential-length 3-query LDCs yields the answer to an open question about the geometry of tensor products of l_p spaces.
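For reference, the standard formalization: a code C : {0,1}^k -> {0,1}^n is a (q, \delta, \varepsilon)-locally decodable code if there is a randomized decoder that reads at most q coordinates of the received word and satisfies

\[
\Pr\big[\mathrm{Dec}^{y}(i) = x_i\big] \ge \tfrac{1}{2} + \varepsilon \quad \text{for every } i \text{ and every } y \text{ with } \Delta\big(y, C(x)\big) \le \delta n,
\]

where \Delta denotes Hamming distance. The open problem above asks for the smallest n, as a function of k, when q is a fixed constant.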

November 5

Alan Selman

University at Buffalo

Disjoint NP-pairs and Propositional Proof Systems

 

This talk surveys results on disjoint NP-pairs, propositional proof systems, function classes, and promise classes -- including results that demonstrate close connections that bind these topics together. We illustrate important links between the questions of whether these classes have complete objects and whether optimal proof systems exist.

November 12

Noga Ron-Zewi

IAS

Limitations of Public Key Encryption from Noisy Codewords

 

Several well-studied public key encryption schemes, including those of Alekhnovich (FOCS 2003), Regev (STOC 2005), and Gentry, Peikert and Vaikuntanathan (STOC 2008), rely on the conjectured intractability of inverting a noisy linear encoding. These schemes are limited in that they either require the underlying field to grow polynomially with the security parameter, or they can work over the binary field but with a low-entropy noise distribution that gives rise to sub-exponential attacks. Motivated by the goal of achieving efficient public key cryptography, we study the possibility of obtaining improved security over the binary field by using different noise distributions.

 

For this, we first propose a unified framework that captures all three encryption schemes mentioned above and allows for arbitrary choices of the underlying field and noise distributions.

 

We then show, using a simple argument that relies on agnostic learning of parities (Kalai, Mansour and Verbin, STOC 2008), that any instance of the unified framework over the binary field can be attacked in time exp(n/log(n)), where n is the ciphertext size. Combining this attack with Regev's security proof (STOC 2005), we obtain an algorithm that solves the learning parity with noise (LPN) problem in time exp(n/loglog(n)) using only n^{1+\epsilon} samples, reproducing the result of Lyubashevsky (RANDOM 2005) and Kopparty and Saraf (STOC 2010) in a conceptually different way.
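For readers unfamiliar with LPN, here is a minimal sketch of the sample oracle that defines it (the standard definition; the code is my illustration, not the attack from the talk):

```python
import random

def lpn_oracle(s, eta):
    """One LPN sample: (a, <a,s> + e mod 2), with a uniform in {0,1}^n
    and e a Bernoulli(eta) noise bit. The attacker must recover the
    secret s from many such samples."""
    a = [random.randrange(2) for _ in range(len(s))]
    noiseless = sum(ai * si for ai, si in zip(a, s)) % 2
    e = 1 if random.random() < eta else 0
    return a, noiseless ^ e

secret = [1, 0, 1, 1]
print(lpn_oracle(secret, eta=0.1))
```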

 

Our main result shows a new connection between additive combinatorics and public key encryption: we show that the ``approximate duality conjecture'' from additive combinatorics (Ben-Sasson and R., STOC 2011) improves the running time of the above attack to exp(sqrt(n)), where n is the maximum of the ciphertext size and the public key size. On the positive side, counterexamples to the above conjecture may lead to candidate distributions over the binary field with improved security guarantees.

 

Joint work with Eli Ben-Sasson, Iddo Ben-Tov, Ivan Damgard and Yuval Ishai.

November 19

Eric Allender

Rutgers

Zero Knowledge and Circuit Minimization

 

For roughly four decades, two of the best-studied problems in NP that are not known to be in P or to be NP-complete have been:

* Graph Isomorphism, and

* MCSP (the Minimum Circuit Size Problem).
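For concreteness, the standard formulation of MCSP: given the truth table T of a Boolean function together with a size bound s, decide whether some circuit of size at most s computes the function; formally,

\[
\mathrm{MCSP} = \big\{ (T, s) \;:\; T \in \{0,1\}^{2^k} \text{ is the truth table of a } k\text{-input Boolean function computable by a circuit of size at most } s \big\}.
\]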

Yet there had been no theorem relating the complexity of these two problems to each other.

 

Until now.

 

We give a simple argument -- drawing on the connection between MCSP and time-bounded Kolmogorov complexity -- showing that not only Graph Isomorphism, but every problem in the complexity class SZK (Statistical Zero Knowledge), is BPP-reducible to MCSP.

 

Joint work with Bireswar Das.

November 26

 

NO SEMINAR

(the day before Thanksgiving)

December 3

Noga Alon

IAS/Tel Aviv University

Sign Rank, Spectral Gap and VC dimension

 

The sign rank of an N by N matrix A of signs is the minimum possible rank of a real matrix B in which every entry has the same sign as the corresponding entry of A. The VC-dimension of A is the maximum cardinality of a set I of columns of A such that for every subset J of I there is a row i of A with A_{ij}=+1 for all j in J and A_{ij}=-1 for all j in I-J.

 

I will describe explicit examples of N by N matrices with VC-dimension 2 and sign rank Omega(N^{1/4}). I will also discuss the maximum possible sign rank of an N by N matrix with VC-dimension d. We conjecture that this maximum is N^{1-1/d}, up to logarithmic factors, and can prove that this is the case for d \leq 2.
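As a toy illustration of the column VC-dimension defined above (my sketch, with a hypothetical vc_dim helper; not from the talk), small sign matrices can be checked by brute force:

```python
from itertools import combinations

def shattered(A, I):
    # I is shattered iff the rows of A, restricted to the columns in I,
    # realize all 2^|I| sign patterns.
    patterns = {tuple(row[j] for j in I) for row in A}
    return len(patterns) == 2 ** len(I)

def vc_dim(A):
    m = len(A[0])
    return max(len(I) for r in range(m + 1)
               for I in combinations(range(m), r) if shattered(A, I))

# The 4 by 2 matrix containing all sign patterns shatters both columns.
A = [[1, 1], [1, -1], [-1, 1], [-1, -1]]
print(vc_dim(A))  # -> 2
```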

 

I will also mention briefly the applications of these results to communication complexity and learning.

 

Joint work with Shay Moran and Amir Yehudayoff.

December 10

Michael Kapralov

IBM Watson

Approximating matching size from random streams

 

We present a streaming algorithm that makes one pass over the edges of an unweighted graph presented in {\em random} order, and produces a polylogarithmic approximation to the size of the maximum matching in the graph, while using only polylogarithmic space. Prior to this work the only approximations known were a folklore $\tilde{O}(\sqrt{n})$ approximation with polylogarithmic space in an $n$-vertex graph and a constant approximation with $\Omega(n)$ space. Our work thus gives the first algorithm for which both the space and the approximation factor are smaller than any polynomial in $n$.

 

Our algorithm is obtained by effecting a streaming implementation of a simple ``local'' algorithm that we design for this problem. The local algorithm produces an $O(k\cdot n^{1/k})$ approximation to the size of a maximum matching by exploring the radius-$k$ neighborhoods of vertices, for any parameter $k$. We show, somewhat surprisingly, that our local algorithm can be implemented in the streaming setting even for $k = \Omega(\log n/\log\log n)$. Our analysis exposes some of the problems that arise in such conversions of local algorithms into streaming ones, and gives techniques to overcome such problems.

 

This is joint work with Sanjeev Khanna and Madhu Sudan.