Uniformization of distributional limits of graphs
Benjamini and Schramm (2001) showed that distributional limits of finite planar graphs with uniformly bounded degrees are almost surely recurrent. The major tool in their proof is a lemma which asserts that for a limit…
Accelerating Stochastic Gradient Descent
There is widespread sentiment that it is not possible to effectively utilize fast gradient methods (e.g. Nesterov’s acceleration, conjugate gradient, heavy ball) for the purposes of stochastic optimization due to their instability and error accumulation,…
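To make the "fast gradient methods" concrete, here is a minimal sketch of Nesterov-style momentum applied to a noisy (stochastic) gradient oracle. The objective, step sizes, and noise model are illustrative assumptions, not the paper's construction; the point is only the look-ahead gradient evaluation that distinguishes Nesterov's method from plain momentum.

```python
import random

def noisy_grad(x, target=3.0, noise=0.1, rng=random):
    # Stochastic gradient of f(x) = (x - target)^2 plus zero-mean Gaussian noise.
    # (Toy oracle chosen for illustration; not from the paper.)
    return 2.0 * (x - target) + rng.gauss(0.0, noise)

def nesterov_sgd(x0, lr=0.05, momentum=0.9, steps=500, seed=0):
    rng = random.Random(seed)
    x, v = x0, 0.0
    for _ in range(steps):
        # Key step: evaluate the gradient at the look-ahead point x + momentum * v,
        # rather than at x itself as plain heavy-ball momentum would.
        g = noisy_grad(x + momentum * v, rng=rng)
        v = momentum * v - lr * g
        x += v
    return x
```

With a fixed seed this converges close to the minimizer at 3.0; the residual wobble is the error accumulation from the noise that the sentiment above refers to.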
Homomorphic Encryption for Arithmetic of Approximate Numbers
We suggest a method to construct a homomorphic encryption scheme for approximate arithmetic. It supports approximate addition and multiplication of encrypted messages, together with the rescaling procedure for managing the magnitude of plaintext. This procedure…
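The rescaling procedure can be illustrated with a plaintext-only toy: encode reals as scaled integers, note that multiplication squares the scaling factor, and divide one factor back out so products return to the original precision level. This sketch involves no encryption at all, and the scaling factor below is an assumed small value for readability (the scheme uses much larger ones); it shows only the scale bookkeeping that rescaling manages.

```python
SCALE = 2 ** 10  # assumed toy scaling factor; real deployments use far larger values

def encode(x, scale=SCALE):
    # Represent a real number as a scaled integer.
    return round(x * scale)

def decode(m, scale=SCALE):
    return m / scale

def rescale(m, scale=SCALE):
    # After a multiplication the encoding sits at scale**2;
    # dividing by one factor of the scale restores it to scale.
    return round(m / scale)

a, b = encode(1.5), encode(2.25)
prod = rescale(a * b)  # a * b encodes 1.5 * 2.25 at scale**2; rescale brings it back
assert abs(decode(prod) - 1.5 * 2.25) < 1e-2
```

Addition needs no rescaling, since the scales of the two operands already match; it is multiplication that grows the magnitude, which is exactly what the procedure keeps in check.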
Foundations of Optimization
Optimization methods are the engine of machine learning algorithms. Examples abound, such as training neural networks with stochastic gradient descent, segmenting images with submodular optimization, or efficiently searching a game tree with bandit algorithms. We…
Counterfactual Multi-Agent Policy Gradients
Many real-world problems, such as network packet routing and the coordination of autonomous vehicles, are naturally modelled as cooperative multi-agent systems. In this talk, I overview some of the key challenges in developing reinforcement learning…