Invariance and Stability to Deformations of Deep Convolutional Representations
- Alberto Bietti | Inria
The success of deep convolutional architectures is often attributed in part to their ability to learn multiscale and invariant representations of natural signals. However, a precise study of these properties and of how they affect learning guarantees is still missing. In this talk, we consider deep convolutional representations of signals; we study their invariance to translations and to more general groups of transformations, their stability to the action of diffeomorphisms, and their ability to preserve signal information. This analysis is carried out by introducing a multilayer kernel based on convolutional kernel networks and by studying the geometry induced by the kernel mapping. We then characterize the corresponding reproducing kernel Hilbert space (RKHS), showing that it contains a large class of convolutional neural networks with smooth activation functions. This analysis allows us to separate data representation from learning, and it provides a canonical measure of model complexity, the RKHS norm, which controls both the stability and the generalization of any learned model. The theory also leads to new practical regularization strategies for deep learning that are effective when learning from small datasets or when adversarially robust models are needed.
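As a schematic illustration of the stability property discussed above (the exact constants and layer dependencies are given in the talk and the accompanying paper), the analysis yields deformation bounds of the form

\[
\|\Phi(L_\tau x) - \Phi(x)\| \;\le\; \left( C_1\,\|\nabla\tau\|_\infty + \frac{C_2}{\sigma}\,\|\tau\|_\infty \right) \|x\|,
\]

where \(L_\tau x(u) = x(u - \tau(u))\) is the action of a diffeomorphism \(\tau\), \(\|\nabla\tau\|_\infty\) measures how far \(\tau\) is from a pure translation, and \(\sigma\) is the scale of the final pooling layer: small deformations move the representation only slightly, and the translation term vanishes as the pooling scale grows.

The "practical regularization strategies" mentioned at the end can be illustrated with a minimal sketch, assuming a PyTorch model: the theory bounds the RKHS norm of a network in terms of its layers' spectral norms, so penalizing those norms during training is one way to control stability and generalization. This is an illustrative sketch rather than the speaker's implementation; the helper names and the penalty weight `lam` are hypothetical, and the spectral norm of the flattened kernel matrix is only a common proxy for the true operator norm of a convolution.

```python
import torch
import torch.nn.functional as F

def spectral_norm(weight, n_iter=10):
    # Power iteration estimating the largest singular value of the
    # layer's weight, flattened to a matrix (out_channels x rest).
    W = weight.flatten(1)
    u = torch.randn(W.shape[0], device=W.device)
    v = F.normalize(W.t() @ u, dim=0)
    for _ in range(n_iter):
        u = F.normalize(W @ v, dim=0)
        v = F.normalize(W.t() @ u, dim=0)
    return torch.dot(u, W @ v)

def rkhs_norm_penalty(model):
    # Sum of squared spectral norms over conv/linear layers; products of
    # these norms upper-bound the RKHS norm of the learned function, so
    # keeping them small constrains model complexity.
    return sum(spectral_norm(m.weight) ** 2
               for m in model.modules()
               if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear)))

# Hypothetical use in a training step:
#   loss = criterion(model(x), y) + lam * rkhs_norm_penalty(model)
```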
Speaker Details
Alberto Bietti is a PhD student at Inria, working under the supervision of Julien Mairal in the Thoth team and the MSR-Inria joint center. Before his PhD, he obtained a Master’s degree in applied mathematics from Ecole Normale Supérieure de Cachan and Mines ParisTech, and then spent two years at Quora as a software engineer working on machine learning and systems.