Symmetry-Based Learning
Learning representations is arguably the central problem in machine learning, and symmetry group theory is a natural foundation for it. A symmetry of a classifier is a change of representation that leaves the examples' classes unchanged. The goal of representation learning is to remove unimportant variations while making important ones easy to detect, and the unimportant variations are precisely the symmetries of the target function. Exploiting symmetries reduces sample complexity, leads to new generalizations of classic learning algorithms, provides a new approach to deep learning, and is applicable to all types of machine learning. In this talk I will present three approaches to symmetry-based learning:

1. Exchangeable variable models are distributions that are invariant under permutations of subsets of their variables. They subsume existing tractable independence-based models, handle difficult cases such as parity functions, and outperform SVMs and state-of-the-art probabilistic classifiers.
2. Deep symmetry networks generalize convolutional neural networks by tying parameters and pooling over an arbitrary symmetry group, not just the translation group. In preliminary experiments, they outperformed convnets on a digit recognition task.
3. Symmetry-based semantic parsing defines a symmetry of a sentence as a syntactic transformation that preserves its meaning; the meaning of a sentence is thus its orbit under the semantic symmetry group of the language. This lets us map sentences to their meanings without predefining a formal meaning representation or requiring labeled data in the form of sentence–formal-meaning pairs, and it achieved promising results on a paraphrase detection task.

(Joint work with Rob Gens, Chloe Kiddon, and Mathias Niepert.)
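To make the core idea concrete, here is a minimal toy sketch (not code from the talk) of building a feature that is invariant under a symmetry group by pooling a filter response over the group's orbit of the input. The group here is the four 90-degree rotations of a square patch; convnets hard-wire invariance only to translations, whereas deep symmetry networks pool over richer groups like this one. All names (`rotations`, `invariant_feature`) are illustrative, not from the talk.

```python
import numpy as np

def rotations(patch):
    """Orbit of a square patch under the 4-element rotation group C4."""
    return [np.rot90(patch, k) for k in range(4)]

def invariant_feature(patch, weights):
    """Max-pool a linear filter response over the group orbit.

    Because the orbit of a rotated patch is the same set as the orbit
    of the original patch, the pooled value is a rotation invariant.
    """
    return max(float(np.sum(p * weights)) for p in rotations(patch))

rng = np.random.default_rng(0)
patch = rng.standard_normal((5, 5))
weights = rng.standard_normal((5, 5))

f = invariant_feature(patch, weights)
f_rotated = invariant_feature(np.rot90(patch), weights)
assert np.isclose(f, f_rotated)  # feature unchanged under the symmetry
```

A classifier built on such features cannot distinguish inputs in the same orbit, which is exactly the "get rid of unimportant variations" goal described above.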
Speaker Details
Pedro Domingos is Professor of Computer Science and Engineering at the University of Washington. His research interests are in artificial intelligence, machine learning and data mining. He received a PhD in Information and Computer Science from the University of California at Irvine, and is the author or co-author of over 200 technical publications.
He is a member of the editorial board of the Machine Learning journal, co-founder of the International Machine Learning Society, and past associate editor of JAIR. He was program co-chair of KDD-2003 and SRL-2009, and has served on numerous program committees. He is an AAAI Fellow, and has received several awards, including a Sloan Fellowship, an NSF CAREER Award, a Fulbright Scholarship, an IBM Faculty Award, and best paper awards at several leading conferences.
- Series: Microsoft Research Talks
- Date:
- Speakers: Pedro Domingos
- Affiliation: University of Washington