Variational Continual Learning
- Richard E. Turner | University of Cambridge
This talk introduces variational continual learning, a simple but general framework for continual learning that fuses online variational inference (VI) and recent advances in Monte Carlo VI for neural networks. The framework can successfully train both deep discriminative models and deep generative models in complex continual learning settings where existing tasks evolve over time and entirely new tasks emerge. Experimental results show that variational continual learning outperforms state-of-the-art continual learning methods on a variety of tasks, avoiding catastrophic forgetting in a fully automatic way.
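To make the recursion in the abstract concrete: online VI turns the posterior learned on one task into the prior for the next, while Monte Carlo VI (e.g. stochastic gradients through a reparameterised mean-field Gaussian) trains the Bayesian neural network on each task. The PyTorch fragment below is a rough sketch of that idea under my own assumptions, not code from the talk; the names `VCLLinear` and `train_task` are hypothetical.

```python
# A minimal sketch of the variational continual learning recursion for a
# mean-field Gaussian posterior over the weights of a single linear classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VCLLinear(nn.Module):
    """Linear layer with factorised Gaussian posterior q(W) = N(mu, sigma^2)."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.w_mu = nn.Parameter(torch.zeros(n_out, n_in))
        self.w_rho = nn.Parameter(torch.full((n_out, n_in), -3.0))  # sigma = softplus(rho)
        self.b_mu = nn.Parameter(torch.zeros(n_out))
        self.b_rho = nn.Parameter(torch.full((n_out,), -3.0))
        # Prior = posterior from the previous task; initialised to N(0, 1).
        self.register_buffer("prior_w_mu", torch.zeros(n_out, n_in))
        self.register_buffer("prior_w_sigma", torch.ones(n_out, n_in))
        self.register_buffer("prior_b_mu", torch.zeros(n_out))
        self.register_buffer("prior_b_sigma", torch.ones(n_out))

    def forward(self, x):
        # Local reparameterisation: sample pre-activations rather than weights.
        w_sigma = F.softplus(self.w_rho)
        b_sigma = F.softplus(self.b_rho)
        act_mu = F.linear(x, self.w_mu, self.b_mu)
        act_var = F.linear(x ** 2, w_sigma ** 2, b_sigma ** 2)
        return act_mu + act_var.sqrt() * torch.randn_like(act_mu)

    def kl(self):
        # KL(q || prior) between factorised Gaussians, summed over parameters.
        def kl_term(mu, sigma, pmu, psigma):
            return (torch.log(psigma / sigma)
                    + (sigma ** 2 + (mu - pmu) ** 2) / (2 * psigma ** 2) - 0.5).sum()
        return (kl_term(self.w_mu, F.softplus(self.w_rho),
                        self.prior_w_mu, self.prior_w_sigma)
                + kl_term(self.b_mu, F.softplus(self.b_rho),
                          self.prior_b_mu, self.prior_b_sigma))

    def update_prior(self):
        # The core VCL step: today's posterior becomes tomorrow's prior.
        self.prior_w_mu.copy_(self.w_mu.detach())
        self.prior_w_sigma.copy_(F.softplus(self.w_rho).detach())
        self.prior_b_mu.copy_(self.b_mu.detach())
        self.prior_b_sigma.copy_(F.softplus(self.b_rho).detach())

def train_task(layer, loader, n_data, epochs=5):
    """Maximise the per-task ELBO, then roll the posterior into the prior."""
    opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, y in loader:
            loss = F.cross_entropy(layer(x), y) + layer.kl() / n_data
            opt.zero_grad()
            loss.backward()
            opt.step()
    layer.update_prior()  # posterior -> prior before the next task arrives
```

This fragment only shows the posterior-to-prior recursion for a discriminative model; the talk also covers deep generative models and further refinements of the framework.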
Speaker Details
Richard Turner holds a Lectureship (equivalent to US Assistant Professor) in Computer Vision and Machine Learning in the Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, UK. He is a Fellow of Christ's College, Cambridge. Previously, he held an EPSRC Postdoctoral Research Fellowship, which he spent at both the University of Cambridge and the Laboratory for Computational Vision, NYU, USA. He has a PhD in Computational Neuroscience and Machine Learning from the Gatsby Computational Neuroscience Unit, UCL, UK, and an M.Sci. in Natural Sciences (specialism Physics) from the University of Cambridge, UK.
Watch Next
Magma: A foundation model for multimodal AI Agents
- Jianwei Yang

AI for Precision Health: Learning the language of nature and patients
- Hoifung Poon, Ava Amini, Lili Qiu

What's new in AutoGen?
- Chi Wang

Strategic Subset Selection in Satellite Imagery: Machine Vision Insights
- Akram Zaytar, Simone Nsutezo Fobi

MEGA: Multi-lingual Evaluation of Generative AI
- Kabir Ahuja, Millicent Ochieng

MARI Grand Seminar - Large Language Models and Low Resource Languages
- Monojit Choudhury, Edward Ombui, Sunayana Sitaram