Microsoft Research Summit 2021

Research talk: Transformer efficiency: From model compression to training acceleration

At Microsoft Research, we are approaching large-scale AI from many different perspectives, not only creating new, bigger models but also developing unique ways of optimizing AI models from training to deployment. One of the main challenges posed by larger AI models is that they are difficult to deploy affordably and sustainably, and they still struggle to learn new concepts and tasks effectively. Join Microsoft researcher Yu Cheng for the first of three lightning talks in this series on efficient and adaptable large-scale AI. See the talks from Microsoft researchers Subho Mukherjee and Guoqing Zheng to learn more about the work Microsoft is doing to improve the efficiency of computation and data in large-scale AI models.

Learn more about the 2021 Microsoft Research Summit:

Deep Learning & Large-Scale AI
Yu Cheng
Microsoft Research Redmond