AI at Scale
At Microsoft, we are pioneering a new approach to AI that is fuelling the next generation of AI innovation at scale.
Accelerating innovation with AI at Scale
On the TWIML podcast, host Sam Charrington and David Carmona discuss the future of AI at Scale and the impact of AI-powered supercomputing on innovation.
Accelerating AI innovation
Today, AI is bound by limitations of infrastructure, effectiveness of machine learning models and ease of development. AI at Scale expands beyond these limitations to allow rapid acceleration of AI innovation.
Realising next-generation AI
Discover how we are helping computers more fully perceive the nuances of our world.
Unlocking new opportunities
AvePoint uses AI at Scale and customised Turing models to keep up with the vast, dynamic knowledge that powers their business.
Taking on big challenges
AI at Scale works at unprecedented levels of complexity to solve some of today’s biggest challenges.
Advancements in AI are changing how models are developed: large-scale, centralised models can now be scaled and specialised across product domains. Supercomputing is crucial to training models with billions of parameters, and AI at that scale enables us to solve complex challenges like natural language processing.
Natural language processing
AI at Scale is unlocking breakthroughs in natural language processing (NLP) across text, images and video, allowing humans to interact with data more naturally than ever before. NLP powers virtual assistants, analysis of research or records and more. Beyond interpretation, NLP can produce content – generating tests in education, or imagining new ideas for films, books and other media.
Microsoft Turing Academic Programme (MS-TAP)
MS-TAP research advances principles of learning and reasoning, exploring novel applications and understanding the responsible use of large-scale neural language models.
OpenAI’s groundbreaking GPT-3 language model
GPT-3 is the largest and most advanced language model in the world, with 175 billion parameters enabling it to generate remarkably human-like text. Learn about the Microsoft exclusive licence.
DeepSpeed learning optimisation library
DeepSpeed can train deep learning models with over a hundred billion parameters on the current generation of GPU clusters, while achieving over 10× the system performance of the state of the art.
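To make this concrete, here is a minimal sketch of a DeepSpeed training configuration that turns on ZeRO memory optimisation and mixed precision. The specific values are illustrative placeholders, not tuning recommendations, and the surrounding training script is assumed rather than shown.

```python
# Hypothetical DeepSpeed configuration (illustrative values only).
# The keys follow DeepSpeed's JSON configuration schema.
ds_config = {
    "train_batch_size": 256,             # global batch size across all GPUs
    "gradient_accumulation_steps": 8,    # micro-batching to fit large models
    "fp16": {"enabled": True},           # mixed-precision training
    "zero_optimization": {"stage": 2},   # partition optimiser state and gradients
}

# In a real training script this dict would be passed to
# deepspeed.initialize(model=model, config=ds_config), which wraps
# the model in an engine that handles the distributed details.
```

ZeRO stage 2 partitions the optimiser state and gradients across data-parallel workers, which is one of the techniques that lets DeepSpeed fit models far beyond what a single GPU's memory would allow.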
Microsoft DeBERTa tops SuperGLUE test
SuperGLUE is a benchmark for evaluating NLU models, including question answering, inference, co-reference resolution, word disambiguation and other tasks. Learn how DeBERTa performs.
Turing-NLG: 17-billion-parameter language model
Turing Natural Language Generation (T-NLG) is a 17-billion-parameter language model by Microsoft that outperforms the state of the art on many downstream NLP tasks.
T-ULRv2 leads XTREME leaderboard
The multilingual Turing Universal Language Representation (T-ULRv2) model is leading the Google XTREME public leaderboard. Learn about the Turing model benchmarks and comparisons.
AI from Bing powers Azure Cognitive Search
Azure Cognitive Search gives developers tools to build rich experiences in any application. Now search can go beyond keywords to the semantic meaning behind words and content.
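To illustrate the idea of ranking by semantic meaning rather than shared keywords, here is a toy sketch using cosine similarity over embedding vectors. The documents and vector values are invented for illustration; this is a conceptual sketch, not the Azure Cognitive Search API.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" (hypothetical values for illustration).
docs = {
    "How to reset your password": [0.9, 0.1, 0.0],
    "Quarterly sales report":     [0.0, 0.2, 0.9],
}

# A query such as "I forgot my login credentials" shares no keywords with
# either title, but its embedding sits close to the password-reset doc.
query = [0.8, 0.2, 0.1]

# Rank documents by semantic similarity instead of keyword overlap.
best = max(docs, key=lambda title: cosine(query, docs[title]))
print(best)  # → How to reset your password
```

In a real system the embeddings would come from a large language model rather than being hand-written, but the ranking step works the same way: nearest vectors, not matching strings.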
VinVL: Advancing vision-language models
Vision-language (VL) systems enable computers to effectively learn from visual information to make sense of the world around us. VinVL is advancing state-of-the-art performance on VL tasks.