In the news | MSPoweruser

Meet Microsoft DeepSpeed, a new deep learning library that can train massive 100-billion-parameter models 

February 10, 2020

Microsoft Research today announced DeepSpeed, a new deep learning optimization library that can train massive 100-billion-parameter models. In AI, larger natural language models are needed for better accuracy, but training them is time consuming and…

In the news | VentureBeat

Microsoft trains world’s largest Transformer language model 

February 10, 2020

Microsoft AI & Research today shared what it calls the largest Transformer-based language generation model ever and open-sourced a deep learning library named DeepSpeed to make distributed training of large models easier.

In the news | InfoWorld

Microsoft speeds up PyTorch with DeepSpeed 

February 10, 2020

Microsoft has released DeepSpeed, a new deep learning optimization library for PyTorch that is designed to reduce memory use and train models with better parallelism on existing hardware.

In the news | Forbes

Microsoft Brings Enhanced NLP Capabilities To ONNX Runtime 

January 23, 2020

Microsoft has announced that it has integrated an optimized implementation of BERT (Bidirectional Encoder Representations from Transformers) with the open source ONNX Runtime. Developers can take advantage of this implementation for scalable inferencing of BERT at an affordable cost.

In the news | WinBuzzer

Microsoft Open Sources BERT for ONNX Runtime 

January 22, 2020

In December, Microsoft open sourced its ONNX Runtime inference engine. Now, the company says it also open-sourced an optimized version of BERT, a natural language model from Google, for ONNX.

In the news | VentureBeat

Microsoft open-sources ONNX Runtime model to speed up Google’s BERT 

January 21, 2020

Microsoft Research AI today said it plans to open-source an optimized version of Google’s popular BERT natural language model designed to work with the ONNX Runtime inference engine. Microsoft uses the same model to lower latency for BERT when…

In the news | ZDNet

Microsoft makes performance, speed optimizations to ONNX machine-learning runtime available to developers 

January 21, 2020

Microsoft is open sourcing and integrating some updates it has made to deep-learning models used for natural-language processing. On January 21, the company announced it is making these optimizations available to developers by integrating them into the ONNX Runtime.

In the news | Microsoft Open Source Blog

Microsoft open sources breakthrough optimizations for transformer inference on GPU and CPU 

January 21, 2020

One of the most popular deep learning models used for natural language processing is BERT (Bidirectional Encoder Representations from Transformers). Due to the significant computation required, inferencing BERT at high scale can be extremely costly and may not even be possible…

In the news | Search Engine Journal

Bing is Now Utilizing BERT at a Larger Scale Than Google 

November 19, 2019

Bing revealed today that it has been using BERT in search results before Google, and it’s also being used at a larger scale. Google’s use of BERT in search results is currently affecting 10% of search results in the US,…
