New Open Source Interface Simplifies Deep Learning Development
At Microsoft we have a vision and passion to amplify human ingenuity with the transformative power of AI. As a society we face enormous challenges that AI has the potential to help solve. However, developing with AI, especially deep learning models, isn't easy; it can be a daunting and specialized practice for most data professionals. We believe that bringing AI advances to all developers, on any platform, using any language, with an open AI ecosystem, will help ensure AI is more accessible and valuable to all.
As the broader industry continues to rally around a more open AI ecosystem, as demonstrated by the growing support behind this week's ONNX format announcement, today we are excited to announce another major AI partnership, this one with Amazon, to jointly introduce a new deep learning library, or interface, called Gluon. Together we will collaborate on Gluon, a symbolic and imperative programming API that greatly simplifies the process of creating deep learning models without compromising training speed. This includes adding Gluon support to the Microsoft Cognitive Toolkit (CNTK) deep learning libraries. Gluon will provide a high-level API that gives developers the choice of running multiple deep learning libraries interchangeably.
What is Gluon?
Gluon is a concise, dynamic, high-level deep learning library, or interface, for building neural networks. It can be used with either Apache MXNet or Microsoft Cognitive Toolkit, and will be supported in all Azure services, tools, and infrastructure. Gluon offers an easy-to-use interface for developers, highly scalable training, and efficient model evaluation, all without sacrificing flexibility for more experienced researchers. For companies, data scientists, and developers, Gluon offers simplicity without compromise: high-level APIs, pre-built modular building blocks, and more accessible deep learning.
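To make the building-block idea concrete, here is a toy sketch in plain Python of the composable, layered style that Gluon provides. This is an illustration only: the `Dense` and `Sequential` classes below are simplified stand-ins for the pattern, not the actual Gluon API, and no MXNet or Cognitive Toolkit installation is assumed.

```python
import numpy as np

class Dense:
    """A toy fully connected layer, standing in for the kind of
    pre-built building block a high-level interface provides."""
    def __init__(self, in_units, units):
        # Small random weights; real frameworks defer this via shape inference.
        self.w = np.random.randn(in_units, units) * 0.01
        self.b = np.zeros(units)

    def __call__(self, x):
        return x @ self.w + self.b

class Sequential:
    """Chains layers together, like a high-level container block."""
    def __init__(self, *layers):
        self.layers = layers

    def __call__(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

def relu(x):
    return np.maximum(x, 0)

# Compose a small network from reusable blocks.
net = Sequential(Dense(4, 8), relu, Dense(8, 2))
out = net(np.random.randn(3, 4))   # batch of 3 examples, 4 features each
print(out.shape)                   # (3, 2)
```

The point of the pattern is that a network is assembled declaratively from small, reusable pieces while still executing as ordinary imperative code.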
What are Gluon’s key features and benefits?
Gluon makes it easy for developers to learn, define, debug and then iterate or maintain deep neural networks, allowing developers to build and train their networks quickly. Key highlights include:
- Symbolic and imperative programming. For advanced users, Gluon supports sophisticated techniques like dynamic graphs and flexible structures. Support for both symbolic and imperative programming is not found in other toolkits.
- Hybridization. Gluon includes fully symbolic automatic differentiation of code that has been procedurally executed, including control flow. Gluon achieves this through hybridization: a static compute graph is constructed the first time the code runs, then cached and reused for subsequent iterations. The compute graph can also be exported, e.g., for serving on mobile devices.
- Define complex models. Gluon comes with a rich, built-in layers library that significantly simplifies the task of defining complex model architectures through reuse of the pre-built building blocks from the library.
- Execution efficiency. Gluon has native support for loops and ragged tensors (batching variable length sequences) which translates into unparalleled execution efficiency for RNN and LSTM models.
- Sparse data support. Gluon has comprehensive support for sparse and quantized data and operations, both for computation and communication. Sparsity is common in DNNs for the NLP domain, and quantization is crucial for runtime evaluation performance.
- Advanced scheduling. While scheduling on a single GPU is easy, doing so on multiple GPUs is far more complex. Through its MXNet or Cognitive Toolkit backends, Gluon offers automatic distribution for both symbolic and imperative mode.
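The hybridization idea from the list above can be illustrated with a small sketch in plain Python. This assumes nothing about MXNet's actual implementation; it only shows the trace-once-then-replay pattern: run the imperative code a first time, record the operations it performs, and replay that cached "graph" on later calls. As with real hybridization, whichever control-flow branch executes during the trace is what gets baked into the cached graph.

```python
def double(x):
    return 2 * x

def add_one(x):
    return x + 1

class Hybridized:
    """Toy hybridization wrapper: imperative on the first call,
    cached static 'graph' on every call after that."""
    def __init__(self, build):
        self.build = build      # imperative function that applies ops
        self.graph = None       # cached sequence of operations

    def __call__(self, x):
        if self.graph is None:
            # First call: execute imperatively, recording each op applied.
            ops = []
            def apply(op, value):
                ops.append(op)
                return op(value)
            result = self.build(x, apply)
            self.graph = ops
            return result
        # Subsequent calls: replay the cached graph, skipping the
        # per-step Python interpretation overhead.
        for op in self.graph:
            x = op(x)
        return x

def model(x, apply):
    # Imperative code with control flow; the branch taken on the
    # first call is what ends up in the cached graph.
    x = apply(double, x)
    if x > 10:
        x = apply(add_one, x)
    return x

net = Hybridized(model)
print(net(6))   # first call, traced imperatively -> 13
print(net(7))   # replayed from the cached graph -> 15
```

In a real framework the cached graph is also what gets optimized, differentiated symbolically, and exported for deployment.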
We expect the unique hybrid symbolic and imperative programming functionality offered by Gluon to become prevalent as users realize its usability benefits without sacrificing performance. Gluon builds on the powerful training and inference engines in MXNet and Cognitive Toolkit, which means that neural networks built in Gluon can take advantage of MXNet's or Cognitive Toolkit's distributed training. Thus, a single Gluon training job can be scaled linearly to 500 GPUs or more, dramatically reducing training time. Inference is also highly optimized, allowing models to run on lower-performance, lower-cost, and more power-efficient hardware.
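As a rough sketch of why data-parallel training scales in this way, the toy example below (a hypothetical `data_parallel_step`, not Gluon code) splits a batch across simulated workers, computes per-worker gradients, and averages them before a synchronized update. This averaging step is the pattern that a multi-GPU backend automates and optimizes.

```python
def grad_mse(w, xs, ys):
    """Gradient of mean squared error for a 1-D linear model y = w * x."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

def data_parallel_step(w, xs, ys, n_workers, lr=0.01):
    """One synchronized SGD step with the batch sharded across workers."""
    shard = len(xs) // n_workers
    grads = []
    for i in range(n_workers):
        sl = slice(i * shard, (i + 1) * shard)
        grads.append(grad_mse(w, xs[sl], ys[sl]))   # each worker: its shard
    avg = sum(grads) / n_workers                    # all-reduce by averaging
    return w - lr * avg                             # synchronized update

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # data generated by the true weight w = 2
w = 0.0
for _ in range(200):
    w = data_parallel_step(w, xs, ys, n_workers=2, lr=0.05)
print(round(w, 3))   # converges to 2.0
```

Because each worker only touches its own shard, adding workers shrinks the per-worker compute roughly linearly; the communication cost of the averaging step is what real multi-GPU schedulers work hard to hide.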
The Gluon interface is available now on GitHub for use with MXNet (https://github.com/gluon-api/gluon-api/), and we are actively working to support it in an upcoming release of Cognitive Toolkit.
You can get started with these deep learning frameworks very easily by using the services, tools, and infrastructure provided by the Azure platform. The Azure Data Science Virtual Machine provides a ready-to-use environment for AI development and includes Cognitive Toolkit and MXNet support. Azure Machine Learning provides high-level services to accelerate your model development and deployment, and tools like Azure Machine Learning Workbench or Visual Studio Code Tools for AI will help you stay productive throughout the model development process, whichever framework you choose.
When you think about the complexities of today's machine learning or deep learning stack, from the APIs to deep learning framework front ends and runtimes, it requires significant education, resources, and ultimately patience to get all the way from prototype to production. We believe today's announcement of a powerful new open-source interface spanning multiple frameworks, combined with a common intermediate representation that increases interoperability across them, is further validation of the growing momentum in the industry. This is another step in fostering an open AI ecosystem to accelerate innovation and the democratization of AI, making it more accessible and valuable to all.
With Gluon, developers will be able to deliver new and exciting AI innovations faster by using a higher-level programming model and the tools and platforms they are most comfortable with. This, combined with the Open Neural Network Exchange (ONNX) announcement, which enables you to create and save AI models in a standard open format, is another part of creating an open AI ecosystem. We look forward to the amazing AI experiences you will create.