ONNX and ONNX Runtime
- Pranav Sharma | Microsoft
What is the universal inference engine for neural networks?
TensorFlow? PyTorch? Keras? There are many popular frameworks out there for working with deep learning and ML models, each with its own pros and cons for practical usability in product development and/or research. Once you have decided what to use and trained a model, you need to figure out how to deploy it on your platform and architecture of choice. Cloud? Windows? Linux? IoT? Performance sensitive? How about GPU acceleration? With a landscape of 1,000,001 different combinations for deploying a trained model from some chosen framework into a performant production environment for prediction, we can benefit from some standardization.
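As a rough illustration of what that standardization looks like in practice, the minimal sketch below exports a toy PyTorch model to the ONNX format and runs it with ONNX Runtime. The model, file name, tensor shapes, and provider choice are illustrative assumptions, not details from the talk.

```python
# Minimal sketch: export a framework-trained model to ONNX, then run it
# with ONNX Runtime. The toy model and shapes here are placeholders.
import numpy as np
import torch
import onnxruntime as ort

# A stand-in for whatever model you trained in your framework of choice.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 2),
)
model.eval()

# Export to the ONNX interchange format.
dummy_input = torch.randn(1, 4)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["output"])

# Load and run with ONNX Runtime; swap in a GPU execution provider
# (e.g. CUDAExecutionProvider) when GPU acceleration is available.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
outputs = session.run(None, {"input": np.random.randn(1, 4).astype(np.float32)})
print(outputs[0].shape)  # (1, 2)
```

The same exported .onnx file can then be served on cloud, Windows, Linux, or IoT targets without retraining or rewriting the model for each deployment environment.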
[Slides]