Unlock deeper learning with the new Microsoft Cognitive Toolkit
The Microsoft Cognitive Toolkit, previously known as CNTK, empowers you to harness the intelligence within massive datasets through deep learning. It provides uncompromised scaling, speed, and accuracy, with commercial-grade quality and compatibility with the programming languages and algorithms you already use. Hear about the team that developed the Cognitive Toolkit, or read more below.
Speed & Scalability
The Microsoft Cognitive Toolkit trains and evaluates deep learning algorithms faster than other available toolkits, scaling efficiently in a range of environments—from a CPU, to GPUs, to multiple machines—while maintaining accuracy.
Commercial-grade quality
The Microsoft Cognitive Toolkit is built with sophisticated algorithms and production readers to work reliably with massive datasets. Skype, Cortana, Bing, Xbox, and industry-leading data scientists already use the Microsoft Cognitive Toolkit to develop commercial-grade AI.
Compatibility
The Microsoft Cognitive Toolkit offers a highly expressive, easy-to-use architecture. Working with the languages and networks you already know, such as C++ and Python, it empowers you to customize any of the built-in training algorithms or use your own.
- Components can handle multi-dimensional dense or sparse data from Python, C++ or BrainScript
- Feed-forward (FFN), convolutional (CNN), and recurrent (RNN/LSTM) networks, batch normalization, sequence-to-sequence with attention, and more
- Reinforcement learning, generative adversarial networks, supervised and unsupervised learning
- Ability to add new user-defined core-components on the GPU from Python
- Automatic hyperparameter tuning
- Built-in readers optimized for massive datasets
- Parallel training across multiple GPUs and machines with little loss of accuracy, via 1-bit SGD and Block Momentum
- Memory sharing and other built-in methods to fit even the largest models in GPU memory
- Full APIs for defining networks, learners, readers, training and evaluation from Python, C++ and BrainScript
- Evaluate models with Python, C++, C# and BrainScript
- Interoperation with NumPy
- Both high-level and low-level APIs available for ease of use and flexibility
- Automatic shape inference based on your data
- Fully optimized symbolic RNN loops (no unrolling needed)
- Takes advantage of high-speed resources when used with Azure GPUs and Azure networking
- Host trained models easily on Azure and add real-time training if desired
Visit our GitHub site to download the toolkit, review sample code and tutorials, and see the latest release notes.