LITMUS Predictor
LITMUS Predictor provides support for simulating performance in ~100 languages given training observations of the desired task and model. Each training observation specifies the fine-tuning data size and the resulting test performance in different languages. Further, the tool provides support for constructing…
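The kind of simulation described above can be illustrated with a small sketch: given (data size, performance) observations for one language, fit a curve and interpolate performance at unseen data sizes. The function names and the log-linear functional form below are illustrative assumptions for the sketch, not LITMUS's actual API or model.

```python
import math

def fit_log_curve(observations):
    """Least-squares fit of performance = a + b * log(datasize).

    `observations` is a list of (finetuning_datasize, test_performance)
    pairs for one language; names are hypothetical, not LITMUS's API.
    """
    xs = [math.log(n) for n, _ in observations]
    ys = [p for _, p in observations]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def predict(a, b, datasize):
    # Interpolate expected test performance at an unseen data size.
    return a + b * math.log(datasize)

# Toy observations: accuracy improves roughly log-linearly with data size.
obs = [(1_000, 0.55), (10_000, 0.68), (100_000, 0.80)]
a, b = fit_log_curve(obs)
print(round(predict(a, b, 50_000), 3))  # ≈ 0.764
```

A real predictor would fit such curves jointly across languages so that observations in high-resource languages inform predictions for low-resource ones.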
NICE: Neural Image Commenting with Empathy
Invariant Language Modeling
Stochastic Mixture-of-Experts
This PyTorch package implements Taming Sparsely Activated Transformer with Stochastic Experts.
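As a rough illustration of the stochastic-experts idea, the sketch below replaces a learned gating network with uniform random expert selection at training time and averages the experts at inference. The class and method names are hypothetical, the "experts" are plain functions rather than feed-forward sub-networks, and the paper's consistency regularizer between sampled experts is omitted.

```python
import random

class StochasticMoELayer:
    """Minimal sketch of stochastic expert routing (not the package's API)."""

    def __init__(self, experts, seed=None):
        self.experts = experts
        self.rng = random.Random(seed)

    def forward(self, x, training=True):
        if training:
            # Training: route the input through one uniformly sampled
            # expert instead of using a learned gating network.
            expert = self.rng.choice(self.experts)
            return expert(x)
        # Inference: average the outputs of all experts.
        outs = [e(x) for e in self.experts]
        return sum(outs) / len(outs)

layer = StochasticMoELayer([lambda x: x + 1, lambda x: x * 2], seed=0)
print(layer.forward(3.0, training=False))  # (4.0 + 6.0) / 2 = 5.0
```

Random routing removes the load-balancing and gating-collapse issues of learned routers, which is the motivation the paper's title alludes to with "taming".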
Deep Neural Machine Translation
This PyTorch package implements Very Deep Transformers for Neural Machine Translation, which stabilizes the training of large-scale language models and neural machine translation systems, as described in the paper of the same name.
TaxiXNLI
This repository contains the data associated with Analyzing the Effects of Reasoning Types on Cross-Lingual Transfer Performance, published at the EMNLP 2021 Multilingual Representation Learning Workshop. This data is…
Meta Self-training for Few-shot Neural Sequence Labeling [Code]
This is the implementation of the paper Meta Self-training for Few-shot Neural Sequence Labeling. MetaST is short for meta-learning for self-training.
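The self-training loop that MetaST builds on can be sketched generically: train on labeled data, pseudo-label confident unlabeled examples, and retrain on the union. Everything below is a hedged sketch, not MetaST's code; the `fit`/`predict` callables and the confidence threshold are assumptions, and MetaST's meta-learned re-weighting of noisy pseudo-labels is omitted.

```python
def self_train(labeled, unlabeled, fit, predict, rounds=3, threshold=0.9):
    """Generic self-training loop (illustrative, not MetaST's API).

    `fit` trains a model on (x, y) pairs; `predict(model, x)` returns a
    (label, confidence) pair. Each round, confidently pseudo-labeled
    unlabeled examples are added to the training set.
    """
    data = list(labeled)
    model = fit(data)
    for _ in range(rounds):
        pseudo = []
        for x in unlabeled:
            y, conf = predict(model, x)
            if conf >= threshold:
                pseudo.append((x, y))
        model = fit(data + pseudo)
    return model

# Toy usage with a trivial "model": the majority label.
from collections import Counter

def fit(pairs):
    return Counter(y for _, y in pairs).most_common(1)[0][0]

def predict(model, x):
    return model, 1.0  # always predicts the majority label, fully confident

print(self_train([(1, "a"), (2, "a"), (3, "b")], [4, 5], fit, predict))  # a
```

In the few-shot sequence-labeling setting, the confidence thresholding is the weak point this loop glosses over, which is why the paper learns to weight pseudo-labeled tokens instead of filtering them with a fixed cutoff.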
Speech Research Team
The Speech Research Team is part of the Azure Cognitive Services Research (CSR) group and is responsible for fundamental advances in audio, speech, and spoken language processing technologies.
Microsoft Translator: Now translating 100 languages and counting!
Today, we’re excited to announce that Microsoft Translator has added 12 new languages and dialects to the growing repertoire of Microsoft Azure Cognitive Services Translator, bringing us to a total of 103 languages! The new…