Conservative Uncertainty Estimation By Fitting Prior Networks
Code accompanying “Conservative Uncertainty Estimation By Fitting Prior Networks” – ICLR 2020
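As a rough illustration of the idea in the paper (not the authors' implementation; all names and architectural choices below are assumptions for the sketch), uncertainty is estimated by training predictor networks to fit fixed, randomly initialised prior networks on the training inputs; the squared residual between a predictor and its prior, averaged over an ensemble, then serves as the uncertainty estimate. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_prior(n_features=50):
    # A fixed, randomly initialised "prior" network: x -> tanh(x W + b) @ v.
    # Its weights are never trained.
    W = rng.normal(size=(1, n_features))
    b = rng.normal(size=n_features)
    v = rng.normal(size=n_features)
    return lambda x: np.tanh(x @ W + b) @ v

def fit_predictor(prior, x_train, n_features=50):
    # Predictor: a linear head on random tanh features, fit by least squares
    # to the prior's outputs at the training inputs.
    W = rng.normal(size=(1, n_features))
    b = rng.normal(size=n_features)
    phi = lambda x: np.tanh(x @ W + b)
    w, *_ = np.linalg.lstsq(phi(x_train), prior(x_train), rcond=None)
    return lambda x: phi(x) @ w

def uncertainty(x, pairs):
    # Average squared residual between each predictor and its prior.
    return np.mean([(pred(x) - prior(x)) ** 2 for prior, pred in pairs], axis=0)

# Build a small ensemble of (prior, predictor) pairs on 1-D training data.
x_train = np.linspace(-1.0, 1.0, 20).reshape(-1, 1)
pairs = []
for _ in range(5):
    prior = make_prior()
    pairs.append((prior, fit_predictor(prior, x_train)))

# Residuals are near zero on the training inputs and grow away from the data.
x_far = np.array([[5.0], [-5.0]])
print(uncertainty(x_train, pairs).mean(), uncertainty(x_far, pairs).mean())
```

Near the training data the predictors fit their priors almost exactly, so the estimated uncertainty is close to zero there and grows for inputs far from the data, which is the qualitative behaviour the method targets.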