FStar Data Set v2
This dataset is Version 2.0 of the FStar Data Set. Its primary objective is to train and evaluate Proof-oriented Programming with AI (PoPAI for short). Given a specification of a program and proof…
This dataset contains programs and proofs in the F* proof-oriented programming language. The data, proposed in Towards Neural Synthesis for SMT-Assisted Proof-Oriented Programming, is an archive of source code, build artifacts, and metadata assembled from eight…
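As a rough illustration of how such an archive might be consumed, here is a minimal Python sketch assuming the data is published on the Hugging Face Hub. The dataset ID and field names are assumptions for illustration, not the confirmed schema.

```python
# A minimal sketch of loading the dataset, assuming a Hugging Face Hub release.
from datasets import load_dataset

# Hypothetical dataset ID; substitute the actual one from the release page.
ds = load_dataset("microsoft/FStarDataSet-V2", split="train")

for example in ds.select(range(3)):
    # Hypothetical fields: a type (the specification) and the F* definition
    # (the program/proof) that inhabits it.
    print(example.get("source_type"))       # e.g. a val declaration
    print(example.get("source_definition")) # e.g. the corresponding let body
    print("-" * 40)
```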
This repository contains the code for Eureka ML Insights, a framework for standardizing evaluations of large foundation models beyond single-score reporting and rankings. The framework is designed to help researchers and practitioners run reproducible evaluations…
Transform data and create rich visualizations iteratively with AI.
RadFact is a framework for evaluating model-generated radiology reports against a ground-truth report, with or without grounding. Leveraging the logical inference capabilities of large language models, RadFact is not a single number but…
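To make the logical-inference framing concrete, here is a schematic Python sketch of bidirectional entailment scoring between a generated and a reference report. The `entails` callable stands in for an LLM judge, and the metric names are descriptive placeholders; this illustrates the concept only, not RadFact's actual implementation.

```python
# Score each generated sentence against the reference report and vice versa.
from typing import Callable, List

def entailment_scores(
    generated: List[str],
    reference: List[str],
    entails: Callable[[str, str], bool],
) -> dict:
    # A generated sentence counts as supported if the reference entails it.
    supported = sum(entails(" ".join(reference), s) for s in generated)
    # A reference sentence counts as covered if the generated report entails it.
    covered = sum(entails(" ".join(generated), s) for s in reference)
    return {
        "logical_precision": supported / max(len(generated), 1),
        "logical_recall": covered / max(len(reference), 1),
    }

# Toy judge: substring containment stands in for real LLM entailment.
toy = lambda premise, hypothesis: hypothesis.lower() in premise.lower()
print(entailment_scores(["Mild edema."], ["Mild edema.", "No fracture."], toy))
```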
vAttention is a memory manager for KV-cache in LLM serving systems. It decouples the allocation of virtual memory and physical memory using the CUDA virtual memory APIs. This approach enables allocating physical memory on demand…
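The decoupling can be pictured with a small pure-Python simulation: reserve a large contiguous virtual range per request up front, but attach physical pages only as the KV-cache grows. The class, page size, and method names below are invented for illustration; the real system does this with CUDA virtual memory APIs such as cuMemAddressReserve, cuMemCreate, and cuMemMap.

```python
PAGE = 2 * 1024 * 1024  # assumed 2 MiB physical allocation granularity

class VirtualKVBuffer:
    def __init__(self, max_bytes: int):
        self.virtual_size = max_bytes   # reserved address range, no memory yet
        self.mapped_pages = 0           # physical pages actually committed

    def ensure_capacity(self, used_bytes: int):
        # Map just enough physical pages to back the bytes in use.
        needed = -(-used_bytes // PAGE)  # ceiling division
        while self.mapped_pages < needed:
            # In the real system: create a physical handle and map it at the
            # next offset, keeping the tensor's virtual address contiguous.
            self.mapped_pages += 1

buf = VirtualKVBuffer(max_bytes=1 << 30)  # reserve 1 GiB of address space
buf.ensure_capacity(5 * 1024 * 1024)      # cache grew to ~5 MiB
print(buf.mapped_pages)                   # -> 3 pages mapped on demand
```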
Trace is a new AutoDiff-like tool for training AI systems end-to-end with general feedback (like numerical rewards or losses, natural language text, compiler errors, etc.). Trace generalizes the back-propagation algorithm by capturing and propagating an…
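A toy sketch of that idea: record the computation graph as the program runs, then propagate textual feedback from the output back to the inputs, the way autodiff propagates gradients. The `Node` class and `add` helper below are hypothetical and illustrate the concept only; they are not the Trace library's actual API.

```python
class Node:
    def __init__(self, value, parents=(), op=""):
        self.value, self.parents, self.op = value, parents, op
        self.feedback = []

    def backward(self, feedback: str):
        # Attach feedback here, then push it to every upstream node,
        # annotated with the operation it flowed through.
        self.feedback.append(feedback)
        for p in self.parents:
            p.backward(f"(via {self.op or 'input'}) {feedback}")

def add(a: Node, b: Node) -> Node:
    return Node(a.value + b.value, parents=(a, b), op="add")

x, y = Node(2), Node(3)
out = add(x, y)
out.backward("result too small; increase the inputs")
print(x.feedback)  # feedback reached the leaf, tagged with its path
```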
LongRoPE is a novel method that extends the context window of pre-trained LLMs to an impressive 2048k tokens by non-uniformly rescaling RoPE positional embeddings. LongRoPE has been integrated into Microsoft Phi-3.
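A NumPy sketch of the core mechanism: standard RoPE rotates each pair of channels at a fixed frequency, and a LongRoPE-style extension divides each frequency's position index by its own rescale factor rather than one global factor. The rescale values below are made-up placeholders; LongRoPE finds the actual per-frequency factors by search.

```python
import numpy as np

def rope(x, positions, rescale):
    # x: (seq, dim) with dim even; rescale: (dim/2,) per-frequency factors.
    d = x.shape[-1]
    inv_freq = 1.0 / (10000 ** (np.arange(0, d, 2) / d))  # RoPE frequencies
    # Non-uniform interpolation: each frequency sees position / rescale_i.
    angles = (positions[:, None] / rescale[None, :]) * inv_freq[None, :]
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

seq, dim = 8, 16
x = np.random.randn(seq, dim)
positions = np.arange(seq, dtype=np.float64)
rescale = np.linspace(1.0, 4.0, dim // 2)  # placeholder non-uniform factors
y = rope(x, positions, rescale)
```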