PLC-Challenge
This repo contains required files for the INTERSPEECH 2022 Audio Deep Packet Loss Concealment (PLC) Challenge.
We have created AECMOS for evaluating clips with regard to echo and other degradation ratings. There are two ways to use AECMOS: a web API and an ONNX version of the AECMOS model. We recommend the second, the ONNX model, because it can be much faster than the web API. The ONNX model and inference script are located in the AECMOS_local directory.
Toxic language detection systems often falsely flag text that mentions minority groups as toxic, because those groups are frequently the targets of online hate. Such over-reliance on spurious correlations also causes systems to struggle to detect implicitly toxic language. To help mitigate these issues, we create TOXIGEN, a new large-scale, machine-generated dataset of 274k toxic and benign statements about 13 minority groups. We develop a demonstration-based prompting framework and an adversarial classifier-in-the-loop…
Find and fix bugs in natural language machine learning models using adaptive testing.
Here, we provide a plug-and-play implementation of Admin, which stabilizes Transformer training that previously diverged and achieves better performance, without introducing additional hyperparameters. The design of Admin is half-precision friendly, and it can be reparameterized into the original Transformer.
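As a rough illustration of the reparameterization claim, here is a minimal NumPy sketch of an Admin-style residual connection, where the skip path is scaled elementwise by a profiled parameter `omega` before the post-layer-norm. The function names and the stand-in sublayer are illustrative, not the package's actual API:

```python
# Hedged sketch of an Admin-style scaled residual (illustrative names only).
import numpy as np

def layer_norm(x, eps=1e-5):
    # Plain layer norm over the last dimension, without learned gain/bias.
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def admin_residual(x, sublayer, omega):
    # Admin scales the skip connection elementwise by omega; omega is
    # initialized by profiling output variance, then trained as usual.
    return layer_norm(omega * x + sublayer(x))

d = 8
x = np.random.default_rng(1).standard_normal((2, d))
f = lambda h: np.tanh(h)  # stand-in for an attention/FFN sublayer

# With omega fixed to ones, the scaled residual reduces to the ordinary
# Post-LN residual, which is why Admin can be folded back into a vanilla
# Transformer with no extra parameters at inference time.
vanilla = layer_norm(x + f(x))
```

Setting `omega` to ones recovers the original architecture exactly, so no new hyperparameters survive into the deployed model.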
XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale.
This repo contains the source code of the Python package loralib and several examples of how to integrate it with PyTorch models, such as those in HuggingFace. We only support PyTorch for now. See our paper for a detailed description of LoRA.
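To convey the core idea behind LoRA (though not the loralib API itself), here is a minimal NumPy sketch: a frozen weight matrix `W` is adapted through a trainable low-rank update `B @ A`, giving an effective weight `W + (alpha / r) * B @ A`. All names and shapes below are illustrative assumptions:

```python
# Minimal sketch of the LoRA low-rank update (not the loralib interface).
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 16, 4, 8

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x):
    # x: (batch, d_in). Only A and B would receive gradient updates;
    # the zero-initialized B makes the adapter a no-op at the start.
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.standard_normal((2, d_in))
base_out = x @ W.T  # output of the frozen layer alone
```

Because `B` starts at zero, training begins exactly at the pretrained model's behavior, and only the small `A`/`B` matrices (rank `r`) need to be stored per task.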
Implementation of MoLeR: a generative model of molecular graphs which supports scaffold-constrained generation. This open-source code accompanies our paper “Learning to Extend Molecular Scaffolds with Structural Motifs”, which has been accepted at the ICLR 2022 conference.
Iris: pretrained summarization models for structured datasets and cardinality estimation.
Pretraining Text Encoders with Adversarial Mixture of Training Signal Generators. This repository contains the scripts for fine-tuning AMOS pretrained models on the GLUE and SQuAD 2.0 benchmarks. Accompanying paper accepted to ICLR 2022: “Pretraining Text Encoders with Adversarial Mixture of Training Signal Generators”.