Pre-training
We are working on pre-trained language models, including new pre-training methods, pre-trained model compression, and pre-training for other tasks such as speech and music. Our Papers Zhonghao Sheng, Kaitao Song, Xu Tan, Yi Ren, Wei Ye, Shikun…
Text to Speech
We are working on neural network based text-to-speech (TTS), including acoustic models, vocoders, frontends, and end-to-end text-to-wave models. Our research has been transferred to the Microsoft Azure TTS service to improve the product…
Uncertainty-aware Self-training for Few-shot Text Classification (code)
Uncertainty-aware self-training (UST) for few-shot text classification with pre-trained language models. With only 20-30 labeled samples per class for each task, UST performs comparably to fully supervised pre-trained language models like BERT fine-tuned on…
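To illustrate the general idea behind uncertainty-aware self-training, here is a minimal sketch of one selection step: run a stochastic forward pass (e.g. Monte Carlo dropout) several times per unlabeled example, and pseudo-label only the examples whose predictions are stable across passes. This is an illustrative toy, not the UST implementation; `stochastic_predict` is a hypothetical stand-in for a dropout-enabled classifier.

```python
import random
import statistics

def stochastic_predict(x, pass_idx):
    """Hypothetical stochastic forward pass: returns P(class=1) for input x.
    Even inputs simulate confident predictions; odd inputs simulate noisy ones."""
    rng = random.Random(pass_idx * 1000 + x)
    if x % 2 == 0:
        return 0.9 + rng.uniform(-0.02, 0.02)   # stable, confident
    return 0.55 + rng.uniform(-0.2, 0.2)        # unstable, uncertain

def select_pseudo_labels(unlabeled, passes=10, max_variance=1e-3):
    """Keep examples with low predictive variance across stochastic passes
    and assign them the mean prediction as a pseudo-label."""
    selected = []
    for x in unlabeled:
        probs = [stochastic_predict(x, t) for t in range(passes)]
        if statistics.pvariance(probs) <= max_variance:   # low uncertainty
            selected.append((x, int(statistics.fmean(probs) >= 0.5)))
    return selected
```

In a full self-training loop, the selected pseudo-labeled examples would be added to the small labeled set and the model re-trained, repeating until convergence.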