{"id":707653,"date":"2020-11-23T20:37:32","date_gmt":"2020-11-24T04:37:32","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&#038;p=707653"},"modified":"2020-12-03T07:50:07","modified_gmt":"2020-12-03T15:50:07","slug":"pre-training","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/pre-training\/","title":{"rendered":"Pre-training"},"content":{"rendered":"<p>We are working on pre-trained language models, including new pre-training methods, pre-trained model compression, and pre-training for other tasks such as speech and music.<\/p>\n<h4><strong>Our Papers<\/strong><\/h4>\n<ul>\n<li>Zhonghao Sheng, Kaitao Song, Xu Tan, Yi Ren, Wei Ye, Shikun Zhang, Tao Qin, <em>SongMASS: Automatic Song Writing with Pre-training and Alignment Constraint<\/em>, <strong>AAAI<\/strong> 2021. [<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/songmass-automatic-song-writing-with-pre-training-and-alignment-constraint\/\">Paper<\/a>]<\/li>\n<li>Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu, <em>MPNet: Masked and Permuted Pre-training for Language Understanding<\/em>, <strong>NeurIPS<\/strong> 2020. 
[<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2004.09297.pdf\">Paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>] [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/mp.weixin.qq.com\/s?__biz=MzAwMTA3MzM4Nw==&mid=2649451850&idx=1&sn=1680d06b76f29027f01e68ee82e397e9&chksm=82c084ceb5b70dd8366f296309d486dcc3feb3d16f887df980aa8d30104b14c216a67778f7a1&scene=21#wechat_redirect\">Blog<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>] [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/github.com\/microsoft\/MPNet\">Code@Github<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<\/li>\n<li>Kaitao Song, Hao Sun, Xu Tan, Tao Qin, Jianfeng Lu, Hongzhi Liu, Tie-Yan Liu, <em>LightPAFF: A Two-Stage Distillation Framework for Pre-training and Fine-tuning<\/em>, arXiv 2020. [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2004.12817.pdf\">Paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<\/li>\n<li>Hao Sun, Xu Tan, Jun-Wei Gan, Sheng Zhao, Dongxu Han, Hongzhi Liu, Tao Qin, and Tie-Yan Liu, <em>Knowledge Distillation from BERT in Pre-training and Fine-tuning for Polyphone Disambiguation,<\/em>\u00a0<strong>ASRU<\/strong> 2019. 
[<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/ieeexplore.ieee.org\/abstract\/document\/9003918\">Paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<\/li>\n<li>Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, Tie-Yan Liu, <em>MASS: Masked Sequence to Sequence Pre-training for Language Generation<\/em>, <strong>ICML<\/strong> 2019.\u00a0[<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/1905.02450.pdf\">Paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>][<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/github.com\/microsoft\/MASS\">Code@Github<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>][<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/mp.weixin.qq.com\/s\/7yCnAHk6x0ICtEwBKxXpOw\">Article<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>][<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/introducing-mass-a-pre-training-method-that-outperforms-bert-and-gpt-in-sequence-to-sequence-language-generation-tasks\/\">Blog<\/a>]<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>We are working on pre-trained language models, including new pre-training methods, pre-trained model compression, and pre-training for other tasks such as speech and music. Our Papers Zhonghao Sheng, Kaitao Song, Xu Tan, Yi Ren, Wei Ye, Shikun Zhang, Tao Qin, SongMASS: Automatic Song Writing with Pre-training and Alignment Constraint, AAAI 2021. 
[Paper] Kaitao Song, Xu Tan, Tao [&hellip;]<\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","footnotes":""},"research-area":[13556,13545],"msr-locale":[268875],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-707653","msr-project","type-msr-project","status-publish","hentry","msr-research-area-artificial-intelligence","msr-research-area-human-language-technologies","msr-locale-en_us","msr-archive-status-active"],"msr_project_start":"2018-12-01","related-publications":[701680,709309,709714],"related-downloads":[],"related-videos":[],"related-groups":[],"related-events":[],"related-opportunities":[],"related-posts":[],"related-articles":[],"tab-content":[],"slides":[],"related-researchers":[],"msr_research_lab":[199560],"msr_impact_theme":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/707653","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-project"}],"version-history":[{"count":8,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/707653\/revisions"}],"predecessor-version":[{"id":710404,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/707653\/revisions\/710404"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=707653"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=707653"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=707653"},{"taxonomy":"msr-impact-them
e","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=707653"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=707653"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}