{"id":707626,"date":"2020-11-23T20:20:44","date_gmt":"2020-11-24T04:20:44","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&#038;p=707626"},"modified":"2022-09-08T21:53:59","modified_gmt":"2022-09-09T04:53:59","slug":"ai-music","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/ai-music\/","title":{"rendered":"AI Music"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-medium wp-image-777250\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/11\/logo_gradient-300x64.png\" alt=\"Muzic Logo\" width=\"300\" height=\"64\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/11\/logo_gradient-300x64.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/11\/logo_gradient-1024x219.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/11\/logo_gradient-768x164.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/11\/logo_gradient-240x51.png 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/11\/logo_gradient.png 1383w\" sizes=\"auto, (max-width: 300px) 100vw, 300px\" \/><\/p>\n<p><strong>Muzic<\/strong> is a research project on AI music that empowers music understanding and generation with deep learning and artificial intelligence. Muzic is pronounced as [\u02c8mju\u02d0zeik] and &#8216;\u8c2c\u8d3c\u5ba2&#8217; (in Chinese). 
In addition to the image version of the logo (shown above), Muzic also has a video version (you can click <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/ai-muzic.github.io\/muzic_logo\/\">here<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> to watch it).<\/p>\n<p>We summarize the scope of our Muzic project in the following figure:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone wp-image-777256\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/11\/concept_map_new-300x132.png\" alt=\"Muzic Concept Map\" width=\"733\" height=\"323\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/11\/concept_map_new-300x132.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/11\/concept_map_new-1024x452.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/11\/concept_map_new-768x339.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/11\/concept_map_new-1536x677.png 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/11\/concept_map_new-2048x903.png 2048w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/11\/concept_map_new-240x106.png 240w\" sizes=\"auto, (max-width: 733px) 100vw, 733px\" \/><\/p>\n<p>The current work in\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/project\/ai-music\/\" rel=\"nofollow\">Muzic<\/a>\u00a0includes:<\/p>\n<ul>\n<li>Music Understanding\n<ul>\n<li>Symbolic Music Understanding:\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2106.05630.pdf\" rel=\"noopener noreferrer\">MusicBERT<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n<li>Automatic Lyrics Transcription:\u00a0<a 
class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2109.07940.pdf\" rel=\"noopener noreferrer\">PDAugment<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n<\/ul>\n<\/li>\n<li>Music Generation\n<ul>\n<li>Song Writing:\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2012.05168.pdf\" rel=\"noopener noreferrer\">SongMASS<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n<li>Lyric Generation:\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2107.01875.pdf\" rel=\"noopener noreferrer\">DeepRapper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n<li>Lyric-to-Melody Generation: <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2109.09617.pdf\" rel=\"noopener noreferrer\">TeleMelody<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2207.05688.pdf\">ReLyMe<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2208.05697.pdf\">Re-Creation of Creations<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n<li>Accompaniment Generation:\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2008.07703.pdf\" rel=\"noopener noreferrer\">PopMAG<span class=\"sr-only\"> (opens in new 
tab)<\/span><\/a><\/li>\n<li>Singing Voice Synthesis:\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2009.01776.pdf\" rel=\"noopener noreferrer\">HiFiSinger<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<p>We release the code of our research work at <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/github.com\/microsoft\/muzic\">https:\/\/github.com\/microsoft\/muzic<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n<p>You can find music samples generated by our systems on this page:\u00a0<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/ai-muzic.github.io\/\" rel=\"noopener noreferrer\">https:\/\/ai-muzic.github.io\/<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n<p>Thanks to <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/space.bilibili.com\/552760225\">Microsoft Band<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> for the great support of this project!<\/p>\n<h4><strong>Our Papers<\/strong><\/h4>\n<ul>\n<li><em>Re-creation of Creations: A New Paradigm for Lyric-to-Melody Generation<\/em>, Ang Lv, Xu Tan, Tao Qin, Tie-Yan Liu, Rui Yan, arXiv 2022. 
[<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2208.05697.pdf\">Paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<\/li>\n<li><em>ReLyMe: Improving Lyric-to-Melody Generation by Incorporating Lyric-Melody Relationships<\/em>, Chen Zhang, LuChin Chang, Songruoyao Wu, Xu Tan, Tao Qin, Tie-Yan Liu, Kejun Zhang, <strong>ACM Multimedia 2022<\/strong>. [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2207.05688.pdf\">Paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<\/li>\n<li><em>TeleMelody: Lyric-to-Melody Generation with a Template-Based Two-Stage Method<\/em>, Zeqian Ju, Peiling Lu, Xu Tan, Rui Wang, Chen Zhang, Songruoyao Wu, Kejun Zhang, Xiangyang Li, Tao Qin, Tie-Yan Liu, arXiv 2021. [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2109.09617.pdf\">Paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>] [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/ai-muzic.github.io\/telemelody\/\">Demo<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<\/li>\n<li><em>PDAugment: Data Augmentation by Pitch and Duration Adjustments for Automatic Lyrics Transcription<\/em>, Chen Zhang, Jiaxing Yu, Luchin Chang, Xu Tan, Jiawei Chen, Tao Qin, Kejun Zhang, <strong>ISMIR 2022<\/strong>. 
[<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2109.07940.pdf\">Paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<\/li>\n<li><em>MusicBERT: Symbolic Music Understanding with Large-Scale Pre-Training<\/em>, Mingliang Zeng, Xu Tan, Rui Wang, Zeqian Ju, Tao Qin, Tie-Yan Liu,\u00a0<strong>ACL<\/strong> 2021. [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2106.05630.pdf\">Paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<\/li>\n<li><em>DeepRapper: Neural Rap Generation with Rhyme and Rhythm Modeling<\/em>, Lanqing Xue, Kaitao Song, Duocai Wu, Xu Tan, Nevin L. Zhang, Tao Qin, Wei-Qiang Zhang, Tie-Yan Liu, <strong>ACL<\/strong> 2021. [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/aclanthology.org\/2021.acl-long.6.pdf\">Paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>] [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/ai-muzic.github.io\/deeprapper\/\">Demo<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<\/li>\n<li><em>SongMASS: Automatic Song Writing with Pre-training and Alignment Constraint<\/em>, Zhonghao Sheng, Kaitao Song, Xu Tan, Yi Ren, Wei Ye, Shikun Zhang, Tao Qin,\u00a0<strong>AAAI<\/strong> 2021. 
[<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2012.05168.pdf\">Paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>] [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/ai-muzic.github.io\/songmass\/\">Demo<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<\/li>\n<li><span data-bind=\"text: fullName, style:{fontWeight: isPrimary ? 'bold' : ''}\"><em>PopMAG: Pop Music Accompaniment Generation<\/em>, <\/span><span data-bind=\"text: fullName, style:{fontWeight: isPrimary ? 'bold' : ''}\">Yi Ren, <\/span><span data-bind=\"text: fullName, style:{fontWeight: isPrimary ? 'bold' : ''}\">Jinzheng He, <\/span><span data-bind=\"text: fullName, style:{fontWeight: isPrimary ? 'bold' : ''}\">Xu Tan, <\/span><span data-bind=\"text: fullName, style:{fontWeight: isPrimary ? 'bold' : ''}\">Tao Qin, <\/span><span data-bind=\"text: fullName, style:{fontWeight: isPrimary ? 'bold' : ''}\">Zhou Zhao, <\/span><span data-bind=\"text: fullName, style:{fontWeight: isPrimary ? 'bold' : ''}\">Tie-Yan Liu, <\/span>ACM<strong> Multimedia<\/strong> 2020. [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2008.07703.pdf\">Paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>] [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/ai-muzic.github.io\/popmag\/\">Demo<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<\/li>\n<li><em>HiFiSinger: Towards High-Fidelity Neural Singing Voice Synthesis<\/em>, Jiawei Chen, Xu Tan, Jian Luan, Tao Qin, Tie-Yan Liu, arXiv 2020. 
[<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2009.01776.pdf\">Paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>] [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/ai-muzic.github.io\/hifisinger\/\">Demo<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<\/li>\n<li><em>DeepSinger: Singing Voice Synthesis with Data Mined From the Web<\/em>, Yi Ren, Xu Tan, Tao Qin, Jian Luan, Zhou Zhao, Tie-Yan Liu,\u00a0<strong>KDD<\/strong> 2020. [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2007.04590.pdf\">Paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>] [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/speechresearch.github.io\/deepsinger\/\">Demo<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<\/li>\n<li><em>XiaoiceSing: A High-Quality and Integrated Singing Voice Synthesis System<\/em>, Peiling Lu, Jie Wu, Jian Luan, Xu Tan, Li Zhou,\u00a0<strong>INTERSPEECH<\/strong> 2020. 
[<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/arxiv.org\/pdf\/2006.06261.pdf\">Paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>] [<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/xiaoicesing.github.io\/\">Demo<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>]<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Muzic is a research project on AI music that empowers music understanding and generation with deep learning and artificial intelligence. Muzic is pronounced as [\u02c8mju\u02d0zeik] and &#8216;\u8c2c\u8d3c\u5ba2&#8217; (in Chinese). Besides the logo in image version (see above), Muzic also has a logo in video version (you can click here to watch). We summarize the scope [&hellip;]<\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","footnotes":""},"research-area":[13556,243062],"msr-locale":[268875],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-707626","msr-project","type-msr-project","status-publish","hentry","msr-research-area-artificial-intelligence","msr-research-area-audio-acoustics","msr-locale-en_us","msr-archive-status-active"],"msr_project_start":"2019-08-01","related-publications":[709714,751096,758389,758824,798556,798565,798571],"related-downloads":[],"related-videos":[],"related-groups":[705946],"related-events":[],"related-opportunities":[],"related-posts":[],"related-articles":[],"tab-content":[],"slides":[],"related-researchers":[{"type":"user_nicename","display_name":"Yang Ou","user_id":37742,"people_section":"Section name 
0","alias":"yaou"}],"msr_research_lab":[199560],"msr_impact_theme":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/707626","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-project"}],"version-history":[{"count":27,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/707626\/revisions"}],"predecessor-version":[{"id":876690,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/707626\/revisions\/876690"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=707626"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=707626"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=707626"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=707626"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=707626"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}