{"id":788945,"date":"2021-10-26T23:54:04","date_gmt":"2021-10-27T06:54:04","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-project&#038;p=788945"},"modified":"2021-11-25T19:01:25","modified_gmt":"2021-11-26T03:01:25","slug":"privacy-preserving-deep-learning","status":"publish","type":"msr-project","link":"https:\/\/www.microsoft.com\/en-us\/research\/project\/privacy-preserving-deep-learning\/","title":{"rendered":"Privacy-preserving Deep Learning"},"content":{"rendered":"<section class=\"mb-3 moray-highlight\">\n\t<div class=\"card-img-overlay mx-lg-0\">\n\t\t<div class=\"card-background bg-gray-200 has-background- card-background--full-bleed\">\n\t\t\t\t\t<\/div>\n\t\t<!-- Foreground -->\n\t\t<div class=\"card-foreground d-flex mt-md-n5 my-lg-5 px-g px-lg-0\">\n\t\t\t<!-- Container -->\n\t\t\t<div class=\"container d-flex mt-md-n5 my-lg-5 align-self-center\">\n\t\t\t\t<!-- Card wrapper -->\n\t\t\t\t<div class=\"w-100 w-lg-col-5\">\n\t\t\t\t\t<!-- Card -->\n\t\t\t\t\t<div class=\"card material-md-card py-5 px-md-5\">\n\t\t\t\t\t\t<div class=\"card-body \">\n\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\n\n<h1 id=\"privacy-preserving-deep-learning\">Privacy-preserving Deep Learning<\/h1>\n\n\n\n<p><\/p>\n\n\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t<\/div>\n\t\t<\/div>\n\t<\/div>\n<\/section>\n\n\n\n\n\n<p>Large machine learning models can memorize their training data, which poses a privacy risk. Preserving privacy requires controlling access to the data and measuring the privacy loss. Differential privacy (DP) is widely recognized as the gold standard of privacy protection due to its mathematical rigor. 
We propose a series of approaches to solve the challenges of applying DP to large deep neural networks and achieve new state-of-the-art results for private learning.<\/p>\n\n\n\n\n\n<ul class=\"wp-block-list\"><li>Da Yu,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/huzhang\/\">Huishuai Zhang<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wche\/\">Wei Chen<\/a>, Jian Yin and&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tyliu\/\">Tie-Yan Liu<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/large-scale-private-learning-via-low-rank-reparametrization\/\">Large Scale Private Learning via Low-rank Reparametrization<\/a>, <em>International Conference on Machine Learning (ICML), July 2021<\/em><\/li><\/ul>\n\n\n\n<ul class=\"wp-block-list\"><li>Da Yu,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/huzhang\/\">Huishuai Zhang<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wche\/\">Wei Chen<\/a> and&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tyliu\/\">Tie-Yan Liu<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/do-not-let-privacy-overbill-utility-gradient-embedding-perturbation-for-private-learning\/\">Do not Let Privacy Overbill Utility: Gradient Embedding Perturbation for Private Learning<\/a>, <em>ICLR, May 2021<\/em><\/li><\/ul>\n\n\n\n<ul class=\"wp-block-list\"><li>Da Yu,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/huzhang\/\">Huishuai Zhang<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wche\/\">Wei Chen<\/a>, Jian Yin and&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tyliu\/\">Tie-Yan Liu<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/how-does-data-augmentation-affect-privacy-in-machine-learning\/\">How Does Data Augmentation Affect Privacy in Machine Learning?<\/a> 
<em>AAAI, January 2021<\/em><\/li><\/ul>\n\n\n\n<ul class=\"wp-block-list\"><li>Da Yu,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/huzhang\/\">Huishuai Zhang<\/a>,&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wche\/\">Wei Chen<\/a>, Jian Yin and&nbsp;<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tyliu\/\">Tie-Yan Liu<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/gradient-perturbation-is-underrated-for-differentially-private-convex-optimization\/\">Gradient Perturbation is Underrated for Differentially Private Convex Optimization<\/a>, <em>IJCAI, July 2020<\/em><\/li><\/ul>\n\n\n","protected":false},"excerpt":{"rendered":"<p>Large machine learning models can memorize their training data, which poses a privacy risk. Preserving privacy requires controlling access to the data and measuring the privacy loss. Differential privacy (DP) is widely recognized as the gold standard of privacy protection due to its mathematical rigor. 
We propose a series of approaches to solve the [&hellip;]<\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","footnotes":""},"research-area":[13556],"msr-locale":[268875],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-788945","msr-project","type-msr-project","status-publish","hentry","msr-research-area-artificial-intelligence","msr-locale-en_us","msr-archive-status-active"],"msr_project_start":"","related-publications":[879525],"related-downloads":[],"related-videos":[],"related-groups":[1054512],"related-events":[],"related-opportunities":[],"related-posts":[916938],"related-articles":[],"tab-content":[],"slides":[],"related-researchers":[],"msr_research_lab":[199560],"msr_impact_theme":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/788945","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-project"}],"version-history":[{"count":4,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/788945\/revisions"}],"predecessor-version":[{"id":799930,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-project\/788945\/revisions\/799930"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=788945"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=788945"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=788945"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsof
t.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=788945"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=788945"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}