{"id":3321,"date":"2015-07-01T09:00:00","date_gmt":"2015-07-01T09:00:00","guid":{"rendered":"https:\/\/blogs.technet.microsoft.com\/inside_microsoft_research\/2015\/07\/01\/standing-the-test-of-time-microsoft-researcher-honored-for-prescient-machine-learning-work\/"},"modified":"2016-07-20T07:29:10","modified_gmt":"2016-07-20T14:29:10","slug":"standing-the-test-of-time-microsoft-researcher-honored-for-prescient-machine-learning-work","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/standing-the-test-of-time-microsoft-researcher-honored-for-prescient-machine-learning-work\/","title":{"rendered":"Standing the test of time: Microsoft researcher honored for prescient machine learning work"},"content":{"rendered":"<p class=\"posted-by\">Posted by <span class=\"author\">Allison Linn<\/span><\/p>\n<p><img decoding=\"async\" style=\"vertical-align: baseline; margin-left: 5px; margin-right: 5px;\" title=\"Chris Burges\" src=\"https:\/\/msdnshared.blob.core.windows.net\/media\/TNBlogsFS\/prod.evol.blogs.technet.com\/CommunityServer.Blogs.Components.WeblogFiles\/00\/00\/00\/90\/35\/chris-burges_550.jpg\" alt=\"Chris Burges\" \/><\/p>\n<p>When <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"Chris J.C. Burges\" href=\"http:\/\/research.microsoft.com\/en-us\/people\/cburges\/\" target=\"_blank\">Chris J.C. 
Burges<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> came to Microsoft Research in 2000, he knew he wanted to work on <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"Machine Learning research at Microsoft\" href=\"http:\/\/research.microsoft.com\/en-us\/research-areas\/machine-learning-ai.aspx\" target=\"_blank\">machine learning<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> projects that would have a real impact on users.<\/p>\n<p>Burges definitely succeeded: He ended up being part of a team that created the basis for the ranking system that is still used in Microsoft\u2019s <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"Bing\" href=\"http:\/\/www.bing.com\/\" target=\"_blank\">Bing<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> search engine today.<\/p>\n<p>At next week\u2019s <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"International Conference on Machine Learning\" href=\"http:\/\/icml.cc\/2015\/\" target=\"_blank\">International Conference on Machine Learning<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Burges, a research manager and principal researcher in Microsoft Research\u2019s <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"Machine Learning Intelligence Group\" href=\"http:\/\/research.microsoft.com\/en-us\/groups\/tmsn\/\" target=\"_blank\">Machine Learning Intelligence Group<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, and his co-authors will receive the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"Test of Time Award\" href=\"http:\/\/icml.cc\/2015\/?page_id=51\" target=\"_blank\">Test of Time 
Award<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> for the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"2005 paper: ICML Ranking (.pdf)\" href=\"http:\/\/research.microsoft.com\/en-us\/um\/people\/cburges\/papers\/ICML_ranking.pdf\" target=\"_blank\">2005 paper<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> that showed how that system works.<\/p>\n<p>The system, called RankNet, was a breakthrough because it was much faster and more accurate than the previous system. Burges said it was able to make as much progress training the search engine system to rank results in one day, using one PC, as a previous system had done in several days using a cluster of computers.<\/p>\n<p>RankNet also relies on <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"neural networks research at Microsoft\" href=\"http:\/\/research.microsoft.com\/apps\/search\/default.aspx?q=neural+networks\" target=\"_blank\">neural networks<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>. These are computer systems, loosely modeled after the human brain, that can be trained to perform desired tasks based on data labeled by humans. 
At the time, Burges said he believed they were the only researchers using that technology for a search engine ranking system.<\/p>\n<p>In the last few years, the use of neural networks has exploded, with researchers using them to make great strides in everything from <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"real-time translation\" href=\"http:\/\/research.microsoft.com\/en-us\/news\/features\/translator-052714.aspx\" target=\"_blank\">real-time translation<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> to <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"image captioning\" href=\"http:\/\/blogs.microsoft.com\/next\/2015\/05\/28\/picture-this-microsoft-research-project-can-interpret-caption-photos\/\" target=\"_blank\">image captioning<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>. The program co-chairs lauded Burges and his team for showing the power of these networks way back in 2005, long before this most recent renaissance.<\/p>\n<p>Burges said he was drawn to search engine ranking because it was a hot, highly competitive field in which both researchers and technology companies were racing to be the best.<\/p>\n<p>&#8220;We knew there was a significant opportunity for having impact,&#8221; he said.<\/p>\n<p>These days, Burges is working on a system that <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"aims to teach machines\" href=\"http:\/\/research.microsoft.com\/apps\/video\/default.aspx?id=214659&l=i\" target=\"_blank\">aims to teach machines<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> to read and comprehend text, and to be able to answer questions about it.<\/p>\n<p>It&#8217;s an ambitious project that, if successful, could have profound implications for artificial intelligence, 
or the development of systems that can see, hear and understand.<\/p>\n<p>Burges said one thing he\u2019s learned in his career is that it\u2019s worth taking on longshot projects that you are passionate about.<\/p>\n<p>&#8220;At some point you should just say, \u2018What do I really want to accomplish?\u2019&#8221; he said. &#8220;And then you should just do it.&#8221;<\/p>\n<p>A number of Microsoft researchers are presenting other papers at the machine learning conference. They include:<\/p>\n<ul>\n<li><em><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"Approval Voting and Incentives in Crowdsourcing\" href=\"http:\/\/jmlr.org\/proceedings\/papers\/v37\/shaha15.pdf\" target=\"_blank\">Approval Voting and Incentives in Crowdsourcing<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/em> (611 KB .pdf)<\/li>\n<li><em><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"A Lower Bound for the Optimization of Finite Sums\" href=\"http:\/\/jmlr.org\/proceedings\/papers\/v37\/agarwal15.pdf\" target=\"_blank\">A Lower Bound for the Optimization of Finite Sums<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/em> (232 KB .pdf)<\/li>\n<li><em><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"Surrogate Functions for Maximizing Precision at the Top\" href=\"http:\/\/jmlr.org\/proceedings\/papers\/v37\/kar15.pdf\" target=\"_blank\">Surrogate Functions for Maximizing Precision at the Top<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/em> (435 KB .pdf)<\/li>\n<li><em><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"Optimizing Non-decomposable Performance Measures: A Tale of Two Classes\" 
href=\"http:\/\/jmlr.org\/proceedings\/papers\/v37\/narasimhana15.pdf\" target=\"_blank\">Optimizing Non-decomposable Performance Measures: A Tale of Two Classes<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/em> (469 KB .pdf)<\/li>\n<li><em><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"Classification with Low Rank and Missing Data\" href=\"http:\/\/jmlr.org\/proceedings\/papers\/v37\/hazan15.pdf\" target=\"_blank\">Classification with Low Rank and Missing Data<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/em> (363 KB .pdf)<\/li>\n<li><em><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization\" href=\"http:\/\/jmlr.org\/proceedings\/papers\/v37\/zhanga15.pdf\" target=\"_blank\">Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/em> (426 KB .pdf)<\/li>\n<li><em><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"DiSCO: Distributed Optimization for Self-Concordant Empirical Loss\" href=\"http:\/\/jmlr.org\/proceedings\/papers\/v37\/zhangb15.pdf\" target=\"_blank\">DiSCO: Distributed Optimization for Self-Concordant Empirical Loss<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/em> (395 KB .pdf)<\/li>\n<li><em><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"A Linear Dynamical System Model for Text\" href=\"http:\/\/jmlr.org\/proceedings\/papers\/v37\/belanger15.pdf\" target=\"_blank\">A Linear Dynamical System Model for Text<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/em> (335 KB .pdf)<\/li>\n<li><em><a class=\"msr-external-link 
glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"Multi-Task Learning for Subspace Segmentation\" href=\"http:\/\/jmlr.org\/proceedings\/papers\/v37\/wangc15.pdf\" target=\"_blank\">Multi-Task Learning for Subspace Segmentation<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/em> (163 KB .pdf)<\/li>\n<li><em><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"On Greedy Maximization of Entropy\" href=\"http:\/\/jmlr.org\/proceedings\/papers\/v37\/sharma15.pdf\" target=\"_blank\">On Greedy Maximization of Entropy<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/em> (323 KB .pdf)<\/li>\n<li><em><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"Bimodal Modelling of Source Code and Natural Language\" href=\"http:\/\/jmlr.org\/proceedings\/papers\/v37\/allamanis15.pdf\" target=\"_blank\">Bimodal Modelling of Source Code and Natural Language<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/em> (448 KB .pdf)<\/li>\n<li><em><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"Intersecting Faces: Non-negative Matrix Factorization With New Guarantees\" href=\"http:\/\/jmlr.org\/proceedings\/papers\/v37\/geb15.pdf\" target=\"_blank\">Intersecting Faces: Non-negative Matrix Factorization With New Guarantees<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/em> (452 KB .pdf)<\/li>\n<li><em><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization\" href=\"http:\/\/jmlr.org\/proceedings\/papers\/v37\/frostig15.pdf\" target=\"_blank\">Un-regularizing: approximate proximal point and faster 
stochastic algorithms for empirical risk minimization<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/em> (397 KB .pdf)<\/li>\n<li><em><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"Pushing the Limits of Affine Rank Minimization by Adapting Probabilistic PCA\" href=\"http:\/\/jmlr.org\/proceedings\/papers\/v37\/xin15.pdf\" target=\"_blank\">Pushing the Limits of Affine Rank Minimization by Adapting Probabilistic PCA<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/em> (374 KB .pdf)<\/li>\n<\/ul>\n<p><em>Allison Linn is a senior writer at Microsoft Research. <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" title=\"Follow Allison on Twitter\" href=\"https:\/\/x.com\/allisondlinn\" target=\"_blank\">Follow Allison on Twitter<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Posted by Allison Linn When Chris J.C. Burges came to Microsoft Research in 2000, he knew he wanted to work on machine learning projects that would have a real impact on users. 
Burges definitely succeeded: He ended up being part of a team that created the basis for the ranking system that is still used [&hellip;]<\/p>\n","protected":false},"author":30766,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":[],"msr_hide_image_in_river":0,"footnotes":""},"categories":[194467,194455],"tags":[187359,200699,200889,201911,186418,202509,202511,187358,204183],"research-area":[13556],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-3321","post","type-post","status-publish","format-standard","hentry","category-artifical-intelligence","category-machine-learning","tag-artificial-intelligence","tag-bing-translator","tag-chris-burges","tag-icml","tag-machine-learning","tag-machine-learning-intelligence-group","tag-machine-teaching","tag-neural-networks","tag-test-of-time-award","msr-research-area-artificial-intelligence","msr-locale-en_us"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-events":[],"related-researchers":[],"msr_type":"Post","byline":"","formattedDate":"July 1, 2015","formattedExcerpt":"Posted by Allison Linn When Chris J.C. Burges came to Microsoft Research in 2000, he knew he wanted to work on machine learning projects that would have a real impact on users. 
Burges definitely succeeded: He ended up being part of a team that created&hellip;","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/3321","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/30766"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/comments?post=3321"}],"version-history":[{"count":1,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/3321\/revisions"}],"predecessor-version":[{"id":235665,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/3321\/revisions\/235665"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=3321"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/categories?post=3321"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/tags?post=3321"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=3321"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=3321"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=3321"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=3321"},{"taxonomy":"msr-post-option","embeddable":true,"href":
"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=3321"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=3321"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=3321"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=3321"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}