{"id":4801,"date":"2015-07-13T09:00:12","date_gmt":"2015-07-13T16:00:12","guid":{"rendered":"https:\/\/blogs.msdn.microsoft.com\/msr_er\/?p=4801"},"modified":"2016-07-20T07:29:06","modified_gmt":"2016-07-20T14:29:06","slug":"icml-2015-best-paper-summary-optimal-and-adaptive-algorithms-for-online-boosting","status":"publish","type":"post","link":"https:\/\/www.microsoft.com\/en-us\/research\/blog\/icml-2015-best-paper-summary-optimal-and-adaptive-algorithms-for-online-boosting\/","title":{"rendered":"ICML 2015 best paper summary: Optimal and adaptive algorithms for online boosting"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\" size-full wp-image-4802 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/AlinaBeygelzimer1.jpg\" alt=\"AlinaBeygelzimer1\" width=\"308\" height=\"308\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/AlinaBeygelzimer1.jpg 308w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/AlinaBeygelzimer1-150x150.jpg 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/AlinaBeygelzimer1-300x300.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2016\/02\/AlinaBeygelzimer1-180x180.jpg 180w\" sizes=\"auto, (max-width: 308px) 100vw, 308px\" \/><\/p>\n<p>A study of new algorithms that improve \u201conline boosting\u201d has won a Best Paper award at the world\u2019s leading academic conference on machine learning.<\/p>\n<p>In the paper, <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/jmlr.org\/proceedings\/papers\/v37\/beygelzimer15.pdf\" target=\"_blank\"><em>Optimal and Adaptive Algorithms for Online Boosting<\/em>,<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>\u00a0by Alina Beygelzimer, Satyen Kale, and Haipeng Luo, delivered to the recent <a 
class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/icml.cc\/2015\/\" target=\"_blank\">International Conference on Machine Learning\u00a0(ICML)<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, researchers at Yahoo Labs and Princeton University published test results showing an average performance gain of more than 5% over existing boosting methods.<\/p>\n<p>The findings build on more than a decade of machine learning research into boosting, a technique that combines many \u201cweak learners\u201d (simple prediction rules that are only slightly more accurate than random guessing) into a single, far more accurate predictor. As long as each weak learner performs at least slightly better than random chance, a boosting algorithm can combine their predictions to deliver more accurate results; online boosting applies this idea to data that arrives as a stream.<\/p>\n<p>Testing for the new algorithm variants described in the study\u2014OnlineBBM (optimal) and AdaBoost.OL (adaptive)\u2014was performed on <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/hunch.net\/~vw\/\" target=\"_blank\">Vowpal Wabbit\u00a0(VW)<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, the out-of-core learning system library from Microsoft Research, across 13 publicly available benchmark datasets for categories such as news and gaming (poker). Results showed OnlineBBM made fewer errors than all other boosting algorithms in 10 of 12 datasets.<\/p>\n<p>The study credited three factors for the improved performance:<\/p>\n<ul>\n<li>Weaker learning requirements. Works even with weaker (less accurate) learners.<\/li>\n<li>No weighting requirements. Does not require \u201cimportance-weighted online learning.\u201d<\/li>\n<li>Optimal utilization. 
No other online boosting algorithm achieves \u201cthe same error rate with fewer weak learners or examples asymptotically.\u201d<\/li>\n<\/ul>\n<p>OnlineBBM is suited to situations in which certain parameters, such as the weak learners\u2019 edge over random guessing, are known ahead of time. For cases in which such parameters remain unknown, the study described AdaBoost.OL, an adaptive algorithm that extends the classic AdaBoost by using \u201conline loss minimization.\u201d<\/p>\n<p>The paper concludes: \u201cAlthough the number of weak learners and excess loss for AdaBoost.OL are suboptimal, the adaptivity of AdaBoost.OL is an appealing feature and leads to good performance in experiments. The possibility of obtaining an algorithm that is both adaptive and optimal is left as an open question.\u201d<\/p>\n<p>A total of <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/jmlr.org\/proceedings\/papers\/v37\/#default\" target=\"_blank\">270 papers were accepted<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> at the ICML conference, July 6-11, 2015, in Lille, France.<\/p>\n<p><em>\u2014John Kaiser, Research News<\/em><\/p>\n<p>For more computer science research news, visit <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" href=\"http:\/\/www.researchnews.com\/\" target=\"_blank\">ResearchNews.com<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A study of new algorithms that improve \u201conline boosting\u201d has won a Best Paper award at the world\u2019s leading academic conference on machine learning. 
In the paper, Optimal and Adaptive Algorithms for Online Boosting,\u00a0by Alina Beygelzimer, Satyen Kale, and Haipeng Luo, delivered to the recent International Conference on Machine Learning\u00a0(ICML), researchers at Yahoo Labs and [&hellip;]<\/p>\n","protected":false},"author":32627,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":[],"msr_hide_image_in_river":0,"footnotes":""},"categories":[194466,194455,194459],"tags":[195863,195955,196871,197732,197856],"research-area":[],"msr-region":[],"msr-event-type":[],"msr-locale":[268875],"msr-post-option":[],"msr-impact-theme":[],"msr-promo-type":[],"msr-podcast-series":[],"class_list":["post-4801","post","type-post","status-publish","format-standard","hentry","category-algorithms","category-machine-learning","category-researchnews","tag-icml-2015","tag-international-conference-on-machine-learning-icml","tag-princeton-university","tag-vowpal-wabbit-vw","tag-yahoo-labs","msr-locale-en_us"],"msr_event_details":{"start":"","end":"","location":""},"podcast_url":"","podcast_episode":"","msr_research_lab":[],"msr_impact_theme":[],"related-publications":[],"related-downloads":[],"related-videos":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-events":[],"related-researchers":[],"msr_type":"Post","byline":"","formattedDate":"July 13, 2015","formattedExcerpt":"A study of new algorithms that improve \u201conline boosting\u201d has won a Best Paper award at the world\u2019s leading academic conference on machine learning. 
In the paper, Optimal and Adaptive Algorithms for Online Boosting,\u00a0by Alina Beygelzimer, Satyen Kale, and Haipeng Luo, delivered to the recent&hellip;","locale":{"slug":"en_us","name":"English","native":"","english":"English"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/4801","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/32627"}],"replies":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/comments?post=4801"}],"version-history":[{"count":1,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/4801\/revisions"}],"predecessor-version":[{"id":260727,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/posts\/4801\/revisions\/260727"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=4801"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/categories?post=4801"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/tags?post=4801"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=4801"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=4801"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=4801"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-l
ocale?post=4801"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=4801"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=4801"},{"taxonomy":"msr-promo-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-promo-type?post=4801"},{"taxonomy":"msr-podcast-series","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-podcast-series?post=4801"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}