{"id":188500,"date":"2012-09-17T00:00:00","date_gmt":"2012-10-04T13:43:54","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/msr-research-item\/training-of-binary-classifiers-with-quantum-optimization\/"},"modified":"2016-08-22T11:27:01","modified_gmt":"2016-08-22T18:27:01","slug":"training-of-binary-classifiers-with-quantum-optimization","status":"publish","type":"msr-video","link":"https:\/\/www.microsoft.com\/en-us\/research\/video\/training-of-binary-classifiers-with-quantum-optimization\/","title":{"rendered":"Training of Binary Classifiers with Quantum Optimization"},"content":{"rendered":"<div class=\"asset-content\">\n<p>Modern machine learning theory formulates the training of a classifier as the minimization of an objective function that is the sum of two terms: the empirical risk, which characterizes how well the classifier performs on a training data set, and the regularization, which controls the classifier's complexity. I will discuss the advantages that can be obtained if either of these terms is chosen to be non-convex. A non-convex risk allows the training to cope with a significant amount of label noise while retaining the ability to learn a Bayes-optimal classifier. This reduces the quality requirements for the training data, a major bottleneck for machine learning applications, thereby increasing the autonomy of the learner. Non-convex regularization can achieve very sparse classifiers, leading to increased execution speed and classifiers suitable for power-constrained environments. I will describe our efforts to map the training problems onto quadratic binary optimization, the native input format of the D-Wave quantum optimization processors. 
The talk will discuss the evidence that the processors behave quantum-mechanically and the challenge of performing the mapping such that only a sufficiently small number of ancillary qubits are required.<\/p>\n<\/div>\n<p><!-- .asset-content --><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Modern machine learning theory formulates the training of a classifier as the minimization of an objective function that is the sum of two terms: the empirical risk, which characterizes how well the classifier performs on a training data set, and the regularization, which controls the classifier's complexity. I will discuss the advantages that can be obtained if [&hellip;]<\/p>\n","protected":false},"featured_media":277296,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_hide_image_in_river":0,"footnotes":""},"research-area":[],"msr-video-type":[],"msr-locale":[268875],"msr-post-option":[],"msr-session-type":[],"msr-impact-theme":[],"msr-pillar":[],"msr-episode":[],"msr-research-theme":[],"class_list":["post-188500","msr-video","type-msr-video","status-publish","has-post-thumbnail","hentry","msr-locale-en_us"],"msr_download_urls":"","msr_external_url":"https:\/\/youtu.be\/a4yH9QLf1_A","msr_secondary_video_url":"","msr_video_file":"","_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video\/188500","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-video"}],"version-history":[{"count":0,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video\/188500\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/277296"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.
com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=188500"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=188500"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=188500"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=188500"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=188500"},{"taxonomy":"msr-session-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-session-type?post=188500"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=188500"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=188500"},{"taxonomy":"msr-episode","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-episode?post=188500"},{"taxonomy":"msr-research-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-theme?post=188500"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}