{"id":186376,"date":"2010-03-30T00:00:00","date_gmt":"2011-06-16T13:08:26","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/msr-research-item\/labelme-online-image-annotation-and-applications\/"},"modified":"2016-08-22T11:28:05","modified_gmt":"2016-08-22T18:28:05","slug":"labelme-online-image-annotation-and-applications","status":"publish","type":"msr-video","link":"https:\/\/www.microsoft.com\/en-us\/research\/video\/labelme-online-image-annotation-and-applications\/","title":{"rendered":"LabelMe: online image annotation and applications"},"content":{"rendered":"<div class=\"asset-content\">\n<p>Central to the development of computer vision systems is the collection and use of annotated images spanning our visual world. Annotations may include information about the identity, spatial extent, and viewpoint of the objects present in a depicted scene. Such a database is useful for the training and evaluation of computer vision systems. Motivated by the availability of images on the internet, we introduced a web-based annotation tool that allows online users to label objects and their spatial extent in images. To date, we have collected over 500K annotations that span a variety of different scene and object classes. In this talk, I will show the contents of the database, its growth over time, and statistics of its usage. In addition, we use the collected user-provided object annotations to extract the real-world 3D coordinates of images in a variety of scenes. Important for this task is the recovery of geometric information that is implicit in the object labels, such as qualitative relationships between objects (attachment, support, occlusion) and quantitative ones (inferring camera parameters). We show that we are able to obtain high-quality 3D information by evaluating the proposed approach on a database obtained with a laser range scanner.<\/p>\n<p>Joint work with Antonio Torralba (MIT) and William T. Freeman (MIT)<\/p>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Central to the development of computer vision systems is the collection and use of annotated images spanning our visual world. Annotations may include information about the identity, spatial extent, and viewpoint of the objects present in a depicted scene. Such a database is useful for the training and evaluation of computer vision systems. Motivated by [&hellip;]<\/p>\n","protected":false},"featured_media":280844,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_hide_image_in_river":0,"footnotes":""},"research-area":[],"msr-video-type":[],"msr-locale":[268875],"msr-post-option":[],"msr-session-type":[],"msr-impact-theme":[],"msr-pillar":[],"msr-episode":[],"msr-research-theme":[],"class_list":["post-186376","msr-video","type-msr-video","status-publish","has-post-thumbnail","hentry","msr-locale-en_us"],"msr_download_urls":"","msr_external_url":"https:\/\/youtu.be\/jHw6-chPV6w","msr_secondary_video_url":"","msr_video_file":"","_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video\/186376","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-video"}],"version-history":[{"count":0,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video\/186376\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/2808
44"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=186376"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=186376"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=186376"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=186376"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=186376"},{"taxonomy":"msr-session-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-session-type?post=186376"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=186376"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=186376"},{"taxonomy":"msr-episode","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-episode?post=186376"},{"taxonomy":"msr-research-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-theme?post=186376"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}