{"id":586939,"date":"2019-04-23T00:00:40","date_gmt":"2019-04-23T07:00:40","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&#038;p=586939"},"modified":"2019-05-13T09:20:10","modified_gmt":"2019-05-13T16:20:10","slug":"both-sides-now-generating-and-understanding-visually-grounded-language","status":"publish","type":"msr-video","link":"https:\/\/www.microsoft.com\/en-us\/research\/video\/both-sides-now-generating-and-understanding-visually-grounded-language\/","title":{"rendered":"Both Sides Now: Generating and Understanding Visually-Grounded Language"},"content":{"rendered":"<p>From robots to cars, virtual assistants and voice-controlled drones, computing devices are increasingly expected to communicate naturally with people and to understand the visual context in which they operate. In this talk, I will present our latest work on generating and comprehending visually-grounded language. First, we will discuss the challenging task of describing an image (image captioning). I will introduce captioning models that leverage multiple data sources, including object detection datasets and unaligned text corpora, in order to learn about the long tail of visual concepts found in the real world. To support and encourage further efforts in this area, I will present the &#8216;nocaps&#8217; benchmark for novel object captioning. In the second part of the talk, I will describe our recent work on developing agents that follow natural language instructions in reconstructed 3D environments using the R2R dataset for vision-and-language navigation.<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/05\/Both-Sides-Now-Generating-and-Understanding-Visually-Grounded-Language-slides.pdf\" target=\"_blank\" rel=\"noopener noreferrer\">[SLIDES]<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>From robots to cars, virtual assistants and voice-controlled drones, computing devices are increasingly expected to communicate naturally with people and to understand the visual context in which they operate. In this talk, I will present our latest work on generating and comprehending visually-grounded language. First, we will discuss the challenging task of describing an image [&hellip;]<\/p>\n","protected":false},"featured_media":586972,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_hide_image_in_river":0,"footnotes":""},"research-area":[13545],"msr-video-type":[206954],"msr-locale":[268875],"msr-post-option":[],"msr-session-type":[],"msr-impact-theme":[],"msr-pillar":[],"msr-episode":[],"msr-research-theme":[],"class_list":["post-586939","msr-video","type-msr-video","status-publish","has-post-thumbnail","hentry","msr-research-area-human-language-technologies","msr-video-type-microsoft-research-talks","msr-locale-en_us"],"msr_download_urls":"","msr_external_url":"https:\/\/youtu.be\/fHijqAchv-4","msr_secondary_video_url":"","msr_video_file":"","_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video\/586939","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-video"}],"version-history":[{"count":1,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video\/586939\/revisions"}],"predecessor-version":[{"id":586957,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video\/586939\/revisions\/586957"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/586972"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=586939"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=586939"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=586939"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=586939"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=586939"},{"taxonomy":"msr-session-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-session-type?post=586939"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=586939"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=586939"},{"taxonomy":"msr-episode","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-episode?post=586939"},{"taxonomy":"msr-research-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-theme?post=586939"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}