{"id":775480,"date":"2021-08-11T10:09:23","date_gmt":"2021-08-11T17:09:23","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&#038;p=775480"},"modified":"2021-10-23T17:23:19","modified_gmt":"2021-10-24T00:23:19","slug":"visual-question-answering-and-reasoning-over-vision-and-language-beyond-the-limits-of-statistical-learning","status":"publish","type":"msr-video","link":"https:\/\/www.microsoft.com\/en-us\/research\/video\/visual-question-answering-and-reasoning-over-vision-and-language-beyond-the-limits-of-statistical-learning\/","title":{"rendered":"Visual question answering and reasoning over vision and language: Beyond the limits of statistical learning?"},"content":{"rendered":"<p>Advances in deep learning keep producing impressive results at the junction of computer vision and natural language processing. The task of visual question answering (VQA), once considered incredibly ambitious, is now commonly used to benchmark multimodal models. Despite apparent progress, however, I will argue that some capabilities required for a general solution to VQA, such as strong out-of-distribution generalization, are beyond the reach of prevailing practices in machine learning. I will discuss how causal reasoning helps in formalizing the limits of classical, correlation-based learning. 
This deeper understanding of existing techniques will help us identify what information is missing from typical datasets, where else to find it, and how to test our models for the behaviors we really care about.<\/p>\n<div>\n\t<a\n\t\thref=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2021\/10\/08112021_Teney-MSR-talk-slides.pdf\"\n\t\tclass=\"button cta-link\"\n\t\tdata-bi-type=\"button\"\n\t\tdata-bi-cN=\"View slides\"\n\t\tdata-bi-tN=\"shortcodes\/msr-button\"\n\t\ttarget=\"_blank\" rel=\"noopener noreferrer\">\n\t\tView slides\t<\/a>\n\n\t<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Advances in deep learning keep producing impressive results at the junction of computer vision and natural language processing. The task of visual question answering (VQA), once considered incredibly ambitious, is now commonly used to benchmark multimodal models. Despite apparent progress, however, I will argue that some capabilities required for a general solution to VQA, such 
[&hellip;]<\/p>\n","protected":false},"featured_media":775483,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_hide_image_in_river":0,"footnotes":""},"research-area":[13556],"msr-video-type":[259633],"msr-locale":[268875],"msr-post-option":[],"msr-session-type":[],"msr-impact-theme":[],"msr-pillar":[],"msr-episode":[],"msr-research-theme":[],"class_list":["post-775480","msr-video","type-msr-video","status-publish","has-post-thumbnail","hentry","msr-research-area-artificial-intelligence","msr-video-type-vision-language-summer-talk-series","msr-locale-en_us"],"msr_download_urls":"","msr_external_url":"https:\/\/youtu.be\/SsYb5RW8__c","msr_secondary_video_url":"","msr_video_file":"","_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video\/775480","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-video"}],"version-history":[{"count":3,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video\/775480\/revisions"}],"predecessor-version":[{"id":787726,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video\/775480\/revisions\/787726"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/775483"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=775480"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=775480"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=775480"},{"taxonomy":"ms
r-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=775480"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=775480"},{"taxonomy":"msr-session-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-session-type?post=775480"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=775480"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=775480"},{"taxonomy":"msr-episode","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-episode?post=775480"},{"taxonomy":"msr-research-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-theme?post=775480"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}