{"id":876411,"date":"2022-09-08T10:55:18","date_gmt":"2022-09-08T17:55:18","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-blog-post&#038;p=876411"},"modified":"2022-09-08T11:18:40","modified_gmt":"2022-09-08T18:18:40","slug":"eccv-workshop-on-computer-vision-in-the-wild","status":"publish","type":"msr-blog-post","link":"https:\/\/www.microsoft.com\/en-us\/research\/articles\/eccv-workshop-on-computer-vision-in-the-wild\/","title":{"rendered":"ECCV Workshop on &#8220;Computer Vision in the Wild&#8221;"},"content":{"rendered":"\n<figure class=\"wp-block-image size-large\"><a href=\"https:\/\/computer-vision-in-the-wild.github.io\/eccv-2022\/\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"259\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/09\/eccv_workshop_pic-631a2ea2d5b5d-1024x259.png\" alt=\"a large body of water with a city in the background\" class=\"wp-image-876459\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/09\/eccv_workshop_pic-631a2ea2d5b5d-1024x259.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/09\/eccv_workshop_pic-631a2ea2d5b5d-300x76.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/09\/eccv_workshop_pic-631a2ea2d5b5d-768x194.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/09\/eccv_workshop_pic-631a2ea2d5b5d-240x61.png 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/09\/eccv_workshop_pic-631a2ea2d5b5d.png 1377w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><figcaption>Please join the Workshop & Challenge on &#8220;<a href=\"https:\/\/computer-vision-in-the-wild.github.io\/eccv-2022\/\"><em>Computer Vision in the Wild<\/em><\/a>&#8221; at #ECCV2022<\/figcaption><\/figure>\n\n\n\n<p>Website: <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab 
glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/computer-vision-in-the-wild.github.io\/eccv-2022\/\">https:\/\/computer-vision-in-the-wild.github.io\/eccv-2022\/<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n\n\n\n<p><strong>Workshop<\/strong>: The research community has recently witnessed a trend toward building transferable visual models that can <em>effortlessly adapt<\/em> to <em>a wide range of downstream computer vision (CV) and multimodal (MM) tasks<\/em>. We are organizing this &#8220;Computer Vision in the Wild&#8221; workshop to gather the academic and industry communities around CV problems in real-world scenarios, focusing on the challenges of open-set\/domain visual recognition and efficient task-level transfer. Since there are no established benchmarks to measure progress on &#8220;CV in the Wild&#8221;, we have developed new benchmarks for image classification and object detection that measure the task-level transfer ability of various models\/methods over diverse real-world datasets, in terms of both prediction accuracy and adaptation efficiency.<\/p>\n\n\n\n<p><strong>Challenge<\/strong>: This workshop will also host two challenges based on the&nbsp;<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/computer-vision-in-the-wild.github.io\/ELEVATER\/\">ELEVATER benchmarks<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>. ELEVATER is&nbsp;a platform with 20 public image classification datasets and 35 public object detection datasets for evaluating language-image models on task-level visual transfer, measuring both sample efficiency (#training samples) and parameter efficiency (#trainable parameters). 
The two challenges are:<\/p>\n\n\n\n<ul class=\"wp-block-list\" id=\"block-969f9bb5-5f7d-4e4c-89d9-4df78ae0d54f\"><li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/eval.ai\/web\/challenges\/challenge-page\/1832\/overview\">Image Classification in the Wild (ICinW)<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li><li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" rel=\"noopener noreferrer\" target=\"_blank\" href=\"https:\/\/eval.ai\/web\/challenges\/challenge-page\/1839\/overview\">Object Detection in the Wild (ODinW)<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/li><\/ul>\n\n\n\n<p><strong>Call for papers and participation<\/strong>: We invite papers and challenge submissions that address open-set recognition and task-level visual transfer.<\/p>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/eval.ai\/web\/challenges\/challenge-page\/1832\/overview\"><img loading=\"lazy\" decoding=\"async\" width=\"384\" height=\"192\" data-id=\"876423\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/09\/icinw_logo.jpg\" alt=\"ICinW Challenge (20 datasets)\" class=\"wp-image-876423\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/09\/icinw_logo.jpg 384w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/09\/icinw_logo-300x150.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/09\/icinw_logo-240x120.jpg 240w\" sizes=\"auto, (max-width: 384px) 100vw, 384px\" \/><\/a><figcaption>ICinW Challenge (20 datasets)<\/figcaption><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large\"><a 
href=\"https:\/\/computer-vision-in-the-wild.github.io\/eccv-2022\/\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"586\" data-id=\"876426\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/09\/eccv_logo-1024x586.png\" alt=\"eccv workshop logo\" class=\"wp-image-876426\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/09\/eccv_logo-1024x586.png 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/09\/eccv_logo-300x172.png 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/09\/eccv_logo-768x440.png 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/09\/eccv_logo-1536x879.png 1536w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/09\/eccv_logo-240x137.png 240w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/09\/eccv_logo.png 1976w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/a><figcaption>ECCV Workshop & Challenge<\/figcaption><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full\"><a href=\"https:\/\/eval.ai\/web\/challenges\/challenge-page\/1839\/overview\"><img loading=\"lazy\" decoding=\"async\" width=\"384\" height=\"192\" data-id=\"876420\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/09\/odinw_logo.jpg\" alt=\"ODinW Challenge (35 datasets)\" class=\"wp-image-876420\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/09\/odinw_logo.jpg 384w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/09\/odinw_logo-300x150.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2022\/09\/odinw_logo-240x120.jpg 240w\" sizes=\"auto, (max-width: 384px) 100vw, 384px\" \/><\/a><figcaption>ODinW Challenge (35 datasets)<\/figcaption><\/figure>\n<\/figure>\n\n\n\n<p><em>With this collaborative community effort, we are aiming 
to evaluate the best vision foundation models and their adaptation methods, which will serve as references for future large vision model development.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Website: https:\/\/computer-vision-in-the-wild.github.io\/eccv-2022\/ (opens in new tab) Workshop: The research community has recently witnessed a trend in building transferable visual models that can effortlessly adapt to a wide range of downstream computer vision (CV) and multimodal (MM) tasks. We are organizing this &#8220;Computer Vision in the Wild&#8221; workshop, aiming to gather academic and industry communities to [&hellip;]<\/p>\n","protected":false},"author":37971,"featured_media":876495,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-content-parent":144931,"msr_hide_image_in_river":0,"footnotes":""},"research-area":[],"msr-locale":[268875],"msr-post-option":[],"class_list":["post-876411","msr-blog-post","type-msr-blog-post","status-publish","has-post-thumbnail","hentry","msr-locale-en_us"],"msr_assoc_parent":{"id":144931,"type":"group"},"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-blog-post\/876411","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-blog-post"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-blog-post"}],"author":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/users\/37971"}],"version-history":[{"count":8,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-blog-post\/876411\/revisions"}],"predecessor-version":[{"id":876486,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-blog-post\/876411\/revisions\/876486"}],"wp:featuredmedia":[{"embeddable":true
,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/876495"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=876411"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=876411"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=876411"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=876411"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}