{"id":661083,"date":"2020-05-28T15:59:06","date_gmt":"2020-05-28T22:59:06","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&#038;p=661083"},"modified":"2025-08-06T11:52:49","modified_gmt":"2025-08-06T18:52:49","slug":"cvpr-2020","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/cvpr-2020\/","title":{"rendered":"Microsoft at CVPR 2020"},"content":{"rendered":"\n\n<p><strong>Website:<\/strong> <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/cvpr2020.thecvf.com\/\" target=\"_blank\" rel=\"noopener noreferrer\">CVPR 2020<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p>Microsoft is proud to be a Diamond Sponsor of CVPR 2020. Make sure to catch Satya Nadella\u2019s Fireside Chat at 9:00 PDT on Tuesday, June 16. Stop by our virtual booth to chat with our experts to learn more about our research and open opportunities.<\/p>\n<div class=\"video-wrapper margin-bottom-sp1\"><iframe loading=\"lazy\" title=\"CVPR 2020 Keynote: Satya Nadella\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube-nocookie.com\/embed\/vgdVIeQKH-E?feature=oembed&rel=0\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/div>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<h2>Tuesday, June 16<\/h2>\n<p>Oral 1.1A \u2013 3D From a Single Image and Shape-From-X (1)<br \/>\n10:50 \u2013 10:55 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/activemocap-optimized-viewpoint-selection-for-active-human-motion-capture\/\"><strong>ActiveMoCap: Optimized Viewpoint Selection for Active Human Motion Capture<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nSena\u00a0Kiciroglu, Helge\u00a0Rhodin,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sudipsin\/\">Sudipta\u00a0Sinha<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Mathieu Salzmann, Pascal\u00a0Fua<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/i58Bu-hbZHs\" target=\"_blank\" rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Oral 1.2A \u2013 3D From Multiview and Sensors (1)<br \/>\n12:10 \u2013 12:15 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/texturefusion-high-quality-texture-acquisition-for-real-time-rgb-d-scanning\/\"><strong>TextureFusion: High-Quality Texture Acquisition for Real-Time RGB-D Scanning<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nJoo\u00a0Ho Lee,\u00a0Hyunho\u00a0Ha,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yuedong\/\">Yue Dong<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Xin Tong, Min H. 
Video: https://youtu.be/dhnCd-hmGQc

Oral 1.2C – Efficient Training and Inference
12:30 – 12:35 PDT
Towards Efficient Model Compression via Learned Global Ranking
https://www.microsoft.com/en-us/research/publication/towards-efficient-model-compression-via-learned-global-ranking/
Ting-Wu Chin, Ruizhou Ding, Cha Zhang, Diana Marculescu
Video: https://youtu.be/yHGeY_zgtec

Oral 1.3A – 3D From a Single Image and Shape-From-X (2); 3D From Multiview and Sensors (2)
14:40 – 14:45 PDT
Why Having 10,000 Parameters in Your Camera Model Is Better Than Twelve
https://www.microsoft.com/en-us/research/publication/why-having-10000-parameters-in-your-camera-model-is-better-than-twelve/
Thomas Schöps, Viktor Larsson, Marc Pollefeys, Torsten Sattler
Video: https://youtu.be/7iYlEA_ZYac

Oral 1.3C – Low-Level and Physics-Based Vision
14:25 – 14:30 PDT
Bringing Old Photos Back to Life
https://www.microsoft.com/en-us/research/publication/bringing-old-photos-back-to-life/
Ziyu Wan, Bo Zhang, Dongdong Chen, Pan Zhang, Dong Chen, Jing Liao, Fang Wen
Video: https://youtu.be/Q5bhszQq9eA

14:30 – 14:35 PDT
A Physics-based Noise Formation Model for Extreme Low-light Raw Denoising
https://www.microsoft.com/en-us/research/publication/a-physics-based-noise-formation-model-for-extreme-low-light-raw-denoising/
Kaixuan Wei, Ying Fu, Jiaolong Yang, Hua Huang
class=\"sr-only\"> (opens in new tab)<\/span><\/a>\u00a0Yang,\u00a0Hua Huang<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/DMDKPRozdeo\" target=\"_blank\" rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<h2>Wednesday, June 17<\/h2>\n<p>Oral 2.1A \u2013 3D From Multiview and Sensors (3)<br \/>\n10:15 \u2013 10:20 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/routedfusion-learning-real-time-depth-map-fusion\/\"><strong>RoutedFusion: Learning Real-Time Depth Map Fusion<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nSilvan\u00a0Weder,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/joschonb\/\">Johannes\u00a0Sch\u00f6nberger<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Martin R. Oswald<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/oA4TGDQVfls\" target=\"_blank\" rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Oral 2.1B \u2013 Face, Gesture, and Body Pose (1)<br \/>\n10:00 \u2013 10:05 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/redareinforced-differentiable-attribute-for-3d-face-reconstruction\/\"><strong>ReDA:Reinforced\u00a0Differentiable Attribute for 3D Face Reconstruction<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<b>Wenbin<\/b><b>\u00a0Zhu<\/b>,\u00a0<strong>HsiangTao\u00a0Wu<\/strong>,\u00a0<b>Zeyu<\/b><b>\u00a0Chen<\/b>,\u00a0<b>Noranart<\/b><b>\u00a0<\/b><b>Vesdapunt<\/b>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/baoyuanw\/\">Baoyuan\u00a0Wang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/NhmGNfILDyw\" target=\"_blank\" rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p>10:20 \u2013 10:25 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/face-x-ray-for-more-general-face-forgery-detection\/\"><strong>Face X-ray for More General Face Forgery Detection<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nLingzhi\u00a0Li,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jianbao\/\">Jianmin\u00a0Bao<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tinzhan\/\">Ting Zhang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/haya\/\">Hao Yang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/doch\/\">Dong Chen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/fangwen\/\">Fang Wen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/bainguo\/\">Baining\u00a0Guo<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append 
Video: https://youtu.be/0p5No4447Mc

10:55 – 11:00 PDT
Advancing High Fidelity Identity Swapping for Forgery Detection
https://www.microsoft.com/en-us/research/publication/advancing-high-fidelity-identity-swapping-for-forgery-detection/
Lingzhi Li, Jianmin Bao, Hao Yang, Dong Chen, Fang Wen
Video: https://youtu.be/qNvpNuqfNZs

Oral 2.2B – Motion and Tracking (1)
12:00 – 12:05 PDT
LSM: Learning Subspace Minimization for Low-level Vision
https://www.microsoft.com/en-us/research/publication/lsm-learning-subspace-minimization-for-low-level-vision/
Chengzhou Tang, Lu Yuan, Ping Tan
Video: https://youtu.be/4zOMGz38vBo

12:20 – 12:25 PDT
MaskFlownet: Asymmetric Feature Matching with Learnable Occlusion Mask
https://www.microsoft.com/en-us/research/publication/maskflownet-asymmetric-feature-matching-with-learnable-occlusion-mask/
Shengyu Zhao, Yilun Sheng, Yue Dong, Eric Chang, Yan Xu
Video: https://youtu.be/As0B-ubM4NE

12:25 – 12:30 PDT
Tracking by Instance Detection: A Meta-Learning Approach
https://www.microsoft.com/en-us/research/publication/tracking-by-instance-detection-a-meta-learning-approach/
Guangting Wang, Chong Luo, Xiaoyan Sun, Zhiwei Xiong, Wenjun Zeng
Video: https://youtu.be/HRbVwuuD6g0

Oral 2.1C – Image and Video Synthesis (1)
10:30 – 10:35 PDT
Cross-domain Correspondence Learning for Exemplar-based Image Translation
https://www.microsoft.com/en-us/research/publication/cross-domain-correspondence-learning-for-exemplar-based-image-translation/
Pan Zhang, Bo Zhang, Dong Chen, Lu Yuan, Fang Wen
Video: https://youtu.be/BdopAApRSgo

10:35 – 10:40 PDT
Disentangled and Controllable Face Image Generation via 3D Imitative-Contrastive Learning
https://www.microsoft.com/en-us/research/publication/disentangled-and-controllable-face-image-generation-via-3d-imitative-contrastive-learning/
Yu Deng, Jiaolong Yang, Dong Chen, Fang Wen, Xin Tong
Video: https://youtu.be/l1KCgjJ2Bcc

Oral 2.3A – Face, Gesture, and Body Pose (3); Motion and Tracking (2)
14:15 – 14:20 PDT
Recursive Least-Squares Estimator-Aided Online Learning for Visual Tracking
https://www.microsoft.com/en-us/research/publication/recursive-least-squares-estimator-aided-online-learning-for-visual-tracking/
Jin Gao, Weiming Hu, Yan Lu
rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Oral 2.4C \u2013 Transfer\/Low-Shot\/Semi\/Unsupervised Learning (2)<br \/>\n16:10 \u2013 16:15 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/hyper-star-task-aware-hyperparameters-for-deep-networks\/\"><strong>HyperSTAR: Task-Aware Hyperparameters for Deep Networks<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<b>Gaurav Mittal<\/b>, Chang Liu, Nikolaos\u00a0Karianakis,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/vifragos\/\">Victor Fragoso<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/meic\/\">Mei Chen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Yun Fu<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/bvDyoh8vd04\" target=\"_blank\" rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<h2>Thursday, June 18<\/h2>\n<p>Oral 3.1B \u2013 Video Analysis and Understanding<br \/>\n9:05 \u2013 9:10 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/spatiotemporal-fusion-in-3d-cnns-a-probabilistic-view\/\"><strong>Spatiotemporal Fusion in 3D CNNs: A Probabilistic View<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nYizhou\u00a0Zhou,\u00a0<b>Xiaoyan<\/b><b>\u00a0Sun<\/b>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/cluo\/\">Chong Luo<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Zheng-Jun\u00a0Zha,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/NwMQxFCuPtc\" target=\"_blank\" rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Oral 3.1C \u2013 Vision & Language<br \/>\n9:30 \u2013 9:35 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/squinting-at-vqa-models-introspecting-vqa-models-with-sub-questions\/\"><strong>SQuINTing\u00a0at VQA Models: Introspecting VQA Models with Sub-Questions<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nRamprasaath\u00a0Ramasamy\u00a0Selvaraju,\u00a0Purva\u00a0Tendulkar, Devi Parikh,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/horvitz\/\">Eric Horvitz<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/marcotcr\/\">Marco Ribeiro<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/benushi\/\">Besmira Nushi<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/eckamar\/\">Ece Kamar<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/k1kOms3eGBA\" target=\"_blank\" rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p>9:40 \u2013 9:45\u00a0PDT<br \/>\n<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/sign-language-transformers-joint-end-to-end-sign-language-recognition-and-translation\/\"><strong>Sign Language Transformers: Joint End-to-end Sign Language Recognition and Translation<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nNecati\u00a0Cihan\u00a0Camgoz, Simon Hadfield,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/oskoller\/\">Oscar Koller<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Richard Bowden<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/J8uytAf5ZR4\" target=\"_blank\" rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Oral 3.2A \u2013 Recognition (Detection, Categorization) (2)<br \/>\n11:25 \u2013 11:30 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/dynamic-convolution-attention-over-convolution-kernels\/\"><strong>Dynamic Convolution: Attention over Convolution Kernels<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yiche\/\">Yinpeng\u00a0Chen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<b>Xiyang<\/b><b>\u00a0Dai<\/b>,\u00a0<b>Mengchen<\/b><b>\u00a0Liu<\/b>,\u00a0<b>Dongdong Chen<\/b>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\">Lu Yuan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zliu\/\">Zicheng\u00a0Liu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/FNkY7I2R_zM\" target=\"_blank\" rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Oral 3.2C \u2013 Machine Learning Architectures and Formulations<br \/>\n11:40 \u2013 11:45 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/local-context-normalization-revisiting-local-normalization\/\"><strong>Local Context Normalization: Revisiting Local Normalization<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<strong>Anthony Ortiz<\/strong>, Caleb Robinson, Md\u00a0Mahmudulla\u00a0Hassan,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dan\/\">Dan Morris<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Olac\u00a0Fuentes, Christopher\u00a0Kiekintveld,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jojic\/\">Nebojsa Jojic<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/tpmgHb0JTrM\" target=\"_blank\" rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<h2>Tuesday, June 16<\/h2>\n<p>Poster 1.1 \u2013 3D From a Single Image and Shape-From-X; Action and Behavior Recognition; Adversarial Learning |\u00a010:00 \u2013 12:00 PDT<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/leveraging-photometric-consistency-over-time-for-sparsely-supervised-hand-object-reconstruction\/\"><strong>Leveraging Photometric Consistency over Time for Sparsely Supervised 
Video: https://youtu.be/tea_KWltF_U

Self-Supervised Human Depth Estimation From Monocular Videos – #66
https://www.microsoft.com/en-us/research/publication/self-supervised-human-depth-estimation-from-monocular-videos/
Feitong Tan, Hao Zhu, Zhaopeng Cui, Siyu Zhu, Marc Pollefeys, Ping Tan
Video: https://youtu.be/b_YTf4FwYA8

Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning – #71
https://www.microsoft.com/en-us/research/publication/adversarial-robustness-from-self-supervised-pre-training-to-fine-tuning/
Tianlong Chen, Sijia Liu, Shiyu Chang, Yu Cheng, Lisa Amini, Zhangyang Wang
Video: https://youtu.be/JGBmz4RtC18

Geometry-Aware Satellite-to-Ground Image Synthesis for Urban Areas – #87
https://www.microsoft.com/en-us/research/publication/geometry-aware-satellite-to-ground-image-synthesis-for-urban-areas/
Xiaohu Lu, Zuoyue Li, Zhaopeng Cui, Martin R. Oswald, Marc Pollefeys, Rongjun Qin
Video: https://youtu.be/s2_EpPpKuAE

Weakly-Supervised Action Localization by Generative Attention Modeling – #102
https://www.microsoft.com/en-us/research/publication/weakly-supervised-action-localization-by-generative-attention-modeling/
Baifeng Shi, Qi Dai, Yadong Mu, Jingdong Wang
Video: https://youtu.be/FVWgUUicz_c

Semantics-Guided Neural Networks for Efficient Skeleton-Based Human Action Recognition – #112
https://www.microsoft.com/en-us/research/publication/semantics-guided-neural-networks-for-efficient-skeleton-based-human-action-recognition/
Pengfei Zhang, Cuiling Lan, Wenjun Zeng, Junliang Xing, Jianru Xue, Nanning Zheng
Video: https://youtu.be/tLNb9oH5NBw

Poster 1.2 – 3D From Multiview and Sensors; Computational Photography; Efficient Training and Inference Methods for Networks | 12:00 – 14:00 PDT

DIST: Rendering Deep Implicit Signed Distance Function With Differentiable Sphere Tracing – #77
https://www.microsoft.com/en-us/research/publication/dist-rendering-deep-implicit-signed-distance-function-with-differentiable-sphere-tracing/
Shaohui Liu, Yinda Zhang, Songyou Peng, Boxin Shi, Marc Pollefeys, Zhaopeng Cui
Video: https://youtu.be/Fm7lFQ_F1Ww

Fusing Wearable IMUs with Multi-View Images for Human Pose Estimation: A Geometric Approach – #95
https://www.microsoft.com/en-us/research/publication/orientation-regularized-3d-human-pose-estimation/
Zhe Zhang, Chunyu Wang, Wenhu Qin, Wenjun Zeng
Video: https://youtu.be/07fKdiHkjEE

gDLS*: Generalized Pose-and-Scale Estimation Given Scale and Gravity Priors – #96
https://www.microsoft.com/en-us/research/publication/gdls-generalized-pose-and-scale-estimation-given-scale-and-gravity-priors/
Victor Fragoso, Joseph Degol, Gang Hua
Video: https://youtu.be/ETydEFZqe9w

Poster 1.3 – 3D From a Single Image and Shape-From-X; 3D From Multiview and Sensors; Image Retrieval; Datasets and Evaluation; Low-Level and Physics-Based Vision | 14:00 – 16:00 PDT

Style Normalization and Restitution for Generalizable Person Re-identification – #69
https://www.microsoft.com/en-us/research/publication/style-normalization-and-restitution-for-generalizable-person-re-identification/
Xin Jin, Cuiling Lan, Wenjun Zeng, Zhibo Chen, Li Zhang
Video: https://youtu.be/7rA6ZgRHv68

Relation-aware Global Attention for Person Re-identification – #73
https://www.microsoft.com/en-us/research/publication/relation-aware-global-attention-for-person-re-identification/
Zhizheng Zhang, Cuiling Lan, Wenjun Zeng, Xin Jin, Zhibo Chen
Video: https://youtu.be/XxfN3thqgzU
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/single-image-reflection-removal-through-cascaded-refinement\/\"><strong>Single Image Reflection Removal through Cascaded Refinement\u00a0&#8211; #110<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nChao Li,\u00a0Yixiao\u00a0Yang,\u00a0Kun\u00a0He,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/stevelin\/\">Stephen Lin<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, John Hopcroft<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/HjJ9wffM2No\" target=\"_blank\" rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Poster 1.4 \u2014 Scene Analysis and Understanding; Medical, Biological and Cell Microscopy; Transfer\/Low-Shot\/Semi\/Unsupervised Learning | 16:00 \u2013 18:00 PDT<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/unsupervised-instance-segmentation-in-microscopy-images-via-panoptic-domain-adaptation-and-task-re-weighting\/\"><strong>Unsupervised Instance Segmentation in Microscopy Images via Panoptic Domain Adaptation and Task Re-Weighting\u00a0&#8211; #55<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nDongnan\u00a0Liu,\u00a0Donghao\u00a0Zhang, Yang Song, Fan Zhang, Lauren O\u2019Donnell, Heng Huang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/meic\/\">Mei Chen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Weidong Cai<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/xh5ftH8-Fc0\" target=\"_blank\" rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/reliable-weighted-optimal-transport-for-unsupervised-domain-adaptation\/\"><strong>Reliable Weighted Optimal Transport for Unsupervised Domain Adaptation\u00a0&#8211; #70<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nRenjun\u00a0Xu,\u00a0Pelen\u00a0Liu,\u00a0Liyan\u00a0Wang, Chao Chen,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jindwang\/\">Jindong Wang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/PDefvHcd3Hs\" target=\"_blank\" rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<h2>Wednesday, June 17<\/h2>\n<p>Poster 2.1 &#8211; 3D From Multiview and Sensors; Face, Gesture, and Body Pose; Image and Video Synthesis | 10:00 \u2013 12:00 PDT<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/higherhrnet-scale-aware-representation-learning-for-bottom-up-human-pose-estimation\/\"><strong>HigherHRNet: Scale-Aware Representation Learning for Bottom-Up Human Pose Estimation\u00a0&#8211; #53<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nBowen Cheng, Bin Xiao,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\">Jingdong\u00a0Wang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Honghui\u00a0Shi, Thomas Huang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/leizhang\/\">Lei Zhang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link 
Video: https://youtu.be/n826oXKp5io

Learning Texture Transformer Network for Image Super-Resolution – #93
https://www.microsoft.com/en-us/research/publication/learning-texture-transformer-network-for-image-super-resolution/
Fuzhi Yang, Huan Yang, Jianlong Fu, Hongtao Lu, Baining Guo
Video: https://youtu.be/7PlN9q3qQP8

Deep Shutter Unrolling Network – #108
https://www.microsoft.com/en-us/research/publication/deep-shutter-unrolling-network/
Peidong Liu, Zhaopeng Cui, Viktor Larsson, Marc Pollefeys
Video: https://youtu.be/0-756nVAj2g

Poster 2.2 – Face, Gesture, and Body Pose; Motion and Tracking; Representation Learning | 12:00 – 14:00 PDT

A Transductive Approach for Video Object Segmentation – #84
https://www.microsoft.com/en-us/research/publication/a-transductive-approach-for-video-object-segmentation/
Zhirong Wu, Yizhuo Zhang, Houwen Peng, Stephen Lin
Video: https://youtu.be/N3upnIgUg-I

Poster 2.3 – Face, Gesture, and Body Pose; Motion and Tracking; Image and Video Synthesis; Neural Generative Models; Optimization and Learning Methods | 14:00 – 16:00 PDT

Deep 3D Portrait from a Single Image – #36
https://www.microsoft.com/en-us/research/publication/deep-3d-portrait-from-a-single-image/
Sicheng Xu, Jiaolong Yang, Dong Chen, Fang Wen, Yu Deng, Yunde Jia, Xin Tong
Video: https://youtu.be/ex0VWotphy4

BachGAN: High-Resolution Image Synthesis from Salient Object Layout – #102
https://www.microsoft.com/en-us/research/publication/bachgan-high-resolution-image-synthesis-from-salient-object-layout/
Yandong Li, Yu Cheng, Zhe Gan, Licheng Yu, Liqiang Wang, Jingjing Liu
Video: https://youtu.be/AksJoLQl21k

Thursday, June 18

Poster 3.1 – Recognition (Detection, Categorization); Video Analysis and Understanding; Vision + Language | 9:00 – 11:00 PDT

Rethinking Classification and Localization for Object Detection – #49
https://www.microsoft.com/en-us/research/publication/rethinking-classification-and-localization-for-object-detection/
Yue Wu, Yinpeng Chen, Lu Yuan, Zicheng Liu, Lijuan Wang, Hongzhi Li, Yun Fu
Video: https://youtu.be/8EGKyeAZww4

Memory Enhanced Global-Local Aggregation for Video Object Detection – #64
https://www.microsoft.com/en-us/research/publication/memory-enhanced-global-local-aggregation-for-video-object-detection/
Yihong Chen, Yue Cao, Han Hu, Liwei Wang
href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yuecao\/\">Yue Cao<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hanhu\/\">Han Hu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Liwei\u00a0Wang<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/Dr2uaeJJAms\" target=\"_blank\" rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/multi-granularity-reference-aided-attentive-feature-aggregation-for-video-based-person-re-identificationmulti-granularity-reference-aided-attentive-feature-aggregation-for-video-based-person-re-identi\/\"><strong>Multi-Granularity Reference-Aided Attentive Feature Aggregation for Video-based Person Re-\u00a0identification\u00a0&#8211; #71<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nZhizheng\u00a0Zhang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/culan\/\">Cuiling Lan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Zhibo\u00a0Chen<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/Zt5DShb7Pok\" target=\"_blank\" rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/violin-a-large-scale-dataset-for-video-and-language-inference\/\"><strong>Violin: A Large-Scale Dataset for Video-and-Language Inference\u00a0&#8211; #120<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nJingzhou\u00a0Liu,\u00a0Wenhu\u00a0Chen,\u00a0<b>Yu Cheng<\/b>,\u00a0<b>Zhe Gan<\/b>,\u00a0Licheng\u00a0Yu,\u00a0Yiming\u00a0Yang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingjl\/\">Jingjing\u00a0Liu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/tWZQ-OVrIUs\" target=\"_blank\" rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Poster 3.3 \u2014 Recognition (Detection, Categorization); Segmentation, Grouping and Shape; Vision Applications and Systems; Vision & Other Modalities; Transfer\/Low-Shot\/Semi\/Unsupervised Learning | 15:00 \u2013 17:00 PDT<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/towards-learning-a-generic-agent-for-vision-and-language-navigation-via-pre-training\/\"><strong>Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-Training\u00a0&#8211; #96<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nWeituo\u00a0Hao,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chunyl\/\">Chunyuan Li<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xiul\/\">Xiujun\u00a0Li<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Lawrence Carin Duke,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jfgao\/\">Jianfeng Gao<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a 
class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/Cif83ooccPs\" target=\"_blank\" rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/mmtm-multimodal-transfer-module-for-cnn-fusion\/\"><strong>MMTM: Multimodal Transfer Module for CNN Fusion\u00a0&#8211; #111<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hava\/\">Hamid\u00a0Vaezi\u00a0Joze<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Amirreza\u00a0Shaban, Michael\u00a0Iuzzolino,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kazukoi\/\">Kazuhito\u00a0Koishida<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/4aMetONExuc\" target=\"_blank\" rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Poster 3.4 &#8211; Miscellaneous | 17:00 \u2013 19:00 PDT<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/density-aware-graph-for-deep-semi-supervised-visual-recognition\/\"><strong>Density-Aware Graph for Deep Semi-Supervised Visual Recognition\u00a0&#8211; #9<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nSuichan\u00a0Li, Bin Liu,\u00a0<b>Dongdong Chen<\/b>, Qi Chu,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\">Lu Yuan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Nenghai\u00a0Yu<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/R7KH2dbVsI8\" target=\"_blank\" rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/pfcnn-convolutional-neural-networks-on-3d-surfaces-using-parallel-frames\/\"><strong>PFCNN: Convolutional Neural Networks on 3D Surfaces Using Parallel Frames\u00a0&#8211; #27<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nYuqi\u00a0Yang,\u00a0Shilin\u00a0Liu,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/haopan\/\">Hao Pan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yangliu\/\">Yang Liu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xtong\/\">Xin Tong<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/ArXvN3V5WlI\" target=\"_blank\" rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/metafuse-a-pre-trained-fusion-model-for-human-pose-estimation\/\"><strong>MetaFuse: A Pre-trained Fusion Model for Human Pose Estimation\u00a0&#8211; #38<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nRongchang\u00a0Xie,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chnuwa\/\">Chunyu\u00a0Wang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Yizhou\u00a0Wang<br \/>\n<a class=\"msr-external-link 
Video: https://youtu.be/cjT2MrAW1KM

Workshops

June 14 | Full Day

International Workshop and Challenge on Computer Vision for Physiological Measurement
http://www.es.ele.tue.nl/cvpm20/
Co-Organizer: Daniel McDuff

Joint workshop on Long Term Visual Localization, Visual Odometry and Geometric and Learning-based SLAM
https://sites.google.com/view/vislocslamcvpr2020/home
Co-Organizers: Marc Pollefeys, Johannes L. Schönberger, Pablo Speciale

The 1st International Workshop on Agriculture-Vision: Challenges & Opportunities for Computer Vision in Agriculture
https://www.agriculture-vision.com/home
Invited speakers and panelists: Ranveer Chandra, Sudipta Sinha

VizWiz Grand Challenge: Describing Images from Blind People
https://vizwiz.org/workshops/2020-workshop/
Co-Organizers: Ed Cutrell, Meredith Morris
Invited Speaker: Meredith Morris
Video: https://youtu.be/Dfi4TqIjUWU
Speaker panel video: https://youtu.be/IkZToxOs8N4
Panel discussion video: https://youtu.be/f613diLbVAc
Workshop on Fair, Data-Efficient and Trusted Computer Vision
https://fadetrcv.github.io/
Invited Speaker: Debadeepta Dey

June 14 | Afternoon

Women in Computer Vision (WiCV)
https://sites.google.com/view/wicvworkshop-cvpr2020/
Co-Organizer: Azadeh Mobasher

June 15 | Full Day

3D Scene Understanding for Vision, Graphics, and Robotics
https://scene-understanding.com/
Invited Speaker: Marc Pollefeys

Fourth Workshop on Computer Vision for AR/VR
https://mixedreality.cs.cornell.edu/workshop/2020
Invited Speaker: Jamie Shotton
Video: https://youtu.be/G4aZZhWmm4k

New Trends in Image Restoration and Enhancement Workshop and Challenges (NTIRE)
https://data.vision.ee.ethz.ch/cvl/ntire20/
Program Committee Members: Stephen Lin, Wenjun Zeng

June 19 | Morning

Image Matching: Local Features and Beyond
https://image-matching-workshop.github.io/
Co-Organizer: Johannes L. Schönberger

June 19 | Full Day

16th IEEE Workshop on Perception Beyond the Visible Spectrum
http://vcipl-okstate.org/pbvs/20/
Program Committee Member: Katsu Ikeuchi
class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/sites.google.com\/view\/luv2020\" target=\"_blank\" rel=\"noopener\"><strong>Learning From Unlabeled Videos<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nCo-Organizer:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yalesong\/\">Yale Song<\/a><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/cvmi2020.github.io\/\" target=\"_blank\" rel=\"noopener\"><strong>Computer Vision for Microscopy Image Analysis<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nChair:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/meic\/\">Mei Chen<\/a><br \/>\nProgram Committee Members:\u00a0<b>Hao Jiang<\/b>,\u00a0<b>Guarav<\/b><b>\u00a0Mittal<\/b>, <b>Xi Yin<\/b><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/sites.google.com\/view\/geometry-learning-foundation\/\" target=\"_blank\" rel=\"noopener\"><strong>First Workshop on Deep Learning Foundations of Geometric Shape Modeling and Reconstruction<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nCo-Organizer:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yangliu\/\">Yang Liu<\/a><\/p>\n<p><strong>Extreme classification in computer vision<\/strong><br \/>\nCo-Organizer:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/manik\/\">Manik Varma<\/a><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/languageandvision.github.io\/\" target=\"_blank\" rel=\"noopener\"><strong>Language & Vision with applications to Video Understanding<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nCo-Organizer:\u00a0<b>Licheng<\/b><b>\u00a0Yu<\/b><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/cvpr2020.ug2challenge.org\/\" target=\"_blank\" rel=\"noopener\"><strong>The 3rd Workshop and Prize Challenge: Bridging the Gap between Computational Photography and Visual Recognition (UG2+) in conjunction with IEEE CVPR 2020<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nInvited Speaker:\u00a0<b>Xi Yin<\/b><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.learning-with-limited-labels.com\/\" target=\"_blank\" rel=\"noopener\"><strong>Visual Learning with Limited Labels<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nAccepted Paper: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/epillid-dataset-a-low-shot-fine-grained-benchmark-for-pill-identification\/\">ePillID Dataset: A Low-Shot Fine-Grained Benchmark for Pill Identification<\/a> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/naotous\/\">Naoto Usuyama<\/a>, Natalia Larios Delgado, Amanda K. 
Hall, Jessica Lundin<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/p-Nn0RgwudE\" target=\"_blank\" rel=\"noopener\">Video ><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/mul-workshop.github.io\/\" target=\"_blank\" rel=\"noopener\"><strong>Workshop on Multimodal Learning<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nInvited Speaker:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awf\/\">Andrew Fitzgibbon<\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<h2>Monday, June 15<\/h2>\n<p>13:15 \u2013 17:00 PDT<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/rohit497.github.io\/Recent-Advances-in-Vision-and-Language-Research\/\" target=\"_blank\" rel=\"noopener\"><strong>Recent Advances in Vision-and-Language Research<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nCo-Organizers: <strong>Zhe Gan<\/strong>, <strong>Yu Cheng<\/strong>, <strong>Luowei Zhou<\/strong>, <strong>Linjie Li<\/strong>, <strong>Yen-Chun Chen<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingjl\/\">JJ Liu<\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/12\/Alchemy-with-Friends-Print-at-Home.pdf\" target=\"_blank\" rel=\"noopener\">Print your own copy<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> of Alchemy with Friends to play at home.<\/p>\n<p>Share your favorite card combinations using #AlchemyFriends on Twitter, Facebook, or Instagram. 
We now have three versions of the game available for you to play at home!<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/09\/MSR_Alchemy_1400x788.gif\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-626472\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/09\/MSR_Alchemy_1400x788.gif\" alt=\"Animated illustration of how to play #AlchemyFriends\" width=\"1400\" height=\"788\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<div>\n\t<a\n\t\thref=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/12\/Alchemy-with-Friends-Print-at-Home.pdf\"\n\t\tclass=\"button cta-link\"\n\t\tdata-bi-type=\"button\"\n\t\tdata-bi-cN=\"Alchemy with Friends Original (must have this deck)\"\n\t\tdata-bi-tN=\"shortcodes\/msr-button\"\n\t\ttarget=\"_blank\" rel=\"noopener noreferrer\">\n\t\tAlchemy with Friends Original (must have this deck)\t<\/a>\n\n\t<\/div>\n<div>\n\t<a\n\t\thref=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Alchemy-with-Friends-ML-Expansion-Pack.pdf\"\n\t\tclass=\"button cta-link\"\n\t\tdata-bi-type=\"button\"\n\t\tdata-bi-cN=\"Alchemy with Friends ML Expansion Pack\"\n\t\tdata-bi-tN=\"shortcodes\/msr-button\"\n\t\ttarget=\"_blank\" rel=\"noopener noreferrer\">\n\t\tAlchemy with Friends ML Expansion Pack\t<\/a>\n\n\t<\/div>\n<div>\n\t<a\n\t\thref=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Alchemy-with-Friends-CV-Expansion-Pack.pdf\"\n\t\tclass=\"button cta-link\"\n\t\tdata-bi-type=\"button\"\n\t\tdata-bi-cN=\"Alchemy with Friends CV Expansion Pack\"\n\t\tdata-bi-tN=\"shortcodes\/msr-button\"\n\t\ttarget=\"_blank\" rel=\"noopener noreferrer\">\n\t\tAlchemy with Friends CV Expansion Pack\t<\/a>\n\n\t<\/div>\n<div style=\"height: 20px\"><\/div>\n<ul>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/www.facebook.com\/microsoftresearch\/\">Facebook<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>: MicrosoftResearch<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/x.com\/MSFTResearch\">Twitter<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>: @MSFTResearch<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/www.youtube.com\/user\/MicrosoftResearch\">YouTube<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>: microsoftresearch<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/aka.ms\/LinkedInMSR\">LinkedIn<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>: aka.ms\/LinkedInMSR<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/www.instagram.com\/msft_research\/\">Instagram<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>: @msft_research<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Microsoft is proud to be a Diamond Sponsor of CVPR 2020. Make sure to catch Satya Nadella\u2019s Fireside Chat at 9:00 PDT on Tuesday, June 16. 
Stop by our virtual booth to chat with our experts to learn more about our research and open opportunities.<\/p>\n","protected":false},"featured_media":635493,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_startdate":"2020-06-14","msr_enddate":"2020-06-19","msr_location":"Virtual\/Online","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"","msr_event_link_redirect":false,"msr_event_time":"","msr_hide_region":true,"msr_private_event":false,"msr_hide_image_in_river":0,"footnotes":""},"research-area":[13556,13562,13554],"msr-region":[256048],"msr-event-type":[197941],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-661083","msr-event","type-msr-event","status-publish","has-post-thumbnail","hentry","msr-research-area-artificial-intelligence","msr-research-area-computer-vision","msr-research-area-human-computer-interaction","msr-region-global","msr-event-type-conferences","msr-locale-en_us"],"msr_about":"<!-- wp:msr\/event-details {\"title\":\"Microsoft at CVPR 2020\",\"backgroundColor\":\"grey\",\"image\":{\"id\":635493,\"url\":\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/02\/Seattle.jpg\",\"alt\":\"\"}} \/-->\n\n<!-- wp:msr\/content-tabs --><!-- wp:msr\/content-tab {\"title\":\"About\"} --><!-- wp:freeform --><p><strong>Website:<\/strong> <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/cvpr2020.thecvf.com\/\" target=\"_blank\" rel=\"noopener noreferrer\">CVPR 2020<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p>Microsoft is proud to be a Diamond Sponsor of CVPR 2020. Make sure to catch Satya Nadella\u2019s Fireside Chat at 9:00 PDT on Tuesday, June 16. 
Stop by our virtual booth to chat with our experts to learn more about our research and open opportunities.<\/p>\n<div class=\"video-wrapper margin-bottom-sp1\"><iframe loading=\"lazy\" title=\"CVPR 2020 Keynote: Satya Nadella\" width=\"500\" height=\"281\" src=\"https:\/\/www.youtube-nocookie.com\/embed\/vgdVIeQKH-E?feature=oembed&#038;rel=0\" frameborder=\"0\" allow=\"accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share\" referrerpolicy=\"strict-origin-when-cross-origin\" allowfullscreen><\/iframe><\/div>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Oral presentations\"} --><!-- wp:freeform --><h2>Tuesday, June 16<\/h2>\n<p>Oral 1.1A \u2013 3D From a Single Image and Shape-From-X (1)<br \/>\n10:50 \u2013 10:55 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/activemocap-optimized-viewpoint-selection-for-active-human-motion-capture\/\"><strong>ActiveMoCap: Optimized Viewpoint Selection for Active Human Motion Capture<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nSena\u00a0Kiciroglu, Helge\u00a0Rhodin,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sudipsin\/\">Sudipta\u00a0Sinha<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Mathieu Salzmann, Pascal\u00a0Fua<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/i58Bu-hbZHs\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Oral 1.2A \u2013 3D From Multiview and Sensors (1)<br \/>\n12:10 \u2013 12:15 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/texturefusion-high-quality-texture-acquisition-for-real-time-rgb-d-scanning\/\"><strong>TextureFusion: High-Quality Texture Acquisition for Real-Time RGB-D Scanning<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nJoo\u00a0Ho Lee,\u00a0Hyunho\u00a0Ha,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yuedong\/\">Yue Dong<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Xin Tong, Min H. 
Kim<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/dhnCd-hmGQc\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Oral 1.2C \u2013 Efficient Training and Inference<br \/>\n12:30 \u2013 12:35 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/towards-efficient-model-compression-via-learned-global-ranking\/\"><strong>Towards Efficient Model Compression via Learned Global Ranking<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nTing-Wu Chin,\u00a0Ruizhou\u00a0Ding,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chazhang\/\">Cha Zhang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Diana\u00a0Marculescu<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/yHGeY_zgtec\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Oral 1.3A &#8211; 3D From a Single Image and Shape-From-X (2); 3D From Multiview and Sensors (2)<br \/>\n14:40 \u2013 14:45 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/why-having-10000-parameters-in-your-camera-model-is-better-than-twelve\/\"><strong>Why Having 10,000 Parameters in Your Camera Model Is Better Than Twelve<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nThomas\u00a0Sch\u00f6ps, Viktor Larsson,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Torsten\u00a0Sattler<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/7iYlEA_ZYac\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Oral 1.3C \u2013 Low-Level and Physics-Based Vision<br \/>\n14:25 \u2013 14:30 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/bringing-old-photos-back-to-life\/\"><strong>Bringing Old Photos Back to Life<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nZiyu\u00a0Wan,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zhanbo\/\">Bo Zhang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<b>Dongdong Chen<\/b>,\u00a0Pan Zhang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/doch\/\">Dong Chen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Jing Liao,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/fangwen\/\">Fang Wen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/Q5bhszQq9eA\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p>14:30 \u2013 14:35 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/a-physics-based-noise-formation-model-for-extreme-low-light-raw-denoising\/\"><strong>A Physics-based Noise Formation Model for Extreme Low-light Raw Denoising<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nKaixuan\u00a0Wei, Ying Fu,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jiaoyan\/\">Jiaolong<span 
class=\"sr-only\"> (opens in new tab)<\/span><\/a>\u00a0Yang,\u00a0Hua Huang<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/DMDKPRozdeo\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<h2>Wednesday, June 17<\/h2>\n<p>Oral 2.1A \u2013 3D From Multiview and Sensors (3)<br \/>\n10:15 \u2013 10:20 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/routedfusion-learning-real-time-depth-map-fusion\/\"><strong>RoutedFusion: Learning Real-Time Depth Map Fusion<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nSilvan\u00a0Weder,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/joschonb\/\">Johannes\u00a0Sch\u00f6nberger<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Martin R. Oswald<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/oA4TGDQVfls\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Oral 2.1B \u2013 Face, Gesture, and Body Pose (1)<br \/>\n10:00 \u2013 10:05 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/redareinforced-differentiable-attribute-for-3d-face-reconstruction\/\"><strong>ReDA:Reinforced\u00a0Differentiable Attribute for 3D Face Reconstruction<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<b>Wenbin<\/b><b>\u00a0Zhu<\/b>,\u00a0<strong>HsiangTao\u00a0Wu<\/strong>,\u00a0<b>Zeyu<\/b><b>\u00a0Chen<\/b>,\u00a0<b>Noranart<\/b><b>\u00a0<\/b><b>Vesdapunt<\/b>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/baoyuanw\/\">Baoyuan\u00a0Wang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/NhmGNfILDyw\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p>10:20 \u2013 10:25 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/face-x-ray-for-more-general-face-forgery-detection\/\"><strong>Face X-ray for More General Face Forgery Detection<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nLingzhi\u00a0Li,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jianbao\/\">Jianmin\u00a0Bao<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tinzhan\/\">Ting Zhang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/haya\/\">Hao Yang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/doch\/\">Dong Chen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/fangwen\/\">Fang Wen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/bainguo\/\">Baining\u00a0Guo<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append 
glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/0p5No4447Mc\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p>10:55 \u2013 11:00 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/advancing-high-fidelity-identity-swapping-for-forgery-detection\/\"><strong>Advancing High Fidelity Identity Swapping for Forgery Detection<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nLingzhi\u00a0Li,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jianbao\/\">Jianmin\u00a0Bao<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/haya\/\">Hao Yang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/doch\/\">Dong Chen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/fangwen\/\">Fang Wen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/qNvpNuqfNZs\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Oral 2.2B \u2013 Motion and Tracking (1)<br \/>\n12:00 \u2013 12:05 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/lsm-learning-subspace-minimization-for-low-level-vision\/\"><strong>LSM: Learning Subspace Minimization for Low-level Vision<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nChengzhou\u00a0Tang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\">Lu Yuan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Ping Tan<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/4zOMGz38vBo\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p>12:20 \u2013 12:25 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/maskflownet-asymmetric-feature-matching-with-learnable-occlusion-mask\/\"><strong>MaskFlownet: Asymmetric Feature Matching with Learnable Occlusion Mask<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nShengyu Zhao,\u00a0Yilun\u00a0Sheng, Yue Dong,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/echang\/\">Eric Chang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Yan Xu<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/As0B-ubM4NE\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p>12:25 \u2013 12:30 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/tracking-by-instance-detection-a-meta-learning-approach\/\"><strong>Tracking by Instance Detection: A Meta-Learning Approach<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nGuangting\u00a0Wang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/cluo\/\">Chong Luo<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<b>Xiaoyan<\/b><b>\u00a0Sun<\/b>,\u00a0Zhiwei\u00a0Xiong,\u00a0<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/HRbVwuuD6g0\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Oral 2.1C \u2013 Image and Video Synthesis (1)<br \/>\n10:30 \u2013 10:35 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/cross-domain-correspondence-learning-for-exemplar-based-image-translation\/\"><strong>Cross-domain Correspondence Learning for Exemplar-based Image Translation<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nPan Zhang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zhanbo\/\">Bo Zhang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/doch\/\">Dong Chen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\">Lu Yuan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/fangwen\/\">Fang Wen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/BdopAApRSgo\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p>10:35 \u2013 10:40 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/disentangled-and-controllable-face-image-generation-via-3d-imitative-contrastive-learning\/\"><strong>Disentangled and Controllable Face Image Generation via 3D Imitative-Contrastive Learning<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nYu Deng,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jiaoyan\/\">Jiaolong\u00a0Yang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/doch\/\">Dong Chen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/fangwen\/\">Fang Wen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xtong\/\">Xin Tong<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/l1KCgjJ2Bcc\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Oral 2.3A \u2013 Face, Gesture, and Body Pose (3); Motion and Tracking (2)<br \/>\n14:15 \u2013 14:20 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/recursive-least-squares-estimator-aided-online-learning-for-visual-tracking\/\"><strong>Recursive Least-Squares Estimator-Aided Online Learning for Visual Tracking<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nJin\u00a0Gao,\u00a0Weiming\u00a0Hu,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yanlu\/\">Yan Lu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab 
glyph-append-xsmall\" href=\"https:\/\/youtu.be\/Hy74EL7fiNA\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Oral 2.4C \u2013 Transfer\/Low-Shot\/Semi\/Unsupervised Learning (2)<br \/>\n16:10 \u2013 16:15 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/hyper-star-task-aware-hyperparameters-for-deep-networks\/\"><strong>HyperSTAR: Task-Aware Hyperparameters for Deep Networks<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<b>Gaurav Mittal<\/b>, Chang Liu, Nikolaos\u00a0Karianakis,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/vifragos\/\">Victor Fragoso<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/meic\/\">Mei Chen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Yun Fu<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/bvDyoh8vd04\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<h2>Thursday, June 18<\/h2>\n<p>Oral 3.1B \u2013 Video Analysis and Understanding<br \/>\n9:05 \u2013 9:10 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/spatiotemporal-fusion-in-3d-cnns-a-probabilistic-view\/\"><strong>Spatiotemporal Fusion in 3D CNNs: A Probabilistic View<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nYizhou\u00a0Zhou,\u00a0<b>Xiaoyan<\/b><b>\u00a0Sun<\/b>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/cluo\/\">Chong Luo<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Zheng-Jun\u00a0Zha,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/NwMQxFCuPtc\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Oral 3.1C \u2013 Vision &amp; Language<br \/>\n9:30 \u2013 9:35 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/squinting-at-vqa-models-introspecting-vqa-models-with-sub-questions\/\"><strong>SQuINTing\u00a0at VQA Models: Introspecting VQA Models with Sub-Questions<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nRamprasaath\u00a0Ramasamy\u00a0Selvaraju,\u00a0Purva\u00a0Tendulkar, Devi Parikh,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/horvitz\/\">Eric Horvitz<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/marcotcr\/\">Marco Ribeiro<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/benushi\/\">Besmira Nushi<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/eckamar\/\">Ece Kamar<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/k1kOms3eGBA\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p>9:40 \u2013 9:45\u00a0PDT<br \/>\n<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/sign-language-transformers-joint-end-to-end-sign-language-recognition-and-translation\/\"><strong>Sign Language Transformers: Joint End-to-end Sign Language Recognition and Translation<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nNecati\u00a0Cihan\u00a0Camgoz, Simon Hadfield,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/oskoller\/\">Oscar Koller<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Richard Bowden<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/J8uytAf5ZR4\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Oral 3.2A \u2013 Recognition (Detection, Categorization) (2)<br \/>\n11:25 \u2013 11:30 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/dynamic-convolution-attention-over-convolution-kernels\/\"><strong>Dynamic Convolution: Attention over Convolution Kernels<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yiche\/\">Yinpeng\u00a0Chen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<b>Xiyang<\/b><b>\u00a0Dai<\/b>,\u00a0<b>Mengchen<\/b><b>\u00a0Liu<\/b>,\u00a0<b>Dongdong Chen<\/b>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\">Lu Yuan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zliu\/\">Zicheng\u00a0Liu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/FNkY7I2R_zM\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Oral 3.2C \u2013 Machine Learning Architectures and Formulations<br \/>\n11:40 \u2013 11:45 PDT<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/local-context-normalization-revisiting-local-normalization\/\"><strong>Local Context Normalization: Revisiting Local Normalization<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<strong>Anthony Ortiz<\/strong>, Caleb Robinson, Md\u00a0Mahmudulla\u00a0Hassan,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dan\/\">Dan Morris<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Olac\u00a0Fuentes, Christopher\u00a0Kiekintveld,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jojic\/\">Nebojsa Jojic<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/tpmgHb0JTrM\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Posters\"} --><!-- wp:freeform --><h2>Tuesday, June 16<\/h2>\n<p>Poster 1.1 \u2013 3D From a Single Image and Shape-From-X; Action and Behavior Recognition; Adversarial Learning |\u00a010:00 \u2013 12:00 PDT<\/p>\n<p><a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/leveraging-photometric-consistency-over-time-for-sparsely-supervised-hand-object-reconstruction\/\"><strong>Leveraging Photometric Consistency over Time for Sparsely Supervised Hand-Object Reconstruction\u00a0&#8211; #58<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nYana Hasson,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/butekin\/\">Bugra\u00a0Tekin<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/febogo\/\">Federica Bogo<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Ivan Laptev,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Cordelia Schmid<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/tea_KWltF_U\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/self-supervised-human-depth-estimation-from-monocular-videos\/\"><strong>Self-Supervised Human Depth Estimation From Monocular Videos\u00a0&#8211; #66<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nFeitong\u00a0Tan, Hao Zhu,\u00a0Zhaopeng\u00a0Cui, Siyu Zhu,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Ping Tan<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/b_YTf4FwYA8\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/adversarial-robustness-from-self-supervised-pre-training-to-fine-tuning\/\"><strong>Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning\u00a0&#8211; #71<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nTianlong Chen,\u00a0Sijia\u00a0Liu,\u00a0Shiyu\u00a0Chang,\u00a0<b>Yu Cheng<\/b>,\u00a0Lisa\u00a0Amini,\u00a0Zhangyang\u00a0Wang<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/JGBmz4RtC18\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/geometry-aware-satellite-to-ground-image-synthesis-for-urban-areas\/\"><strong>Geometry-Aware Satellite-to-Ground Image Synthesis for Urban Areas\u00a0&#8211; #87<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nXiaohu\u00a0Lu,\u00a0Zuoyue\u00a0Li,\u00a0Zhaopeng\u00a0Cui, Martin R. 
Oswald,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Rongjun\u00a0Qin<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/s2_EpPpKuAE\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/weakly-supervised-action-localization-by-generative-attention-modeling\/\"><strong>Weakly-Supervised Action Localization by Generative Attention Modeling\u00a0&#8211; #102<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<b>Baifeng<\/b><b>\u00a0Shi<\/b>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/qid\/\">Qi Dai<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Yadong\u00a0Mu,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\">Jingdong\u00a0Wang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/FVWgUUicz_c\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/semantics-guided-neural-networks-for-efficient-skeleton-based-human-action-recognition\/\"><strong>Semantics-Guided Neural Networks for Efficient Skeleton-Based Human Action Recognition\u00a0&#8211; #112<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nPengfei\u00a0Zhang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/culan\/\">Cuiling Lan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Junliang\u00a0Xing,\u00a0Jianru\u00a0Xue, Nanning Zheng<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/tLNb9oH5NBw\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Poster 1.2 \u2013 3D From Multiview and Sensors; Computational Photography; Efficient Training and Inference Methods for Networks | 12:00 \u2013 14:00 PDT<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/dist-rendering-deep-implicit-signed-distance-function-with-differentiable-sphere-tracing\/\"><strong>DIST: Rendering Deep Implicit Signed Distance Function With Differentiable Sphere Tracing\u00a0&#8211; #77<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nShaohui\u00a0Liu,\u00a0Yinda\u00a0Zhang,\u00a0Songyou\u00a0Peng,\u00a0Boxin\u00a0Shi,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Zhaopeng\u00a0Cui<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/Fm7lFQ_F1Ww\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/orientation-regularized-3d-human-pose-estimation\/\"><strong>Fusing Wearable IMUs with Multi-View Images for Human 
Pose Estimation: A Geometric Approach\u00a0&#8211; #95<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nZhe Zhang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chnuwa\/\">Chunyu\u00a0Wang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Wenhu\u00a0Qin,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/07fKdiHkjEE\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/gdls-generalized-pose-and-scale-estimation-given-scale-and-gravity-priors\/\"><strong>gDLS*: Generalized Pose-and-Scale Estimation Given Scale and Gravity Priors\u00a0&#8211; #96<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/vifragos\/\">Victor Fragoso<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<b>Joseph\u00a0<\/b><b>Degol<\/b>, Gang Hua<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/ETydEFZqe9w\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Poster 1.3 \u2014 3D From a Single Image and Shape-From-X; 3D From Multiview and Sensors; Image Retrieval; Datasets and Evaluation; Low-Level and Physics-Based Vision | 14:00 \u2013 16:00 PDT<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/style-normalization-and-restitution-for-generalizable-person-re-identification\/\"><strong>Style Normalization and Restitution for Generalizable Person Re-identification\u00a0&#8211; #69<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nXin\u00a0Jin,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/culan\/\">Cuiling Lan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Zhibo\u00a0Chen, Li Zhang<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/7rA6ZgRHv68\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/relation-aware-global-attention-for-person-re-identification\/\"><strong>Relation-aware Global Attention for Person Re-identification\u00a0&#8211; #73<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nZhizheng\u00a0Zhang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/culan\/\">Cuiling Lan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Xin\u00a0Jin,\u00a0Zhibo\u00a0Chen<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/XxfN3thqgzU\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/single-image-reflection-removal-through-cascaded-refinement\/\"><strong>Single Image Reflection Removal through Cascaded Refinement\u00a0&#8211; #110<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nChao Li,\u00a0Yixiao\u00a0Yang,\u00a0Kun\u00a0He,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/stevelin\/\">Stephen Lin<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, John Hopcroft<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/HjJ9wffM2No\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Poster 1.4 \u2014 Scene Analysis and Understanding; Medical, Biological and Cell Microscopy; Transfer\/Low-Shot\/Semi\/Unsupervised Learning | 16:00 \u2013 18:00 PDT<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/unsupervised-instance-segmentation-in-microscopy-images-via-panoptic-domain-adaptation-and-task-re-weighting\/\"><strong>Unsupervised Instance Segmentation in Microscopy Images via Panoptic Domain Adaptation and Task Re-Weighting\u00a0&#8211; #55<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nDongnan\u00a0Liu,\u00a0Donghao\u00a0Zhang, Yang Song, Fan Zhang, Lauren O\u2019Donnell, Heng Huang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/meic\/\">Mei Chen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Weidong Cai<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/xh5ftH8-Fc0\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/reliable-weighted-optimal-transport-for-unsupervised-domain-adaptation\/\"><strong>Reliable Weighted Optimal Transport for Unsupervised Domain Adaptation\u00a0&#8211; #70<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nRenjun\u00a0Xu,\u00a0Pelen\u00a0Liu,\u00a0Liyan\u00a0Wang, Chao Chen,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jindwang\/\">Jindong Wang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/PDefvHcd3Hs\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<h2>Wednesday, June 17<\/h2>\n<p>Poster 2.1 &#8211; 3D From Multiview and Sensors; Face, Gesture, and Body Pose; Image and Video Synthesis | 10:00 \u2013 12:00 PDT<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/higherhrnet-scale-aware-representation-learning-for-bottom-up-human-pose-estimation\/\"><strong>HigherHRNet: Scale-Aware Representation Learning for Bottom-Up Human Pose Estimation\u00a0&#8211; #53<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nBowen Cheng, Bin Xiao,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\">Jingdong\u00a0Wang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Honghui\u00a0Shi, Thomas Huang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/leizhang\/\">Lei Zhang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link 
glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/n826oXKp5io\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/learning-texture-transformer-network-for-image-super-resolution\/\"><strong>Learning Texture Transformer Network for Image Super-Resolution\u00a0&#8211; #93<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nFuzhi\u00a0Yang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/huayan\/\">Huan Yang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jianf\/\">Jianlong\u00a0Fu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Hongtao\u00a0Lu,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/bainguo\/\">Baining\u00a0Guo<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/7PlN9q3qQP8\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/deep-shutter-unrolling-network\/\"><strong>Deep Shutter Unrolling Network\u00a0&#8211; #108<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nPeidong\u00a0Liu,\u00a0Zhaopeng\u00a0Cui, Viktor Larsson,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/0-756nVAj2g\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Poster 2.2 \u2013 Face, Gesture, and Body Pose; Motion and Tracking; Representation Learning | 12:00 \u2013 14:00 PDT<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/a-transductive-approach-for-video-object-segmentation\/\"><strong>A\u00a0Transductive\u00a0Approach for Video Object Segmentation\u00a0&#8211; #84<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wuzhiron\/\">Zhirong\u00a0Wu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Yizhuo\u00a0Zhang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hopeng\/\">Houwen Peng<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/stevelin\/\">Stephen Lin<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/N3upnIgUg-I\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Poster 2.3 &#8211; Face, Gesture, and Body Pose; Motion and Tracking; Image and Video Synthesis; Neural Generative Models; Optimization and Learning Methods | 14:00 \u2013 16:00 PDT<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/deep-3d-portrait-from-a-single-image\/\"><strong>Deep 3D Portrait from a Single Image\u00a0&#8211; #36<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br 
\/>\nSicheng\u00a0Xu,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jiaoyan\/\">Jiaolong\u00a0Yang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/doch\/\">Dong Chen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/fangwen\/\">Fang Wen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Yu Deng,\u00a0Yunde\u00a0Jia,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xtong\/\">Xin Tong<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/ex0VWotphy4\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/bachgan-high-resolution-image-synthesis-from-salient-object-layout\/\"><strong>BachGAN: High-Resolution Image Synthesis from Salient Object Layout\u00a0&#8211; #102<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nYandong\u00a0Li,\u00a0<b>Yu Cheng<\/b>,\u00a0<b>Zhe Gan,<\/b>\u00a0Licheng\u00a0Yu,\u00a0Liqiang\u00a0Wang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingjl\/\">Jingjing\u00a0Liu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/AksJoLQl21k\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<h2>Thursday, June 18<\/h2>\n<p>Poster 3.1 \u2014 Recognition (Detection, Categorization); Video Analysis and Understanding; Vision + Language | 9:00 \u2013 11:00 PDT<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/rethinking-classification-and-localization-for-object-detection\/\"><strong>Rethinking Classification and Localization for Object Detection\u00a0&#8211; #49<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nYue Wu,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yiche\/\">Yinpeng\u00a0Chen<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\">Lu Yuan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zliu\/\">Zicheng\u00a0Liu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/lijuanw\/\">Lijuan\u00a0Wang<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hongzl\/\">Hongzhi Li<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Yun Fu<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/8EGKyeAZww4\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/memory-enhanced-global-local-aggregation-for-video-object-detection\/\"><strong>Memory Enhanced Global-Local Aggregation for Video Object Detection\u00a0&#8211; #64<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br 
\/>\nYihong\u00a0Chen,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yuecao\/\">Yue Cao<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hanhu\/\">Han Hu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Liwei\u00a0Wang<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/Dr2uaeJJAms\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/multi-granularity-reference-aided-attentive-feature-aggregation-for-video-based-person-re-identificationmulti-granularity-reference-aided-attentive-feature-aggregation-for-video-based-person-re-identi\/\"><strong>Multi-Granularity Reference-Aided Attentive Feature Aggregation for Video-based Person Re-identification\u00a0&#8211; #71<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nZhizheng\u00a0Zhang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/culan\/\">Cuiling Lan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Zhibo\u00a0Chen<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/Zt5DShb7Pok\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/violin-a-large-scale-dataset-for-video-and-language-inference\/\"><strong>Violin: A Large-Scale Dataset for Video-and-Language Inference\u00a0&#8211; #120<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nJingzhou\u00a0Liu,\u00a0Wenhu\u00a0Chen,\u00a0<b>Yu Cheng<\/b>,\u00a0<b>Zhe Gan<\/b>,\u00a0Licheng\u00a0Yu,\u00a0Yiming\u00a0Yang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingjl\/\">Jingjing\u00a0Liu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/tWZQ-OVrIUs\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Poster 3.3 \u2014 Recognition (Detection, Categorization); Segmentation, Grouping and Shape; Vision Applications and Systems; Vision &amp; Other Modalities; Transfer\/Low-Shot\/Semi\/Unsupervised Learning | 15:00 \u2013 17:00 PDT<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/towards-learning-a-generic-agent-for-vision-and-language-navigation-via-pre-training\/\"><strong>Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-Training\u00a0&#8211; #96<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nWeituo\u00a0Hao,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chunyl\/\">Chunyuan Li<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xiul\/\">Xiujun\u00a0Li<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Lawrence Carin,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jfgao\/\">Jianfeng Gao<span class=\"sr-only\"> (opens in new 
tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/Cif83ooccPs\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/mmtm-multimodal-transfer-module-for-cnn-fusion\/\"><strong>MMTM: Multimodal Transfer Module for CNN Fusion\u00a0&#8211; #111<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hava\/\">Hamid\u00a0Vaezi\u00a0Joze<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Amirreza\u00a0Shaban, Michael\u00a0Iuzzolino,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kazukoi\/\">Kazuhito\u00a0Koishida<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/4aMetONExuc\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<hr \/>\n<p>Poster 3.4 &#8211; Miscellaneous | 17:00 \u2013 19:00 PDT<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/density-aware-graph-for-deep-semi-supervised-visual-recognition\/\"><strong>Density-Aware Graph for Deep Semi-Supervised Visual Recognition\u00a0&#8211; #9<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nSuichan\u00a0Li, Bin Liu,\u00a0<b>Dongdong Chen<\/b>, Qi Chu,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\">Lu Yuan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0Nenghai\u00a0Yu<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/R7KH2dbVsI8\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/pfcnn-convolutional-neural-networks-on-3d-surfaces-using-parallel-frames\/\"><strong>PFCNN: Convolutional Neural Networks on 3D Surfaces Using Parallel Frames\u00a0&#8211; #27<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nYuqi\u00a0Yang,\u00a0Shilin\u00a0Liu,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/haopan\/\">Hao Pan<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yangliu\/\">Yang Liu<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xtong\/\">Xin Tong<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/ArXvN3V5WlI\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/metafuse-a-pre-trained-fusion-model-for-human-pose-estimation\/\"><strong>MetaFuse: A Pre-trained Fusion Model for Human Pose Estimation\u00a0&#8211; #38<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nRongchang\u00a0Xie,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chnuwa\/\">Chunyu\u00a0Wang<span class=\"sr-only\"> (opens in new 
tab)<\/span><\/a>,\u00a0Yizhou\u00a0Wang<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/cjT2MrAW1KM\" target=\"_blank\" rel=\"noopener\">Video &gt;<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Workshops\"} --><!-- wp:freeform --><h2>June 14 | Full Day<\/h2>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/www.es.ele.tue.nl\/cvpm20\/\" target=\"_blank\" rel=\"noopener\"><strong>International Workshop and Challenge on Computer Vision for Physiological Measurement<\/strong><\/a><br \/>\nCo-Organizer:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/damcduff\/\">Daniel McDuff<\/a><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/sites.google.com\/view\/vislocslamcvpr2020\/home\" target=\"_blank\" rel=\"noopener\"><strong>Joint workshop on Long Term Visual Localization, Visual Odometry and Geometric and Learning-based SLAM<\/strong><\/a><br \/>\nCo-Organizers:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/joschonb\/\">Johannes L.\u00a0Sch\u00f6nberger<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/paspecia\/\">Pablo\u00a0Speciale<\/a><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.agriculture-vision.com\/home\" target=\"_blank\" rel=\"noopener\"><strong>The 1st International Workshop on Agriculture-Vision: Challenges &amp; Opportunities for Computer Vision in Agriculture<\/strong><\/a><br \/>\nInvited speakers and panelists: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ranveer\/\">Ranveer Chandra<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sudipsin\/\">Sudipta Sinha<\/a><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/vizwiz.org\/workshops\/2020-workshop\/\" target=\"_blank\" rel=\"noopener\"><strong>VizWiz\u00a0Grand Challenge: Describing Images from Blind People<\/strong><\/a><br \/>\nCo-Organizers:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/cutrell\/\">Ed Cutrell<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/merrie\/\">Meredith Morris<\/a><br \/>\nInvited Speaker:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/merrie\/\">Meredith Morris<\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/Dfi4TqIjUWU\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/IkZToxOs8N4\" target=\"_blank\" rel=\"noopener\">Speaker panel video &gt;<\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/f613diLbVAc\" target=\"_blank\" rel=\"noopener\">Panel discussion video &gt;<\/a><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" 
href=\"https:\/\/fadetrcv.github.io\/\" target=\"_blank\" rel=\"noopener\"><strong>Workshop on Fair, Data-Efficient and Trusted Computer Vision<\/strong><\/a><br \/>\nInvited Speaker:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dedey\/\">Debadeepta Dey<\/a><\/p>\n<hr \/>\n<h2>June 14 | Afternoon<\/h2>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/sites.google.com\/view\/wicvworkshop-cvpr2020\/\" target=\"_blank\" rel=\"noopener\"><strong>Women in Computer Vision (WiCV)<\/strong><\/a><br \/>\nCo-Organizer:\u00a0<b>Azadeh<\/b><b>\u00a0<\/b><b>Mobasher<\/b><\/p>\n<hr \/>\n<h2>June 15 | Full Day<\/h2>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/scene-understanding.com\/\" target=\"_blank\" rel=\"noopener\"><strong>3D Scene Understanding for Vision, Graphics, and Robotics<\/strong><\/a><br \/>\nInvited Speaker:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/mixedreality.cs.cornell.edu\/workshop\/2020\" target=\"_blank\" rel=\"noopener\"><strong>Fourth Workshop on Computer Vision for AR\/VR<\/strong><\/a><br \/>\nInvited Speaker: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jamiesho\/\">Jamie Shotton<\/a><br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/G4aZZhWmm4k\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/data.vision.ee.ethz.ch\/cvl\/ntire20\/\" target=\"_blank\" rel=\"noopener\"><strong>New Trends in Image Restoration and Enhancement Workshop and Challenges (NTIRE)<\/strong><\/a><br \/>\nProgram Committee Members:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/stevelin\/\">Stephen Lin<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<\/a><\/p>\n<hr \/>\n<h2>June 19 | Morning<\/h2>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/image-matching-workshop.github.io\/\" target=\"_blank\" rel=\"noopener\"><strong>Image Matching: Local Features and Beyond<\/strong><\/a><br \/>\nCo-Organizer:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/joschonb\/\">Johannes L.\u00a0Sch\u00f6nberger<\/a><\/p>\n<hr \/>\n<h2>June 19 | Full Day<\/h2>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/vcipl-okstate.org\/pbvs\/20\/\" target=\"_blank\" rel=\"noopener\"><strong>16th\u00a0IEEE Workshop on Perception Beyond the Visible Spectrum<\/strong><\/a><br \/>\nProgram Committee Member:\u00a0<b>Katsu Ikeuchi<\/b><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/sites.google.com\/view\/luv2020\" target=\"_blank\" rel=\"noopener\"><strong>Learning From Unlabeled Videos<\/strong><\/a><br \/>\nCo-Organizer:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yalesong\/\">Yale Song<\/a><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/cvmi2020.github.io\/\" target=\"_blank\" 
rel=\"noopener\"><strong>Computer Vision for Microscopy Image Analysis<\/strong><\/a><br \/>\nChair:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/meic\/\">Mei Chen<\/a><br \/>\nProgram Committee Members:\u00a0<b>Hao Jiang<\/b>,\u00a0<b>Guarav<\/b><b>\u00a0Mittal<\/b>, <b>Xi Yin<\/b><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/sites.google.com\/view\/geometry-learning-foundation\/\" target=\"_blank\" rel=\"noopener\"><strong>First Workshop on Deep Learning Foundations of Geometric Shape Modeling and Reconstruction<\/strong><\/a><br \/>\nCo-Organizer:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yangliu\/\">Yang Liu<\/a><\/p>\n<p><strong>Extreme classification in computer vision<\/strong><br \/>\nCo-Organizer:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/manik\/\">Manik Varma<\/a><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/languageandvision.github.io\/\" target=\"_blank\" rel=\"noopener\"><strong>Language &amp; Vision with applications to Video Understanding<\/strong><\/a><br \/>\nCo-Organizer:\u00a0<b>Licheng<\/b><b>\u00a0Yu<\/b><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/cvpr2020.ug2challenge.org\/\" target=\"_blank\" rel=\"noopener\"><strong>The 3rd Workshop and Prize Challenge: Bridging the Gap between Computational Photography and Visual Recognition (UG2+) in conjunction with IEEE CVPR 2020<\/strong><\/a><br \/>\nInvited Speaker:\u00a0<b>Xi Yin<\/b><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.learning-with-limited-labels.com\/\" target=\"_blank\" rel=\"noopener\"><strong>Visual Learning with Limited Labels<\/strong><\/a><br \/>\nAccepted Paper: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/epillid-dataset-a-low-shot-fine-grained-benchmark-for-pill-identification\/\">ePillID Dataset: A Low-Shot Fine-Grained Benchmark for Pill Identification<\/a> <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/naotous\/\">Naoto Usuyama<\/a>, Natalia Larios Delgado, Amanda K. 
Hall, Jessica Lundin<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/youtu.be\/p-Nn0RgwudE\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a><\/p>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/mul-workshop.github.io\/\" target=\"_blank\" rel=\"noopener\"><strong>Workshop on Multimodal Learning<\/strong><\/a><br \/>\nInvited Speaker:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awf\/\">Andrew Fitzgibbon<\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Tutorials\"} --><!-- wp:freeform --><h2>Monday, June 15<\/h2>\n<p>13:15 \u2013 17:00 PDT<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/rohit497.github.io\/Recent-Advances-in-Vision-and-Language-Research\/\" target=\"_blank\" rel=\"noopener\"><strong>Recent Advances in Vision-and-Language Research<\/strong><\/a><br \/>\nCo-Organizers: <strong>Zhe Gan<\/strong>, <strong>Yu Cheng<\/strong>, <strong>Luowei Zhou<\/strong>, <strong>Linjie Li<\/strong>, <strong>Yen-Chun Chen<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingjl\/\">JJ Liu<\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"#AlchemyFriends\"} --><!-- wp:freeform --><p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/12\/Alchemy-with-Friends-Print-at-Home.pdf\" target=\"_blank\" rel=\"noopener\">Print your own copy<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> of Alchemy with Friends to play at home.<\/p>\n<p>Share your favorite card combinations using #AlchemyFriends on Twitter, Facebook, or Instagram.
We now have three versions of the game available for you to play at home!<\/p>\n<p><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/09\/MSR_Alchemy_1400x788.gif\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-626472\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/09\/MSR_Alchemy_1400x788.gif\" alt=\"Animated illustration of how to play #AlchemyFriends\" width=\"1400\" height=\"788\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<div>\n\t<a\n\t\thref=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/12\/Alchemy-with-Friends-Print-at-Home.pdf\"\n\t\tclass=\"button cta-link\"\n\t\tdata-bi-type=\"button\"\n\t\tdata-bi-cN=\"Alchemy with Friends Original (must have this deck)\"\n\t\tdata-bi-tN=\"shortcodes\/msr-button\"\n\t\ttarget=\"_blank\" rel=\"noopener noreferrer\">\n\t\tAlchemy with Friends Original (must have this deck)\t<\/a>\n\n\t<\/div>\n<div>\n\t<a\n\t\thref=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Alchemy-with-Friends-ML-Expansion-Pack.pdf\"\n\t\tclass=\"button cta-link\"\n\t\tdata-bi-type=\"button\"\n\t\tdata-bi-cN=\"Alchemy with Friends ML Expansion Pack\"\n\t\tdata-bi-tN=\"shortcodes\/msr-button\"\n\t\ttarget=\"_blank\" rel=\"noopener noreferrer\">\n\t\tAlchemy with Friends ML Expansion Pack\t<\/a>\n\n\t<\/div>\n<div>\n\t<a\n\t\thref=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Alchemy-with-Friends-CV-Expansion-Pack.pdf\"\n\t\tclass=\"button cta-link\"\n\t\tdata-bi-type=\"button\"\n\t\tdata-bi-cN=\"Alchemy with Friends CV Expansion Pack\"\n\t\tdata-bi-tN=\"shortcodes\/msr-button\"\n\t\ttarget=\"_blank\" rel=\"noopener noreferrer\">\n\t\tAlchemy with Friends CV Expansion Pack\t<\/a>\n\n\t<\/div>\n<div style=\"height: 20px\"><\/div>\n<ul>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/www.facebook.com\/microsoftresearch\/\">Facebook<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>: MicrosoftResearch<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/x.com\/MSFTResearch\">Twitter<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>: @MSFTResearch<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/www.youtube.com\/user\/MicrosoftResearch\">YouTube<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>: microsoftresearch<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/aka.ms\/LinkedInMSR\">LinkedIn<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>: aka.ms\/LinkedInMSR<\/li>\n<li><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" target=\"_blank\" href=\"https:\/\/www.instagram.com\/msft_research\/\">Instagram<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>: @msft_research<\/li>\n<\/ul>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- \/wp:msr\/content-tabs -->",
Make sure to catch Satya Nadella\u2019s Fireside Chat at 9:00 PDT on Tuesday, June 16. Stop by our virtual booth to chat with our experts to learn more about our research and open opportunities.\r\n\r\nhttps:\/\/youtu.be\/vgdVIeQKH-E"},{"id":1,"name":"Oral presentations","content":"<h2>Tuesday, June 16<\/h2>\r\nOral 1.1A \u2013 3D From a Single Image and Shape-From-X (1)\r\n10:50 \u2013 10:55 PDT\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/activemocap-optimized-viewpoint-selection-for-active-human-motion-capture\/\"><strong>ActiveMoCap: Optimized Viewpoint Selection for Active Human Motion Capture<\/strong><\/a>\r\nSena\u00a0Kiciroglu, Helge\u00a0Rhodin,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sudipsin\/\">Sudipta\u00a0Sinha<\/a>, Mathieu Salzmann, Pascal\u00a0Fua\r\n<a href=\"https:\/\/youtu.be\/i58Bu-hbZHs\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\nOral 1.2A \u2013 3D From Multiview and Sensors (1)\r\n12:10 \u2013 12:15 PDT\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/texturefusion-high-quality-texture-acquisition-for-real-time-rgb-d-scanning\/\"><strong>TextureFusion: High-Quality Texture Acquisition for Real-Time RGB-D Scanning<\/strong><\/a>\r\nJoo\u00a0Ho Lee,\u00a0Hyunho\u00a0Ha,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yuedong\/\">Yue Dong<\/a>, Xin Tong, Min H. Kim\r\n<a href=\"https:\/\/youtu.be\/dhnCd-hmGQc\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\nOral 1.2C \u2013 Efficient Training and Inference\r\n12:30 \u2013 12:35 PDT\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/towards-efficient-model-compression-via-learned-global-ranking\/\"><strong>Towards Efficient Model Compression via Learned Global Ranking<\/strong><\/a>\r\nTing-Wu Chin,\u00a0Ruizhou\u00a0Ding,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chazhang\/\">Cha Zhang<\/a>, Diana\u00a0Marculescu\r\n<a href=\"https:\/\/youtu.be\/yHGeY_zgtec\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\nOral 1.3A - 3D From a Single Image and Shape-From-X (2); 3D From Multiview and Sensors (2)\r\n14:40 \u2013 14:45 PDT\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/why-having-10000-parameters-in-your-camera-model-is-better-than-twelve\/\"><strong>Why Having 10,000 Parameters in Your Camera Model Is Better Than Twelve<\/strong><\/a>\r\nThomas\u00a0Sch\u00f6ps, Viktor Larsson,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a>,\u00a0Torsten\u00a0Sattler\r\n<a href=\"https:\/\/youtu.be\/7iYlEA_ZYac\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\nOral 1.3C \u2013 Low-Level and Physics-Based Vision\r\n14:25 \u2013 14:30 PDT\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/bringing-old-photos-back-to-life\/\"><strong>Bringing Old Photos Back to Life<\/strong><\/a>\r\nZiyu\u00a0Wan,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zhanbo\/\">Bo Zhang<\/a>,\u00a0<b>Dongdong Chen<\/b>,\u00a0Pan Zhang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/doch\/\">Dong Chen<\/a>, Jing Liao,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/fangwen\/\">Fang Wen<\/a>\r\n<a href=\"https:\/\/youtu.be\/Q5bhszQq9eA\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n14:30 \u2013 14:35 PDT\r\n<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/a-physics-based-noise-formation-model-for-extreme-low-light-raw-denoising\/\"><strong>A Physics-based Noise Formation Model for Extreme Low-light Raw Denoising<\/strong><\/a>\r\nKaixuan\u00a0Wei, Ying Fu,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jiaoyan\/\">Jiaolong<\/a>\u00a0Yang,\u00a0Hua Huang\r\n<a href=\"https:\/\/youtu.be\/DMDKPRozdeo\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\n<h2>Wednesday, June 17<\/h2>\r\nOral 2.1A \u2013 3D From Multiview and Sensors (3)\r\n10:15 \u2013 10:20 PDT\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/routedfusion-learning-real-time-depth-map-fusion\/\"><strong>RoutedFusion: Learning Real-Time Depth Map Fusion<\/strong><\/a>\r\nSilvan\u00a0Weder,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/joschonb\/\">Johannes\u00a0Sch\u00f6nberger<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a>, Martin R. Oswald\r\n<a href=\"https:\/\/youtu.be\/oA4TGDQVfls\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\nOral 2.1B \u2013 Face, Gesture, and Body Pose (1)\r\n10:00 \u2013 10:05 PDT\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/redareinforced-differentiable-attribute-for-3d-face-reconstruction\/\"><strong>ReDA:Reinforced\u00a0Differentiable Attribute for 3D Face Reconstruction<\/strong><\/a>\r\n<b>Wenbin<\/b><b>\u00a0Zhu<\/b>,\u00a0<strong>HsiangTao\u00a0Wu<\/strong>,\u00a0<b>Zeyu<\/b><b>\u00a0Chen<\/b>,\u00a0<b>Noranart<\/b><b>\u00a0<\/b><b>Vesdapunt<\/b>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/baoyuanw\/\">Baoyuan\u00a0Wang<\/a>\r\n<a href=\"https:\/\/youtu.be\/NhmGNfILDyw\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n10:20 \u2013 10:25 PDT\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/face-x-ray-for-more-general-face-forgery-detection\/\"><strong>Face X-ray for More General Face Forgery Detection<\/strong><\/a>\r\nLingzhi\u00a0Li,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jianbao\/\">Jianmin\u00a0Bao<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tinzhan\/\">Ting Zhang<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/haya\/\">Hao Yang<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/doch\/\">Dong Chen<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/fangwen\/\">Fang Wen<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/bainguo\/\">Baining\u00a0Guo<\/a>\r\n<a href=\"https:\/\/youtu.be\/0p5No4447Mc\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n10:55 \u2013 11:00 PDT\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/advancing-high-fidelity-identity-swapping-for-forgery-detection\/\"><strong>Advancing High Fidelity Identity Swapping for Forgery Detection<\/strong><\/a>\r\nLingzhi\u00a0Li,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jianbao\/\">Jianmin\u00a0Bao<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/haya\/\">Hao Yang<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/doch\/\">Dong Chen<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/fangwen\/\">Fang Wen<\/a>\r\n<a href=\"https:\/\/youtu.be\/qNvpNuqfNZs\" target=\"_blank\" 
rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\nOral 2.2B \u2013 Motion and Tracking (1)\r\n12:00 \u2013 12:05 PDT\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/lsm-learning-subspace-minimization-for-low-level-vision\/\"><strong>LSM: Learning Subspace Minimization for Low-level Vision<\/strong><\/a>\r\nChengzhou\u00a0Tang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\">Lu Yuan<\/a>, Ping Tan\r\n<a href=\"https:\/\/youtu.be\/4zOMGz38vBo\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n12:20 \u2013 12:25 PDT\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/maskflownet-asymmetric-feature-matching-with-learnable-occlusion-mask\/\"><strong>MaskFlownet: Asymmetric Feature Matching with Learnable Occlusion Mask<\/strong><\/a>\r\nShengyu Zhao,\u00a0Yilun\u00a0Sheng, Yue Dong,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/echang\/\">Eric Chang<\/a>,\u00a0Yan Xu\r\n<a href=\"https:\/\/youtu.be\/As0B-ubM4NE\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n12:25 \u2013 12:30 PDT\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/tracking-by-instance-detection-a-meta-learning-approach\/\"><strong>Tracking by Instance Detection: A Meta-Learning Approach<\/strong><\/a>\r\nGuangting\u00a0Wang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/cluo\/\">Chong Luo<\/a>,\u00a0<b>Xiaoyan<\/b><b>\u00a0Sun<\/b>,\u00a0Zhiwei\u00a0Xiong,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<\/a>\r\n<a href=\"https:\/\/youtu.be\/HRbVwuuD6g0\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\nOral 2.1C \u2013 Image and Video Synthesis (1)\r\n10:30 \u2013 10:35 PDT\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/cross-domain-correspondence-learning-for-exemplar-based-image-translation\/\"><strong>Cross-domain Correspondence Learning for Exemplar-based Image Translation<\/strong><\/a>\r\nPan Zhang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zhanbo\/\">Bo Zhang<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/doch\/\">Dong Chen<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\">Lu Yuan<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/fangwen\/\">Fang Wen<\/a>\r\n<a href=\"https:\/\/youtu.be\/BdopAApRSgo\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n10:35 \u2013 10:40 PDT\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/disentangled-and-controllable-face-image-generation-via-3d-imitative-contrastive-learning\/\"><strong>Disentangled and Controllable Face Image Generation via 3D Imitative-Contrastive Learning<\/strong><\/a>\r\nYu Deng,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jiaoyan\/\">Jiaolong\u00a0Yang<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/doch\/\">Dong Chen<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/fangwen\/\">Fang Wen<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xtong\/\">Xin Tong<\/a>\r\n<a href=\"https:\/\/youtu.be\/l1KCgjJ2Bcc\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\nOral 2.3A \u2013 Face, Gesture, and Body Pose (3); Motion and Tracking (2)\r\n14:15 \u2013 14:20 PDT\r\n<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/recursive-least-squares-estimator-aided-online-learning-for-visual-tracking\/\"><strong>Recursive Least-Squares Estimator-Aided Online Learning for Visual Tracking<\/strong><\/a>\r\nJin\u00a0Gao,\u00a0Weiming\u00a0Hu,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yanlu\/\">Yan Lu<\/a>\r\n<a href=\"https:\/\/youtu.be\/Hy74EL7fiNA\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\nOral 2.4C \u2013 Transfer\/Low-Shot\/Semi\/Unsupervised Learning (2)\r\n16:10 \u2013 16:15 PDT\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/hyper-star-task-aware-hyperparameters-for-deep-networks\/\"><strong>HyperSTAR: Task-Aware Hyperparameters for Deep Networks<\/strong><\/a>\r\n<b>Gaurav Mittal<\/b>, Chang Liu, Nikolaos\u00a0Karianakis,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/vifragos\/\">Victor Fragoso<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/meic\/\">Mei Chen<\/a>, Yun Fu\r\n<a href=\"https:\/\/youtu.be\/bvDyoh8vd04\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\n<h2>Thursday, June 18<\/h2>\r\nOral 3.1B \u2013 Video Analysis and Understanding\r\n9:05 \u2013 9:10 PDT\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/spatiotemporal-fusion-in-3d-cnns-a-probabilistic-view\/\"><strong>Spatiotemporal Fusion in 3D CNNs: A Probabilistic View<\/strong><\/a>\r\nYizhou\u00a0Zhou,\u00a0<b>Xiaoyan<\/b><b>\u00a0Sun<\/b>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/cluo\/\">Chong Luo<\/a>,\u00a0Zheng-Jun\u00a0Zha,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<\/a>\r\n<a href=\"https:\/\/youtu.be\/NwMQxFCuPtc\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\nOral 3.1C \u2013 Vision &amp; Language\r\n9:30 \u2013 9:35 PDT\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/squinting-at-vqa-models-introspecting-vqa-models-with-sub-questions\/\"><strong>SQuINTing\u00a0at VQA Models: Introspecting VQA Models with Sub-Questions<\/strong><\/a>\r\nRamprasaath\u00a0Ramasamy\u00a0Selvaraju,\u00a0Purva\u00a0Tendulkar, Devi Parikh,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/horvitz\/\">Eric Horvitz<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/marcotcr\/\">Marco Ribeiro<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/benushi\/\">Besmira Nushi<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/eckamar\/\">Ece Kamar<\/a>\r\n<a href=\"https:\/\/youtu.be\/k1kOms3eGBA\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n9:40 \u2013 9:45\u00a0PDT\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/sign-language-transformers-joint-end-to-end-sign-language-recognition-and-translation\/\"><strong>Sign Language Transformers: Joint End-to-end Sign Language Recognition and Translation<\/strong><\/a>\r\nNecati\u00a0Cihan\u00a0Camgoz, Simon Hadfield,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/oskoller\/\">Oscar Koller<\/a>, Richard Bowden\r\n<a href=\"https:\/\/youtu.be\/J8uytAf5ZR4\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\nOral 3.2A \u2013 Recognition (Detection, Categorization) (2)\r\n11:25 \u2013 11:30 PDT\r\n<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/dynamic-convolution-attention-over-convolution-kernels\/\"><strong>Dynamic Convolution: Attention over Convolution Kernels<\/strong><\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yiche\/\">Yinpeng\u00a0Chen<\/a>,\u00a0<b>Xiyang<\/b><b>\u00a0Dai<\/b>,\u00a0<b>Mengchen<\/b><b>\u00a0Liu<\/b>,\u00a0<b>Dongdong Chen<\/b>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\">Lu Yuan<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zliu\/\">Zicheng\u00a0Liu<\/a>\r\n<a href=\"https:\/\/youtu.be\/FNkY7I2R_zM\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\nOral 3.2C \u2013 Machine Learning Architectures and Formulations\r\n11:40 \u2013 11:45 PDT\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/local-context-normalization-revisiting-local-normalization\/\"><strong>Local Context Normalization: Revisiting Local Normalization<\/strong><\/a>\r\n<strong>Anthony Ortiz<\/strong>, Caleb Robinson, Md\u00a0Mahmudulla\u00a0Hassan,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dan\/\">Dan Morris<\/a>,\u00a0Olac\u00a0Fuentes, Christopher\u00a0Kiekintveld,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jojic\/\">Nebojsa Jojic<\/a>\r\n<a href=\"https:\/\/youtu.be\/tpmgHb0JTrM\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>"},{"id":2,"name":"Posters","content":"<h2>Tuesday, June 16<\/h2>\r\nPoster 1.1 \u2013 3D From a Single Image and Shape-From-X; Action and Behavior Recognition; Adversarial Learning |\u00a010:00 \u2013 12:00 PDT\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/leveraging-photometric-consistency-over-time-for-sparsely-supervised-hand-object-reconstruction\/\"><strong>Leveraging Photometric Consistency over Time for Sparsely Supervised Hand-Object Reconstruction\u00a0- #58<\/strong><\/a>\r\nYana Hasson,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/butekin\/\">Bugra\u00a0Tekin<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/febogo\/\">Federica Bogo<\/a>,\u00a0Ivan Laptev,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a>, Cordelia Schmid\r\n<a href=\"https:\/\/youtu.be\/tea_KWltF_U\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/self-supervised-human-depth-estimation-from-monocular-videos\/\"><strong>Self-Supervised Human Depth Estimation From Monocular Videos\u00a0- #66<\/strong><\/a>\r\nFeitong\u00a0Tan, Hao Zhu,\u00a0Zhaopeng\u00a0Cui, Siyu Zhu,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a>,\u00a0Ping Tan\r\n<a href=\"https:\/\/youtu.be\/b_YTf4FwYA8\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/adversarial-robustness-from-self-supervised-pre-training-to-fine-tuning\/\"><strong>Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning\u00a0- #71<\/strong><\/a>\r\nTianlong Chen,\u00a0Sijia\u00a0Liu,\u00a0Shiyu\u00a0Chang,\u00a0<b>Yu Cheng<\/b>,\u00a0Lisa\u00a0Amini,\u00a0Zhangyang\u00a0Wang\r\n<a href=\"https:\/\/youtu.be\/JGBmz4RtC18\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/geometry-aware-satellite-to-ground-image-synthesis-for-urban-areas\/\"><strong>Geometry-Aware Satellite-to-Ground Image Synthesis for Urban Areas\u00a0- #87<\/strong><\/a>\r\nXiaohu\u00a0Lu,\u00a0Zuoyue\u00a0Li,\u00a0Zhaopeng\u00a0Cui, Martin R. Oswald,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a>,\u00a0Rongjun\u00a0Qin\r\n<a href=\"https:\/\/youtu.be\/s2_EpPpKuAE\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/weakly-supervised-action-localization-by-generative-attention-modeling\/\"><strong>Weakly-Supervised Action Localization by Generative Attention Modeling\u00a0- #102<\/strong><\/a>\r\n<b>Baifeng<\/b><b>\u00a0Shi<\/b>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/qid\/\">Qi Dai<\/a>,\u00a0Yadong\u00a0Mu,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\">Jingdong\u00a0Wang<\/a>\r\n<a href=\"https:\/\/youtu.be\/FVWgUUicz_c\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/semantics-guided-neural-networks-for-efficient-skeleton-based-human-action-recognition\/\"><strong>Semantics-Guided Neural Networks for Efficient Skeleton-Based Human Action Recognition\u00a0- #112<\/strong><\/a>\r\nPengfei\u00a0Zhang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/culan\/\">Cuiling Lan<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<\/a>,\u00a0Junliang\u00a0Xing,\u00a0Jianru\u00a0Xue, Nanning Zheng\r\n<a href=\"https:\/\/youtu.be\/tLNb9oH5NBw\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\nPoster 1.2 \u2013 3D From Multiview and Sensors; Computational Photography; Efficient Training and Inference Methods for Networks | 12:00 \u2013 14:00 PDT\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/dist-rendering-deep-implicit-signed-distance-function-with-differentiable-sphere-tracing\/\"><strong>DIST: Rendering Deep Implicit Signed Distance Function With Differentiable Sphere Tracing\u00a0- #77<\/strong><\/a>\r\nShaohui\u00a0Liu,\u00a0Yinda\u00a0Zhang,\u00a0Songyou\u00a0Peng,\u00a0Boxin\u00a0Shi,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a>,\u00a0Zhaopeng\u00a0Cui\r\n<a href=\"https:\/\/youtu.be\/Fm7lFQ_F1Ww\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/orientation-regularized-3d-human-pose-estimation\/\"><strong>Fusing Wearable IMUs with Multi-View Images for Human Pose Estimation: A Geometric Approach\u00a0- #95<\/strong><\/a>\r\nZhe Zhang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chnuwa\/\">Chunyu\u00a0Wang<\/a>,\u00a0Wenhu\u00a0Qin,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<\/a>\r\n<a href=\"https:\/\/youtu.be\/07fKdiHkjEE\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/gdls-generalized-pose-and-scale-estimation-given-scale-and-gravity-priors\/\"><strong>gDLS*: Generalized Pose-and-Scale Estimation Given Scale and Gravity Priors\u00a0- #96<\/strong><\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/vifragos\/\">Victor 
Fragoso<\/a>,\u00a0<b>Joseph\u00a0<\/b><b>Degol<\/b>, Gang Hua\r\n<a href=\"https:\/\/youtu.be\/ETydEFZqe9w\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\nPoster 1.3 \u2014 3D From a Single Image and Shape-From-X; 3D From Multiview and Sensors; Image Retrieval; Datasets and Evaluation; Low-Level and Physics-Based Vision | 14:00 \u2013 16:00 PDT\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/style-normalization-and-restitution-for-generalizable-person-re-identification\/\"><strong>Style Normalization and Restitution for Generalizable Person Re-identification\u00a0- #69<\/strong><\/a>\r\nXin\u00a0Jin,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/culan\/\">Cuiling Lan<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<\/a>,\u00a0Zhibo\u00a0Chen, Li Zhang\r\n<a href=\"https:\/\/youtu.be\/7rA6ZgRHv68\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/relation-aware-global-attention-for-person-re-identification\/\"><strong>Relation-aware Global Attention for Person Re-identification\u00a0- #73<\/strong><\/a>\r\nZhizheng\u00a0Zhang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/culan\/\">Cuiling Lan<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<\/a>, Xin\u00a0Jin,\u00a0Zhibo\u00a0Chen\r\n<a href=\"https:\/\/youtu.be\/XxfN3thqgzU\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/single-image-reflection-removal-through-cascaded-refinement\/\"><strong>Single Image Reflection Removal through Cascaded Refinement\u00a0- #110<\/strong><\/a>\r\nChao Li,\u00a0Yixiao\u00a0Yang,\u00a0Kun\u00a0He,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/stevelin\/\">Stephen Lin<\/a>, John Hopcroft\r\n<a href=\"https:\/\/youtu.be\/HjJ9wffM2No\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\nPoster 1.4 \u2014 Scene Analysis and Understanding; Medical, Biological and Cell Microscopy; Transfer\/Low-Shot\/Semi\/Unsupervised Learning | 16:00 \u2013 18:00 PDT\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/unsupervised-instance-segmentation-in-microscopy-images-via-panoptic-domain-adaptation-and-task-re-weighting\/\"><strong>Unsupervised Instance Segmentation in Microscopy Images via Panoptic Domain Adaptation and Task Re-Weighting\u00a0- #55<\/strong><\/a>\r\nDongnan\u00a0Liu,\u00a0Donghao\u00a0Zhang, Yang Song, Fan Zhang, Lauren O\u2019Donnell, Heng Huang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/meic\/\">Mei Chen<\/a>,\u00a0Weidong Cai\r\n<a href=\"https:\/\/youtu.be\/xh5ftH8-Fc0\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/reliable-weighted-optimal-transport-for-unsupervised-domain-adaptation\/\"><strong>Reliable Weighted Optimal Transport for Unsupervised Domain Adaptation\u00a0- #70<\/strong><\/a>\r\nRenjun\u00a0Xu,\u00a0Pelen\u00a0Liu,\u00a0Liyan\u00a0Wang, Chao Chen,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jindwang\/\">Jindong Wang<\/a>\r\n<a href=\"https:\/\/youtu.be\/PDefvHcd3Hs\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\n<h2>Wednesday, June 17<\/h2>\r\nPoster 2.1 - 3D From Multiview and Sensors; Face, Gesture, and 
Body Pose; Image and Video Synthesis | 10:00 \u2013 12:00 PDT\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/higherhrnet-scale-aware-representation-learning-for-bottom-up-human-pose-estimation\/\"><strong>HigherHRNet: Scale-Aware Representation Learning for Bottom-Up Human Pose Estimation\u00a0- #53<\/strong><\/a>\r\nBowen Cheng, Bin Xiao,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\">Jingdong\u00a0Wang<\/a>,\u00a0Honghui\u00a0Shi, Thomas Huang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/leizhang\/\">Lei Zhang<\/a>\r\n<a href=\"https:\/\/youtu.be\/n826oXKp5io\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/learning-texture-transformer-network-for-image-super-resolution\/\"><strong>Learning Texture Transformer Network for Image Super-Resolution\u00a0- #93<\/strong><\/a>\r\nFuzhi\u00a0Yang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/huayan\/\">Huan Yang<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jianf\/\">Jianlong\u00a0Fu<\/a>,\u00a0Hongtao\u00a0Lu,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/bainguo\/\">Baining\u00a0Guo<\/a>\r\n<a href=\"https:\/\/youtu.be\/7PlN9q3qQP8\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/deep-shutter-unrolling-network\/\"><strong>Deep Shutter Unrolling Network\u00a0- #108<\/strong><\/a>\r\nPeidong\u00a0Liu,\u00a0Zhaopeng\u00a0Cui, Viktor Larsson,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a>\r\n<a href=\"https:\/\/youtu.be\/0-756nVAj2g\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\nPoster 2.2 \u2013 Face, Gesture, and Body Pose; Motion and Tracking; Representation Learning | 12:00 \u2013 14:00 PDT\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/a-transductive-approach-for-video-object-segmentation\/\"><strong>A\u00a0Transductive\u00a0Approach for Video Object Segmentation\u00a0- #84<\/strong><\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wuzhiron\/\">Zhirong\u00a0Wu<\/a>,\u00a0Yizhuo\u00a0Zhang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hopeng\/\">Houwen Peng<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/stevelin\/\">Stephen Lin<\/a>\r\n<a href=\"https:\/\/youtu.be\/N3upnIgUg-I\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\nPoster 2.3 - Face, Gesture, and Body Pose; Motion and Tracking; Image and Video Synthesis; Nearal Generative Models; Optimization and Learning Methods | 14:00 \u2013 16:00 PDT\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/deep-3d-portrait-from-a-single-image\/\"><strong>Deep 3D Portrait from a Single Image\u00a0- #36<\/strong><\/a>\r\nSicheng\u00a0Xu,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jiaoyan\/\">Jiaolong\u00a0Yang<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/doch\/\">Dong Chen<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/fangwen\/\">Fang Wen<\/a>, Yu Deng,\u00a0Yunde\u00a0Jia,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xtong\/\">Xin Tong<\/a>\r\n<a href=\"https:\/\/youtu.be\/ex0VWotphy4\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/bachgan-high-resolution-image-synthesis-from-salient-object-layout\/\"><strong>BachGAN: High-Resolution Image Synthesis from Salient Object Layout\u00a0- #102<\/strong><\/a>\r\nYandong\u00a0Li,\u00a0<b>Yu Cheng<\/b>,\u00a0<b>Zhe Gan,<\/b>\u00a0Licheng\u00a0Yu,\u00a0Liqiang\u00a0Wang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingjl\/\">Jingjing\u00a0Liu<\/a>\r\n<a href=\"https:\/\/youtu.be\/AksJoLQl21k\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\n<h2>Thursday, June 18<\/h2>\r\nPoster 3.1 \u2014 Recognition (Detection, Categorization); Video Analysis and Understanding; Vision + Language | 9:00 \u2013 11:00 PDT\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/rethinking-classification-and-localization-for-object-detection\/\"><strong>Rethinking Classification and Localization for Object Detection\u00a0- #49<\/strong><\/a>\r\nYue Wu,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yiche\/\">Yinpeng\u00a0Chen<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\">Lu Yuan<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zliu\/\">Zicheng\u00a0Liu<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/lijuanw\/\">Lijuan\u00a0Wang<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hongzl\/\">Hongzhi Li<\/a>,\u00a0Yun Fu\r\n<a href=\"https:\/\/youtu.be\/8EGKyeAZww4\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/memory-enhanced-global-local-aggregation-for-video-object-detection\/\"><strong>Memory Enhanced Global-Local Aggregation for Video Object Detection\u00a0- #64<\/strong><\/a>\r\nYihong\u00a0Chen,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yuecao\/\">Yue Cao<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hanhu\/\">Han Hu<\/a>,\u00a0Liwei\u00a0Wang\r\n<a href=\"https:\/\/youtu.be\/Dr2uaeJJAms\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/multi-granularity-reference-aided-attentive-feature-aggregation-for-video-based-person-re-identificationmulti-granularity-reference-aided-attentive-feature-aggregation-for-video-based-person-re-identi\/\"><strong>Multi-Granularity Reference-Aided Attentive Feature Aggregation for Video-based Person Re-\u00a0identification\u00a0- #71<\/strong><\/a>\r\nZhizheng\u00a0Zhang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/culan\/\">Cuiling Lan<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<\/a>,\u00a0Zhibo\u00a0Chen\r\n<a href=\"https:\/\/youtu.be\/Zt5DShb7Pok\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/violin-a-large-scale-dataset-for-video-and-language-inference\/\"><strong>Violin: A Large-Scale Dataset for Video-and-Language Inference\u00a0- #120<\/strong><\/a>\r\nJingzhou\u00a0Liu,\u00a0Wenhu\u00a0Chen,\u00a0<b>Yu Cheng<\/b>,\u00a0<b>Zhe Gan<\/b>,\u00a0Licheng\u00a0Yu,\u00a0Yiming\u00a0Yang,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingjl\/\">Jingjing\u00a0Liu<\/a>\r\n<a href=\"https:\/\/youtu.be\/tWZQ-OVrIUs\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\nPoster 3.3 
\u2014 Recognition (Detection, Categorization); Segmentation, Grouping and Shape; Vision Applications and Systems; Vision &amp; Other Modalities; Transfer\/Low-Shot\/Semi\/Unsupervised Learning | 15:00 \u2013 17:00 PDT\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/towards-learning-a-generic-agent-for-vision-and-language-navigation-via-pre-training\/\"><strong>Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-Training\u00a0- #96<\/strong><\/a>\r\nWeituo\u00a0Hao,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chunyl\/\">Chunyuan Li<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xiul\/\">Xiujun\u00a0Li<\/a>, Lawrence Carin Duke,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jfgao\/\">Jianfeng Gao<\/a>\r\n<a href=\"https:\/\/youtu.be\/Cif83ooccPs\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/mmtm-multimodal-transfer-module-for-cnn-fusion\/\"><strong>MMTM: Multimodal Transfer Module for CNN Fusion\u00a0- #111<\/strong><\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hava\/\">Hamid\u00a0Vaezi\u00a0Joze<\/a>,\u00a0Amirreza\u00a0Shaban, Michael\u00a0Iuzzolino,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kazukoi\/\">Kazuhito\u00a0Koishida<\/a>\r\n<a href=\"https:\/\/youtu.be\/4aMetONExuc\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<hr \/>\r\n\r\nPoster 3.4 - Miscellaneous | 17:00 \u2013 19:00 PDT\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/density-aware-graph-for-deep-semi-supervised-visual-recognition\/\"><strong>Density-Aware Graph for Deep Semi-Supervised Visual Recognition\u00a0- #9<\/strong><\/a>\r\nSuichan\u00a0Li, Bin Liu,\u00a0<b>Dongdong Chen<\/b>, Qi Chu,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\">Lu Yuan<\/a>,\u00a0Nenghai\u00a0Yu\r\n<a href=\"https:\/\/youtu.be\/R7KH2dbVsI8\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/pfcnn-convolutional-neural-networks-on-3d-surfaces-using-parallel-frames\/\"><strong>PFCNN: Convolutional Neural Networks on 3D Surfaces Using Parallel Frames\u00a0- #27<\/strong><\/a>\r\nYuqi\u00a0Yang,\u00a0Shilin\u00a0Liu,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/haopan\/\">Hao Pan<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yangliu\/\">Yang Liu<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xtong\/\">Xin Tong<\/a>\r\n<a href=\"https:\/\/youtu.be\/ArXvN3V5WlI\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/metafuse-a-pre-trained-fusion-model-for-human-pose-estimation\/\"><strong>MetaFuse: A Pre-trained Fusion Model for Human Pose Estimation\u00a0- #38<\/strong><\/a>\r\nRongchang\u00a0Xie,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chnuwa\/\">Chunyu\u00a0Wang<\/a>,\u00a0Yizhou\u00a0Wang\r\n<a href=\"https:\/\/youtu.be\/cjT2MrAW1KM\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>"},{"id":3,"name":"Workshops","content":"<h2>June 14 | Full Day<\/h2>\r\n<a href=\"http:\/\/www.es.ele.tue.nl\/cvpm20\/\" target=\"_blank\" rel=\"noopener\"><strong>International Workshop and Challenge on Computer Vision for Physiological 
Measurement<\/strong><\/a>\r\nCo-Organizer:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/damcduff\/\">Daniel McDuff<\/a>\r\n\r\n<a href=\"https:\/\/sites.google.com\/view\/vislocslamcvpr2020\/home\" target=\"_blank\" rel=\"noopener\"><strong>Joint workshop on Long Term Visual Localization, Visual Odometry and Geometric and Learning-based SLAM<\/strong><\/a>\r\nCo-Organizers:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a>,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/joschonb\/\">Johannes L.\u00a0Sch\u00f6nberger<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/paspecia\/\">Pablo\u00a0Speciale<\/a>\r\n\r\n<a href=\"https:\/\/www.agriculture-vision.com\/home\" target=\"_blank\" rel=\"noopener\"><strong>The 1st International Workshop on Agriculture-Vision: Challenges &amp; Opportunities for Computer Vision in Agriculture<\/strong><\/a>\r\nInvited speakers and panelists: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ranveer\/\">Ranveer Chandra<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sudipsin\/\">Sudipta Sinha<\/a>\r\n\r\n<a href=\"https:\/\/vizwiz.org\/workshops\/2020-workshop\/\" target=\"_blank\" rel=\"noopener\"><strong>VizWiz\u00a0Grand Challenge: Describing Images from Blind People<\/strong><\/a>\r\nCo-Organizers:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/cutrell\/\">Ed Cutrell<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/merrie\/\">Meredith Morris<\/a>\r\nInvited Speaker:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/merrie\/\">Meredith Morris<\/a>\r\n<a href=\"https:\/\/youtu.be\/Dfi4TqIjUWU\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n<a href=\"https:\/\/youtu.be\/IkZToxOs8N4\" target=\"_blank\" rel=\"noopener\">Speaker panel video &gt;<\/a>\r\n<a href=\"https:\/\/youtu.be\/f613diLbVAc\" target=\"_blank\" rel=\"noopener\">Panel discussion video &gt;<\/a>\r\n\r\n<a href=\"https:\/\/fadetrcv.github.io\/\" target=\"_blank\" rel=\"noopener\"><strong>Workshop on Fair, Data-Efficient and Trusted Computer Vision<\/strong><\/a>\r\nInvited Speaker:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dedey\/\">Debadeepta Dey<\/a>\r\n\r\n<hr \/>\r\n\r\n<h2>June 14 | Afternoon<\/h2>\r\n<a href=\"https:\/\/sites.google.com\/view\/wicvworkshop-cvpr2020\/\" target=\"_blank\" rel=\"noopener\"><strong>Women in Computer Vision (WiCV)<\/strong><\/a>\r\nCo-Organizer:\u00a0<b>Azadeh<\/b><b>\u00a0<\/b><b>Mobasher<\/b>\r\n\r\n<hr \/>\r\n\r\n<h2>June 15 | Full Day<\/h2>\r\n<a href=\"https:\/\/scene-understanding.com\/\" target=\"_blank\" rel=\"noopener\"><strong>3D Scene Understanding for Vision, Graphics, and Robotics<\/strong><\/a>\r\nInvited Speaker:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a>\r\n\r\n<a href=\"https:\/\/mixedreality.cs.cornell.edu\/workshop\/2020\" target=\"_blank\" rel=\"noopener\"><strong>Fourth Workshop on Computer Vision for AR\/VR<\/strong><\/a>\r\nInvited Speaker: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jamiesho\/\">Jamie Shotton<\/a>\r\n<a href=\"https:\/\/youtu.be\/G4aZZhWmm4k\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<a href=\"https:\/\/data.vision.ee.ethz.ch\/cvl\/ntire20\/\" target=\"_blank\" rel=\"noopener\"><strong>New Trends in Image Restoration and Enhancement Workshop and Challenges 
(NTIRE)<\/strong><\/a>\r\nProgram Committee Members:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/stevelin\/\">Stephen Lin<\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<\/a>\r\n\r\n<hr \/>\r\n\r\n<h2>June 19 | Morning<\/h2>\r\n<a href=\"https:\/\/image-matching-workshop.github.io\/\" target=\"_blank\" rel=\"noopener\"><strong>Image Matching: Local Features and Beyond<\/strong><\/a>\r\nCo-Organizer:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/joschonb\/\">Johannes L.\u00a0Sch\u00f6nberger<\/a>\r\n\r\n<hr \/>\r\n\r\n<h2>June 19 | Full Day<\/h2>\r\n<a href=\"http:\/\/vcipl-okstate.org\/pbvs\/20\/\" target=\"_blank\" rel=\"noopener\"><strong>16th\u00a0IEEE Workshop on Perception Beyond the Visible Spectrum<\/strong><\/a>\r\nProgram Committee Member:\u00a0<b>Katsu Ikeuchi<\/b>\r\n\r\n<a href=\"https:\/\/sites.google.com\/view\/luv2020\" target=\"_blank\" rel=\"noopener\"><strong>Learning From Unlabeled Videos<\/strong><\/a>\r\nCo-Organizer:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yalesong\/\">Yale Song<\/a>\r\n\r\n<a href=\"https:\/\/cvmi2020.github.io\/\" target=\"_blank\" rel=\"noopener\"><strong>Computer Vision for Microscopy Image Analysis<\/strong><\/a>\r\nChair:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/meic\/\">Mei Chen<\/a>\r\nProgram Committee Members:\u00a0<b>Hao Jiang<\/b>,\u00a0<b>Gaurav\u00a0Mittal<\/b>, <b>Xi Yin<\/b>\r\n\r\n<a href=\"https:\/\/sites.google.com\/view\/geometry-learning-foundation\/\" target=\"_blank\" rel=\"noopener\"><strong>First Workshop on Deep Learning Foundations of Geometric Shape Modeling and Reconstruction<\/strong><\/a>\r\nCo-Organizer:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yangliu\/\">Yang Liu<\/a>\r\n\r\n<strong>Extreme Classification in Computer Vision<\/strong>\r\nCo-Organizer:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/manik\/\">Manik Varma<\/a>\r\n\r\n<a href=\"https:\/\/languageandvision.github.io\/\" target=\"_blank\" rel=\"noopener\"><strong>Language &amp; Vision with Applications to Video Understanding<\/strong><\/a>\r\nCo-Organizer:\u00a0<b>Licheng\u00a0Yu<\/b>\r\n\r\n<a href=\"http:\/\/cvpr2020.ug2challenge.org\/\" target=\"_blank\" rel=\"noopener\"><strong>The 3rd Workshop and Prize Challenge: Bridging the Gap between Computational Photography and Visual Recognition (UG2+) in conjunction with IEEE CVPR 2020<\/strong><\/a>\r\nInvited Speaker:\u00a0<b>Xi Yin<\/b>\r\n\r\n<a href=\"https:\/\/www.learning-with-limited-labels.com\/\" target=\"_blank\" rel=\"noopener\"><strong>Visual Learning with Limited Labels<\/strong><\/a>\r\nAccepted Paper: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/epillid-dataset-a-low-shot-fine-grained-benchmark-for-pill-identification\/\">ePillID Dataset: A Low-Shot Fine-Grained Benchmark for Pill Identification<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/naotous\/\">Naoto Usuyama<\/a>, Natalia Larios Delgado, Amanda K.
Hall, Jessica Lundin\r\n<a href=\"https:\/\/youtu.be\/p-Nn0RgwudE\" target=\"_blank\" rel=\"noopener\">Video &gt;<\/a>\r\n\r\n<a href=\"https:\/\/mul-workshop.github.io\/\" target=\"_blank\" rel=\"noopener\"><strong>Workshop on Multimodal Learning<\/strong><\/a>\r\nInvited Speaker:\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awf\/\">Andrew Fitzgibbon<\/a>"},{"id":4,"name":"Tutorials","content":"<h2>Monday, June 15<\/h2>\r\n13:15 \u2013 17:00 PDT\r\n<a href=\"https:\/\/rohit497.github.io\/Recent-Advances-in-Vision-and-Language-Research\/\" target=\"_blank\" rel=\"noopener\"><strong>Recent Advances in Vision-and-Language Research<\/strong><\/a>\r\nCo-Organizers: <strong>Zhe Gan<\/strong>, <strong>Yu Cheng<\/strong>, <strong>Luowei Zhou<\/strong>, <strong>Linjie Li<\/strong>, <strong>Yen-Chun Chen<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingjl\/\">JJ Liu<\/a>"},{"id":5,"name":"#AlchemyFriends","content":"<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/12\/Alchemy-with-Friends-Print-at-Home.pdf\" target=\"_blank\" rel=\"noopener\">Print your own copy<\/a> of Alchemy with Friends to play at home.\r\n\r\nShare your favorite card combinations using #AlchemyFriends on Twitter, Facebook, or Instagram. We now have three versions of the game available!\r\n\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/09\/MSR_Alchemy_1400x788.gif\"><img class=\"alignnone size-full wp-image-626472\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/09\/MSR_Alchemy_1400x788.gif\" alt=\"Animated illustration of how to play #AlchemyFriends\" width=\"1400\" height=\"788\" \/><\/a>\r\n<div>[msr-button text=\"Alchemy with Friends Original (must have this deck)\" url=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2019\/12\/Alchemy-with-Friends-Print-at-Home.pdf\" new-window=\"true\" ]<\/div>\r\n<div>[msr-button text=\"Alchemy with Friends ML Expansion Pack\" url=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Alchemy-with-Friends-ML-Expansion-Pack.pdf\" new-window=\"true\" ]<\/div>\r\n<div>[msr-button text=\"Alchemy with Friends CV Expansion Pack\" url=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/06\/Alchemy-with-Friends-CV-Expansion-Pack.pdf\" new-window=\"true\" ]<\/div>\r\n<div style=\"height: 20px\"><\/div>\r\n<ul>\r\n \t<li><a href=\"https:\/\/www.facebook.com\/microsoftresearch\/\">Facebook<\/a>: MicrosoftResearch<\/li>\r\n \t<li><a href=\"https:\/\/x.com\/MSFTResearch\">Twitter<\/a>: @MSFTResearch<\/li>\r\n \t<li><a href=\"https:\/\/www.youtube.com\/user\/MicrosoftResearch\">YouTube<\/a>: microsoftresearch<\/li>\r\n \t<li><a href=\"https:\/\/aka.ms\/LinkedInMSR\">LinkedIn<\/a>: aka.ms\/LinkedInMSR<\/li>\r\n \t<li><a href=\"https:\/\/www.instagram.com\/msft_research\/\">Instagram<\/a>: @msft_research<\/li>\r\n<\/ul>"}],"msr_startdate":"2020-06-14","msr_enddate":"2020-06-19","msr_event_time":"","msr_location":"Virtual\/Online","msr_event_link":"","msr_event_recording_link":"","msr_startdate_formatted":"June 14, 2020","msr_register_text":"Watch now","msr_cta_link":"","msr_cta_text":"","msr_cta_bi_name":""}
loading=\"lazy\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/02\/Seattle-960x540.jpg 960w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/02\/Seattle-1066x600.jpg 1066w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/02\/Seattle-655x368.jpg 655w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/02\/Seattle-343x193.jpg 343w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/02\/Seattle-640x360.jpg 640w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2020\/02\/Seattle-1280x720.jpg 1280w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","event_excerpt":"Microsoft is proud to be a Diamond Sponsor of CVPR 2020. Make sure to catch Satya Nadella\u2019s Fireside Chat at 9:00 PDT on Tuesday, June 16. Stop by our virtual booth to chat with our experts to learn more about our research and open opportunities.","msr_research_lab":[199565],"related-researchers":[],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[283244],"related-projects":[638784,708502],"related-opportunities":[],"related-publications":[639192,666816,669567,759652,759739],"related-videos":[670266],"related-posts":[666120,668466,669336],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/661083","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":4,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/661083\/revisions"}],"predecessor-version":[{"id":1146962,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/661083\/revisions\/1146962"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/635493"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=661083"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=661083"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=661083"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=661083"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=661083"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=661083"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=661083"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=661083"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=661083"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}