{"id":488849,"date":"2018-06-01T12:30:00","date_gmt":"2018-06-01T19:30:00","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&#038;p=488849"},"modified":"2025-08-06T11:57:09","modified_gmt":"2025-08-06T18:57:09","slug":"microsoft-cvpr-2018","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/microsoft-cvpr-2018\/","title":{"rendered":"Microsoft @ CVPR 2018"},"content":{"rendered":"\n\n<p><strong>Venue:<\/strong> <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.visitsaltlake.com\/salt-palace-convention-center\/\" target=\"_blank\" rel=\"noopener\">Calvin L. Rampton Salt Palace Convention Center<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><strong>Website:<\/strong> <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/cvpr2018.thecvf.com\/\" target=\"_blank\" rel=\"noopener\">CVPR 2018<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p>Microsoft is proud to be a diamond sponsor of the Conference on Computer Vision and Pattern Recognition (<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/cvpr2018.thecvf.com\/\" target=\"_blank\" rel=\"noopener\">CVPR<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>) June 18 \u2013 22 in Salt Lake City, Utah. 
Please visit us at booth 537 to chat with our experts, see demos of our latest research, and find out about career opportunities with Microsoft.<\/p>\n<h2>Program Committee members<\/h2>\n<p>Marc Pollefeys \u2013 Robust Vision Challenge Organizer<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sbkang\/\">Sing Bing Kang<\/a>, Stephen Lin, Sebastian Nowozin, and Wenjun Zeng \u2013\u00a0NTIRE 2018 Program Committee<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\">Gang Hua<\/a>\u00a0\u2013 PBVS 2018 Program Committee<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/damcduff\/\">Daniel McDuff<\/a>\u00a0\u2013 CVPM 2018 Program Co-Chair<br \/>\nTimnit Gebru \u2013 CV-COPS 2018 Program Committee<br \/>\nZhengyou Zhang \u2013 Sight and Sound Workshop Organizer<\/p>\n<h2>Tutorials<\/h2>\n<h4><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/docs.microsoft.com\/en-us\/windows\/mixed-reality\/cvpr-2018\" rel=\"noopener\" target=\"_blank\">New from HoloLens: Research Mode<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nTuesday | 1:30 \u2013 2:50 | Room 151 &#8211; ABCG<\/h4>\n<p style=\"padding-left: 30px\"><strong>Marc Pollefeys<\/strong>, <strong>Pawel Olszta<\/strong><\/p>\n<h4><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.here.com\/en\/secvs-cvpr-2018\" rel=\"noopener\" target=\"_blank\">Software Engineering in Computer Vision Systems<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nFriday | 8:30 \u2013 12:30 | Ballroom C<\/h4>\n<p style=\"padding-left: 30px\">David Doria, <strong>Tim Franklin<\/strong>, Matt Turek, Jan Ernst, Wei Xia, Stephen Miller, Ben Kadlec<\/p>\n<h2>Workshops<\/h2>\n<h4><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/sites.google.com\/view\/fgvc5\/home\" 
rel=\"noopener\" target=\"_blank\">The Fifth Workshop on Fine-Grained Visual Categorization<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nFriday | 9:00 \u2013 5:00 | Room 151 A-C<\/h4>\n<p style=\"padding-left: 30px\">Why FGVC5 Folks Should be Interested in the Microsoft AI for Earth Program<br \/>\n9:45 \u2013 10:00<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dan\/\">Dan Morris<\/a><\/p>\n<h2>Microsoft attendees<\/h2>\n<p>Aijun Bai<br \/>\nLuca Ballan<br \/>\nMi\u0107o Banovi\u0107<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/febogo\/\">Federica Bogo<\/a><br \/>\nBogdan Burlacu<br \/>\nNick Burton<br \/>\nIshani Chakraborty<br \/>\nTemo Chalasani<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/doch\/\">Dong Chen<\/a><br \/>\nXi Chen<br \/>\nArti Chhajta<br \/>\nJohn Corring<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jifdai\/\">Jifeng Dai<\/a><br \/>\nQi Dai<br \/>\nMandar Dixit<br \/>\nLiang Du<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nanduan\/\">Nan Duan<\/a><br \/>\nXin Duan<br \/>\nGoran Dubajic<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awf\/\">Andrew Fitzgibbon<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dinei\/\">Dinei Florencio<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jianf\/\">Jianlong Fu<\/a><br \/>\nSean Goldberg<br \/>\nYandong Guo<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hanhu\/\">Han Hu<\/a><br \/>\nHoudong Hu<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\">Gang Hua<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/qihua\/\">Qiuyuan Huang<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sbkang\/\">Sing Bing Kang<\/a><br \/>\nNikolaos Karianakis<br \/>\n<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nkuno\/\">Noboru Kuno<\/a><br \/>\nNabil Lathiff<br \/>\nKuang-Huei Lee<br \/>\nXing Li<br \/>\nOlga Liakhovich<br \/>\nTongliang Liao<br \/>\nStephen Lin<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zliu\/\">Zicheng Liu<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yanlu\/\">Yan Lu<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/cluo\/\">Chong Luo<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/damcduff\/\">Daniel McDuff<\/a><br \/>\nMeenaz Merchant<br \/>\nLeonardo Nunes<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/taoqin\/\">Tao Qin<\/a><br \/>\nArun Sacheti<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/pablosa\/\">Pablo Sala<\/a><br \/>\nHarpreet Sawhney<br \/>\nPramod Sharma<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yeshen\/\">Yelong Shen<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jamiesho\/\">Jamie Shotton<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yalesong\/\">Yale Song<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/baochens\/\">Baochen Sun<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xysun\/\">Xiaoyan Sun<\/a><br \/>\nRavi Theja Yada<br \/>\nAli Osman Ulusoy<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hava\/\">Hamidreza Vaezi Joze<\/a><br \/>\nAlon Vinnikov<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/baoyuanw\/\">Baoyuan Wang<\/a><br \/>\nJianfeng Wang<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\">Jingdong Wang<\/a><br \/>\n<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/lijuanw\/\">Lijuan Wang<\/a><br \/>\nZhe Wang<br \/>\nZhirong Wu<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jiaoyan\/\">Jiaolong Yang<\/a><br \/>\nTing Yao<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sayoon\/\">Sang Ho Yoon<\/a><br \/>\nQuanzeng You<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chazhang\/\">Cha Zhang<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/leizhang\/\">Lei Zhang<\/a><br \/>\nMingxue Zhang<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/penzhan\/\">Pengchuan Zhang<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tinzhan\/\">Ting Zhang<\/a><br \/>\nYatao Zhong<br \/>\nXiaoyong Zhu<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<h4>Hybrid Camera Pose Estimation<\/h4>\n<p>Tuesday | 8:50-10:10 | Room 255<br \/>\nFederico Camposeco, Andrea Cohen, <strong>Marc Pollefeys<\/strong>, Torsten Sattler<\/p>\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/relation-networks-object-detection\/\">Relation Networks for Object Detection<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n<p>Wednesday | 8:30-10:10 | Ballroom<br \/>\nHan Hu, <strong>Jiayuan Gu<\/strong>, <strong>Zheng Zhang<\/strong>, <strong>Jifeng Dai<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yichenw\/\"><strong>Yichen Wei<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/raynet-learning-volumetric-3d-reconstruction-ray-potentials\/\">RayNet: Learning Volumetric 3D Reconstruction With Ray Potentials<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n<p>Wednesday | 8:30-10:10 | Room 255<br \/>\nDespoina Paschalidou, <strong>Ali Osman 
Ulusoy<\/strong>, Carolin Schmitt, Luc Van Gool, Andreas Geiger<\/p>\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/automatic-3d-indoor-scene-modeling-single-panorama\/\">Automatic 3D Indoor Scene Modeling From Single Panorama<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n<p>Wednesday | 8:30-10:10 | Room 255<br \/>\nYang Yang, Shi Jin, Ruiyang Liu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sbkang\/\"><strong>Sing Bing Kang<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Jingyi Yu<\/p>\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/bottom-top-attention-image-captioning-visual-question-answering\/\">Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n<p>Wednesday | 2:50-4:30 | Room 155<br \/>\nPeter Anderson, Xiaodong He, <strong>Chris Buehler<\/strong>, Damien Teney, Mark Johnson, Stephen Gould, <strong>Lei Zhang<\/strong><\/p>\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/visual-question-generation-dual-task-visual-question-answering\/\">Visual Question Generation as Dual Task of Visual Question Answering<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n<p>Wednesday | 2:50-4:30 | Room 155<br \/>\nYikang Li, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nanduan\/\"><strong>Nan Duan<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Bolei Zhou, Xiao Chu, Wanli Ouyang, Xiaogang Wang, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mingzhou\/\"><strong>Ming Zhou<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/towards-high-performance-video-object-detection\/\">Towards High Performance Video Object Detection<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n<p>Thursday 
| 8:30-10:10 | Ballroom<br \/>\nXizhou Zhu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jifdai\/\"><strong>Jifeng Dai<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\"><strong>Lu Yuan<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yichenw\/\"><strong>Yichen Wei<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<h4>Consensus Maximization for Semantic Region Correspondences<\/h4>\n<p>Thursday | 8:30-10:10 | Room 155<br \/>\nPablo Speciale, Danda P. Paudel, Martin R. Oswald, Hayko Riemenschneider, Luc Van Gool, <strong>Marc Pollefeys<\/strong><\/p>\n<h4>InLoc: Indoor Visual Localization With Dense Matching and View Synthesis<\/h4>\n<p>Thursday | 8:30-10:10 | Ballroom<br \/>\nHajime Taira, Masatoshi Okutomi, Torsten Sattler, Mircea Cimpoi, <strong>Marc Pollefeys<\/strong>, Josef Sivic, Tomas Pajdla, Akihiko Torii<\/p>\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/language-based-image-editing-with-recurrent-attentive-models\/\">Language-Based Image Editing With Recurrent Attentive Models<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n<p>Thursday | 12:50-2:30 | Room 255<br \/>\nJianbo Chen, <strong>Yelong Shen<\/strong>, <strong>Jianfeng Gao<\/strong>, <strong>Jingjing Liu<\/strong>, <strong>Xiaodong Liu<\/strong><\/p>\n<h4>Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions<\/h4>\n<p>Thursday | 12:50-2:30 | Room 155<br \/>\nTorsten Sattler, Will Maddern, Carl Toft, Akihiko Torii, Lars Hammarstrand, Erik Stenborg, Daniel Safari, Masatoshi Okutomi, <strong>Marc Pollefeys<\/strong>, Josef Sivic, Fredrik Kahl, Tomas Pajdla<\/p>\n<h4>Feature Space Transfer for Data Augmentation<\/h4>\n<p>Thursday | 2:50-4:30 | Room 255<br \/>\nBo Liu, Xudong Wang, <strong>Mandar Dixit<\/strong>, 
Roland Kwitt, and Nuno Vasconcelos<\/p>\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/interleaved-structured-sparse-convolutional-neural-networks\/\">Interleaved Structured Sparse Convolutional Neural Networks<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n<p>Thursday | 2:50-4:30 | Ballroom<br \/>\nGuotian Xie, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\"><strong>Jingdong Wang<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tinzhan\/\"><strong>Ting Zhang<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Jianhuang Lai, Richang Hong, Guo-Jun Qi<\/p>\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/revisiting-deep-intrinsic-image-decompositions\/\">Revisiting Deep Intrinsic Image Decompositions<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n<p>Thursday | 2:50-4:30 | Room 155<br \/>\nQingnan Fan, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jiaoyan\/\"><strong>Jiaolong Yang<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Baoquan Chen, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/davidwip\/\"><strong>David Wipf<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.cc.gatech.edu\/~parikh\/citizenofcvpr\/\" target=\"_blank\" rel=\"noopener\">Good Citizen of CVPR Panel<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p>Friday | 9:30-9:50 | Ballroom E<br \/>\nRights and Obligations (Good review and bad review, constructive criticism)<br \/>\n<strong>Katsu Ikeuchi<\/strong><\/p>\n<p>Friday | 9:50-10:10 
| Ballroom E<br \/>\nHow to create an inclusive and welcoming culture at CVPR and not have a &#8220;clique&#8221; culture<br \/>\n<strong>Timnit Gebru<\/strong><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<h2>Posters<\/h2>\n<p>Tuesday | 10:10-12:30 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/real-time-seamless-single-shot-6d-object-pose-prediction\/\">Real-Time Seamless Single Shot 6D Object Pose Prediction<\/a><br \/>\nBugra Tekin, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sudipsin\/\"><strong>Sudipta Sinha<\/strong><\/a>, Pascal Fua<\/p>\n<p>Tuesday | 10:10-12:30 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/mict-mixed-3d-2d-convolutional-tube-human-action-recognition\/\">MiCT: Mixed 3D\/2D Convolutional Tube for Human Action Recognition<\/a><br \/>\nYizhou Zhou, <strong>Xiaoyan Sun<\/strong>, Zheng-Jun Zha, <strong>Wenjun Zeng<\/strong><\/p>\n<p>Tuesday | 10:10-12:30 | Halls C-E<br \/>\nHybrid Camera Pose Estimation<br \/>\nFederico Camposeco, Andrea Cohen, <strong>Marc Pollefeys<\/strong>, Torsten Sattler<\/p>\n<p>Tuesday | 12:30-2:50 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/global-versus-localized-generative-adversarial-nets\/\">Global Versus Localized Generative Adversarial Nets<\/a><br \/>\nGuo-Jun Qi, Liheng Zhang, Hao Hu, Marzieh Edraki, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\"><strong>Jingdong Wang<\/strong><\/a>, Xian-Sheng Hua<\/p>\n<p>Tuesday | 12:30-2:50 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/high-quality-denoising-dataset-smartphone-cameras\/\">A High-Quality Denoising Dataset for Smartphone Cameras<\/a><br \/>\nAbdelrahman Abdelhamed, <strong>Stephen Lin<\/strong>, Michael S. 
Brown<\/p>\n<p>Tuesday | 12:30-2:50 | Halls C-E<br \/>\nAugmenting Crowd-Sourced 3D Reconstructions Using Semantic Detections<br \/>\nTrue Price, <strong>Johannes L. Sch\u00f6nberger<\/strong>, Zhen Wei, <strong>Marc Pollefeys<\/strong>, Jan-Michael Frahm<\/p>\n<p>Wednesday | 10:10-12:30 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/relation-networks-object-detection\/\">Relation Networks for Object Detection<\/a><br \/>\nHan Hu, <strong>Jiayuan Gu<\/strong>, <strong>Zheng Zhang<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jifdai\/\"><strong>Jifeng Dai<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yichenw\/\"><strong>Yichen Wei<\/strong><\/a><\/p>\n<p>Wednesday | 10:10-12:30 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/raynet-learning-volumetric-3d-reconstruction-ray-potentials\/\">RayNet: Learning Volumetric 3D Reconstruction With Ray Potentials<\/a><br \/>\nDespoina Paschalidou, <strong>Ali Osman Ulusoy<\/strong>, Carolin Schmitt, Luc Van Gool, Andreas Geiger<\/p>\n<p>Wednesday | 10:10-12:30 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/automatic-3d-indoor-scene-modeling-single-panorama\/\">Automatic 3D Indoor Scene Modeling From Single Panorama<\/a><br \/>\nYang Yang, Shi Jin, Ruiyang Liu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sbkang\/\"><strong>Sing Bing Kang<\/strong><\/a>, Jingyi Yu<\/p>\n<p>Wednesday | 10:10-12:30 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/pseudo-mask-augmented-object-detection\/\">Pseudo Mask Augmented Object Detection<\/a><br \/>\nXiangyun Zhao, Shuang Liang, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yichenw\/\"><strong>Yichen Wei<\/strong><\/a><\/p>\n<p>Wednesday | 12:30-2:50 | Halls C-E<br \/>\n<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/twofold-siamese-network-real-time-object-tracking\/\">A Twofold Siamese Network for Real-Time Object Tracking<\/a><br \/>\nAnfeng He, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/cluo\/\"><strong>Chong Luo<\/strong><\/a>, Xinmei Tian, <strong>Wenjun Zeng<\/strong><\/p>\n<p>Wednesday | 12:30-2:50 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/cleannet-transfer-learning-scalable-image-classifier-training-label-noise\/\">CleanNet: Transfer Learning for Scalable Image Classifier Training With Label Noise<\/a><br \/>\n<strong>Kuang-Huei Lee<\/strong>, Xiaodong He, <strong>Lei Zhang<\/strong>, Linjun Yang<\/p>\n<p>Wednesday | 12:30-2:50 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/end-end-convolutional-semantic-embeddings\/\">End-to-End Convolutional Semantic Embeddings<\/a><br \/>\n<strong>Quanzeng You<\/strong>, <strong>Zhengyou Zhang<\/strong>, Jiebo Luo<\/p>\n<p>Wednesday | 12:30-2:50 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/generative-adversarial-learning-towards-fast-weakly-supervised-detection\/\">Generative Adversarial Learning Towards Fast Weakly Supervised Detection<\/a><br \/>\nYunhan Shen, Rongrong Ji, Shengchuan Zhang, Wangmeng Zuo, <strong>Yan Wang<\/strong><\/p>\n<p>Wednesday | 4:30-6:30 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/bottom-top-attention-image-captioning-visual-question-answering\/\">Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering<\/a><br \/>\nPeter Anderson, Xiaodong He, <strong>Chris Buehler<\/strong>, Damien Teney, Mark Johnson, Stephen Gould, <strong>Lei Zhang<\/strong><\/p>\n<p>Wednesday | 4:30-6:30 | Halls C-E<br \/>\n<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/visual-question-generation-dual-task-visual-question-answering\/\">Visual Question Generation as Dual Task of Visual Question Answering<\/a><br \/>\nYikang Li, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nanduan\/\"><strong>Nan Duan<\/strong><\/a>, Bolei Zhou, Xiao Chu, Wanli Ouyang, Xiaogang Wang, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mingzhou\/\"><strong>Ming Zhou<\/strong><\/a><\/p>\n<p>Wednesday | 4:30-6:30 | Halls C-E<br \/>\nSemantic Visual Localization<br \/>\n<strong>Johannes L. Sch\u00f6nberger<\/strong>, <strong>Marc Pollefeys<\/strong>, Andreas Geiger, Torsten Sattler<\/p>\n<p>Wednesday | 4:30-6:30 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/stereoscopic-neural-style-transfer\/\">Stereoscopic Neural Style Transfer<\/a><br \/>\nDongdong Chen, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\"><strong>Lu Yuan<\/strong><\/a>, <strong>Jing Liao<\/strong>, Nenghai Yu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a><\/p>\n<p>Wednesday | 4:30-6:30 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/towards-open-set-identity-preserving-face-synthesis\/\">Towards Open-Set Identity Preserving Face Synthesis<\/a><br \/>\nJianmin Bao, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/doch\/\"><strong>Dong Chen<\/strong><\/a>, <strong>Fang Wen<\/strong>, Houqiang Li, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a><\/p>\n<p>Wednesday | 4:30-6:30 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/weakly-supervised-semantic-segmentation-network-deep-seeded-region-growing\/\">Weakly-Supervised Semantic Segmentation Network With Deep Seeded Region Growing<\/a><br \/>\nZilong Huang, 
Xinggang Wang, Jiasi Wang, Wenyu Liu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\"><strong>Jingdong Wang<\/strong><\/a><\/p>\n<p>Thursday | 10:10-12:30 | Halls D-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/towards-high-performance-video-object-detection\/\">Towards High Performance Video Object Detection<\/a><br \/>\nXizhou Zhu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jifdai\/\"><strong>Jifeng Dai<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\"><strong>Lu Yuan<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yichenw\/\"><strong>Yichen Wei<\/strong><\/a><\/p>\n<p>Thursday | 10:10-12:30 | Halls D-E<br \/>\nInLoc: Indoor Visual Localization With Dense Matching and View Synthesis<br \/>\nHajime Taira, Masatoshi Okutomi, Torsten Sattler, Mircea Cimpoi, <strong>Marc Pollefeys<\/strong>, Josef Sivic, Tomas Pajdla, Akihiko Torii<\/p>\n<p>Thursday | 10:10-12:30 | Halls D-E<br \/>\nConsensus Maximization for Semantic Region Correspondences<br \/>\nPablo Speciale, Danda P. Paudel, Martin R. 
Oswald, Hayko Riemenschneider, Luc Van Gool, <strong>Marc Pollefeys<\/strong><\/p>\n<p>Thursday | 10:10-12:30 | Halls D-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/arbitrary-style-transfer-deep-feature-reshuffle\/\">Arbitrary Style Transfer With Deep Feature Reshuffle<\/a><br \/>\nShuyang Gu, Congliang Chen, <strong>Jing Liao<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\"><strong>Lu Yuan<\/strong><\/a><\/p>\n<p>Thursday | 4:30-6:30 | Halls D-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/language-based-image-editing-with-recurrent-attentive-models\/\">Language-Based Image Editing With Recurrent Attentive Models<\/a><br \/>\nJianbo Chen, <strong>Yelong Shen<\/strong>, <strong>Jianfeng Gao<\/strong>, <strong>Jingjing Liu<\/strong>, <strong>Xiaodong Liu<\/strong><\/p>\n<p>Thursday | 4:30-6:30 | Halls D-E<br \/>\nBenchmarking 6DOF Outdoor Visual Localization in Changing Conditions<br \/>\nTorsten Sattler, Will Maddern, Carl Toft, Akihiko Torii, Lars Hammarstrand, Erik Stenborg, Daniel Safari, Masatoshi Okutomi, <strong>Marc Pollefeys<\/strong>, Josef Sivic, Fredrik Kahl, Tomas Pajdla<\/p>\n<p>Thursday | 4:30-6:30 |\u00a0Halls D-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/revisiting-deep-intrinsic-image-decompositions\/\">Revisiting Deep Intrinsic Image Decompositions<\/a><br \/>\nQingnan Fan, <a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jiaoyan\/\"><strong>Jiaolong Yang<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a>, Baoquan Chen, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/davidwip\/\"><strong>David Wipf<\/strong><\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p>\t\t\t<div class=\"ms-grid \">\n\t\t\t<div class=\"ms-row\">\n\t\t\t\t<p>\n<article class=\"msr-light-gray-bgc m-col-12-24 l-col-8-24 bg-clip-content white-bgc margin-bottom-sp3 msr-project-card\" data-bi-slot=\"46\">\n\t<div class=\"padding-horizontal padding-vertical-sp2\">\n\t\t<h3 class=\"subtitle\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/careers.microsoft.com\/us\/en\/job\/411303\/Computer-Vision-Scientist\" class=\"semibold\">Computer Vision Scientist<\/a>\n\t\t\t\t\t<\/h3>\n\n\t\t<div class=\"gray-d1-c\">\n\t\t\t<div class=\"body-alt tight\">\n\t\t\t\t\t\t\t\tIn Mixed Reality, people\u2014not devices\u2014are at the center of everything we do. We are a growing team of talented engineers and artists putting technology on a human path across all Windows devices, including Microsoft HoloLens, the Internet of Things, phones, tablets, desktops, and Xbox, and the larger World of all devices. There will be a better way for people to work and play effectively in a human and physical world through Human Augmentation via Mixed Reality. 
Come join us in creating this future!\t\t\t<\/div>\n\t\t<\/div>\n\n\t<\/div>\n<\/article>\n<\/p><p>\n<article class=\"msr-light-gray-bgc m-col-12-24 l-col-8-24 bg-clip-content white-bgc margin-bottom-sp3 msr-project-card\" data-bi-slot=\"47\">\n\t<div class=\"padding-horizontal padding-vertical-sp2\">\n\t\t<h3 class=\"subtitle\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/careers.microsoft.com\/us\/en\/job\/399293\/Research-SDE\" class=\"semibold\">Research SDE<\/a>\n\t\t\t\t\t<\/h3>\n\n\t\t<div class=\"gray-d1-c\">\n\t\t\t<div class=\"body-alt tight\">\n\t\t\t\t\t\t\t\tThe Computer Vision Technology Group is a vital part of the Artificial Intelligence and Research division, which mobilizes research and advanced technology projects by creating and building state-of-the-art AI technology in areas such as computer vision and machine learning. The team is growing, and we are looking for talented people who have a background in research and\/or engineering, and love to develop new technology that can be deployed to millions of users worldwide.\t\t\t<\/div>\n\t\t<\/div>\n\n\t<\/div>\n<\/article>\n<\/p><p>\n<article class=\"msr-light-gray-bgc m-col-12-24 l-col-8-24 bg-clip-content white-bgc margin-bottom-sp3 msr-project-card\" data-bi-slot=\"48\">\n\t<div class=\"padding-horizontal padding-vertical-sp2\">\n\t\t<h3 class=\"subtitle\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/careers.microsoft.com\/us\/en\/job\/428460\/Post-Doc-Researcher-Deep-Learning\" class=\"semibold\">Post Doc Researcher &#8211; Deep Learning<\/a>\n\t\t\t\t\t<\/h3>\n\n\t\t<div class=\"gray-d1-c\">\n\t\t\t<div class=\"body-alt tight\">\n\t\t\t\t\t\t\t\tMicrosoft Research AI (MSR AI) comprises researchers, engineers, and postdocs who take a broad perspective on the next generation of intelligent systems. 
We seek exceptional postdoc researchers from all areas of deep learning, reinforcement learning, machine learning, artificial intelligence, and related fields with a passion and demonstrated ability for independent research, including a strong publication record at top international research venues.\t\t\t<\/div>\n\t\t<\/div>\n\n\t<\/div>\n<\/article>\n<\/p>\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<div class=\"ms-grid \">\n\t\t\t<div class=\"ms-row\">\n\t\t\t\t<p>\n<article class=\"msr-light-gray-bgc m-col-12-24 l-col-8-24 bg-clip-content white-bgc margin-bottom-sp3 msr-project-card\" data-bi-slot=\"49\">\n\t<div class=\"padding-horizontal padding-vertical-sp2\">\n\t\t<h3 class=\"subtitle\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/careers.microsoft.com\/us\/en\/job\/412334\/Researcher\" class=\"semibold\">Researcher<\/a>\n\t\t\t\t\t<\/h3>\n\n\t\t<div class=\"gray-d1-c\">\n\t\t\t<div class=\"body-alt tight\">\n\t\t\t\t\t\t\t\tThe HoloLens team in Cambridge, UK, is building the future for mixed reality. We are passionate about using computer vision to make interaction with our devices and communication with other people more intuitive and personal. The team has a strong track record of shipping ground-breaking technologies in Microsoft products including Kinect and HoloLens. The team is growing, and we are looking for talented computer vision and machine learning researchers and software engineers: people who love to invent and build new stuff that really works and can be deployed to millions of users.\t\t\t<\/div>\n\t\t<\/div>\n\n\t<\/div>\n<\/article>\n<\/p>\t\t\t<\/div>\n\t\t<\/div>\n\t\t<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Venue: Calvin L. 
Rampton Salt Palace Convention Center (opens in new tab) Website: CVPR 2018 (opens in new tab)Opens in a new tab Microsoft is proud to be a diamond sponsor of the Conference on Computer Vision and Pattern Recognition (CVPR (opens in new tab)) June 18 \u2013 22 in Salt Lake City, Utah. Please [&hellip;]<\/p>\n","protected":false},"featured_media":489278,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_startdate":"2018-06-18","msr_enddate":"2018-06-22","msr_location":"Salt Lake City, Utah","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"http:\/\/cvpr2018.thecvf.com\/attend\/registration","msr_event_link_redirect":false,"msr_event_time":"","msr_hide_region":true,"msr_private_event":false,"msr_hide_image_in_river":0,"footnotes":""},"research-area":[13562],"msr-region":[197900],"msr-event-type":[197941],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-488849","msr-event","type-msr-event","status-publish","has-post-thumbnail","hentry","msr-research-area-computer-vision","msr-region-north-america","msr-event-type-conferences","msr-locale-en_us"],"msr_about":"<!-- wp:msr\/event-details {\"title\":\"Microsoft @ CVPR 2018\",\"backgroundColor\":\"grey\",\"image\":{\"id\":489278,\"url\":\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/CVPR2018-2.jpg\",\"alt\":\"\"}} \/-->\n\n<!-- wp:msr\/content-tabs --><!-- wp:msr\/content-tab {\"title\":\"About\"} --><!-- wp:freeform --><p><strong>Venue:<\/strong> <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.visitsaltlake.com\/salt-palace-convention-center\/\" target=\"_blank\" rel=\"noopener\">Calvin L. 
Rampton Salt Palace Convention Center<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<p><strong>Website:<\/strong> <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/cvpr2018.thecvf.com\/\" target=\"_blank\" rel=\"noopener\">CVPR 2018<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p>Microsoft is proud to be a diamond sponsor of the Conference on Computer Vision and Pattern Recognition (<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/cvpr2018.thecvf.com\/\" target=\"_blank\" rel=\"noopener\">CVPR<\/a>) June 18 \u2013 22 in Salt Lake City, Utah. Please visit us at booth 537 to chat with our experts, see demos of our latest research and find out about career opportunities with Microsoft.<\/p>\n<h2>Program Committee members<\/h2>\n<p>Marc Pollefeys \u2013 Robust Vision Challenge Organizers<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sbkang\/\">Sing Bing Kang<\/a>, Stephen Lin, Sebastian Nowozin, and Wenjun Zeng \u2013\u00a0NTIRE 2018 Program Committee<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\">Gang Hua<\/a>\u00a0\u2013 PBVS 2018 Program Committee<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/damcduff\/\">Daniel McDuff<\/a>\u00a0\u2013 CVPM 2018 Program Co-Chair<br \/>\nTimnit Gebru \u2013 CV-COPS 2018 Program Committee<br \/>\nZhengyou Zhang \u2013 Sight and Sound Workshop Organizers<\/p>\n<h2>Tutorials<\/h2>\n<h4><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/docs.microsoft.com\/en-us\/windows\/mixed-reality\/cvpr-2018\" rel=\"noopener\" target=\"_blank\">New from HoloLens: Research Mode<\/a><br \/>\nTuesday | 1:30 \u2013 2:50 | Room 151 &#8211; ABCG<\/h4>\n<p 
style=\"padding-left: 30px\"><strong>Marc Pollefeys<\/strong>, <strong>Pawel Olszta<\/strong><\/p>\n<h4><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.here.com\/en\/secvs-cvpr-2018\" rel=\"noopener\" target=\"_blank\">Software Engineering in Computer Vision Systems<\/a><br \/>\nFriday | 8:30 \u2013 12:30 | Ballroom C<\/h4>\n<p style=\"padding-left: 30px\">David Doria, <strong>Tim Franklin<\/strong>, Matt Turek, Jan Ernst, Wei Xia, Stephen Miller, Ben Kadlec<\/p>\n<h2>Workshops<\/h2>\n<h4><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/sites.google.com\/view\/fgvc5\/home\" rel=\"noopener\" target=\"_blank\">The Fifth Workshop on Fine-Grained Visual Categorization<\/a><br \/>\nFriday | 9:00 \u2013 5:00 | Room 151 A-C<\/h4>\n<p style=\"padding-left: 30px\">Why FGVC5 Folks Should be Interested in the Microsoft AI for Earth Program<br \/>\n9:45 \u2013 10:00<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dan\/\">Dan Morris<\/a><\/p>\n<h2>Microsoft attendees<\/h2>\n<p>Aijun Bai<br \/>\nLuca Ballan<br \/>\nMi\u0107o Banovi\u0107<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/febogo\/\">Federica Bogo<\/a><br \/>\nBogdan Burlacu<br \/>\nNick Burton<br \/>\nIshani Chakraborty<br \/>\nTemo Chalasani<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/doch\/\">Dong Chen<\/a><br \/>\nXi Chen<br \/>\nArti Chhajta<br \/>\nJohn Corring<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jifdai\/\">Jifeng Dai<\/a><br \/>\nQi Dai<br \/>\nMandar Dixit<br \/>\nLiang Du<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nanduan\/\">Nan Duan<\/a><br \/>\nXin Duan<br \/>\nGoran Dubajic<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awf\/\">Andrew Fitzgibbon<\/a><br \/>\n<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dinei\/\">Dinei Florencio<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jianf\/\">Jianlong Fu<\/a><br \/>\nSean Goldberg<br \/>\nYandong Guo<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hanhu\/\">Han Hu<\/a><br \/>\nHoudong Hu<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\">Gang Hua<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/qihua\/\">Qiuyuan Huang<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sbkang\/\">Sing Bing Kang<\/a><br \/>\nNikolaos Karianakis<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nkuno\/\">Noboru Kuno<\/a><br \/>\nNabil Lathiff<br \/>\nKuang-Huei Lee<br \/>\nXing Li<br \/>\nOlga Liakhovich<br \/>\nTongliang Liao<br \/>\nStephen Lin<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zliu\/\">Zicheng Liu<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yanlu\/\">Yan Lu<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/cluo\/\">Chong Luo<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/damcduff\/\">Daniel McDuff<\/a><br \/>\nMeenaz Merchant<br \/>\nLeonardo Nunes<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/taoqin\/\">Tao Qin<\/a><br \/>\nArun Sacheti<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/pablosa\/\">Pablo Sala<\/a><br \/>\nHarpreet Sawhney<br \/>\nPramod Sharma<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yeshen\/\">Yelong Shen<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jamiesho\/\">Jamie Shotton<\/a><br \/>\n<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yalesong\/\">Yale Song<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/baochens\/\">Baochen Sun<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xysun\/\">Xiaoyan Sun<\/a><br \/>\nRavi Theja Yada<br \/>\nAli Osman Ulusoy<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hava\/\">Hamidreza Vaezi Joze<\/a><br \/>\nAlon Vinnikov<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/baoyuanw\/\">Baoyuan Wang<\/a><br \/>\nJianfeng Wang<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\">Jingdong Wang<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/lijuanw\/\">Lijuan Wang<\/a><br \/>\nZhe Wang<br \/>\nZhirong Wu<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jiaoyan\/\">Jiaolong Yang<\/a><br \/>\nTing Yao<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sayoon\/\">Sang Ho Yoon<\/a><br \/>\nQuanzeng You<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chazhang\/\">Cha Zhang<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/leizhang\/\">Lei Zhang<\/a><br \/>\nMingxue Zhang<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/penzhan\/\">Pengchuan Zhang<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tinzhan\/\">Ting Zhang<\/a><br \/>\nYatao Zhong<br \/>\nXiaoyong Zhu<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Presentations\"} --><!-- wp:freeform --><h4>Hybrid Camera Pose Estimation<\/h4>\n<p>Tuesday | 8:50-10:10 | Room 255<br \/>\nFederico Camposeco, Andrea Cohen, <strong>Marc Pollefeys<\/strong>, Torsten Sattler<\/p>\n<h4><a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/relation-networks-object-detection\/\">Relation Networks for Object Detection<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n<p>Wednesday | 8:30-10:10 | Ballroom<br \/>\nHan Hu, <strong>Jiayuan Gu<\/strong>, <strong>Zheng Zhang<\/strong>, <strong>Jifeng Dai<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yichenw\/\"><strong>Yichen Wei<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/raynet-learning-volumetric-3d-reconstruction-ray-potentials\/\">RayNet: Learning Volumetric 3D Reconstruction With Ray Potentials<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n<p>Wednesday | 8:30-10:10 | Room 255<br \/>\nDespoina Paschalidou, <strong>Ali Osman Ulusoy<\/strong>, Carolin Schmitt, Luc Van Gool, Andreas Geiger<\/p>\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/automatic-3d-indoor-scene-modeling-single-panorama\/\">Automatic 3D Indoor Scene Modeling From Single Panorama<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n<p>Wednesday | 8:30-10:10 | Room 255<br \/>\nYang Yang, Shi Jin, Ruiyang Liu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sbkang\/\"><strong>Sing Bing Kang<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Jingyi Yu<\/p>\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/bottom-top-attention-image-captioning-visual-question-answering\/\">Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n<p>Wednesday | 2:50-4:30 | Room 155<br \/>\nPeter Anderson, Xiaodong He, <strong>Chris Buehler<\/strong>, Damien Teney, Mark Johnson, Stephen Gould, <strong>Lei Zhang<\/strong><\/p>\n<h4><a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/visual-question-generation-dual-task-visual-question-answering\/\">Visual Question Generation as Dual Task of Visual Question Answering<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n<p>Wednesday | 2:50-4:30 | Room 155<br \/>\nYikang Li, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nanduan\/\"><strong>Nan Duan<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Bolei Zhou, Xiao Chu, Wanli Ouyang, Xiaogang Wang, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mingzhou\/\"><strong>Ming Zhou<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/towards-high-performance-video-object-detection\/\">Towards High Performance Video Object Detection<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n<p>Thursday | 8:30-10:10 | Ballroom<br \/>\nXizhou Zhu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jifdai\/\"><strong>Jifeng Dai<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\"><strong>Lu Yuan<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yichenw\/\"><strong>Yichen Wei<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<h4>Consensus Maximization for Semantic Region Correspondences<\/h4>\n<p>Thursday | 8:30-10:10 | Room 155<br \/>\nPablo Speciale, Danda P. Paudel, Martin R. 
Oswald, Hayko Riemenschneider, Luc Van Gool, <strong>Marc Pollefeys<\/strong><\/p>\n<h4>InLoc: Indoor Visual Localization With Dense Matching and View Synthesis<\/h4>\n<p>Thursday | 8:30-10:10 | Ballroom<br \/>\nHajime Taira, Masatoshi Okutomi, Torsten Sattler, Mircea Cimpoi, <strong>Marc Pollefeys<\/strong>, Josef Sivic, Tomas Pajdla, Akihiko Torii<\/p>\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/language-based-image-editing-with-recurrent-attentive-models\/\">Language-Based Image Editing With Recurrent Attentive Models<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n<p>Thursday | 12:50-2:30 | Room 255<br \/>\nJianbo Chen, <strong>Yelong Shen<\/strong>, <strong>Jianfeng Gao<\/strong>, <strong>Jingjing Liu<\/strong>, <strong>Xiaodong Liu<\/strong><\/p>\n<h4>Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions<\/h4>\n<p>Thursday | 12:50-2:30 | Room 155<br \/>\nTorsten Sattler, Will Maddern, Carl Toft, Akihiko Torii, Lars Hammarstrand, Erik Stenborg, Daniel Safari, Masatoshi Okutomi, <strong>Marc Pollefeys<\/strong>, Josef Sivic, Fredrik Kahl, Tomas Pajdla<\/p>\n<h4>Feature Space Transfer for Data Augmentation<\/h4>\n<p>Thursday | 2:50-4:30 | Room 255<br \/>\nBo Liu, Xudong Wang, <strong>Mandar Dixit<\/strong>, Roland Kwitt, and Nuno Vasconcelos<\/p>\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/interleaved-structured-sparse-convolutional-neural-networks\/\">Interleaved Structured Sparse Convolutional Neural Networks<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n<p>Thursday | 2:50-4:30 | Ballroom<br \/>\nGuotian Xie, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\"><strong>Jingdong Wang<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tinzhan\/\"><strong>Ting Zhang<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, 
Jianhuang Lai, Richang Hong, Guo-Jun Qi<\/p>\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/revisiting-deep-intrinsic-image-decompositions\/\">Revisiting Deep Intrinsic Image Decompositions<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h4>\n<p>Thursday | 2:50-4:30 | Room 155<br \/>\nQingnan Fan, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jiaoyan\/\"><strong>Jiaolong Yang<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>, Baoquan Chen, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/davidwip\/\"><strong>David Wipf<\/strong><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.cc.gatech.edu\/~parikh\/citizenofcvpr\/\" target=\"_blank\" rel=\"noopener\">Good Citizen of CVPR Panel<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p>Friday | 9:30-9:50 | Ballroom E<br \/>\nRights and Obligations (Good review and bad review, constructive criticism)<br \/>\n<strong>Katsu Ikeuchi<\/strong><\/p>\n<p>Friday | 9:50-10:10 | Ballroom E<br \/>\nHow to create an inclusive and welcoming culture at CVPR and not have a &#8220;clique&#8221; culture<br \/>\n<strong>Timnit Gebru<\/strong><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Posters\"} --><!-- wp:freeform --><h2>Posters<\/h2>\n<p>Tuesday | 10:10-12:30 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/real-time-seamless-single-shot-6d-object-pose-prediction\/\">Real-Time Seamless Single Shot 6D Object Pose Prediction<\/a><br \/>\nBugra Tekin, <a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sudipsin\/\"><strong>Sudipta Sinha<\/strong><\/a>, Pascal Fua<\/p>\n<p>Tuesday | 10:10-12:30 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/mict-mixed-3d-2d-convolutional-tube-human-action-recognition\/\">MiCT: Mixed 3D\/2D Convolutional Tube for Human Action Recognition<\/a><br \/>\nYizhou Zhou, <strong>Xiaoyan Sun<\/strong>, Zheng-Jun Zha, <strong>Wenjun Zeng<\/strong><\/p>\n<p>Tuesday | 10:10-12:30 | Halls C-E<br \/>\nHybrid Camera Pose Estimation<br \/>\nFederico Camposeco, Andrea Cohen, <strong>Marc Pollefeys<\/strong>, Torsten Sattler<\/p>\n<p>Tuesday | 12:30-2:50 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/global-versus-localized-generative-adversarial-nets\/\">Global Versus Localized Generative Adversarial Nets<\/a><br \/>\nGuo-Jun Qi, Liheng Zhang, Hao Hu, Marzieh Edraki, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\"><strong>Jingdong Wang<\/strong><\/a>, Xian-Sheng Hua<\/p>\n<p>Tuesday | 12:30-2:50 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/high-quality-denoising-dataset-smartphone-cameras\/\">A High-Quality Denoising Dataset for Smartphone Cameras<\/a><br \/>\nAbdelrahman Abdelhamed, <strong>Stephen Lin<\/strong>, Michael S. Brown<\/p>\n<p>Tuesday | 12:30-2:50 | Halls C-E<br \/>\nAugmenting Crowd-Sourced 3D Reconstructions Using Semantic Detections<br \/>\nTrue Price, <strong>Johannes L. 
Sch\u00f6nberger<\/strong>, Zhen Wei, <strong>Marc Pollefeys<\/strong>, Jan-Michael Frahm<\/p>\n<p>Wednesday | 10:10-12:30 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/relation-networks-object-detection\/\">Relation Networks for Object Detection<\/a><br \/>\nHan Hu, <strong>Jiayuan Gu<\/strong>, <strong>Zheng Zhang<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jifdai\/\"><strong>Jifeng Dai<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yichenw\/\"><strong>Yichen Wei<\/strong><\/a><\/p>\n<p>Wednesday | 10:10-12:30 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/raynet-learning-volumetric-3d-reconstruction-ray-potentials\/\">RayNet: Learning Volumetric 3D Reconstruction With Ray Potentials<\/a><br \/>\nDespoina Paschalidou, <strong>Ali Osman Ulusoy,<\/strong> Carolin Schmitt, Luc Van Gool, Andreas Geiger<\/p>\n<p>Wednesday | 10:10-12:30 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/automatic-3d-indoor-scene-modeling-single-panorama\/\">Automatic 3D Indoor Scene Modeling From Single Panorama<\/a><br \/>\nYang Yang, Shi Jin, Ruiyang Liu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sbkang\/\"><strong>Sing Bing Kang<\/strong><\/a>, Jingyi Yu<\/p>\n<p>Wednesday | 10:10-12:30 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/pseudo-mask-augmented-object-detection\/\">Pseudo Mask Augmented Object Detection<\/a><br \/>\nXiangyun Zhao, Shuang Liang, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yichenw\/\"><strong>Yichen Wei<\/strong><\/a><\/p>\n<p>Wednesday | 12:30-2:50 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/twofold-siamese-network-real-time-object-tracking\/\">A Twofold Siamese Network for Real-Time Object Tracking<\/a><br \/>\nAnfeng He, <a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/cluo\/\"><strong>Chong Luo<\/strong><\/a>, Xinmei Tian, <strong>Wenjun Zeng<\/strong><\/p>\n<p>Wednesday | 12:30-2:50 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/cleannet-transfer-learning-scalable-image-classifier-training-label-noise\/\">CleanNet: Transfer Learning for Scalable Image Classifier Training With Label Noise<\/a><br \/>\n<strong>Kuang-Huei Lee<\/strong>, Xiaodong He, <strong>Lei Zhang<\/strong>, Linjun Yang<\/p>\n<p>Wednesday | 12:30-2:50 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/end-end-convolutional-semantic-embeddings\/\">End-to-End Convolutional Semantic Embeddings<\/a><br \/>\n<strong>Quanzeng You<\/strong>, <strong>Zhengyou Zhang<\/strong>, Jiebo Luo<\/p>\n<p>Wednesday | 12:30-2:50 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/generative-adversarial-learning-towards-fast-weakly-supervised-detection\/\">Generative Adversarial Learning Towards Fast Weakly Supervised Detection<\/a><br \/>\nYunhan Shen, Rongrong Ji, Shengchuan Zhang, Wangmeng Zuo, <strong>Yan Wang<\/strong><\/p>\n<p>Wednesday | 4:30-6:30 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/bottom-top-attention-image-captioning-visual-question-answering\/\">Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering<\/a><br \/>\nPeter Anderson, Xiaodong He, <strong>Chris Buehler<\/strong>, Damien Teney, Mark Johnson, Stephen Gould, <strong>Lei Zhang<\/strong><\/p>\n<p>Wednesday | 4:30-6:30 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/visual-question-generation-dual-task-visual-question-answering\/\">Visual Question Generation as Dual Task of Visual Question Answering<\/a><br \/>\nYikang Li, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nanduan\/\"><strong>Nan 
Duan<\/strong><\/a>, Bolei Zhou, Xiao Chu, Wanli Ouyang, Xiaogang Wang, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mingzhou\/\"><strong>Ming Zhou<\/strong><\/a><\/p>\n<p>Wednesday | 4:30-6:30 | Halls C-E<br \/>\nSemantic Visual Localization<br \/>\n<strong>Johannes L. Sch\u00f6nberger<\/strong>, <strong>Marc Pollefeys<\/strong>, Andreas Geiger, Torsten Sattler<\/p>\n<p>Wednesday | 4:30-6:30 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/stereoscopic-neural-style-transfer\/\">Stereoscopic Neural Style Transfer<\/a><br \/>\nDongdong Chen, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\"><strong>Lu Yuan<\/strong><\/a>, <strong>Jing Liao<\/strong>, Nenghai Yu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a><\/p>\n<p>Wednesday | 4:30-6:30 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/towards-open-set-identity-preserving-face-synthesis\/\">Towards Open-Set Identity Preserving Face Synthesis<\/a><br \/>\nJianmin Bao, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/doch\/\"><strong>Dong Chen<\/strong><\/a>, <strong>Fang Wen<\/strong>, Houqiang Li, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a><\/p>\n<p>Wednesday | 4:30-6:30 | Halls C-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/weakly-supervised-semantic-segmentation-network-deep-seeded-region-growing\/\">Weakly-Supervised Semantic Segmentation Network With Deep Seeded Region Growing<\/a><br \/>\nZilong Huang, Xinggang Wang, Jiasi Wang, Wenyu Liu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\"><strong>Jingdong Wang<\/strong><\/a><\/p>\n<p>Thursday | 10:10-12:30 | Halls D-E<br \/>\n<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/towards-high-performance-video-object-detection\/\">Towards High Performance Video Object Detection<\/a><br \/>\nXizhou Zhu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jifdai\/\"><strong>Jifeng Dai<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\"><strong>Lu Yuan<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yichenw\/\"><strong>Yichen Wei<\/strong><\/a><\/p>\n<p>Thursday | 10:10-12:30 | Halls D-E<br \/>\nInLoc: Indoor Visual Localization With Dense Matching and View Synthesis<br \/>\nHajime Taira, Masatoshi Okutomi, Torsten Sattler, Mircea Cimpoi, <strong>Marc Pollefeys<\/strong>, Josef Sivic, Tomas Pajdla, Akihiko Torii<\/p>\n<p>Thursday | 10:10-12:30 | Halls D-E<br \/>\nConsensus Maximization for Semantic Region Correspondences<br \/>\nPablo Speciale, Danda P. Paudel, Martin R. Oswald, Hayko Riemenschneider, Luc Van Gool, <strong>Marc Pollefeys<\/strong><\/p>\n<p>Thursday | 10:10-12:30 | Halls D-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/arbitrary-style-transfer-deep-feature-reshuffle\/\">Arbitrary Style Transfer With Deep Feature Reshuffle<\/a><br \/>\nShuyang Gu, Congliang Chen, <strong>Jing Liao<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\"><strong>Lu Yuan<\/strong><\/a><\/p>\n<p>Thursday | 4:30-6:30 | Halls D-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/language-based-image-editing-with-recurrent-attentive-models\/\">Language-Based Image Editing With Recurrent Attentive Models<\/a><br \/>\nJianbo Chen, <strong>Yelong Shen,<\/strong> <strong>Jianfeng Gao<\/strong>, <strong>Jingjing Liu<\/strong>, <strong>Xiaodong Liu <\/strong><\/p>\n<p>Thursday | 4:30-6:30 | Halls D-E<br \/>\nBenchmarking 6DOF Outdoor Visual Localization in Changing Conditions<br \/>\nTorsten Sattler, Will Maddern, Carl Toft, 
Akihiko Torii, Lars Hammarstrand, Erik Stenborg, Daniel Safari, Masatoshi Okutomi, <strong>Marc Pollefeys<\/strong>, Josef Sivic, Fredrik Kahl, Tomas Pajdla<\/p>\n<p>Thursday | 4:30-6:30 | Halls D-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/interleaved-structured-sparse-convolutional-neural-networks\/\">Interleaved Structured Sparse Convolutional Neural Networks<\/a><br \/>\nGuotian Xie, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\"><strong>Jingdong Wang<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tinzhan\/\"><strong>Ting Zhang<\/strong><\/a>, Jianhuang Lai, Richang Hong, Guo-Jun Qi<\/p>\n<p>Thursday | 4:30-6:30 |\u00a0Halls D-E<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/revisiting-deep-intrinsic-image-decompositions\/\">Revisiting Deep Intrinsic Image Decompositions<\/a><br \/>\nQingnan Fan, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jiaoyan\/\"><strong>Jiaolong Yang<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a>, Baoquan Chen, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/davidwip\/\"><strong>David Wipf<\/strong><\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Careers\"} --><!-- wp:freeform --><p>\t\t\t<div class=\"ms-grid \">\n\t\t\t<div class=\"ms-row\">\n\t\t\t\t<p>\n<article class=\"msr-light-gray-bgc m-col-12-24 l-col-8-24 bg-clip-content white-bgc margin-bottom-sp3 msr-project-card\" data-bi-slot=\"46\">\n\t<div class=\"padding-horizontal padding-vertical-sp2\">\n\t\t<h3 class=\"subtitle\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/careers.microsoft.com\/us\/en\/job\/411303\/Computer-Vision-Scientist\" class=\"semibold\">Computer Vision 
Scientist<\/a>\n\t\t\t\t\t<\/h3>\n\n\t\t<div class=\"gray-d1-c\">\n\t\t\t<div class=\"body-alt tight\">\n\t\t\t\t\t\t\t\tIn Mixed Reality, people\u2014not devices\u2014are at the center of everything we do. We are a growing team of talented engineers and artists putting technology on a human path across all Windows devices, including Microsoft HoloLens, the Internet of Things, phones, tablets, desktops, Xbox, and the larger world of devices. Human augmentation via mixed reality will give people a better way to work and play effectively in the physical world. Come join us in creating this future!\t\t\t<\/div>\n\t\t<\/div>\n\n\t<\/div>\n<\/article>\n<\/p><p>\n<article class=\"msr-light-gray-bgc m-col-12-24 l-col-8-24 bg-clip-content white-bgc margin-bottom-sp3 msr-project-card\" data-bi-slot=\"47\">\n\t<div class=\"padding-horizontal padding-vertical-sp2\">\n\t\t<h3 class=\"subtitle\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/careers.microsoft.com\/us\/en\/job\/399293\/Research-SDE\" class=\"semibold\">Research SDE<\/a>\n\t\t\t\t\t<\/h3>\n\n\t\t<div class=\"gray-d1-c\">\n\t\t\t<div class=\"body-alt tight\">\n\t\t\t\t\t\t\t\tThe Computer Vision Technology Group is a vital part of the Artificial Intelligence and Research division, which mobilizes research and advanced technology projects by creating and building state-of-the-art AI technology in areas such as computer vision and machine learning. 
The team is growing, and we are looking for talented people who have a background in research and\/or engineering and love to develop new technology that can be deployed to millions of users worldwide.\t\t\t<\/div>\n\t\t<\/div>\n\n\t<\/div>\n<\/article>\n<\/p><p>\n<article class=\"msr-light-gray-bgc m-col-12-24 l-col-8-24 bg-clip-content white-bgc margin-bottom-sp3 msr-project-card\" data-bi-slot=\"48\">\n\t<div class=\"padding-horizontal padding-vertical-sp2\">\n\t\t<h3 class=\"subtitle\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/careers.microsoft.com\/us\/en\/job\/428460\/Post-Doc-Researcher-Deep-Learning\" class=\"semibold\">Post Doc Researcher - Deep Learning<\/a>\n\t\t\t\t\t<\/h3>\n\n\t\t<div class=\"gray-d1-c\">\n\t\t\t<div class=\"body-alt tight\">\n\t\t\t\t\t\t\t\tMicrosoft Research AI (MSR AI) comprises researchers, engineers, and postdocs who take a broad perspective on the next generation of intelligent systems. We seek exceptional postdoc researchers from all areas of deep learning, reinforcement learning, machine learning, artificial intelligence, and related fields with a passion and demonstrated ability for independent research, including a strong publication record at top international research venues.\t\t\t<\/div>\n\t\t<\/div>\n\n\t<\/div>\n<\/article>\n<\/p>\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<div class=\"ms-grid \">\n\t\t\t<div class=\"ms-row\">\n\t\t\t\t<p>\n<article class=\"msr-light-gray-bgc m-col-12-24 l-col-8-24 bg-clip-content white-bgc margin-bottom-sp3 msr-project-card\" data-bi-slot=\"49\">\n\t<div class=\"padding-horizontal padding-vertical-sp2\">\n\t\t<h3 class=\"subtitle\">\n\t\t\t\t\t\t\t<a href=\"https:\/\/careers.microsoft.com\/us\/en\/job\/412334\/Researcher\" class=\"semibold\">Researcher<\/a>\n\t\t\t\t\t<\/h3>\n\n\t\t<div class=\"gray-d1-c\">\n\t\t\t<div class=\"body-alt tight\">\n\t\t\t\t\t\t\t\tThe HoloLens team in Cambridge, UK, is building the future for mixed reality. 
We are passionate about using computer vision to make interaction with our devices and communication with other people more intuitive and personal. The team has a strong track record of shipping ground-breaking technologies in Microsoft products including Kinect and HoloLens. The team is growing, and we are looking for talented computer vision and machine learning researchers and software engineers: people who love to invent and build new stuff that really works and can be deployed to millions of users.\t\t\t<\/div>\n\t\t<\/div>\n\n\t<\/div>\n<\/article>\n<\/p>\t\t\t<\/div>\n\t\t<\/div>\n\t\t<span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- \/wp:msr\/content-tabs -->","tab-content":[{"id":0,"name":"About","content":"Microsoft is proud to be a diamond sponsor of the Conference on Computer Vision and Pattern Recognition (<a href=\"http:\/\/cvpr2018.thecvf.com\/\" target=\"_blank\" rel=\"noopener\">CVPR<\/a>) June 18 \u2013 22 in Salt Lake City, Utah. 
Please visit us at booth 537 to chat with our experts, see demos of our latest research and find out about career opportunities with Microsoft.\r\n<h2>Program Committee members<\/h2>\r\nMarc Pollefeys \u2013 Robust Vision Challenge Organizers\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sbkang\/\">Sing Bing Kang<\/a>, Stephen Lin, Sebastian Nowozin, and Wenjun Zeng \u2013\u00a0NTIRE 2018 Program Committee\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\">Gang Hua<\/a>\u00a0\u2013 PBVS 2018 Program Committee\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/damcduff\/\">Daniel McDuff<\/a>\u00a0\u2013 CVPM 2018 Program Co-Chair\r\nTimnit Gebru \u2013 CV-COPS 2018 Program Committee\r\nZhengyou Zhang \u2013 Sight and Sound Workshop Organizers\r\n<h2>Tutorials<\/h2>\r\n<h4><a href=\"https:\/\/docs.microsoft.com\/en-us\/windows\/mixed-reality\/cvpr-2018\" rel=\"noopener\" target=\"_blank\">New from HoloLens: Research Mode<\/a>\r\nTuesday | 1:30 \u2013 2:50 | Room 151 - ABCG<\/h4>\r\n<p style=\"padding-left: 30px\"><strong>Marc Pollefeys<\/strong>, <strong>Pawel Olszta<\/strong><\/p>\r\n<h4><a href=\"https:\/\/www.here.com\/en\/secvs-cvpr-2018\" rel=\"noopener\" target=\"_blank\">Software Engineering in Computer Vision Systems<\/a>\r\nFriday | 8:30 \u2013 12:30 | Ballroom C<\/h4>\r\n<p style=\"padding-left: 30px\">David Doria, <strong>Tim Franklin<\/strong>, Matt Turek, Jan Ernst, Wei Xia, Stephen Miller, Ben Kadlec<\/p>\r\n<h2>Workshops<\/h2>\r\n<h4><a href=\"https:\/\/sites.google.com\/view\/fgvc5\/home\" rel=\"noopener\" target=\"_blank\">The Fifth Workshop on Fine-Grained Visual Categorization<\/a>\r\nFriday | 9:00 \u2013 5:00 | Room 151 A-C<\/h4>\r\n<p style=\"padding-left: 30px\">Why FGVC5 Folks Should be Interested in the Microsoft AI for Earth Program\r\n9:45 \u2013 10:00\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dan\/\">Dan Morris<\/a><\/p>\r\n<h2>Microsoft 
attendees<\/h2>\r\nAijun Bai\r\nLuca Ballan\r\nMi\u0107o Banovi\u0107\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/febogo\/\">Federica Bogo<\/a>\r\nBogdan Burlacu\r\nNick Burton\r\nIshani Chakraborty\r\nTemo Chalasani\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/doch\/\">Dong Chen<\/a>\r\nXi Chen\r\nArti Chhajta\r\nJohn Corring\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jifdai\/\">Jifeng Dai<\/a>\r\nQi Dai\r\nMandar Dixit\r\nLiang Du\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nanduan\/\">Nan Duan<\/a>\r\nXin Duan\r\nGoran Dubajic\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awf\/\">Andrew Fitzgibbon<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dinei\/\">Dinei Florencio<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jianf\/\">Jianlong Fu<\/a>\r\nSean Goldberg\r\nYandong Guo\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hanhu\/\">Han Hu<\/a>\r\nHoudong Hu\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\">Gang Hua<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/qihua\/\">Qiuyuan Huang<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sbkang\/\">Sing Bing Kang<\/a>\r\nNikolaos Karianakis\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nkuno\/\">Noboru Kuno<\/a>\r\nNabil Lathiff\r\nKuang-Huei Lee\r\nXing Li\r\nOlga Liakhovich\r\nTongliang Liao\r\nStephen Lin\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zliu\/\">Zicheng Liu<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yanlu\/\">Yan Lu<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/cluo\/\">Chong Luo<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/damcduff\/\">Daniel McDuff<\/a>\r\nMeenaz Merchant\r\nLeonardo Nunes\r\n<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/taoqin\/\">Tao Qin<\/a>\r\nArun Sacheti\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/pablosa\/\">Pablo Sala<\/a>\r\nHarpreet Sawhney\r\nPramod Sharma\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yeshen\/\">Yelong Shen<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jamiesho\/\">Jamie Shotton<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yalesong\/\">Yale Song<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/baochens\/\">Baochen Sun<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xysun\/\">Xiaoyan Sun<\/a>\r\nRavi Theja Yada\r\nAli Osman Ulusoy\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hava\/\">Hamidreza Vaezi Joze<\/a>\r\nAlon Vinnikov\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/baoyuanw\/\">Baoyuan Wang<\/a>\r\nJianfeng Wang\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\">Jingdong Wang<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/lijuanw\/\">Lijuan Wang<\/a>\r\nZhe Wang\r\nZhirong Wu\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jiaoyan\/\">Jiaolong Yang<\/a>\r\nTing Yao\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sayoon\/\">Sang Ho Yoon<\/a>\r\nQuanzeng You\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chazhang\/\">Cha Zhang<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/leizhang\/\">Lei Zhang<\/a>\r\nMingxue Zhang\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/penzhan\/\">Pengchuan Zhang<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tinzhan\/\">Ting Zhang<\/a>\r\nYatao Zhong\r\nXiaoyong 
Zhu"},{"id":1,"name":"Presentations","content":"<h4>Hybrid Camera Pose Estimation<\/h4>\r\nTuesday | 8:50-10:10 | Room 255\r\nFederico Camposeco, Andrea Cohen, <strong>Marc Pollefeys<\/strong>, Torsten Sattler\r\n\r\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/relation-networks-object-detection\/\">Relation Networks for Object Detection<\/a><\/h4>\r\nWednesday | 8:30-10:10 | Ballroom\r\nHan Hu, <strong>Jiayuan Gu<\/strong>, <strong>Zheng Zhang<\/strong>, <strong>Jifeng Dai<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yichenw\/\"><strong>Yichen Wei<\/strong><\/a>\r\n\r\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/raynet-learning-volumetric-3d-reconstruction-ray-potentials\/\">RayNet: Learning Volumetric 3D Reconstruction With Ray Potentials<\/a><\/h4>\r\nWednesday | 8:30-10:10 | Room 255\r\nDespoina Paschalidou, <strong>Ali Osman Ulusoy<\/strong>, Carolin Schmitt, Luc Van Gool, Andreas Geiger\r\n\r\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/automatic-3d-indoor-scene-modeling-single-panorama\/\">Automatic 3D Indoor Scene Modeling From Single Panorama<\/a><\/h4>\r\nWednesday | 8:30-10:10 | Room 255\r\nYang Yang, Shi Jin, Ruiyang Liu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sbkang\/\"><strong>Sing Bing Kang<\/strong><\/a>, Jingyi Yu\r\n\r\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/bottom-top-attention-image-captioning-visual-question-answering\/\">Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering<\/a><\/h4>\r\nWednesday | 2:50-4:30 | Room 155\r\nPeter Anderson, Xiaodong He, <strong>Chris Buehler<\/strong>, Damien Teney, Mark Johnson, Stephen Gould, <strong>Lei Zhang<\/strong>\r\n\r\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/visual-question-generation-dual-task-visual-question-answering\/\">Visual Question Generation as Dual Task of 
Visual Question Answering<\/a><\/h4>\r\nWednesday | 2:50-4:30 | Room 155\r\nYikang Li, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nanduan\/\"><strong>Nan Duan<\/strong><\/a>, Bolei Zhou, Xiao Chu, Wanli Ouyang, Xiaogang Wang, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mingzhou\/\"><strong>Ming Zhou<\/strong><\/a>\r\n\r\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/towards-high-performance-video-object-detection\/\">Towards High Performance Video Object Detection<\/a><\/h4>\r\nThursday | 8:30-10:10 | Ballroom\r\nXizhou Zhu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jifdai\/\"><strong>Jifeng Dai<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\"><strong>Lu Yuan<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yichenw\/\"><strong>Yichen Wei<\/strong><\/a>\r\n\r\n<h4>Consensus Maximization for Semantic Region Correspondences<\/h4>\r\nThursday | 8:30-10:10 | Room 155\r\nPablo Speciale, Danda P. Paudel, Martin R. 
Oswald, Hayko Riemenschneider, Luc Van Gool, <strong>Marc Pollefeys<\/strong>\r\n\r\n<h4>InLoc: Indoor Visual Localization With Dense Matching and View Synthesis<\/h4>\r\nThursday | 8:30-10:10 | Ballroom\r\nHajime Taira, Masatoshi Okutomi, Torsten Sattler, Mircea Cimpoi, <strong>Marc Pollefeys<\/strong>, Josef Sivic, Tomas Pajdla, Akihiko Torii\r\n\r\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/language-based-image-editing-with-recurrent-attentive-models\/\">Language-Based Image Editing With Recurrent Attentive Models<\/a><\/h4>\r\nThursday | 12:50-2:30 | Room 255\r\nJianbo Chen, <strong>Yelong Shen<\/strong>, <strong>Jianfeng Gao<\/strong>, <strong>Jingjing Liu<\/strong>, <strong>Xiaodong Liu<\/strong>\r\n\r\n<h4>Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions<\/h4>\r\nThursday | 12:50-2:30 | Room 155\r\nTorsten Sattler, Will Maddern, Carl Toft, Akihiko Torii, Lars Hammarstrand, Erik Stenborg, Daniel Safari, Masatoshi Okutomi, <strong>Marc Pollefeys<\/strong>, Josef Sivic, Fredrik Kahl, Tomas Pajdla\r\n\r\n<h4>Feature Space Transfer for Data Augmentation<\/h4>\r\nThursday | 2:50-4:30 | Room 255\r\nBo Liu, Xudong Wang, <strong>Mandar Dixit<\/strong>, Roland Kwitt, and Nuno Vasconcelos\r\n\r\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/interleaved-structured-sparse-convolutional-neural-networks\/\">Interleaved Structured Sparse Convolutional Neural Networks<\/a><\/h4>\r\nThursday | 2:50-4:30 | Ballroom\r\nGuotian Xie, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\"><strong>Jingdong Wang<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tinzhan\/\"><strong>Ting Zhang<\/strong><\/a>, Jianhuang Lai, Richang Hong, Guo-Jun Qi\r\n\r\n<h4><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/revisiting-deep-intrinsic-image-decompositions\/\">Revisiting Deep Intrinsic Image 
Decompositions<\/a><\/h4>\r\nThursday | 2:50-4:30 | Room 155\r\nQingnan Fan, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jiaoyan\/\"><strong>Jiaolong Yang<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a>, Baoquan Chen, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/davidwip\/\"><strong>David Wipf<\/strong><\/a>\r\n<h3><a href=\"https:\/\/www.cc.gatech.edu\/~parikh\/citizenofcvpr\/\" target=\"_blank\" rel=\"noopener\">Good Citizen of CVPR Panel<\/a><\/h3>\r\nFriday | 9:30-9:50 | Ballroom E\r\nRights and Obligations (Good review and bad review, constructive criticism)\r\n<strong>Katsu Ikeuchi<\/strong>\r\n\r\nFriday | 9:50-10:10 | Ballroom E\r\nHow to create an inclusive and welcoming culture at CVPR and not have a \"clique\" culture\r\n<strong>Timnit Gebru<\/strong>"},{"id":2,"name":"Posters","content":"<h2>Posters<\/h2>\r\nTuesday | 10:10-12:30 | Halls C-E\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/real-time-seamless-single-shot-6d-object-pose-prediction\/\">Real-Time Seamless Single Shot 6D Object Pose Prediction<\/a>\r\nBugra Tekin, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sudipsin\/\"><strong>Sudipta Sinha<\/strong><\/a>, Pascal Fua\r\n\r\nTuesday | 10:10-12:30 | Halls C-E\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/mict-mixed-3d-2d-convolutional-tube-human-action-recognition\/\">MiCT: Mixed 3D\/2D Convolutional Tube for Human Action Recognition<\/a>\r\nYizhou Zhou, <strong>Xiaoyan Sun<\/strong>, Zheng-Jun Zha, <strong>Wenjun Zeng<\/strong>\r\n\r\nTuesday | 10:10-12:30 | Halls C-E\r\nHybrid Camera Pose Estimation\r\nFederico Camposeco, Andrea Cohen, <strong>Marc Pollefeys<\/strong>, Torsten Sattler\r\n\r\nTuesday | 12:30-2:50 | Halls C-E\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/global-versus-localized-generative-adversarial-nets\/\">Global 
Versus Localized Generative Adversarial Nets<\/a>\r\nGuo-Jun Qi, Liheng Zhang, Hao Hu, Marzieh Edraki, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\"><strong>Jingdong Wang<\/strong><\/a>, Xian-Sheng Hua\r\n\r\nTuesday | 12:30-2:50 | Halls C-E\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/high-quality-denoising-dataset-smartphone-cameras\/\">A High-Quality Denoising Dataset for Smartphone Cameras<\/a>\r\nAbdelrahman Abdelhamed, <strong>Stephen Lin<\/strong>, Michael S. Brown\r\n\r\nTuesday | 12:30-2:50 | Halls C-E\r\nAugmenting Crowd-Sourced 3D Reconstructions Using Semantic Detections\r\nTrue Price, <strong>Johannes L. Sch\u00f6nberger<\/strong>, Zhen Wei, <strong>Marc Pollefeys<\/strong>, Jan-Michael Frahm\r\n\r\nWednesday | 10:10-12:30 | Halls C-E\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/relation-networks-object-detection\/\">Relation Networks for Object Detection<\/a>\r\nHan Hu, <strong>Jiayuan Gu<\/strong>, <strong>Zheng Zhang<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jifdai\/\"><strong>Jifeng Dai<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yichenw\/\"><strong>Yichen Wei<\/strong><\/a>\r\n\r\nWednesday | 10:10-12:30 | Halls C-E\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/raynet-learning-volumetric-3d-reconstruction-ray-potentials\/\">RayNet: Learning Volumetric 3D Reconstruction With Ray Potentials<\/a>\r\nDespoina Paschalidou, <strong>Ali Osman Ulusoy<\/strong>, Carolin Schmitt, Luc Van Gool, Andreas Geiger\r\n\r\nWednesday | 10:10-12:30 | Halls C-E\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/automatic-3d-indoor-scene-modeling-single-panorama\/\">Automatic 3D Indoor Scene Modeling From Single Panorama<\/a>\r\nYang Yang, Shi Jin, Ruiyang Liu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sbkang\/\"><strong>Sing Bing Kang<\/strong><\/a>, 
Jingyi Yu\r\n\r\nWednesday | 10:10-12:30 | Halls C-E\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/pseudo-mask-augmented-object-detection\/\">Pseudo Mask Augmented Object Detection<\/a>\r\nXiangyun Zhao, Shuang Liang, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yichenw\/\"><strong>Yichen Wei<\/strong><\/a>\r\n\r\nWednesday | 12:30-2:50 | Halls C-E\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/twofold-siamese-network-real-time-object-tracking\/\">A Twofold Siamese Network for Real-Time Object Tracking<\/a>\r\nAnfeng He, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/cluo\/\"><strong>Chong Luo<\/strong><\/a>, Xinmei Tian, <strong>Wenjun Zeng<\/strong>\r\n\r\nWednesday | 12:30-2:50 | Halls C-E\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/cleannet-transfer-learning-scalable-image-classifier-training-label-noise\/\">CleanNet: Transfer Learning for Scalable Image Classifier Training With Label Noise<\/a>\r\n<strong>Kuang-Huei Lee<\/strong>, Xiaodong He, <strong>Lei Zhang<\/strong>, Linjun Yang\r\n\r\nWednesday | 12:30-2:50 | Halls C-E\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/end-end-convolutional-semantic-embeddings\/\">End-to-End Convolutional Semantic Embeddings<\/a>\r\n<strong>Quanzeng You<\/strong>, <strong>Zhengyou Zhang<\/strong>, Jiebo Luo\r\n\r\nWednesday | 12:30-2:50 | Halls C-E\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/generative-adversarial-learning-towards-fast-weakly-supervised-detection\/\">Generative Adversarial Learning Towards Fast Weakly Supervised Detection<\/a>\r\nYunhan Shen, Rongrong Ji, Shengchuan Zhang, Wangmeng Zuo, <strong>Yan Wang<\/strong>\r\n\r\nWednesday | 4:30-6:30 | Halls C-E\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/bottom-top-attention-image-captioning-visual-question-answering\/\">Bottom-Up and Top-Down Attention for Image Captioning 
and Visual Question Answering<\/a>\r\nPeter Anderson, Xiaodong He, <strong>Chris Buehler<\/strong>, Damien Teney, Mark Johnson, Stephen Gould, <strong>Lei Zhang<\/strong>\r\n\r\nWednesday | 4:30-6:30 | Halls C-E\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/visual-question-generation-dual-task-visual-question-answering\/\">Visual Question Generation as Dual Task of Visual Question Answering<\/a>\r\nYikang Li, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/nanduan\/\"><strong>Nan Duan<\/strong><\/a>, Bolei Zhou, Xiao Chu, Wanli Ouyang, Xiaogang Wang, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mingzhou\/\"><strong>Ming Zhou<\/strong><\/a>\r\n\r\nWednesday | 4:30-6:30 | Halls C-E\r\nSemantic Visual Localization\r\n<strong>Johannes L. Sch\u00f6nberger<\/strong>, <strong>Marc Pollefeys<\/strong>, Andreas Geiger, Torsten Sattler\r\n\r\nWednesday | 4:30-6:30 | Halls C-E\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/stereoscopic-neural-style-transfer\/\">Stereoscopic Neural Style Transfer<\/a>\r\nDongdong Chen, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\"><strong>Lu Yuan<\/strong><\/a>, <strong>Jing Liao<\/strong>, Nenghai Yu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a>\r\n\r\nWednesday | 4:30-6:30 | Halls C-E\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/towards-open-set-identity-preserving-face-synthesis\/\">Towards Open-Set Identity Preserving Face Synthesis<\/a>\r\nJianmin Bao, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/doch\/\"><strong>Dong Chen<\/strong><\/a>, <strong>Fang Wen<\/strong>, Houqiang Li, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a>\r\n\r\nWednesday | 4:30-6:30 | Halls C-E\r\n<a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/weakly-supervised-semantic-segmentation-network-deep-seeded-region-growing\/\">Weakly-Supervised Semantic Segmentation Network With Deep Seeded Region Growing<\/a>\r\nZilong Huang, Xinggang Wang, Jiasi Wang, Wenyu Liu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\"><strong>Jingdong Wang<\/strong><\/a>\r\n\r\nThursday | 10:10-12:30 | Halls D-E\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/towards-high-performance-video-object-detection\/\">Towards High Performance Video Object Detection<\/a>\r\nXizhou Zhu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jifdai\/\"><strong>Jifeng Dai<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\"><strong>Lu Yuan<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yichenw\/\"><strong>Yichen Wei<\/strong><\/a>\r\n\r\nThursday | 10:10-12:30 | Halls D-E\r\nInLoc: Indoor Visual Localization With Dense Matching and View Synthesis\r\nHajime Taira, Masatoshi Okutomi, Torsten Sattler, Mircea Cimpoi, <strong>Marc Pollefeys<\/strong>, Josef Sivic, Tomas Pajdla, Akihiko Torii\r\n\r\nThursday | 10:10-12:30 | Halls D-E\r\nConsensus Maximization for Semantic Region Correspondences\r\nPablo Speciale, Danda P. Paudel, Martin R. 
Oswald, Hayko Riemenschneider, Luc Van Gool, <strong>Marc Pollefeys<\/strong>\r\n\r\nThursday | 10:10-12:30 | Halls D-E\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/arbitrary-style-transfer-deep-feature-reshuffle\/\">Arbitrary Style Transfer With Deep Feature Reshuffle<\/a>\r\nShuyang Gu, Congliang Chen, <strong>Jing Liao<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\"><strong>Lu Yuan<\/strong><\/a>\r\n\r\nThursday | 4:30-6:30 | Halls D-E\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/language-based-image-editing-with-recurrent-attentive-models\/\">Language-Based Image Editing With Recurrent Attentive Models<\/a>\r\nJianbo Chen, <strong>Yelong Shen<\/strong>, <strong>Jianfeng Gao<\/strong>, <strong>Jingjing Liu<\/strong>, <strong>Xiaodong Liu<\/strong>\r\n\r\nThursday | 4:30-6:30 | Halls D-E\r\nBenchmarking 6DOF Outdoor Visual Localization in Changing Conditions\r\nTorsten Sattler, Will Maddern, Carl Toft, Akihiko Torii, Lars Hammarstrand, Erik Stenborg, Daniel Safari, Masatoshi Okutomi, <strong>Marc Pollefeys<\/strong>, Josef Sivic, Fredrik Kahl, Tomas Pajdla\r\n\r\nThursday | 4:30-6:30 | Halls D-E\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/interleaved-structured-sparse-convolutional-neural-networks\/\">Interleaved Structured Sparse Convolutional Neural Networks<\/a>\r\nGuotian Xie, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\"><strong>Jingdong Wang<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tinzhan\/\"><strong>Ting Zhang<\/strong><\/a>, Jianhuang Lai, Richang Hong, Guo-Jun Qi\r\n\r\nThursday | 4:30-6:30 |\u00a0Halls D-E\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/revisiting-deep-intrinsic-image-decompositions\/\">Revisiting Deep Intrinsic Image Decompositions<\/a>\r\nQingnan Fan, <a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jiaoyan\/\"><strong>Jiaolong Yang<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a>, Baoquan Chen, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/davidwip\/\"><strong>David Wipf<\/strong><\/a>"},{"id":3,"name":"Careers","content":"[row]\r\n\r\n[card title=\"Computer Vision Scientist\" url=\"https:\/\/careers.microsoft.com\/us\/en\/job\/411303\/Computer-Vision-Scientist\" ]In Mixed Reality, people\u2014not devices\u2014are at the center of everything we do. We are a growing team of talented engineers and artists putting technology on a human path across all Windows devices, including Microsoft HoloLens, the Internet of Things, phones, tablets, desktops, and Xbox, and the larger world of devices. Human Augmentation via Mixed Reality will give people a better way to work and play effectively in the human and physical world. Come join us in creating this future![\/card]\r\n\r\n[card title=\"Research SDE\" url=\"https:\/\/careers.microsoft.com\/us\/en\/job\/399293\/Research-SDE\" ]The Computer Vision Technology Group is a vital part of the Artificial Intelligence and Research division, which mobilizes research and advanced technology projects by creating and building state-of-the-art AI technology in areas such as computer vision and machine learning. The team is growing, and we are looking for talented people who have a background in research and\/or engineering and who love to develop new technology that can be deployed to millions of users worldwide.[\/card]\r\n\r\n[card title=\"Post Doc Researcher - Deep Learning\" url=\"https:\/\/careers.microsoft.com\/us\/en\/job\/428460\/Post-Doc-Researcher-Deep-Learning\" ]Microsoft Research AI (MSR AI) is composed of researchers, engineers, and postdocs who take a broad perspective on the next generation of intelligent systems. 
We seek exceptional postdoc researchers from all areas of deep learning, reinforcement learning, machine learning, artificial intelligence, and related fields with a passion and demonstrated ability for independent research, including a strong publication record at top international research venues.[\/card]\r\n\r\n[\/row][row]\r\n\r\n[card title=\"Researcher\" url=\"https:\/\/careers.microsoft.com\/us\/en\/job\/412334\/Researcher\" ]The HoloLens team in Cambridge, UK, is building the future for mixed reality. We are passionate about using computer vision to make interaction with our devices and communication with other people more intuitive and personal. The team has a strong track record of shipping ground-breaking technologies in Microsoft products including Kinect and HoloLens. The team is growing, and we are looking for talented computer vision and machine learning researchers and software engineers: people who love to invent and build new stuff that really works and can be deployed to millions of users.[\/card]\r\n\r\n[\/row]"}],"msr_startdate":"2018-06-18","msr_enddate":"2018-06-22","msr_event_time":"","msr_location":"Salt Lake City, Utah","msr_event_link":"http:\/\/cvpr2018.thecvf.com\/attend\/registration","msr_event_recording_link":"","msr_startdate_formatted":"June 18, 2018","msr_register_text":"Watch now","msr_cta_link":"http:\/\/cvpr2018.thecvf.com\/attend\/registration","msr_cta_text":"Watch now","msr_cta_bi_name":"Event Register","featured_image_thumbnail":"<img width=\"960\" height=\"360\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/CVPR2018-2.jpg\" class=\"img-object-cover\" alt=\"CVPR 2018\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/CVPR2018-2.jpg 1920w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/CVPR2018-2-300x113.jpg 300w, 
https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/CVPR2018-2-768x288.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/CVPR2018-2-1024x384.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/CVPR2018-2-1600x600.jpg 1600w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","event_excerpt":"Microsoft is proud to be a diamond sponsor of the Conference on Computer Vision and Pattern Recognition (CVPR) June 18 \u2013 22 in Salt Lake City, Utah. Please visit us at booth 537 to chat with our experts, see demos of our latest research and find out about career opportunities with Microsoft. Program Committee members Marc Pollefeys \u2013 Robust Vision Challenge Organizers Sing Bing Kang, Stephen Lin, Sebastian Nowozin, and Wenjun Zeng \u2013\u00a0NTIRE 2018 Program&hellip;","msr_research_lab":[199560,199561,199565,199571],"related-researchers":[],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-opportunities":[],"related-publications":[464061,609237,609252,609834,609843,609864,609873],"related-videos":[],"related-posts":[490556,490736,490835,491132],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/488849","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":5,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/488849\/revisions"}],"predecessor-version":[{"id":1147106,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/488849\/revisions\/1147106"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/489278"}],"wp:attachment":[{"href":"https:\/
\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=488849"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=488849"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=488849"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=488849"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=488849"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=488849"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=488849"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=488849"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=488849"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}