{"id":498755,"date":"2018-08-02T05:54:58","date_gmt":"2018-08-02T12:54:58","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-event&#038;p=498755"},"modified":"2025-08-06T11:56:57","modified_gmt":"2025-08-06T18:56:57","slug":"eccv-2018","status":"publish","type":"msr-event","link":"https:\/\/www.microsoft.com\/en-us\/research\/event\/eccv-2018\/","title":{"rendered":"Microsoft @ ECCV 2018"},"content":{"rendered":"\n\n<p><strong>Venue: <\/strong><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/en.gasteig.de\/\" target=\"_blank\" rel=\"noopener\">GASTEIG Cultural Center<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nRosenheimer Str. 5<br \/>\n81667 Munich<br \/>\nGermany<\/p>\n<p><strong>Website:<\/strong> <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/eccv2018.org\/\" target=\"_blank\" rel=\"noopener\">ECCV 2018<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p>Microsoft is proud to be a Diamond sponsor of the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/eccv2018.org\/\" target=\"_blank\" rel=\"noopener\">European Conference on Computer Vision<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> in Munich September 8, 2018 \u2013 September 14, 2018. Come by our booth to chat with our experts, see demos of our latest research and find out about <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/careers.microsoft.com\/us\/en\/c\/research-jobs\" target=\"_blank\" rel=\"noopener\">career opportunities<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> with Microsoft.<\/p>\n<h2>Committee Chairs<\/h2>\n<h3>Area Chair<\/h3>\n<p style=\"padding-left: 30px\"><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awf\/\">Andrew Fitzgibbon<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/senowozi\/\">Sebastian Nowozin<\/a><\/p>\n<h2>Microsoft Attendees<\/h2>\n<p style=\"padding-left: 30px\">Alex Hagiopol<br \/>\nAna Anastasijevic<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awf\/\">Andrew Fitzgibbon<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/libin\/\">Bin Li<\/a><br \/>\nBin Xiao<br \/>\nChris Aholt<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chnuwa\/\">Chunyu Wang<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/culan\/\">Cuiling Lan<\/a><br \/>\nErroll Wood<br \/>\nFangyun Wei<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jamiesho\/\">Jamie Shotton<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jiaoyan\/\">Jiaolong Yang<\/a><br \/>\nJoseph DeGol<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kualee\/\">Kuang-Huei Lee<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a><br \/>\nMladen Radojevic<br \/>\nNikola Milosavljevic<br \/>\nNikolaos Karianakis<br \/>\nPatrick Buehler<br \/>\nShivkumar Swaminathan<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sudipsin\/\">Sudipta Sinha<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tcashman\/\">Tom Cashman<\/a><br 
\/>\nVukasin Rankovic<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<\/a><br \/>\nXudong Liu<br \/>\nZhirong Wu<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zliu\/\">Zicheng Liu<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<h3>Saturday AM | Theresianum 606<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/docs.microsoft.com\/en-us\/windows\/mixed-reality\/eccv-2018\" target=\"_blank\" rel=\"noopener\">HoloLens as a tool for computer vision research<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\"><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a>, <strong>Johannes Sch\u00f6nberger<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awf\/\"><strong>Andrew Fitzgibbon<\/strong><\/a><\/p>\n<h3>Saturday PM | Theresianum 601<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.gcc.tu-darmstadt.de\/home\/events\/eccv_w_2018__vision_for_xr_\/eccv2018_workshop_vision_for_xr.en.jsp\" target=\"_blank\" rel=\"noopener\">Vision for XR<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Invited talk: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a><\/p>\n<h3>Sunday AM | N1179<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/trimbot2020.webhosting.rug.nl\/events\/3drms\/\" target=\"_blank\" rel=\"noopener\">3D Reconstruction Meets Semantics (3DRMS)<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Program chair: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a><\/p>\n<h3>Sunday PM | Audimax 0980<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/360pi.github.io\/\" target=\"_blank\" rel=\"noopener\">360\u00b0 Perception and Interaction<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Invited talk: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a><\/p>\n<h3>Sunday PM | Theresianum 606<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/sites.google.com\/view\/hands2018\/\" target=\"_blank\" rel=\"noopener\">Observing and Understanding Hands in Action (HANDS2018)<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Invited talk: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awf\/\"><strong>Andrew Fitzgibbon<\/strong><\/a><\/p>\n<h3>Sunday PM | N1090ZG<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/wicvworkshop.github.io\/ECCV2018\/\" target=\"_blank\" rel=\"noopener\">Women in Computer Vision<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Workshop panelist: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awf\/\"><strong>Andrew Fitzgibbon<\/strong><\/a><\/p>\n<h3>Sunday PM | 
Theresianum 602<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/www.picdataset.com\/\" target=\"_blank\" rel=\"noopener\">1st Person in Context (PIC) Workshop and Challenge<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Invited talk: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\"><strong>Wenjun Zeng<\/strong><\/a><\/p>\n<h3>Sunday All Day | 1200<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/apolloscape.auto\/ECCV\/index.html\" target=\"_blank\" rel=\"noopener\">ApolloScape: Vision-based Navigation for Autonomous Driving<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Invited talk and panelist: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<h2>Monday, September 9, 2018 | 10:00 AM | 1A<\/h2>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1807.07872\" target=\"_blank\" rel=\"noopener\">From Face Recognition to Models of Identity: A Bayesian Approach to Learning about Unknown Identities from Unsupervised Data<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Daniel Castro, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/senowozi\/\"><strong>Sebastian Nowozin<\/strong><\/a><\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1805.07888\" target=\"_blank\" rel=\"noopener\">DeepPhys: Video-Based Physiological Measurement Using Convolutional Attention Networks<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Weixuan Chen, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/damcduff\/\"><strong>Daniel McDuff<\/strong><\/a><\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/people.inf.ethz.ch\/sattlert\/publications\/Toft2018ECCV.pdf\" target=\"_blank\" rel=\"noopener\">Semantic Match Consistency for Long-Term Visual Localization<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Carl Toft, Erik Stenborg, Lars Hammarstrand, Lucas Brynte, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a>, Torsten Sattler, Fredrik Kahl<\/p>\n<p>&nbsp;<\/p>\n<h2>Monday, September 9, 2018 | 4:00 PM | 1B<\/h2>\n<h3><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/stacked-cross-attention-for-image-text-matching\/\">Stacked Cross Attention for Image-Text Matching<\/a><\/h3>\n<p style=\"padding-left: 30px\"><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kualee\/\"><strong>Kuang-Huei Lee<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chexi\/\"><strong>Xi Chen<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a>, <strong>Houdong Hu<\/strong>, Xiaodong He<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" 
href=\"http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/html\/Yiding_Liu_Affinity_Derivation_and_ECCV_2018_paper.html\" target=\"_blank\" rel=\"noopener\">Affinity Derivation and Graph Merge for Instance Segmentation<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Yiding Liu, Siyu Yang, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/libin\/\"><strong>Bin Li<\/strong><\/a>, Wengang Zhou, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jzxu\/\"><strong>Ji-Zheng Xu<\/strong><\/a>, Houqiang Li, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yanlu\/\"><strong>Yan Lu<\/strong><\/a><\/p>\n<h3><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/online-dictionary-learning-for-approximate-archetypal-analysis\/\">Online Dictionary Learning for Approximate Archetypal Analysis<\/a><\/h3>\n<p style=\"padding-left: 30px\"><strong>Jieru Mei<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chnuwa\/\"><strong>Chunyu Wang<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\"><strong>Wenjun Zeng<\/strong><\/a><strong><br \/>\n<\/strong><\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/demuc.de\/papers\/lianos2018vso.pdf\" target=\"_blank\" rel=\"noopener\">VSO: Visual Semantic Odometry<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Konstantinos-Nektarios Lianos, <strong>Johannes Sch\u00f6nberger<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a>, Torsten Sattler<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/web.engr.illinois.edu\/~degol2\/pages\/TagSfM_ECCV18.html\" target=\"_blank\" rel=\"noopener\">Improved Structure from Motion Using Fiducial Marker Matching<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\"><strong>Joseph DeGol<\/strong>, Timothy Bretl, Derek Hoiem<\/p>\n<p>&nbsp;<\/p>\n<h2>Tuesday, September 10, 2018 | 10:00 AM | 2A<\/h2>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1801.05551\" target=\"_blank\" rel=\"noopener\">Semi-supervised FusedGAN for Conditional Image Generation<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Navaneeth Bodla, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a>, Rama Chellappa<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1809.06079\" target=\"_blank\" rel=\"noopener\">Integral Human Pose Regression<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\"><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xias\/\"><strong>Xiao Sun<\/strong><\/a>, Bin Xiao, Fangyin Wei, Shuang Liang, Yichen Wei<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/html\/Dong_Li_Recurrent_Tubelet_Proposal_ECCV_2018_paper.html\" target=\"_blank\" rel=\"noopener\">Recurrent Tubelet Proposal and Recognition Networks for Action Detection<span class=\"sr-only\"> (opens in new 
tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Dong Li, Zhaofan Qiu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/qid\/\"><strong>Qi Dai<\/strong><\/a>, Ting Yao, Tao Mei<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/link.springer.com\/chapter\/10.1007\/978-3-030-01228-1_44\" target=\"_blank\" rel=\"noopener\">Reinforced Temporal Attention and Split-Rate Transfer for Depth-Based Person Re-identification<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\"><strong>Nikolaos Karianakis<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zliu\/\"><strong>Zicheng Liu<\/strong><\/a>, <strong>Yinpeng Chen<\/strong>, Stefano Soatto<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1804.06208\" target=\"_blank\" rel=\"noopener\">Simple Baselines for Human Pose Estimation and Tracking<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\"><strong>Bin Xiao<\/strong>, <strong>Haiping Wu<\/strong>, <strong>Yichen Wei<\/strong><\/p>\n<p>&nbsp;<\/p>\n<h2>Tuesday, September 10, 2018 | 4:00 PM | 2B<\/h2>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/papers\/Dongqing_Zhang_Optimized_Quantization_for_ECCV_2018_paper.pdf\" target=\"_blank\" rel=\"noopener\">Optimized Quantization for Highly Accurate and Compact DNNs<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\"><strong>Dongqing Zhang<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jiaoyan\/\"><strong>Jiaolong Yang<\/strong><\/a>, <strong>Dongqiangzi Ye<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a><\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1808.04699\" target=\"_blank\" rel=\"noopener\">Improving Embedding Generalization via Scalable Neighborhood Component Analysis<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\"><strong>Zhirong Wu<\/strong>, Alexei Efros, Stella Yu<\/p>\n<p>&nbsp;<\/p>\n<h2>Wednesday, September 11, 2018 | 10:00 AM | 3A<\/h2>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1807.03871\" target=\"_blank\" rel=\"noopener\">&#8220;Factual&#8221; or &#8220;Emotional&#8221;: Stylized Image Captioning with Adaptive Learning and Attention<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Tianlang Chen, Zhongping Zhang, <strong>Quanzeng You<\/strong>, Chen Fang, Zhaowen Wang, Hailin Jin, Jiebo Luo<\/p>\n<h3><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/adding-attentiveness-to-the-neurons-in-recurrent-neural-networks\/\">Adding Attentiveness to the Neurons in Recurrent Neural Networks<\/a><\/h3>\n<p style=\"padding-left: 30px\">Pengfei Zhang, Jianru Xue, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/culan\/\"><strong>Cuiling Lan<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\"><strong>Wenjun Zeng<\/strong><\/a>, Zhanning Gao, Nanning Zheng<\/p>\n<h3><a 
class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1805.03430\" target=\"_blank\" rel=\"noopener\">Deep Directional Statistics: Pose Estimation with Uncertainty Quantification<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Sergey Prokudin, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/senowozi\/\"><strong>Sebastian Nowozin<\/strong><\/a>, Peter Gehler<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1803.06340\" target=\"_blank\" rel=\"noopener\">Faces as Lighting Probes via Unsupervised Deep Highlight Extraction<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Renjiao Yi, Chenyang Zhu, Ping Tan, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/stevelin\/\"><strong>Stephen Lin<\/strong><\/a><\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/people.inf.ethz.ch\/aksoyy\/flashambient\/\" target=\"_blank\" rel=\"noopener\">A Dataset of Flash and Ambient Illumination Pairs from the Crowd<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Yagiz Aksoy, Changil Kim, Petr Kellnhofer, Sylvain Paris, Mohamed A. Elghareb, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a>, Wojciech Matusik<\/p>\n<p>&nbsp;<\/p>\n<h2>Wednesday, September 11, 2018 | 2:30 PM | 3B<\/h2>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/html\/Yalong_Bai_Deep_Attention_Neural_ECCV_2018_paper.html\" target=\"_blank\" rel=\"noopener\">Deep Attention Neural Tensor Network for Visual Question Answering<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Yalong Bai, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jianf\/\"><strong>Jianlong Fu<\/strong><\/a>, Tao Mei<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1803.07066\" target=\"_blank\" rel=\"noopener\">Learning Region Features for Object Detection<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Jiayuan Gu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hanhu\/\"><strong>Han Hu<\/strong><\/a>, Liwei Wang, <strong>Yichen Wei<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jifdai\/\"><strong>Jifeng Dai<\/strong><\/a><strong><br \/>\n<\/strong><\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/html\/Hai_Ci_Video_Object_Segmentation_ECCV_2018_paper.html\" target=\"_blank\" rel=\"noopener\">Video Object Segmentation by Learning Location-Sensitive Embeddings<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Hai Ci, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chnuwa\/\"><strong>Chunyu Wang<\/strong><\/a>, Yizhou Wang<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/www.cvlibs.net\/publications\/Cherabier2018ECCV.pdf\" target=\"_blank\" 
rel=\"noopener\">Learning Priors for Semantic 3D Reconstruction<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Ian Cherabier, <strong>Johannes Sch\u00f6nberger<\/strong>, Martin R. Oswald, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a>, Andreas Geiger<\/p>\n<p>&nbsp;<\/p>\n<h2>Thursday, September 12, 2018 | 10:00 AM | 4A<\/h2>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1809.07041\" target=\"_blank\" rel=\"noopener\">Exploring Visual Relationship for Image Captioning<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\"><strong>Ting Yao<\/strong>, Yingwei Pan, Yehao Li, Tao Mei<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1807.08186\" target=\"_blank\" rel=\"noopener\">Learning to Learn Parameterized Image Operators<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Qingnan Fan, Dongdong Chen, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\"><strong>Lu Yuan<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a>, Nenghai Yu, Baoquan Chen<\/p>\n<h3><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/learning-to-fuse-proposals-from-multiple-scanline-optimizations-in-semi-global-matching\/\">Learning to Fuse Proposals from Multiple Scanline Optimizations in Semi-Global Matching<\/a><\/h3>\n<p style=\"padding-left: 30px\">Johannes Schoenberger, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sudipsin\/\"><strong>Sudipta Sinha<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a><\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1804.07094\" target=\"_blank\" rel=\"noopener\">Part-Aligned Bilinear Representations for Person Re-Identification<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Yumin Suh,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\"><strong>Jingdong Wang<\/strong><\/a>, Kyoung Mu Lee<\/p>\n<p>&nbsp;<\/p>\n<h2>Thursday, September 12, 2018 | 4:00PM | 4B<\/h2>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1803.07231\" target=\"_blank\" rel=\"noopener\">Hierarchical Metric Learning and Matching for 2D and 3D Geometric Correspondences<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Mohammed Fathy, Quoc-Huy Tran, <strong><strong>Zeeshan Zia<\/strong><\/strong>, Paul Vernaza, Manmohan Chandraker<\/p>\n<h3><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/learn-to-score-efficient-3d-scene-exploration-by-predicting-view-utility\/\">Learn-to-Score: Efficient 3D Scene Exploration by Predicting View Utility<\/a><\/h3>\n<p style=\"padding-left: 30px\">Benjamin Hepp, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dedey\/\"><strong>Debadeepta Dey<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sudipsin\/\"><strong>Sudipta Sinha<\/strong><\/a>, <a 
href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/akapoor\/\"><strong>Ashish Kapoor<\/strong><\/a>, Neel Joshi, Otmar Hilliges<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1807.08333\" target=\"_blank\" rel=\"noopener\">AutoLoc: Weakly-supervised Temporal Action Localization in Untrimmed Videos<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p style=\"padding-left: 30px\">Zheng Shou, Hang Gao, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/leizhang\/\"><strong>Lei Zhang<\/strong><\/a>, Kazuyuki Miyazawa, Shih-Fu Chang<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<h2>Join us in our donation #MSFTResearchGives.<\/h2>\n<h3>Help us choose which organization should receive this donation by voting on our Twitter poll <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/x.com\/MSFTResearch\" target=\"_blank\" rel=\"noopener\">@MSFTResearch.<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p>In lieu of purchasing thousands of giveaway items, we have decided to reduce our environmental footprint and donate to one of the following organizations:<\/p>\n<hr \/>\n<h2><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/code.org\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-494144 size-thumbnail\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/CODE_logo_RGB-x200h--150x150.jpg\" alt=\"Code.org\u00ae is a non-profit dedicated to expanding access to computer science, and increasing participation by women and underrepresented minorities. \" width=\"150\" height=\"150\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/CODE_logo_RGB-x200h--150x150.jpg 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/CODE_logo_RGB-x200h--180x180.jpg 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/CODE_logo_RGB-x200h-.jpg 201w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>Code.org<\/h2>\n<p>Code.org\u00ae is a non-profit dedicated to expanding access to computer science, and increasing participation by women and underrepresented minorities. Their vision is that every student in every school should have the opportunity to learn computer science, just like biology, chemistry or algebra. 
Code.org organizes the annual <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/hourofcode.com\/\" target=\"_blank\" rel=\"noopener\">Hour of Code<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> campaign which has engaged 10% of all students in the world, and provides the leading curriculum for K-12 computer science in the largest school districts in the United States.<\/p>\n<hr \/>\n<h2><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.firstinspires.org\/\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-494147 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/FIRST_Redemp.-Center-Image-x200h-002.png\" alt=\"FIRST (For Inspiration and Recognition of Science and Technology) was founded in 1989 to inspire young people's interest and participation in science and technology\" width=\"165\" height=\"125\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/FIRST_Redemp.-Center-Image-x200h-002.png 264w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/FIRST_Redemp.-Center-Image-x200h-002-80x60.png 80w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/FIRST_Redemp.-Center-Image-x200h-002-240x180.png 240w\" sizes=\"auto, (max-width: 165px) 100vw, 165px\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><em>FIRST<\/em><\/h2>\n<p><i>FIRST<\/i> (<b>F<\/b>or Inspiration and <b>R<\/b>ecognition of <b>S<\/b>cience and <b>T<\/b>echnology) was founded in 1989 to inspire young people&#8217;s interest and participation in science and technology. Based in Manchester, NH, the 501(c)(3) not-for-profit public charity designs accessible, innovative programs that motivate young people to pursue education and career opportunities in science, technology, engineering, and math, while building self-confidence, knowledge, and life skills.<\/p>\n<p><em>FIRST<\/em> is <b>More Than Robots<\/b>. <i>FIRST<\/i> participation is proven to encourage students to pursue education and careers in STEM-related fields, inspire them to become leaders and innovators, and enhance their 21st century work-life skills. Read more about the Impact of <i>FIRST<\/i>.<\/p>\n<p>Learn more at <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/www.firstinspires.org\/\" target=\"_blank\" rel=\"noopener\">www.firstinspires.org<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n<hr \/>\n<h2><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/girlswhocode.com\/\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-494150 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/GWC-logo_2016_-x200h-002.png\" alt=\"Girls Who Code focuses on closing the gender gap in technology. 
\" width=\"205\" height=\"95\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/GWC-logo_2016_-x200h-002.png 432w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/GWC-logo_2016_-x200h-002-300x139.png 300w\" sizes=\"auto, (max-width: 205px) 100vw, 205px\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>Girls Who Code<\/h2>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/girlswhocode.com\/\" target=\"_blank\" rel=\"noopener\">Girls Who Code<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> focuses on closing the gender gap in technology. Through the National Girls Who Code Clubs program, Girls Who Code offers a free after-school program for 6th-12th graders that provides computer science instruction along with a community of supportive peers and role models. With support from Microsoft, Girls Who Code will expand the program in cities and rural communities. The support will enable greater engagement within these communities, support of volunteer instructors, a refresh of curriculum, tools, and program evaluation as well as program enrichment opportunities, such as field trips, guest speakers, and meet-ups.<\/p>\n<hr \/>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Microsoft is proud to be a Diamond sponsor of the European Conference on Computer Vision in Munich September 8, 2018 \u2013 September 14, 2018. Come by our booth to chat with our experts, see demos of our latest research and find out about career opportunities with Microsoft.<\/p>\n","protected":false},"featured_media":498773,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr_startdate":"2018-09-08","msr_enddate":"2018-09-14","msr_location":"Munich, Germany","msr_expirationdate":"","msr_event_recording_link":"","msr_event_link":"https:\/\/eccv2018.org\/attending\/registration\/","msr_event_link_redirect":false,"msr_event_time":"","msr_hide_region":false,"msr_private_event":false,"msr_hide_image_in_river":0,"footnotes":""},"research-area":[13562],"msr-region":[239178],"msr-event-type":[197941],"msr-video-type":[],"msr-locale":[268875],"msr-program-audience":[],"msr-post-option":[],"msr-impact-theme":[],"class_list":["post-498755","msr-event","type-msr-event","status-publish","has-post-thumbnail","hentry","msr-research-area-computer-vision","msr-region-europe","msr-event-type-conferences","msr-locale-en_us"],"msr_about":"<!-- wp:msr\/event-details {\"title\":\"Microsoft @ ECCV 2018\",\"backgroundColor\":\"grey\",\"image\":{\"id\":498773,\"url\":\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/08\/1920x720-header-ECCV2018.jpg\",\"alt\":\"\"}} \/-->\n\n<!-- wp:msr\/content-tabs --><!-- wp:msr\/content-tab {\"title\":\"About\"} --><!-- wp:freeform --><p><strong>Venue: <\/strong><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/en.gasteig.de\/\" target=\"_blank\" rel=\"noopener\">GASTEIG Cultural Center<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><br \/>\nRosenheimer Str. 
5<br \/>\n81667 Munich<br \/>\nGermany<\/p>\n<p><strong>Website:<\/strong> <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/eccv2018.org\/\" target=\"_blank\" rel=\"noopener\">ECCV 2018<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<p>Microsoft is proud to be a Diamond sponsor of the <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/eccv2018.org\/\" target=\"_blank\" rel=\"noopener\">European Conference on Computer Vision<\/a> in Munich September 8, 2018 \u2013 September 14, 2018. Come by our booth to chat with our experts, see demos of our latest research and find out about <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/careers.microsoft.com\/us\/en\/c\/research-jobs\" target=\"_blank\" rel=\"noopener\">career opportunities<\/a> with Microsoft.<\/p>\n<h2>Committee Chairs<\/h2>\n<h3>Area Chair<\/h3>\n<p style=\"padding-left: 30px\"><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awf\/\">Andrew Fitzgibbon<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/senowozi\/\">Sebastian Nowozin<\/a><\/p>\n<h2>Microsoft Attendees<\/h2>\n<p style=\"padding-left: 30px\">Alex Hagiopol<br \/>\nAna Anastasijevic<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awf\/\">Andrew Fitzgibbon<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/libin\/\">Bin Li<\/a><br \/>\nBin Xiao<br \/>\nChris Aholt<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chnuwa\/\">Chunyu Wang<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/culan\/\">Cuiling Lan<\/a><br \/>\nErroll Wood<br \/>\nFangyun Wei<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jamiesho\/\">Jamie Shotton<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jiaoyan\/\">Jiaolong Yang<\/a><br \/>\nJoseph DeGol<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kualee\/\">Kuang-Huei Lee<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a><br \/>\nMladen Radojevic<br \/>\nNikola Milosavljevic<br \/>\nNikolaos Karianakis<br \/>\nPatrick Buehler<br \/>\nShivkumar Swaminathan<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sudipsin\/\">Sudipta Sinha<\/a><br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tcashman\/\">Tom Cashman<\/a><br \/>\nVukasin Rankovic<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<\/a><br \/>\nXudong Liu<br \/>\nZhirong Wu<br \/>\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zliu\/\">Zicheng Liu<\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Tutorials\/Workshops\"} --><!-- wp:freeform --><h3>Saturday AM | Theresianum 606<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/docs.microsoft.com\/en-us\/windows\/mixed-reality\/eccv-2018\" target=\"_blank\" rel=\"noopener\">HoloLens as a tool for computer vision research<\/a><\/h3>\n<p style=\"padding-left: 
<a href=">
30px\"><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a>, <strong>Johannes Sch\u00f6nberger<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awf\/\"><strong>Andrew Fitzgibbon<\/strong><\/a><\/p>\n<h3>Saturday PM | Theresianum 601<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.gcc.tu-darmstadt.de\/home\/events\/eccv_w_2018__vision_for_xr_\/eccv2018_workshop_vision_for_xr.en.jsp\" target=\"_blank\" rel=\"noopener\">Vision for XR<\/a><\/h3>\n<p style=\"padding-left: 30px\">Invited talk: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a><\/p>\n<h3>Sunday AM | N1179<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/trimbot2020.webhosting.rug.nl\/events\/3drms\/\" target=\"_blank\" rel=\"noopener\">3D Reconstruction Meets Semantics (3DRMS)<\/a><\/h3>\n<p style=\"padding-left: 30px\">Program chair: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a><\/p>\n<h3>Sunday PM | Audimax 0980<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/360pi.github.io\/\" target=\"_blank\" rel=\"noopener\">360\u00b0 Perception and Interaction<\/a><\/h3>\n<p style=\"padding-left: 30px\">Invited talk: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a><\/p>\n<h3>Sunday PM | Theresianum 606<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/sites.google.com\/view\/hands2018\/\" target=\"_blank\" rel=\"noopener\">Observing and Understanding Hands in Action (HANDS2018)<\/a><\/h3>\n<p style=\"padding-left: 30px\">Invited talk: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awf\/\"><strong>Andrew Fitzgibbon<\/strong><\/a><\/p>\n<h3>Sunday PM | N1090ZG<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/wicvworkshop.github.io\/ECCV2018\/\" target=\"_blank\" rel=\"noopener\">Women in Computer Vision<\/a><\/h3>\n<p style=\"padding-left: 30px\">Workshop panelist: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awf\/\"><strong>Andrew Fitzgibbon<\/strong><\/a><\/p>\n<h3>Sunday PM | Theresianum 602<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/www.picdataset.com\/\" target=\"_blank\" rel=\"noopener\">1st Person in Context (PIC) Workshop and Challenge<\/a><\/h3>\n<p style=\"padding-left: 30px\">Invited talk: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\"><strong>Wenjun Zeng<\/strong><\/a><\/p>\n<h3>Sunday All Day | 1200<br \/>\n<a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/apolloscape.auto\/ECCV\/index.html\" target=\"_blank\" rel=\"noopener\">ApolloScape: Vision-based Navigation for Autonomous Driving<\/a><\/h3>\n<p style=\"padding-left: 30px\">Invited talk and panelist: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a><\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- 
\/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Poster Sessions\"} --><!-- wp:freeform --><h2>Monday, September 9, 2018 | 10:00 AM | 1A<\/h2>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1807.07872\" target=\"_blank\" rel=\"noopener\">From Face Recognition to Models of Identity: A Bayesian Approach to Learning about Unknown Identities from Unsupervised Data<\/a><\/h3>\n<p style=\"padding-left: 30px\">Daniel Castro, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/senowozi\/\"><strong>Sebastian Nowozin<\/strong><\/a><\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1805.07888\" target=\"_blank\" rel=\"noopener\">DeepPhys: Video-Based Physiological Measurement Using Convolutional Attention Networks<\/a><\/h3>\n<p style=\"padding-left: 30px\">Weixuan Chen, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/damcduff\/\"><strong>Daniel McDuff<\/strong><\/a><\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/people.inf.ethz.ch\/sattlert\/publications\/Toft2018ECCV.pdf\" target=\"_blank\" rel=\"noopener\">Semantic Match Consistency for Long-Term Visual Localization<\/a><\/h3>\n<p style=\"padding-left: 30px\">Carl Toft, Erik Stenborg, Lars Hammarstrand, Lucas Brynte, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a>, Torsten Sattler, Fredrik Kahl<\/p>\n<p>&nbsp;<\/p>\n<h2>Monday, September 9, 2018 | 4:00 PM | 1B<\/h2>\n<h3><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/stacked-cross-attention-for-image-text-matching\/\">Stacked Cross Attention for Image-Text Matching<\/a><\/h3>\n<p style=\"padding-left: 30px\"><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kualee\/\"><strong>Kuang-Huei Lee<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chexi\/\"><strong>Xi Chen<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a>, <strong>Houdong Hu<\/strong>, Xiaodong He<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/html\/Yiding_Liu_Affinity_Derivation_and_ECCV_2018_paper.html\" target=\"_blank\" rel=\"noopener\">Affinity Derivation and Graph Merge for Instance Segmentation<\/a><\/h3>\n<p style=\"padding-left: 30px\">Yiding Liu, Siyu Yang, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/libin\/\"><strong>Bin Li<\/strong><\/a>, Wengang Zhou, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jzxu\/\"><strong>Ji-Zheng Xu<\/strong><\/a>, Houqiang Li, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yanlu\/\"><strong>Yan Lu<\/strong><\/a><\/p>\n<h3><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/online-dictionary-learning-for-approximate-archetypal-analysis\/\">Online Dictionary Learning for Approximate Archetypal Analysis<\/a><\/h3>\n<p style=\"padding-left: 30px\"><strong>Jieru Mei<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chnuwa\/\"><strong>Chunyu Wang<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\"><strong>Wenjun Zeng<\/strong><\/a><strong><br \/>\n<\/strong><\/p>\n<h3><a 
class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/demuc.de\/papers\/lianos2018vso.pdf\" target=\"_blank\" rel=\"noopener\">VSO: Visual Semantic Odometry<\/a><\/h3>\n<p style=\"padding-left: 30px\">Konstantinos-Nektarios Lianos, <strong>Johannes Sch\u00f6nberger<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a>, Torsten Sattler<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/web.engr.illinois.edu\/~degol2\/pages\/TagSfM_ECCV18.html\" target=\"_blank\" rel=\"noopener\">Improved Structure from Motion Using Fiducial Marker Matching<\/a><\/h3>\n<p style=\"padding-left: 30px\"><strong>Joseph DeGol<\/strong>, Timothy Bretl, Derek Hoiem<\/p>\n<p>&nbsp;<\/p>\n<h2>Tuesday, September 10, 2018 | 10:00 AM | 2A<\/h2>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1801.05551\" target=\"_blank\" rel=\"noopener\">Semi-supervised FusedGAN for Conditional Image Generation<\/a><\/h3>\n<p style=\"padding-left: 30px\">Navaneeth Bodla, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a>, Rama Chellappa<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1809.06079\" target=\"_blank\" rel=\"noopener\">Integral Human Pose Regression<\/a><\/h3>\n<p style=\"padding-left: 30px\"><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xias\/\"><strong>Xiao Sun<\/strong><\/a>, Bin Xiao, Fangyin Wei, Shuang Liang, Yichen Wei<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/html\/Dong_Li_Recurrent_Tubelet_Proposal_ECCV_2018_paper.html\" target=\"_blank\" rel=\"noopener\">Recurrent Tubelet Proposal and Recognition Networks for Action Detection<\/a><\/h3>\n<p style=\"padding-left: 30px\">Dong Li, Zhaofan Qiu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/qid\/\"><strong>Qi Dai<\/strong><\/a>, Ting Yao, Tao Mei<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/link.springer.com\/chapter\/10.1007\/978-3-030-01228-1_44\" target=\"_blank\" rel=\"noopener\">Reinforced Temporal Attention and Split-Rate Transfer for Depth-Based Person Re-identification<\/a><\/h3>\n<p style=\"padding-left: 30px\"><strong>Nikolaos Karianakis<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zliu\/\"><strong>Zicheng Liu<\/strong><\/a>, <strong>Yinpeng Chen<\/strong>, Stefano Soatto<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1804.06208\" target=\"_blank\" rel=\"noopener\">Simple Baselines for Human Pose Estimation and Tracking<\/a><\/h3>\n<p style=\"padding-left: 30px\"><strong>Bin Xiao<\/strong>, <strong>Haiping Wu<\/strong>, <strong>Yichen Wei<\/strong><\/p>\n<p>&nbsp;<\/p>\n<h2>Tuesday, September 10, 2018 | 4:00 PM | 2B<\/h2>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/papers\/Dongqing_Zhang_Optimized_Quantization_for_ECCV_2018_paper.pdf\" target=\"_blank\" rel=\"noopener\">Optimized Quantization for 
Highly Accurate and Compact DNNs<\/a><\/h3>\n<p style=\"padding-left: 30px\"><strong>Dongqing Zhang<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jiaoyan\/\"><strong>Jiaolong Yang<\/strong><\/a>, <strong>Dongqiangzi Ye<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a><\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1808.04699\" target=\"_blank\" rel=\"noopener\">Improving Embedding Generalization via Scalable Neighborhood Component Analysis<\/a><\/h3>\n<p style=\"padding-left: 30px\"><strong>Zhirong Wu<\/strong>, Alexei Efros, Stella Yu<\/p>\n<p>&nbsp;<\/p>\n<h2>Wednesday, September 11, 2018 | 10:00 AM | 3A<\/h2>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1807.03871\" target=\"_blank\" rel=\"noopener\">&#8220;Factual&#8221; or &#8220;Emotional&#8221;: Stylized Image Captioning with Adaptive Learning and Attention<\/a><\/h3>\n<p style=\"padding-left: 30px\">Tianlang Chen, Zhongping Zhang, <strong>Quanzeng You<\/strong>, Chen Fang, Zhaowen Wang, Hailin Jin, Jiebo Luo<\/p>\n<h3><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/adding-attentiveness-to-the-neurons-in-recurrent-neural-networks\/\">Adding Attentiveness to the Neurons in Recurrent Neural Networks<\/a><\/h3>\n<p style=\"padding-left: 30px\">Pengfei Zhang, Jianru Xue, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/culan\/\"><strong>Cuiling Lan<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\"><strong>Wenjun Zeng<\/strong><\/a>, Zhanning Gao, Nanning Zheng<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1805.03430\" target=\"_blank\" rel=\"noopener\">Deep Directional Statistics: Pose Estimation with Uncertainty Quantification<\/a><\/h3>\n<p style=\"padding-left: 30px\">Sergey Prokudin, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/senowozi\/\"><strong>Sebastian Nowozin<\/strong><\/a>, Peter Gehler<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1803.06340\" target=\"_blank\" rel=\"noopener\">Faces as Lighting Probes via Unsupervised Deep Highlight Extraction<\/a><\/h3>\n<p style=\"padding-left: 30px\">Renjiao Yi, Chenyang Zhu, Ping Tan, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/stevelin\/\"><strong>Stephen Lin<\/strong><\/a><\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/people.inf.ethz.ch\/aksoyy\/flashambient\/\" target=\"_blank\" rel=\"noopener\">A Dataset of Flash and Ambient Illumination Pairs from the Crowd<\/a><\/h3>\n<p style=\"padding-left: 30px\">Yagiz Aksoy, Changil Kim, Petr Kellnhofer, Sylvain Paris, Mohamed A. 
Elgharib, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a>, Wojciech Matusik<\/p>\n<p>&nbsp;<\/p>\n<h2>Wednesday, September 11, 2018 | 2:30 PM | 3B<\/h2>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/html\/Yalong_Bai_Deep_Attention_Neural_ECCV_2018_paper.html\" target=\"_blank\" rel=\"noopener\">Deep Attention Neural Tensor Network for Visual Question Answering<\/a><\/h3>\n<p style=\"padding-left: 30px\">Yalong Bai, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jianf\/\"><strong>Jianlong Fu<\/strong><\/a>, Tao Mei<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1803.07066\" target=\"_blank\" rel=\"noopener\">Learning Region Features for Object Detection<\/a><\/h3>\n<p style=\"padding-left: 30px\">Jiayuan Gu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hanhu\/\"><strong>Han Hu<\/strong><\/a>, Liwei Wang, <strong>Yichen Wei<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jifdai\/\"><strong>Jifeng Dai<\/strong><\/a><strong><br \/>\n<\/strong><\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/html\/Hai_Ci_Video_Object_Segmentation_ECCV_2018_paper.html\" target=\"_blank\" rel=\"noopener\">Video Object Segmentation by Learning Location-Sensitive Embeddings<\/a><\/h3>\n<p style=\"padding-left: 30px\">Hai Ci, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chnuwa\/\"><strong>Chunyu Wang<\/strong><\/a>, Yizhou Wang<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/www.cvlibs.net\/publications\/Cherabier2018ECCV.pdf\" target=\"_blank\" rel=\"noopener\">Learning Priors for Semantic 3D Reconstruction<\/a><\/h3>\n<p style=\"padding-left: 30px\">Ian Cherabier, <strong>Johannes Sch\u00f6nberger<\/strong>, Martin R. 
Oswald, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a>, Andreas Geiger<\/p>\n<p>&nbsp;<\/p>\n<h2>Thursday, September 12, 2018 | 10:00 AM | 4A<\/h2>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1809.07041\" target=\"_blank\" rel=\"noopener\">Exploring Visual Relationship for Image Captioning<\/a><\/h3>\n<p style=\"padding-left: 30px\"><strong>Ting Yao<\/strong>, Yingwei Pan, Yehao Li, Tao Mei<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1807.08186\" target=\"_blank\" rel=\"noopener\">Learning to Learn Parameterized Image Operators<\/a><\/h3>\n<p style=\"padding-left: 30px\">Qingnan Fan, Dongdong Chen, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\"><strong>Lu Yuan<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a>, Nenghai Yu, Baoquan Chen<\/p>\n<h3><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/learning-to-fuse-proposals-from-multiple-scanline-optimizations-in-semi-global-matching\/\">Learning to Fuse Proposals from Multiple Scanline Optimizations in Semi-Global Matching<\/a><\/h3>\n<p style=\"padding-left: 30px\">Johannes Sch\u00f6nberger, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sudipsin\/\"><strong>Sudipta Sinha<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a><\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1804.07094\" target=\"_blank\" rel=\"noopener\">Part-Aligned Bilinear Representations for Person Re-Identification<\/a><\/h3>\n<p style=\"padding-left: 30px\">Yumin Suh,\u00a0<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\"><strong>Jingdong Wang<\/strong><\/a>, Kyoung Mu Lee<\/p>\n<p>&nbsp;<\/p>\n<h2>Thursday, September 12, 2018 | 4:00 PM | 4B<\/h2>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1803.07231\" target=\"_blank\" rel=\"noopener\">Hierarchical Metric Learning and Matching for 2D and 3D Geometric Correspondences<\/a><\/h3>\n<p style=\"padding-left: 30px\">Mohammed Fathy, Quoc-Huy Tran, <strong>Zeeshan Zia<\/strong>, Paul Vernaza, Manmohan Chandraker<\/p>\n<h3><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/learn-to-score-efficient-3d-scene-exploration-by-predicting-view-utility\/\">Learn-to-Score: Efficient 3D Scene Exploration by Predicting View Utility<\/a><\/h3>\n<p style=\"padding-left: 30px\">Benjamin Hepp, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dedey\/\"><strong>Debadeepta Dey<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sudipsin\/\"><strong>Sudipta Sinha<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/akapoor\/\"><strong>Ashish Kapoor<\/strong><\/a>, Neel Joshi, Otmar Hilliges<\/p>\n<h3><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/arxiv.org\/abs\/1807.08333\" target=\"_blank\" rel=\"noopener\">AutoLoc: Weakly-supervised Temporal Action Localization in Untrimmed Videos<\/a><\/h3>\n<p style=\"padding-left: 30px\">Zheng Shou, 
Hang Gao, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/leizhang\/\"><strong>Lei Zhang<\/strong><\/a>, Kazuyuki Miyazawa, Shih-Fu Chang<\/p>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- wp:msr\/content-tab {\"title\":\"Donate For Good\"} --><!-- wp:freeform --><h2>Join us in our donation #MSFTResearchGives.<\/h2>\n<h3>Help us choose which organization should receive this donation by voting on our Twitter poll <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/x.com\/MSFTResearch\" target=\"_blank\" rel=\"noopener\">@MSFTResearch.<span class=\"sr-only\"> (opens in new tab)<\/span><\/a><\/h3>\n<p>In lieu of purchasing thousands of giveaway items, we have decided to reduce our environmental footprint and donate to one of the following organizations:<\/p>\n<hr \/>\n<h2><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/code.org\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-494144 size-thumbnail\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/CODE_logo_RGB-x200h--150x150.jpg\" alt=\"Code.org\u00ae is a non-profit dedicated to expanding access to computer science, and increasing participation by women and underrepresented minorities. \" width=\"150\" height=\"150\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/CODE_logo_RGB-x200h--150x150.jpg 150w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/CODE_logo_RGB-x200h--180x180.jpg 180w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/CODE_logo_RGB-x200h-.jpg 201w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>Code.org<\/h2>\n<p>Code.org\u00ae is a non-profit dedicated to expanding access to computer science, and increasing participation by women and underrepresented minorities. Their vision is that every student in every school should have the opportunity to learn computer science, just like biology, chemistry or algebra. 
Code.org organizes the annual <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/hourofcode.com\/\" target=\"_blank\" rel=\"noopener\">Hour of Code<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> campaign which has engaged 10% of all students in the world, and provides the leading curriculum for K-12 computer science in the largest school districts in the United States.<\/p>\n<hr \/>\n<h2><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/www.firstinspires.org\/\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-494147 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/FIRST_Redemp.-Center-Image-x200h-002.png\" alt=\"FIRST (For Inspiration and Recognition of Science and Technology) was founded in 1989 to inspire young people's interest and participation in science and technology\" width=\"165\" height=\"125\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/FIRST_Redemp.-Center-Image-x200h-002.png 264w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/FIRST_Redemp.-Center-Image-x200h-002-80x60.png 80w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/FIRST_Redemp.-Center-Image-x200h-002-240x180.png 240w\" sizes=\"auto, (max-width: 165px) 100vw, 165px\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a><em>FIRST<\/em><\/h2>\n<p><i>FIRST<\/i> (<b>F<\/b>or Inspiration and <b>R<\/b>ecognition of <b>S<\/b>cience and <b>T<\/b>echnology) was founded in 1989 to inspire young people&#8217;s interest and participation in science and technology. Based in Manchester, NH, the 501(c)(3) not-for-profit public charity designs accessible, innovative programs that motivate young people to pursue education and career opportunities in science, technology, engineering, and math, while building self-confidence, knowledge, and life skills.<\/p>\n<p><em>FIRST<\/em> is <b>More Than Robots<\/b>. <i>FIRST<\/i> participation is proven to encourage students to pursue education and careers in STEM-related fields, inspire them to become leaders and innovators, and enhance their 21st century work-life skills. Read more about the Impact of <i>FIRST<\/i>.<\/p>\n<p>Learn more at <a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"http:\/\/www.firstinspires.org\/\" target=\"_blank\" rel=\"noopener\">www.firstinspires.org<span class=\"sr-only\"> (opens in new tab)<\/span><\/a>.<\/p>\n<hr \/>\n<h2><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/girlswhocode.com\/\" target=\"_blank\" rel=\"noopener\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-494150 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/GWC-logo_2016_-x200h-002.png\" alt=\"Girls Who Code focuses on closing the gender gap in technology. 
\" width=\"205\" height=\"95\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/GWC-logo_2016_-x200h-002.png 432w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/GWC-logo_2016_-x200h-002-300x139.png 300w\" sizes=\"auto, (max-width: 205px) 100vw, 205px\" \/><span class=\"sr-only\"> (opens in new tab)<\/span><\/a>Girls Who Code<\/h2>\n<p><a class=\"msr-external-link glyph-append glyph-append-open-in-new-tab glyph-append-xsmall\" href=\"https:\/\/girlswhocode.com\/\" target=\"_blank\" rel=\"noopener\">Girls Who Code<span class=\"sr-only\"> (opens in new tab)<\/span><\/a> focuses on closing the gender gap in technology. Through the National Girls Who Code Clubs program, Girls Who Code offers a free after-school program for 6th-12th graders that provides computer science instruction along with a community of supportive peers and role models. With support from Microsoft, Girls Who Code will expand the program in cities and rural communities. The support will enable greater engagement within these communities, support of volunteer instructors, a refresh of curriculum, tools, and program evaluation as well as program enrichment opportunities, such as field trips, guest speakers, and meet-ups.<\/p>\n<hr \/>\n<p><span id=\"label-external-link\" class=\"sr-only\" aria-hidden=\"true\">Opens in a new tab<\/span><\/p>\n<!-- \/wp:freeform --><!-- \/wp:msr\/content-tab --><!-- \/wp:msr\/content-tabs -->","tab-content":[{"id":0,"name":"About","content":"Microsoft is proud to be a Diamond sponsor of the <a href=\"https:\/\/eccv2018.org\/\" target=\"_blank\" rel=\"noopener\">European Conference on Computer Vision<\/a> in Munich September 8, 2018 \u2013 September 14, 2018. Come by our booth to chat with our experts, see demos of our latest research and find out about <a href=\"https:\/\/careers.microsoft.com\/us\/en\/c\/research-jobs\" target=\"_blank\" rel=\"noopener\">career opportunities<\/a> with Microsoft.\r\n<h2>Committee Chairs<\/h2>\r\n<h3>Area Chair<\/h3>\r\n<p style=\"padding-left: 30px\"><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awf\/\">Andrew Fitzgibbon<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/senowozi\/\">Sebastian Nowozin<\/a><\/p>\r\n\r\n<h2>Microsoft Attendees<\/h2>\r\n<p style=\"padding-left: 30px\">Alex Hagiopol\r\nAna Anastasijevic\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awf\/\">Andrew Fitzgibbon<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/libin\/\">Bin Li<\/a>\r\nBin Xiao\r\nChris Aholt\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chnuwa\/\">Chunyu Wang<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/culan\/\">Cuiling Lan<\/a>\r\nErroll Wood\r\nFangyun Wei\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jamiesho\/\">Jamie Shotton<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jiaoyan\/\">Jiaolong Yang<\/a>\r\nJoseph DeGol\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kualee\/\">Kuang-Huei Lee<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\">Marc Pollefeys<\/a>\r\nMladen Radojevic\r\nNikola Milosavljevic\r\nNikolaos Karianakis\r\nPatrick Buehler\r\nShivkumar Swaminathan\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sudipsin\/\">Sudipta Sinha<\/a>\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/tcashman\/\">Tom Cashman<\/a>\r\nVukasin 
Rankovic\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\">Wenjun Zeng<\/a>\r\nXudong Liu\r\nZhirong Wu\r\n<a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zliu\/\">Zicheng Liu<\/a><\/p>"},{"id":1,"name":"Tutorials\/Workshops","content":"<h3>Saturday AM | Theresianum 606\r\n<a href=\"https:\/\/docs.microsoft.com\/en-us\/windows\/mixed-reality\/eccv-2018\" target=\"_blank\" rel=\"noopener\">HoloLens as a tool for computer vision research<\/a><\/h3>\r\n<p style=\"padding-left: 30px\"><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a>, <strong>Johannes Sch\u00f6nberger<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awf\/\"><strong>Andrew Fitzgibbon<\/strong><\/a><\/p>\r\n\r\n<h3>Saturday PM | Theresianum 601\r\n<a href=\"https:\/\/www.gcc.tu-darmstadt.de\/home\/events\/eccv_w_2018__vision_for_xr_\/eccv2018_workshop_vision_for_xr.en.jsp\" target=\"_blank\" rel=\"noopener\">Vision for XR<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Invited talk: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a><\/p>\r\n\r\n<h3>Sunday AM | N1179\r\n<a href=\"http:\/\/trimbot2020.webhosting.rug.nl\/events\/3drms\/\" target=\"_blank\" rel=\"noopener\">3D Reconstruction Meets Semantics (3DRMS)<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Program chair: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a><\/p>\r\n\r\n<h3>Sunday PM | Audimax 0980\r\n<a href=\"https:\/\/360pi.github.io\/\" target=\"_blank\" rel=\"noopener\">360\u00b0 Perception and Interaction<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Invited talk: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a><\/p>\r\n\r\n<h3>Sunday PM | Theresianum 606\r\n<a href=\"https:\/\/sites.google.com\/view\/hands2018\/\" target=\"_blank\" rel=\"noopener\">Observing and Understanding Hands in Action (HANDS2018)<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Invited talk: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awf\/\"><strong>Andrew Fitzgibbon<\/strong><\/a><\/p>\r\n\r\n<h3>Sunday PM | N1090ZG\r\n<a href=\"https:\/\/wicvworkshop.github.io\/ECCV2018\/\" target=\"_blank\" rel=\"noopener\">Women in Computer Vision<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Workshop panelist: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/awf\/\"><strong>Andrew Fitzgibbon<\/strong><\/a><\/p>\r\n\r\n<h3>Sunday PM | Theresianum 602\r\n<a href=\"http:\/\/www.picdataset.com\/\" target=\"_blank\" rel=\"noopener\">1st Person in Context (PIC) Workshop and Challenge<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Invited talk: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\"><strong>Wenjun Zeng<\/strong><\/a><\/p>\r\n\r\n<h3>Sunday All Day | 1200\r\n<a href=\"http:\/\/apolloscape.auto\/ECCV\/index.html\" target=\"_blank\" rel=\"noopener\">ApolloScape: Vision-based Navigation for Autonomous Driving<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Invited talk and panelist: <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a><\/p>"},{"id":2,"name":"Poster Sessions","content":"<h2>Monday, September 10, 2018 | 10:00 AM | 1A<\/h2>\r\n<h3><a href=\"https:\/\/arxiv.org\/abs\/1807.07872\" target=\"_blank\" rel=\"noopener\">From Face Recognition to Models of 
Identity: A Bayesian Approach to Learning about Unknown Identities from Unsupervised Data<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Daniel Castro, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/senowozi\/\"><strong>Sebastian Nowozin<\/strong><\/a><\/p>\r\n\r\n<h3><a href=\"https:\/\/arxiv.org\/abs\/1805.07888\" target=\"_blank\" rel=\"noopener\">DeepPhys: Video-Based Physiological Measurement Using Convolutional Attention Networks<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Weixuan Chen, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/damcduff\/\"><strong>Daniel McDuff<\/strong><\/a><\/p>\r\n\r\n<h3><a href=\"http:\/\/people.inf.ethz.ch\/sattlert\/publications\/Toft2018ECCV.pdf\" target=\"_blank\" rel=\"noopener\">Semantic Match Consistency for Long-Term Visual Localization<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Carl Toft, Erik Stenborg, Lars Hammarstrand, Lucas Brynte, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a>, Torsten Sattler, Fredrik Kahl<\/p>\r\n&nbsp;\r\n<h2>Monday, September 10, 2018 | 4:00 PM | 1B<\/h2>\r\n<h3><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/stacked-cross-attention-for-image-text-matching\/\">Stacked Cross Attention for Image-Text Matching<\/a><\/h3>\r\n<p style=\"padding-left: 30px\"><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/kualee\/\"><strong>Kuang-Huei Lee<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chexi\/\"><strong>Xi Chen<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a>, <strong>Houdong Hu<\/strong>, Xiaodong He<\/p>\r\n\r\n<h3><a href=\"http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/html\/Yiding_Liu_Affinity_Derivation_and_ECCV_2018_paper.html\" target=\"_blank\" rel=\"noopener\">Affinity Derivation and Graph Merge for Instance Segmentation<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Yiding Liu, Siyu Yang, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/libin\/\"><strong>Bin Li<\/strong><\/a>, Wengang Zhou, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jzxu\/\"><strong>Jizheng Xu<\/strong><\/a>, Houqiang Li, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/yanlu\/\"><strong>Yan Lu<\/strong><\/a><\/p>\r\n\r\n<h3><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/online-dictionary-learning-for-approximate-archetypal-analysis\/\">Online Dictionary Learning for Approximate Archetypal Analysis<\/a><\/h3>\r\n<p style=\"padding-left: 30px\"><strong>Jieru Mei<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chnuwa\/\"><strong>Chunyu Wang<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\"><strong>Wenjun Zeng<\/strong><\/a><\/p>\r\n\r\n<h3><a href=\"https:\/\/demuc.de\/papers\/lianos2018vso.pdf\" target=\"_blank\" rel=\"noopener\">VSO: Visual Semantic Odometry<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Konstantinos-Nektarios Lianos, <strong>Johannes Sch\u00f6nberger<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a>, Torsten Sattler<\/p>\r\n\r\n<h3><a href=\"http:\/\/web.engr.illinois.edu\/~degol2\/pages\/TagSfM_ECCV18.html\" target=\"_blank\" rel=\"noopener\">Improved Structure from Motion Using Fiducial Marker Matching<\/a><\/h3>\r\n<p style=\"padding-left: 
<strong>Joseph DeGol">
30px\"><strong>Joseph DeGol<\/strong>, Timothy Bretl, Derek Hoiem<\/p>\r\n&nbsp;\r\n<h2>Tuesday, September 11, 2018 | 10:00 AM | 2A<\/h2>\r\n<h3><a href=\"https:\/\/arxiv.org\/abs\/1801.05551\" target=\"_blank\" rel=\"noopener\">Semi-supervised FusedGAN for Conditional Image Generation<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Navaneeth Bodla, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a>, Rama Chellappa<\/p>\r\n\r\n<h3><a href=\"https:\/\/arxiv.org\/abs\/1809.06079\" target=\"_blank\" rel=\"noopener\">Integral Human Pose Regression<\/a><\/h3>\r\n<p style=\"padding-left: 30px\"><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/xias\/\"><strong>Xiao Sun<\/strong><\/a>, Bin Xiao, Fangyun Wei, Shuang Liang, Yichen Wei<\/p>\r\n\r\n<h3><a href=\"http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/html\/Dong_Li_Recurrent_Tubelet_Proposal_ECCV_2018_paper.html\" target=\"_blank\" rel=\"noopener\">Recurrent Tubelet Proposal and Recognition Networks for Action Detection<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Dong Li, Zhaofan Qiu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/qid\/\"><strong>Qi Dai<\/strong><\/a>, Ting Yao, Tao Mei<\/p>\r\n\r\n<h3><a href=\"https:\/\/link.springer.com\/chapter\/10.1007\/978-3-030-01228-1_44\" target=\"_blank\" rel=\"noopener\">Reinforced Temporal Attention and Split-Rate Transfer for Depth-Based Person Re-identification<\/a><\/h3>\r\n<p style=\"padding-left: 30px\"><strong>Nikolaos Karianakis<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/zliu\/\"><strong>Zicheng Liu<\/strong><\/a>, <strong>Yinpeng Chen<\/strong>, Stefano Soatto<\/p>\r\n\r\n<h3><a href=\"https:\/\/arxiv.org\/abs\/1804.06208\" target=\"_blank\" rel=\"noopener\">Simple Baselines for Human Pose Estimation and Tracking<\/a><\/h3>\r\n<p style=\"padding-left: 30px\"><strong>Bin Xiao<\/strong>, <strong>Haiping Wu<\/strong>, <strong>Yichen Wei<\/strong><\/p>\r\n&nbsp;\r\n<h2>Tuesday, September 11, 2018 | 4:00 PM | 2B<\/h2>\r\n<h3><a href=\"http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/papers\/Dongqing_Zhang_Optimized_Quantization_for_ECCV_2018_paper.pdf\" target=\"_blank\" rel=\"noopener\">Optimized Quantization for Highly Accurate and Compact DNNs<\/a><\/h3>\r\n<p style=\"padding-left: 30px\"><strong>Dongqing Zhang<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jiaoyan\/\"><strong>Jiaolong Yang<\/strong><\/a>, <strong>Dongqiangzi Ye<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a><\/p>\r\n\r\n<h3><a href=\"https:\/\/arxiv.org\/abs\/1808.04699\" target=\"_blank\" rel=\"noopener\">Improving Embedding Generalization via Scalable Neighborhood Component Analysis<\/a><\/h3>\r\n<p style=\"padding-left: 30px\"><strong>Zhirong Wu<\/strong>, Alexei Efros, Stella Yu<\/p>\r\n&nbsp;\r\n<h2>Wednesday, September 12, 2018 | 10:00 AM | 3A<\/h2>\r\n<h3><a href=\"https:\/\/arxiv.org\/abs\/1807.03871\" target=\"_blank\" rel=\"noopener\">\"Factual\" or \"Emotional\": Stylized Image Captioning with Adaptive Learning and Attention<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Tianlang Chen, Zhongping Zhang, <strong>Quanzeng You<\/strong>, Chen Fang, Zhaowen Wang, Hailin Jin, Jiebo Luo<\/p>\r\n\r\n<h3><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/adding-attentiveness-to-the-neurons-in-recurrent-neural-networks\/\">Adding Attentiveness to the Neurons in Recurrent Neural 
Networks<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Pengfei Zhang, Jianru Xue, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/culan\/\"><strong>Cuiling Lan<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/wezeng\/\"><strong>Wenjun Zeng<\/strong><\/a>, Zhanning Gao, Nanning Zheng<\/p>\r\n\r\n<h3><a href=\"https:\/\/arxiv.org\/abs\/1805.03430\" target=\"_blank\" rel=\"noopener\">Deep Directional Statistics: Pose Estimation with Uncertainty Quantification<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Sergey Prokudin, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/senowozi\/\"><strong>Sebastian Nowozin<\/strong><\/a>, Peter Gehler<\/p>\r\n\r\n<h3><a href=\"https:\/\/arxiv.org\/abs\/1803.06340\" target=\"_blank\" rel=\"noopener\">Faces as Lighting Probes via Unsupervised Deep Highlight Extraction<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Renjiao Yi, Chenyang Zhu, Ping Tan, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/stevelin\/\"><strong>Stephen Lin<\/strong><\/a><\/p>\r\n\r\n<h3><a href=\"http:\/\/people.inf.ethz.ch\/aksoyy\/flashambient\/\" target=\"_blank\" rel=\"noopener\">A Dataset of Flash and Ambient Illumination Pairs from the Crowd<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Yagiz Aksoy, Changil Kim, Petr Kellnhofer, Sylvain Paris, Mohamed A. Elgharib, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a>, Wojciech Matusik<\/p>\r\n&nbsp;\r\n<h2>Wednesday, September 12, 2018 | 2:30 PM | 3B<\/h2>\r\n<h3><a href=\"http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/html\/Yalong_Bai_Deep_Attention_Neural_ECCV_2018_paper.html\" target=\"_blank\" rel=\"noopener\">Deep Attention Neural Tensor Network for Visual Question Answering<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Yalong Bai, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jianf\/\"><strong>Jianlong Fu<\/strong><\/a>, Tao Mei<\/p>\r\n\r\n<h3><a href=\"https:\/\/arxiv.org\/abs\/1803.07066\" target=\"_blank\" rel=\"noopener\">Learning Region Features for Object Detection<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Jiayuan Gu, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/hanhu\/\"><strong>Han Hu<\/strong><\/a>, Liwei Wang, <strong>Yichen Wei<\/strong>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jifdai\/\"><strong>Jifeng Dai<\/strong><\/a><\/p>\r\n\r\n<h3><a href=\"http:\/\/openaccess.thecvf.com\/content_ECCV_2018\/html\/Hai_Ci_Video_Object_Segmentation_ECCV_2018_paper.html\" target=\"_blank\" rel=\"noopener\">Video Object Segmentation by Learning Location-Sensitive Embeddings<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Hai Ci, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/chnuwa\/\"><strong>Chunyu Wang<\/strong><\/a>, Yizhou Wang<\/p>\r\n\r\n<h3><a href=\"http:\/\/www.cvlibs.net\/publications\/Cherabier2018ECCV.pdf\" target=\"_blank\" rel=\"noopener\">Learning Priors for Semantic 3D Reconstruction<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Ian Cherabier, <strong>Johannes Sch\u00f6nberger<\/strong>, Martin R. 
Oswald, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a>, Andreas Geiger<\/p>\r\n&nbsp;\r\n<h2>Thursday, September 13, 2018 | 10:00 AM | 4A<\/h2>\r\n<h3><a href=\"https:\/\/arxiv.org\/abs\/1809.07041\" target=\"_blank\" rel=\"noopener\">Exploring Visual Relationship for Image Captioning<\/a><\/h3>\r\n<p style=\"padding-left: 30px\"><strong>Ting Yao<\/strong>, Yingwei Pan, Yehao Li, Tao Mei<\/p>\r\n\r\n<h3><a href=\"https:\/\/arxiv.org\/abs\/1807.08186\" target=\"_blank\" rel=\"noopener\">Learning to Learn Parameterized Image Operators<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Qingnan Fan, Dongdong Chen, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/luyuan\/\"><strong>Lu Yuan<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/ganghua\/\"><strong>Gang Hua<\/strong><\/a>, Nenghai Yu, Baoquan Chen<\/p>\r\n\r\n<h3><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/learning-to-fuse-proposals-from-multiple-scanline-optimizations-in-semi-global-matching\/\">Learning to Fuse Proposals from Multiple Scanline Optimizations in Semi-Global Matching<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Johannes Sch\u00f6nberger, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sudipsin\/\"><strong>Sudipta Sinha<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/mapoll\/\"><strong>Marc Pollefeys<\/strong><\/a><\/p>\r\n\r\n<h3><a href=\"https:\/\/arxiv.org\/abs\/1804.07094\" target=\"_blank\" rel=\"noopener\">Part-Aligned Bilinear Representations for Person Re-Identification<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Yumin Suh, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/jingdw\/\"><strong>Jingdong Wang<\/strong><\/a>, Kyoung Mu Lee<\/p>\r\n&nbsp;\r\n<h2>Thursday, September 13, 2018 | 4:00 PM | 4B<\/h2>\r\n<h3><a href=\"https:\/\/arxiv.org\/abs\/1803.07231\" target=\"_blank\" rel=\"noopener\">Hierarchical Metric Learning and Matching for 2D and 3D Geometric Correspondences<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Mohammed Fathy, Quoc-Huy Tran, <strong>Zeeshan Zia<\/strong>, Paul Vernaza, Manmohan Chandraker<\/p>\r\n\r\n<h3><a href=\"https:\/\/www.microsoft.com\/en-us\/research\/publication\/learn-to-score-efficient-3d-scene-exploration-by-predicting-view-utility\/\">Learn-to-Score: Efficient 3D Scene Exploration by Predicting View Utility<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Benjamin Hepp, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/dedey\/\"><strong>Debadeepta Dey<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/sudipsin\/\"><strong>Sudipta Sinha<\/strong><\/a>, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/akapoor\/\"><strong>Ashish Kapoor<\/strong><\/a>, Neel Joshi, Otmar Hilliges<\/p>\r\n\r\n<h3><a href=\"https:\/\/arxiv.org\/abs\/1807.08333\" target=\"_blank\" rel=\"noopener\">AutoLoc: Weakly-supervised Temporal Action Localization in Untrimmed Videos<\/a><\/h3>\r\n<p style=\"padding-left: 30px\">Zheng Shou, Hang Gao, <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/people\/leizhang\/\"><strong>Lei Zhang<\/strong><\/a>, Kazuyuki Miyazawa, Shih-Fu Chang<\/p>"},{"id":3,"name":"Donate For Good","content":"<h2>Join us in our donation campaign, #MSFTResearchGives.<\/h2>\r\n<h3>Help us choose which organization should receive this donation by voting on our Twitter poll <a href=\"https:\/\/x.com\/MSFTResearch\" 
target=\"_blank\" rel=\"noopener\">@MSFTResearch.<\/a><\/h3>\r\nIn lieu of purchasing thousands of giveaway items, we have decided to reduce our environmental footprint and donate to one of the following organizations:\r\n\r\n<hr \/>\r\n\r\n<h2><a href=\"http:\/\/code.org\" target=\"_blank\" rel=\"noopener\"><img class=\"alignleft wp-image-494144 size-thumbnail\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/CODE_logo_RGB-x200h--150x150.jpg\" alt=\"Code.org\u00ae is a non-profit dedicated to expanding access to computer science, and increasing participation by women and underrepresented minorities. \" width=\"150\" height=\"150\" \/><\/a>Code.org<\/h2>\r\nCode.org\u00ae is a non-profit dedicated to expanding access to computer science, and increasing participation by women and underrepresented minorities. Their vision is that every student in every school should have the opportunity to learn computer science, just like biology, chemistry or algebra. Code.org organizes the annual <a href=\"http:\/\/hourofcode.com\/\" target=\"_blank\" rel=\"noopener\">Hour of Code<\/a> campaign which has engaged 10% of all students in the world, and provides the leading curriculum for K-12 computer science in the largest school districts in the United States.\r\n\r\n<hr \/>\r\n\r\n<h2><a href=\"https:\/\/www.firstinspires.org\/\" target=\"_blank\" rel=\"noopener\"><img class=\"wp-image-494147 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/FIRST_Redemp.-Center-Image-x200h-002.png\" alt=\"FIRST (For Inspiration and Recognition of Science and Technology) was founded in 1989 to inspire young people's interest and participation in science and technology\" width=\"165\" height=\"125\" \/><\/a><em>FIRST<\/em><\/h2>\r\n<i>FIRST<\/i> (<b>F<\/b>or Inspiration and <b>R<\/b>ecognition of <b>S<\/b>cience and <b>T<\/b>echnology) was founded in 1989 to inspire young people's interest and participation in science and technology. Based in Manchester, NH, the 501(c)(3) not-for-profit public charity designs accessible, innovative programs that motivate young people to pursue education and career opportunities in science, technology, engineering, and math, while building self-confidence, knowledge, and life skills.\r\n\r\n<em>FIRST<\/em> is <b>More Than Robots<\/b>. <i>FIRST<\/i> participation is proven to encourage students to pursue education and careers in STEM-related fields, inspire them to become leaders and innovators, and enhance their 21st century work-life skills. Read more about the Impact of <i>FIRST<\/i>.\r\n\r\nLearn more at <a href=\"http:\/\/www.firstinspires.org\/\" target=\"_blank\" rel=\"noopener\">www.firstinspires.org<\/a>.\r\n\r\n<hr \/>\r\n\r\n<h2><a href=\"https:\/\/girlswhocode.com\/\" target=\"_blank\" rel=\"noopener\"><img class=\"wp-image-494150 alignleft\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/06\/GWC-logo_2016_-x200h-002.png\" alt=\"Girls Who Code focuses on closing the gender gap in technology. \" width=\"205\" height=\"95\" \/><\/a>Girls Who Code<\/h2>\r\n<a href=\"https:\/\/girlswhocode.com\/\" target=\"_blank\" rel=\"noopener\">Girls Who Code<\/a> focuses on closing the gender gap in technology. Through the National Girls Who Code Clubs program, Girls Who Code offers a free after-school program for 6th-12th graders that provides computer science instruction along with a community of supportive peers and role models. 
With support from Microsoft, Girls Who Code will expand the program in cities and rural communities. The support will enable greater engagement within these communities, support of volunteer instructors, a refresh of curriculum, tools, and program evaluation as well as program enrichment opportunities, such as field trips, guest speakers, and meet-ups.\r\n\r\n<hr \/>"}],"msr_startdate":"2018-09-08","msr_enddate":"2018-09-14","msr_event_time":"","msr_location":"Munich, Germany","msr_event_link":"https:\/\/eccv2018.org\/attending\/registration\/","msr_event_recording_link":"","msr_startdate_formatted":"September 8, 2018","msr_register_text":"Watch now","msr_cta_link":"https:\/\/eccv2018.org\/attending\/registration\/","msr_cta_text":"Watch now","msr_cta_bi_name":"Event Register","featured_image_thumbnail":"<img width=\"960\" height=\"360\" src=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/08\/1920x720-header-ECCV2018.jpg\" class=\"img-object-cover\" alt=\"selective focus front eyeglasses on table with blur office supplies, vintage light tone.\" decoding=\"async\" loading=\"lazy\" srcset=\"https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/08\/1920x720-header-ECCV2018.jpg 4272w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/08\/1920x720-header-ECCV2018-300x113.jpg 300w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/08\/1920x720-header-ECCV2018-768x288.jpg 768w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/08\/1920x720-header-ECCV2018-1024x384.jpg 1024w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/08\/1920x720-header-ECCV2018-1920x720.jpg 1920w, https:\/\/www.microsoft.com\/en-us\/research\/wp-content\/uploads\/2018\/08\/1920x720-header-ECCV2018-1600x600.jpg 1600w\" sizes=\"auto, (max-width: 960px) 100vw, 960px\" \/>","event_excerpt":"Microsoft is proud to be a Diamond sponsor of the European Conference on Computer Vision in Munich September 8, 2018 \u2013 September 14, 2018. 
Come by our booth to chat with our experts, see demos of our latest research and find out about career opportunities with Microsoft.","msr_research_lab":[199560,199561,199565],"related-researchers":[],"msr_impact_theme":[],"related-academic-programs":[],"related-groups":[],"related-projects":[],"related-opportunities":[],"related-publications":[633438,633447,707086,807619],"related-videos":[],"related-posts":[504137],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/498755","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-event"}],"version-history":[{"count":3,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/498755\/revisions"}],"predecessor-version":[{"id":1147088,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event\/498755\/revisions\/1147088"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media\/498773"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=498755"}],"wp:term":[{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=498755"},{"taxonomy":"msr-region","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-region?post=498755"},{"taxonomy":"msr-event-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-event-type?post=498755"},{"taxonomy":"msr-video-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-video-type?post=498755"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=498755"},{"taxonomy":"msr-program-audience","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-program-audience?post=498755"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=498755"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=498755"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}