Microsoft @ ECCV 2018

About

Microsoft is proud to be a Diamond sponsor of the European Conference on Computer Vision (ECCV) in Munich, September 8–14, 2018. Come by our booth to chat with our experts, see demos of our latest research, and find out about career opportunities with Microsoft.

Committee Chairs

Area Chairs

Andrew Fitzgibbon
Sebastian Nowozin

Microsoft Attendees

Alex Hagiopol
Ana Anastasijevic
Andrew Fitzgibbon
Bin Li
Bin Xiao
Chris Aholt
Chunyu Wang
Cuiling Lan
Erroll Wood
Fangyun Wei
Jamie Shotton
Jiaolong Yang
Joseph DeGol
Kuang-Huei Lee
Marc Pollefeys
Mladen Radojevic
Nikola Milosavljevic
Nikolaos Karianakis
Patrick Buehler
Shivkumar Swaminathan
Sudipta Sinha
Tom Cashman
Vukasin Rankovic
Wenjun Zeng
Xudong Liu
Zhirong Wu
Zicheng Liu

Tutorials/Workshops

Saturday AM | Theresianum 606
HoloLens as a tool for computer vision research

Marc Pollefeys, Johannes Schönberger, Andrew Fitzgibbon

Saturday PM | Theresianum 601
Vision for XR

Invited talk: Marc Pollefeys

Sunday AM | N1179
3D Reconstruction Meets Semantics (3DRMS)

Program chair: Marc Pollefeys

Sunday PM | Audimax 0980
360° Perception and Interaction

Invited talk: Marc Pollefeys

Sunday PM | Theresianum 606
Observing and Understanding Hands in Action (HANDS2018)

Invited talk: Andrew Fitzgibbon

Sunday PM | N1090ZG
Women in Computer Vision

Workshop panelist: Andrew Fitzgibbon

Sunday PM | Theresianum 602
1st Person in Context (PIC) Workshop and Challenge

Invited talk: Wenjun Zeng

Sunday All Day | 1200
ApolloScape: Vision-based Navigation for Autonomous Driving

Invited talk and panelist: Marc Pollefeys

Poster Sessions

Monday, September 10, 2018 | 10:00 AM | 1A

From Face Recognition to Models of Identity: A Bayesian Approach to Learning about Unknown Identities from Unsupervised Data

Daniel Castro, Sebastian Nowozin

DeepPhys: Video-Based Physiological Measurement Using Convolutional Attention Networks

Weixuan Chen, Daniel McDuff

Semantic Match Consistency for Long-Term Visual Localization

Carl Toft, Erik Stenborg, Lars Hammarstrand, Lucas Brynte, Marc Pollefeys, Torsten Sattler, Fredrik Kahl


Monday, September 10, 2018 | 4:00 PM | 1B

Stacked Cross Attention for Image-Text Matching

Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, Xiaodong He

Affinity Derivation and Graph Merge for Instance Segmentation

Yiding Liu, Siyu Yang, Bin Li, Wengang Zhou, Ji-Zheng Xu, Houqiang Li, Yan Lu

Online Dictionary Learning for Approximate Archetypal Analysis

Jieru Mei, Chunyu Wang, Wenjun Zeng

VSO: Visual Semantic Odometry

Konstantinos-Nektarios Lianos, Johannes Schönberger, Marc Pollefeys, Torsten Sattler

Improved Structure from Motion Using Fiducial Marker Matching

Joseph DeGol, Timothy Bretl, Derek Hoiem


Tuesday, September 11, 2018 | 10:00 AM | 2A

Semi-supervised FusedGAN for Conditional Image Generation

Navaneeth Bodla, Gang Hua, Rama Chellappa

Integral Human Pose Regression

Xiao Sun, Bin Xiao, Fangyin Wei, Shuang Liang, Yichen Wei

Recurrent Tubelet Proposal and Recognition Networks for Action Detection

Dong Li, Zhaofan Qiu, Qi Dai, Ting Yao, Tao Mei

Reinforced Temporal Attention and Split-Rate Transfer for Depth-Based Person Re-identification

Nikolaos Karianakis, Zicheng Liu, Yinpeng Chen, Stefano Soatto

Simple Baselines for Human Pose Estimation and Tracking

Bin Xiao, Haiping Wu, Yichen Wei


Tuesday, September 11, 2018 | 4:00 PM | 2B

Optimized Quantization for Highly Accurate and Compact DNNs

Dongqing Zhang, Jiaolong Yang, Dongqiangzi Ye, Gang Hua

Improving Embedding Generalization via Scalable Neighborhood Component Analysis

Zhirong Wu, Alexei Efros, Stella Yu


Wednesday, September 12, 2018 | 10:00 AM | 3A

“Factual” or “Emotional”: Stylized Image Captioning with Adaptive Learning and Attention

Tianlang Chen, Zhongping Zhang, Quanzeng You, Chen Fang, Zhaowen Wang, Hailin Jin, Jiebo Luo

Adding Attentiveness to the Neurons in Recurrent Neural Networks

Pengfei Zhang, Jianru Xue, Cuiling Lan, Wenjun Zeng, Zhanning Gao, Nanning Zheng

Deep Directional Statistics: Pose Estimation with Uncertainty Quantification

Sergey Prokudin, Sebastian Nowozin, Peter Gehler

Faces as Lighting Probes via Unsupervised Deep Highlight Extraction

Renjiao Yi, Chenyang Zhu, Ping Tan, Stephen Lin

A Dataset of Flash and Ambient Illumination Pairs from the Crowd

Yagiz Aksoy, Changil Kim, Petr Kellnhofer, Sylvain Paris, Mohamed A. Elghareb, Marc Pollefeys, Wojciech Matusik


Wednesday, September 12, 2018 | 2:30 PM | 3B

Deep Attention Neural Tensor Network for Visual Question Answering

Yalong Bai, Jianlong Fu, Tao Mei

Learning Region Features for Object Detection

Jiayuan Gu, Han Hu, Liwei Wang, Yichen Wei, Jifeng Dai

Video Object Segmentation by Learning Location-Sensitive Embeddings

Hai Ci, Chunyu Wang, Yizhou Wang

Learning Priors for Semantic 3D Reconstruction

Ian Cherabier, Johannes Schönberger, Martin R. Oswald, Marc Pollefeys, Andreas Geiger


Thursday, September 13, 2018 | 10:00 AM | 4A

Exploring Visual Relationship for Image Captioning

Ting Yao, Yingwei Pan, Yehao Li, Tao Mei

Learning to Learn Parameterized Image Operators

Qingnan Fan, Dongdong Chen, Lu Yuan, Gang Hua, Nenghai Yu, Baoquan Chen

Learning to Fuse Proposals from Multiple Scanline Optimizations in Semi-Global Matching

Johannes Schönberger, Sudipta Sinha, Marc Pollefeys

Part-Aligned Bilinear Representations for Person Re-Identification

Yumin Suh, Jingdong Wang, Kyoung Mu Lee


Thursday, September 13, 2018 | 4:00 PM | 4B

Hierarchical Metric Learning and Matching for 2D and 3D Geometric Correspondences

Mohammed Fathy, Quoc-Huy Tran, Zeeshan Zia, Paul Vernaza, Manmohan Chandraker

Learn-to-Score: Efficient 3D Scene Exploration by Predicting View Utility

Benjamin Hepp, Debadeepta Dey, Sudipta Sinha, Ashish Kapoor, Neel Joshi, Otmar Hilliges

AutoLoc: Weakly-supervised Temporal Action Localization in Untrimmed Videos

Zheng Shou, Hang Gao, Lei Zhang, Kazuyuki Miyazawa, Shih-Fu Chang

Career Opportunities

Machine Learning Researcher – Audio, Speech, Computer Vision

From logging you in with face recognition and launching Cortana with a voice command to the exciting possibilities of augmented reality, are you itching to play a part in bringing applications of computer vision to millions? The Microsoft Applied Sciences Group incubates disruptive technologies for Microsoft’s next-gen hardware products and is working on several exciting projects that will shape how computers and other devices perceive the user and the user’s environment.

Mixed Reality and AI Research Scientists and Engineers

In Mixed Reality, people—not devices—are at the center of everything we do. Our tech moves beyond screens and pixels, creating a new reality aimed at bringing us closer together—whether that’s scientists “meeting” on the surface of a virtual Mars or some yet undreamt-of possibility. To get there, we’re incorporating diverse ground-breaking technologies, from the revolutionary Holographic Processing Unit to computer vision, machine learning, human-computer interaction, and more.

Full-time opportunities for Ph.D. students & recent graduates

Software Engineer – Computer Vision

Our Computer Vision teams at Microsoft mobilize research and advanced technology projects by creating and building state-of-the-art AI technology in areas such as computer vision, natural language processing, and machine learning, while driving end-to-end AI experiences in close collaboration with partners across Microsoft Research Labs and other Microsoft product teams.

Internship opportunities for Ph.D. students

Software Engineer – Computer Vision

Applications to these opportunities are considered for all available Ph.D. Computer Vision intern roles, including the ones described below. To be considered for an internship, you need to be enrolled full-time as a student majoring in an applicable field.

Full-time opportunities for students & recent graduates

Software Engineering & Program Management

Software engineers at Microsoft are passionate about building technologies that make the world a better place. At Microsoft, you will collaborate with others to solve problems and build some of the world’s most advanced services and devices. Your efforts on the design, development, and testing of next-generation applications will have an impact on millions of people.