By Marc Pollefeys, Partner Director of Science, HoloLens and Jamie Shotton, Partner Scientist Lead, HoloLens
We are pleased to announce Microsoft’s Platinum sponsorship of the 14th European Conference on Computer Vision (ECCV) in Amsterdam from October 8-16. ECCV is one of the top international conferences on computer vision research. Microsoft researchers, scientists, and engineers will be participating in the discussion with dozens of talks and posters, a workshop co-organized by Zhengyou Zhang on Computer Vision for Audio-Visual Media, and a keynote by Changhu Wang at the first workshop on Visual Analysis of Sketches.
This is a golden age for computer vision. Research breakthroughs are leaving the lab and getting into users’ hands in record time. Computer vision now plays a pivotal role in many advances benefiting society, such as autonomous vehicles, improved biometric security, and medical imaging. But out of all these innovations, one really stands out to us as having the potential to completely upend how we access information and communicate with each other: mixed reality. Spurred by recent developments in SLAM, 3D reconstruction, gesture recognition, and scene understanding, we’re already experiencing it in the form of groundbreaking products including Microsoft HoloLens.
But we’re just at the start of our journey. Many deep research questions and difficult engineering challenges remain if we are to deliver the ultimate promise of mixed reality. And so, to help invent this future, we’ve just announced the formation of a new HoloLens computer vision research team at Microsoft in Cambridge, UK. The team is poised to expand substantially over the coming months, and we’re looking for people who love to build amazing new technology and have the strong blend of research, engineering, and mathematics skills they’ll need to thrive with us.
If you’re attending ECCV, please stop by our booth and talk to us about computer vision at Microsoft, and about opportunities in Cambridge, Redmond, and beyond.
We look forward to meeting you!
In addition to the main plenary sessions, the conference will include the following keynotes, workshops, demonstrations, and exhibits by Microsoft employees.
- Gang Hua, Workshop Chair
- Sebastian Nowozin, Area Chair
- Jingdong Wang, Area Chair
- “Projective Bundle Adjustment from Arbitrary Initialization using the Variable Projection Method” by Je Hyeong Hong, Christopher Zach, Andrew Fitzgibbon and Roberto Cipolla
- “Geometric Neural Phrase Pooling: Modeling the Spatial Co-occurrence of Neurons” by Lingxi Xie, Qi Tian, John Flynn, Jingdong Wang and Alan Yuille
- “Is Faster R-CNN Doing Well for Pedestrian Detection?” by Liliang Zhang, Liang Lin, Xiaodan Liang and Kaiming He
- “Sparse Subspace Clustering” by Yingzhen Yang, Jiashi Feng, Nebojsa Jojic, Jianchao Yang and Thomas Huang
- “MS-Celeb-1M: A Dataset and Benchmark for Large Scale Face Recognition” by Yandong Guo, Lei Zhang, Yuxiao Hu, Xiaodong He and Jianfeng Gao
- “Indoor-Outdoor 3D Reconstruction Alignment” by Andrea Cohen, Johannes Schönberger, Pablo Speciale, Torsten Sattler, Jan-Michael Frahm and Marc Pollefeys
- “Pixelwise View Selection for Unstructured Multi-View Stereo” by Johannes Schönberger, Enliang Zheng, Marc Pollefeys and Jan-Michael Frahm
- “Identity Mappings in Deep Residual Networks” by Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun
- “Supervised Transformer Network for Efficient Face Detection” by Dong Chen, Gang Hua, Fang Wen and Jian Sun
- “Minimal Solvers for Generalized Pose and Scale Estimation from Two Rays and One Point” by Federico Camposeco, Torsten Sattler and Marc Pollefeys
- “Search-based Depth Estimation via Coupled Dictionary Learning with Large-Margin Structure Inference” by Yan Zhang, Rongrong Ji, Xiaopeng Fan, Yan Wang, Feng Guo, Yue Gao and Debin Zhao
- “COCO Attributes: Attributes for People, Animals, and Objects” by Genevieve Patterson and James Hays
- “Instance-sensitive Fully Convolutional Networks” by Jifeng Dai, Kaiming He, Yi Li, Shaoqing Ren and Jian Sun
- “Semantic Reconstruction of Heads” by Fabio Maninchedda, Christian Häne, Bastien Jacquet, Amaël Delaunoy and Marc Pollefeys
- “MeshFlow: Minimum Latency Online Video Stabilization” by Shuaicheng Liu, Ping Tan, Lu Yuan, Jian Sun and Bing Zeng
- “MARS: A Video Benchmark for Large-Scale Person Re-identification” by Liang Zheng, Zhi Bie, Yifan Sun, Jingdong Wang, Chi Su, Shengjin Wang and Qi Tian
- “Angry Crowds: Detecting Violent Events in Videos” by Seyed Sadegh Mohammadi, Alessandro Perina, Hamed Kiani and Vittorio Murino
- “Online Human Action Detection using Joint Classification-Regression Recurrent Neural Networks” by Yanghao Li, Cuiling Lan, Junliang Xing, Wenjun Zeng, Chunfeng Yuan and Jiaying Liu
- “A Deep Learning-based Approach to Progressive Vehicle Re-identification for Urban Surveillance” by Xinchen Liu, Wu Liu, Tao Mei and Huadong Ma
- “Unified Depth Prediction and Intrinsic Image Decomposition from a Single Image via Joint Convolutional Neural Fields” by Seungryong Kim, Kihong Park, Kwanghoon Sohn and Stephen Lin
- “A Symmetry Prior for Convex Variational 3D Reconstruction” by Pablo Speciale, Martin Oswald, Andrea Cohen and Marc Pollefeys
- “Deep Self-Correlation Descriptor for Dense Cross-Modal Correspondence” by Seungryong Kim, Dongbo Min, Stephen Lin and Kwanghoon Sohn