
Andrew Fitzgibbon

Partner Scientist, HoloLens


Andrew Fitzgibbon is a scientist with HoloLens at Microsoft, Cambridge, UK. He is best known for his work on 3D vision, having been a core contributor to the Emmy-award-winning 3D camera tracker “boujou” and Kinect for Xbox 360, but his interests are broad, spanning computer vision, graphics, machine learning, and occasionally a little neuroscience.

He has published numerous highly-cited papers, and received many awards for his work, including ten “best paper” prizes at various venues, the Silver medal of the Royal Academy of Engineering, and the BCS Roger Needham award. He is a fellow of the Royal Academy of Engineering, the British Computer Society, and the International Association for Pattern Recognition. Before joining Microsoft in 2005, he was a Royal Society University Research Fellow at Oxford University, having previously studied at Edinburgh University, Heriot-Watt University, and University College, Cork.

Other sites:

My publications on DBLP, Google Scholar
AWF’s Utility Library: http://awful.codeplex.com
Various github repositories: https://github.com/awf
My personal page: http://www.fitzgibbon.ie


Fully Articulated Hand Tracking

Established: October 2, 2014

We present a new real-time articulated hand tracker which enables new possibilities for human-computer interaction (HCI). Our system accurately reconstructs complex hand poses across a variety of subjects using only a single depth camera. It is also highly robust, continually recovering from tracking failures. The most distinctive aspect of our tracker, however, is its flexibility in terms of camera placement and operating range.


Zootracer

Established: February 12, 2014

Use consumer video equipment to trace animal movement. Zootracer is software developed at Microsoft Research that accurately tracks multiple, unmarked, interacting individuals in arbitrary video footage. Because the software is independent of the recording device, the user may provide recordings from any habitat type. Designed to cope robustly with variations in lighting, camera movement, and changes in object appearance, Zootracer represents a step forward in facilitating the collection and analysis of behaviour…

RGB-D Dataset 7-Scenes

Established: January 1, 2013

The 7-Scenes dataset is a collection of tracked RGB-D camera frames. The dataset may be used to evaluate methods for applications such as dense tracking and mapping and camera relocalization.

Overview

All scenes were recorded with a handheld Kinect RGB-D camera at 640x480 resolution. We use an implementation of the KinectFusion system to obtain the 'ground truth' camera tracks and a dense 3D model. Several sequences were recorded per scene…
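Relocalization methods evaluated on such data are typically scored by comparing each estimated camera pose against the ground-truth track. A minimal sketch of that comparison (the 4x4 camera-to-world matrices and the synthetic example are illustrative assumptions, not the dataset's official evaluation script):

```python
import numpy as np

def pose_error(T_gt, T_est):
    """Translation error (input units) and rotation error (degrees)
    between two 4x4 camera-to-world pose matrices."""
    # Translation error: Euclidean distance between camera centres.
    t_err = np.linalg.norm(T_gt[:3, 3] - T_est[:3, 3])
    # Rotation error: angle of the relative rotation R_gt^T R_est.
    R_rel = T_gt[:3, :3].T @ T_est[:3, :3]
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    r_err = np.degrees(np.arccos(cos_angle))
    return t_err, r_err

# Identity vs. a pose translated 5 cm and rotated 10 degrees about z.
theta = np.radians(10.0)
T = np.eye(4)
T[:3, :3] = [[np.cos(theta), -np.sin(theta), 0],
             [np.sin(theta),  np.cos(theta), 0],
             [0, 0, 1]]
T[:3, 3] = [0.05, 0.0, 0.0]
t_err, r_err = pose_error(np.eye(4), T)  # ~0.05 m, ~10 degrees
```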

KinÊtre: Animating the World with your Body

Established: July 25, 2012

KinÊtre lets anyone create playful 3D animations. Imagine you are asked to produce a 3D animation of a demonic armchair terrorizing an innocent desk lamp. You may think about model rigging, skeleton deformation, and keyframing. Depending on your experience, you might imagine hours to days at the controls of Maya or Blender. But even if you have absolutely no computer graphics experience, it can be so much easier: grab a nearby chair and desk lamp,…

KinectFusion Project Page

Established: August 9, 2011

This project investigates techniques to track the 6DOF position of handheld depth-sensing cameras, such as Kinect, as they move through space, and to perform high-quality 3D surface reconstruction for interaction. Other collaborators (missing from the list below): Richard Newcombe (Imperial College London); David Kim (Newcastle University & Microsoft Research); Andy Davison (Imperial College London)
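At the heart of KinectFusion-style reconstruction is fusing each depth frame into a truncated signed distance function (TSDF) volume by a weighted running average. A simplified CPU sketch of one fusion step (the array layout, intrinsics, and truncation band below are illustrative assumptions; the real system runs this per-voxel on the GPU):

```python
import numpy as np

def fuse_depth(tsdf, weights, depth, K, T_cam, origin, voxel_size, trunc=0.2):
    """One simplified KinectFusion-style TSDF fusion step.
    tsdf, weights: (X, Y, Z) float arrays, updated in place.
    depth: (H, W) depth image in metres; K: 3x3 intrinsics;
    T_cam: 4x4 world-to-camera transform."""
    X, Y, Z = tsdf.shape
    ii, jj, kk = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z),
                             indexing="ij")
    # World-space voxel centres.
    pts = origin + (np.stack([ii, jj, kk], axis=-1) + 0.5) * voxel_size
    # Move into the camera frame.
    cam = pts @ T_cam[:3, :3].T + T_cam[:3, 3]
    z = cam[..., 2]
    zs = np.where(z > 1e-6, z, 1.0)          # avoid divide-by-zero
    # Pinhole projection to pixel coordinates.
    u = np.round(K[0, 0] * cam[..., 0] / zs + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * cam[..., 1] / zs + K[1, 2]).astype(int)
    H, W = depth.shape
    ok = (z > 1e-6) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.where(ok, depth[v.clip(0, H - 1), u.clip(0, W - 1)], 0.0)
    # Signed distance along the viewing ray, truncated and normalised.
    sdf = np.clip(d - z, -trunc, trunc) / trunc
    upd = ok & (d > 0) & (d - z >= -trunc)   # skip voxels far behind surface
    # Weighted running average with per-frame weight 1.
    tsdf[upd] = (tsdf[upd] * weights[upd] + sdf[upd]) / (weights[upd] + 1)
    weights[upd] += 1

# Synthetic check: camera at the origin looking along +z at a wall 1 m away.
K = np.array([[100.0, 0, 32], [0, 100.0, 24], [0, 0, 1]])
depth = np.full((48, 64), 1.0)
tsdf = np.zeros((1, 1, 20))
weights = np.zeros((1, 1, 20))
fuse_depth(tsdf, weights, depth, K, np.eye(4),
           np.array([-0.05, -0.05, 0.0]), 0.1)
# tsdf changes sign near voxel index 10, i.e. at depth 1 m.
```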

Human Pose Estimation for Kinect

Established: January 25, 2011

Kinect for Xbox 360 and Windows makes you the controller by fusing 3D imaging hardware with markerless human-motion-capture software. Our group investigates such software, mixing computer vision, graphics, and machine learning techniques to build algorithms that can learn to recognize human poses quickly and reliably.
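The published approach behind this work (Shotton et al., CVPR 2011) classifies each depth pixel into a body part using decision forests over simple depth-comparison features. A minimal sketch of one such feature (the probe offsets and background handling here are illustrative assumptions):

```python
import numpy as np

def depth_feature(depth, x, u, v, background=1e6):
    """Depth-comparison feature f(x) = d(x + u/d(x)) - d(x + v/d(x)).
    Offsets u, v (in metre-pixels) are scaled by 1/d(x) so the probe
    pattern shrinks with distance, making the feature depth-invariant.
    Probes falling outside the image read as a large 'background' depth."""
    def probe(offset):
        p = np.round(x + np.asarray(offset) / depth[tuple(x)]).astype(int)
        inside = (0 <= p[0] < depth.shape[0]) and (0 <= p[1] < depth.shape[1])
        return depth[tuple(p)] if inside else background
    return probe(u) - probe(v)

# Toy scene: a small object at 2 m against empty background.
depth = np.full((10, 10), 1e6)
depth[3:7, 3:7] = 2.0
x = np.array([5, 5])                      # pixel on the object
f = depth_feature(depth, x, (0.0, 4.0), (0.0, -4.0))
# One probe lands off the object, the other stays on it, so f is
# strongly positive -- a cue a decision-forest node can threshold.
```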

Subdivision Surfaces in Computer Vision

Established: July 7, 2008

In vision and machine learning, almost everything we do may be considered to be a form of model fitting. Whether estimating the parameters of a convolutional neural network, computing structure and motion from image collections, tracking objects in video, computing low-dimensional representations of datasets, estimating parameters for an inference model such as Markov random fields, or extracting shape spaces such as active appearance models, it almost always boils down to minimizing an objective containing some…
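For the common case where that objective is a sum of squared residuals, a few lines of Gauss-Newton convey the flavour (the exponential model and starting point are illustrative assumptions):

```python
import numpy as np

def gauss_newton(residual_jac, theta, iters=20):
    """Minimise sum(r(theta)^2) by Gauss-Newton: at each step
    solve (J^T J) dx = -J^T r and update theta."""
    for _ in range(iters):
        r, J = residual_jac(theta)
        theta = theta + np.linalg.solve(J.T @ J, -J.T @ r)
    return theta

# Fit y = a * exp(b * t) to noiseless synthetic data (a=2, b=-0.5).
t = np.linspace(0.0, 4.0, 30)
y = 2.0 * np.exp(-0.5 * t)

def residual_jac(theta):
    a, b = theta
    e = np.exp(b * t)
    r = a * e - y                         # residual vector
    J = np.stack([e, a * t * e], axis=1)  # Jacobian d r / d[a, b]
    return r, J

theta = gauss_newton(residual_jac, np.array([1.0, -0.1]))
```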


Established: February 26, 2007

Video images live in a 3D world. This project is about unlocking that 3D information to allow complex special effects with simple user interaction. Contributors: Andrew Fitzgibbon, Toby Sharp. Video looks at a 3D world, so users working with video should be able to interact with that 3D world. Editing and interacting with video, however, remains based on 2D interface paradigms which have evolved…

Image and Video Editing at MSR Cambridge

Established: January 23, 2002

At Microsoft Research in Cambridge we are developing new machine-vision algorithms for intelligent image and video editing and browsing. Our technology provides tools for accurate interactive segmentation and matting, color correction, easy object removal and image restoration, and seamless object insertion. News: AutoCollage is now available as a product from Microsoft Research Cambridge.