This talk examines methods for estimating scene structure and camera motion from very long video sequences. We propose a novel method for incrementally augmenting a reconstruction as new images or measurements become available.
The efficient update of very large reconstructions can be cast as a dimensionality reduction problem. We achieve this reduction by rigidly locking together sets of cameras, which has two important properties: during optimization, only measurements that span partitions need to be considered, and the sparsity of the system is preserved. These properties are essential for updating a reconstruction in a scalable manner.
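The coupling between rigid sets can be sketched as follows. This is an illustrative example, not the talk's implementation: the camera-to-partition assignment and the measurement format (camera id, point id) are assumptions. A measurement drops out of the reduced problem unless its 3D point is observed from more than one rigid set.

```python
from collections import defaultdict

# Hypothetical data: each camera belongs to one rigid partition, and each
# measurement is an observation of a 3D point by a camera.
partition_of = {0: "A", 1: "A", 2: "B", 3: "B"}
measurements = [(0, 100), (1, 100), (2, 100),  # point 100 seen from A and B
                (2, 101), (3, 101),            # point 101 seen only from B
                (0, 102), (1, 102)]            # point 102 seen only from A

# Which partitions observe each point?
partitions_seeing = defaultdict(set)
for cam, pt in measurements:
    partitions_seeing[pt].add(partition_of[cam])

# Keep only measurements that span partitions; observations confined to a
# single rigid set are frozen and need not enter the reduced optimization.
spanning = [(cam, pt) for cam, pt in measurements
            if len(partitions_seeing[pt]) > 1]

print(spanning)  # only the observations of point 100 remain
```

In this toy example, locking cameras 0 and 1 (and 2 and 3) into rigid sets shrinks the active measurement set from seven observations to three, while the block sparsity between partitions is untouched.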
We will discuss principled ways of partitioning cameras into rigid sets as well as methods to initialize and maintain a hierarchical partitioning. In our current research, we are examining how the dimensionality reduction can be adapted to new measurements by choosing cuts through the hierarchy.
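One way to picture a cut through such a hierarchy is sketched below. This is a minimal illustration under assumed structure (a balanced binary tree over camera indices; the `Node` class and `refine_cut` helper are hypothetical): a cut is a set of tree nodes that together cover all cameras, and new measurements trigger a local refinement of the cut, splitting only the rigid set they touch.

```python
class Node:
    """A node in the camera hierarchy covering a set of cameras."""
    def __init__(self, cams, children=None):
        self.cams = cams
        self.children = children or []

def refine_cut(cut, touched):
    """Replace any cut node whose cameras receive new measurements
    by its children, yielding a finer (less rigid) partitioning there."""
    new_cut = []
    for node in cut:
        if node.children and touched & set(node.cams):
            new_cut.extend(node.children)
        else:
            new_cut.append(node)
    return new_cut

# Build a balanced hierarchy over eight cameras.
leaves = [Node([i]) for i in range(8)]
mid = [Node(l.cams + r.cams, [l, r]) for l, r in zip(leaves[::2], leaves[1::2])]
top = [Node(l.cams + r.cams, [l, r]) for l, r in zip(mid[::2], mid[1::2])]
root = Node(top[0].cams + top[1].cams, top)

cut = [root]                  # coarsest cut: all cameras form one rigid set
cut = refine_cut(cut, {5})    # new measurements arrive at camera 5
print([n.cams for n in cut])  # -> [[0, 1, 2, 3], [4, 5, 6, 7]]
```

Repeated refinement around camera 5 would further split only the right half of the tree, leaving cameras far from the new measurements locked in coarse rigid sets.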