The work described in this thesis develops the theory of computing world measurements from photographs of scenes and reconstructing three-dimensional models of those scenes. The main tool used is projective geometry, which forms the basis for accurate estimation algorithms. The techniques presented employ uncalibrated images: no knowledge of the camera's internal parameters (such as focal length and aspect ratio) or of its pose (position and orientation) is required at any time. Extensive use is made of geometric characteristics of the scene, so there is no need for specialized calibration devices.

A hierarchy of novel, accurate and flexible techniques is developed to address a number of different situations, ranging from cases where no metric information about the scene is known to cases where some distances are known but there is not sufficient information for a complete camera calibration.

The geometry of single views is explored, and monocular vision is shown to be sufficient to obtain a partial or complete three-dimensional reconstruction of a scene. To achieve this, the properties of planar homographies and planar homologies are extensively exploited. The geometry of multiple views is also investigated, particularly the use of a parallax-based approach for structure and camera recovery. The duality between two-view and three-view configurations is described in detail.

Measured distances must be associated with a measurement accuracy to be meaningful. Therefore, an uncertainty propagation analysis is developed to take account of the possible sources of error and how they affect the uncertainty in the final measurements.

The general techniques developed in this thesis can be applied to several areas; examples are presented of commercial, industrial and artistic use.
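The planar homography mentioned above, the 3x3 projective map between a world plane and its image, can be made concrete with a small numerical sketch. The code below is not taken from the thesis; it is a minimal, hypothetical illustration of estimating a homography from point correspondences with the standard normalized DLT algorithm, using NumPy. All function and variable names are my own.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst, where src and dst
    are (N, 2) arrays of corresponding points, N >= 4 (normalized DLT)."""
    def normalize(pts):
        # Translate the centroid to the origin and scale so the mean
        # distance from the origin is sqrt(2) (Hartley normalization).
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
        T = np.array([[s, 0, -s * c[0]],
                      [0, s, -s * c[1]],
                      [0, 0, 1.0]])
        ph = np.hstack([pts, np.ones((len(pts), 1))])
        return (T @ ph.T).T, T

    sn, Ts = normalize(np.asarray(src, dtype=float))
    dn, Td = normalize(np.asarray(dst, dtype=float))

    # Build the 2N x 9 linear system A h = 0 from the correspondences.
    rows = []
    for (x, y, _), (u, v, _) in zip(sn, dn):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.array(rows)

    # The solution is the right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    Hn = Vt[-1].reshape(3, 3)

    # Undo the normalizing transforms and fix the overall scale.
    H = np.linalg.inv(Td) @ Hn @ Ts
    return H / H[2, 2]
```

Once such a homography between the image and a known world plane (a floor, a facade) has been estimated, image points on that plane can be back-projected and world distances measured directly, which is the core of the single-view metrology the abstract describes.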