In this talk I will describe three projects that harness the potential of variable-aperture photography – capturing multiple photos by manipulating basic lens controls such as aperture and focus. I will show that by combining such photos, the information encoded in defocus can be used to achieve a variety of goals. First, I will describe a new method for computing highly detailed 3D shape by controlling both the aperture and focus of a lens. This method is particularly well-suited for scenes with high geometric complexity, for which standard computer vision techniques can break down. Second, I will show that we can exploit “aperture bracketing” – a one-button operation on most digital SLRs – to enable post-capture refocusing and other effects, all with increased dynamic range. To achieve this, we compute a layered scene model that simultaneously accounts for defocus, high dynamic range exposure, and noise in the input images. Finally, I will talk about our current work on “light-efficient” photography, whose goal is to capture photos with the desired depth of field in the shortest possible time.
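The depth cue underlying these projects can be illustrated with the standard thin-lens model: for a fixed focus setting, the diameter of the defocus blur circle grows with aperture, at a rate that depends on scene depth. The following is a minimal illustrative sketch of that model only, not of the talk's methods; the function name and all numbers are hypothetical.

```python
def blur_diameter_mm(aperture_mm, focal_mm, focus_dist_mm, obj_dist_mm):
    """Thin-lens blur-circle diameter on the sensor, in mm.

    aperture_mm:   diameter of the lens aperture
    focal_mm:      focal length of the lens
    focus_dist_mm: distance at which the lens is focused
    obj_dist_mm:   distance of the scene point being imaged
    """
    return (aperture_mm * focal_mm * abs(obj_dist_mm - focus_dist_mm)
            / (obj_dist_mm * (focus_dist_mm - focal_mm)))

# A point on the focus plane stays sharp at any aperture ...
print(blur_diameter_mm(25.0, 50.0, 2000.0, 2000.0))   # 0.0

# ... while an out-of-focus point blurs more as the aperture opens up,
# so comparing photos taken at different apertures encodes depth.
print(blur_diameter_mm(25.0, 50.0, 2000.0, 4000.0))   # ~0.32
print(blur_diameter_mm(50.0, 50.0, 2000.0, 4000.0))   # ~0.64
```

Varying focus instead of aperture shifts which depths appear sharp, which is why combining both controls yields richer depth information than either alone.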