We present a prototype of the time-of-flight depth-sensing technology that will be adopted in Project Kinect for Azure as well as in the next generation of HoloLens. This depth sensor outperforms the current state of the art in depth precision while maintaining both a small form factor and high power efficiency. The sensor supports multiple depth ranges, frame rates, and image resolutions, and lets the user choose between large and medium field-of-view modes. Our depth technology enables real-time interaction scenarios such as hand and skeleton tracking, supports high-fidelity spatial mapping, and empowers researchers and developers to build new ambient-intelligence scenarios using Azure AI.