The Internet Media group aims to build seamless yet efficient media applications and systems through breakthroughs in fundamental theory and innovations in algorithm and system technology. We address the problems of media content sensing, processing, analysis, delivery, and format adaptation, as well as the generic scalability issues of media computing systems in terms of bandwidth, processing capability, screen resolution, memory, and battery power.
Intelligent Video Analytics
Video is the largest form of big data and contains an enormous amount of information. We are leveraging computer vision and deep learning to develop a cloud-based intelligence engine that can turn raw video data into insights that facilitate a variety of applications and services. Target application scenarios include smart home surveillance, business (retail store, office) intelligence, public security, and video storytelling and sharing. We have taken a human-centric approach, with a significant effort focused on understanding humans, human attributes, and human behaviors. Our research has contributed to a number of video APIs offered in Microsoft Cognitive Services (https://www.microsoft.com/cognitive-services).
Project Titanium
Project Titanium aims to bring new computing experiences through enriched cloud-client computing. While data and programs can be provided as services from the cloud, the screen, meaning the entire collection of data involved in the user interface, constitutes the missing third dimension. Titanium will address the problems of adaptive screen composition, representation, and processing, following the roadmap of Titanium Screen, Titanium Remote, Titanium Live, and Titanium Cloud. As the name “Titanium” suggests, it will provide a lightweight yet efficient solution toward ultimate computing experiences in the cloud-plus-services era.
Project Mira
Project Mira aims to enable multimedia representation and processing driven by perceptual quality rather than pixel-wise fidelity, through a joint effort of signal processing, computer vision, and machine learning. In particular, it seeks to build systems that not only incorporate newly developed vision and learning technologies into compression but also inspire new vision technologies by viewing the problem from a signal-processing perspective. By bridging vision and signal processing, this project is expected to offer a fresh perspective on multimedia representation and processing.
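Pixel-wise fidelity is conventionally measured with metrics such as mean squared error (MSE) and peak signal-to-noise ratio (PSNR), which is exactly what a perceptually driven codec optimizes beyond. As a point of reference, a minimal pure-Python sketch of these pixel-wise metrics (function names are illustrative, not part of the project):

```python
import math

def mse(ref, test):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means a closer pixel-wise match.

    Identical inputs give infinite PSNR, yet two images with equal PSNR
    can look very different to a human viewer -- the gap Mira targets.
    """
    e = mse(ref, test)
    return float("inf") if e == 0 else 10.0 * math.log10(peak * peak / e)
```

Perceptual metrics (e.g., SSIM-style structural comparisons) instead score local structure, luminance, and contrast, so they can disagree with PSNR on which reconstruction "looks better."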
We envision that, with the development of sensing, networking, and storage technologies, the Internet will rapidly expand into a universal network containing physical and virtual objects. This project explores theoretical and engineering problems in such a network: at the edge, it considers massive data acquisition in wireless sensor networks and mobile networks; at the center, it addresses the interconnection between networks and data communications over their entire life cycle. The project will leverage and develop technologies in network coding, distributed compressive sensing, network optimization, and network protocols.
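As one concrete building block, network coding lets nodes forward random linear combinations of packets rather than the packets themselves; a receiver recovers the sources once it has collected enough independent combinations. A minimal illustrative sketch of random linear network coding over GF(2), where combining is just XOR (the function names and the XOR-only field choice are simplifications for this sketch, not the project's actual design):

```python
import random

def encode(packets, num_coded, rng=None):
    """Produce coded packets as random XOR combinations of the sources.

    Each coded packet is (coeffs, payload): the binary coefficient vector
    records which source packets were XORed into the payload.
    """
    rng = rng or random.Random(7)  # fixed seed only for reproducibility
    n, size = len(packets), len(packets[0])
    coded = []
    while len(coded) < num_coded:
        coeffs = [rng.randint(0, 1) for _ in range(n)]
        if not any(coeffs):          # skip the useless all-zero combination
            continue
        payload = bytearray(size)
        for c, pkt in zip(coeffs, packets):
            if c:
                payload = bytearray(x ^ y for x, y in zip(payload, pkt))
        coded.append((coeffs, bytes(payload)))
    return coded

def decode(coded, n):
    """Recover the n source packets by Gaussian elimination over GF(2).

    Assumes the coded packets span all n sources (full rank); otherwise
    the pivot search fails with StopIteration.
    """
    rows = [(list(c), bytearray(p)) for c, p in coded]
    for col in range(n):
        pivot = next(r for r in range(col, len(rows)) if rows[r][0][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        pc, pp = rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rc, rp = rows[r]
                rows[r] = ([a ^ b for a, b in zip(rc, pc)],
                           bytearray(x ^ y for x, y in zip(rp, pp)))
    return [bytes(rows[i][1]) for i in range(n)]
```

The benefit in a lossy network is that no individual coded packet is essential: any sufficiently large, linearly independent subset suffices for decoding, which is why the approach pairs naturally with the unreliable edge links this project considers.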