I am a Principal Researcher with the Office of the CTO, Azure for Operators. My research interests lie broadly in mobile, sensing and networked systems, with a recent focus on edge video analytics (e.g., accurate and efficient video analytics platforms, collaborative and continuous learning, 5G video services) and location-based systems (e.g., navigation, mapping, location spoofing).
Prior to joining Azure, I was with the Mobility and Networking Research Group at Microsoft Research Redmond. I received my Ph.D. from Zhejiang University and was also a joint Ph.D. student in the EECS Department at the University of Michigan, Ann Arbor. I am a recipient of the ACM China Doctoral Dissertation Award (given to two recipients per year), an IBM PhD Fellowship, and five best paper/demo (runner-up) awards from leading CS and EE conferences.
Please visit my personal webpage for more information.
With 5G, not only will the volume of video traffic increase, but many new solutions infusing deep learning and AI into video analytics will emerge across industries, from retail and manufacturing to healthcare and forest monitoring. The symbiotic evolution of video analytics and edge computing gives operators opportunities to offer new services that they can monetize for their customers.
Telstra’s purpose is to build a connected future so that everyone can thrive. As part of that mission, the Australian telecom created video analytics offerings for commercial customers using AI. Telstra adopted Microsoft Azure Video Analyzer and Microsoft Rocket along with Azure Stack Edge and Azure Percept Preview to build different edge computing zones. Together, Telstra and Microsoft developed algorithms to intelligently distribute AI across these edge zones to better utilize Telstra’s 5G network. The company is developing scalable, cost-efficient solutions that help its customers optimize traffic flow, increase construction safety, and minimize accidents.
This workshop calls for research on the various issues and solutions that can enable live video analytics, with edge computing playing a central role. The workshop will be held in conjunction with ACM MobiCom 2021.
Paper Submissions Deadline: May 21, 2021
We are excited to announce Microsoft Indoor Location Competition 2.0 on Kaggle. This competition aims to bring together indoor location technologies from both academia and industry and compare their performance in the same space, using a first-of-its-kind large-scale indoor location dataset. We hope the dataset released for the competition will also be of great value to indoor-space research and development beyond localization and navigation.
Microsoft Rocket, an open-source project from Microsoft Research, provides cascaded video pipelines that, combined with Live Video Analytics from Azure Media Services, make it easy and affordable for developers to build video analytics applications in their IoT solutions.
Rocket, which we're glad to announce is now open source on GitHub, enables the easy construction of video pipelines for efficiently processing live video streams. You can build, for example, a pipeline with a cascade of DNNs in which a decoded frame is first passed through a relatively inexpensive "light" DNN like ResNet-18 or Tiny YOLO, and a "heavy" DNN such as ResNet-152 or YOLOv3 is invoked only when required. With Rocket, you can plug in any TensorFlow or Darknet DNN model. You can also augment the above pipeline with, say, a simpler motion filter based on OpenCV background subtraction.
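The cascade idea above can be illustrated with a minimal sketch. This is not Rocket's actual API; the detector functions below are stubs standing in for real models (a cheap motion filter, a "light" DNN such as Tiny YOLO, and a "heavy" DNN such as YOLOv3), and the confidence threshold is illustrative.

```python
# Hypothetical sketch of a cascaded video-analytics pipeline:
# cheap stages run first, expensive stages only when needed.
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str
    confidence: float

def motion_filter(frame: dict) -> bool:
    """Cheapest stage: skip frames with no motion.
    (A real pipeline might use OpenCV background subtraction here.)"""
    return frame.get("motion", 0.0) > 0.05

def light_dnn(frame: dict) -> List[Detection]:
    """Stub for an inexpensive model such as ResNet-18 or Tiny YOLO."""
    return frame.get("light_dets", [])

def heavy_dnn(frame: dict) -> List[Detection]:
    """Stub for an expensive model such as ResNet-152 or YOLOv3."""
    return frame.get("heavy_dets", [])

def cascade(frame: dict, conf_threshold: float = 0.8) -> List[Detection]:
    # Stage 1: drop still frames before any DNN runs.
    if not motion_filter(frame):
        return []
    # Stage 2: run the light DNN on every remaining frame.
    dets = light_dnn(frame)
    # Stage 3: invoke the heavy DNN only when the light model is unsure.
    if any(d.confidence < conf_threshold for d in dets):
        dets = heavy_dnn(frame)
    return dets
```

The design choice is that most frames exit at an early, cheap stage, so the expensive model runs only on the small fraction of ambiguous frames.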
Project Rocket, an extensible software stack that leverages the edge and cloud, is designed with maximum functionality in mind and is capable of meeting the needs of varying video analytics applications. In this webinar, Microsoft researchers Ganesh Ananthanarayanan and Yuanchao Shu explain how Rocket uses approximation to run scalable analytics across the edge and cloud, and how efficient live video analysis advances the interactive querying of stored video. The researchers also provide a tutorial on how to get started with the stack and how to construct and execute video analytics pipelines.
Path Guide is a completely plug-and-play indoor navigation service that requires no maps or additional equipment. Using Path Guide, users can create routes by recording sensory data with their smartphones while walking indoors, and others can then simply follow those routes to the same destination in real time.