I am a member of the Mobility and Networking Research Group at Microsoft Research Redmond. My research interests lie broadly in mobile, sensing, and networked systems, with a recent focus on applications including large-scale video analytics (e.g., accurate and efficient video analytics platforms, collaborative learning, camera networks) and location-based systems (e.g., navigation, mapping, location spoofing).
From 2015 to 2018, I was a Researcher with the Cloud & Mobile Research Group and the Mobile and Sensing System Group at Microsoft Research Asia. I received my Ph.D. from Zhejiang University and was also a joint Ph.D. student in the EECS Department at the University of Michigan, Ann Arbor. I was the recipient of the ACM China Doctoral Dissertation Award (awarded to two dissertations per year), an IBM PhD Fellowship, and several best paper/demo awards from leading CS and EE conferences.
Please visit my personal webpage for more information.
Microsoft Rocket, an open-source project from Microsoft Research, provides cascaded video pipelines that, combined with Live Video Analytics from Azure Media Services, make it easy and affordable for developers to build video analytics applications into their IoT solutions.
We are excited to announce Microsoft Indoor Location Competition 2.0, where localization meets big data and AI. The committee of the Microsoft Indoor Location Competition is now collaborating with XYZ10 to release an indoor location data set consisting of WiFi, geomagnetic, and Bluetooth signatures with ground truth from about 1,000 buildings. We believe this data set can inspire research and development on indoor spaces.
This workshop calls for research on the issues and solutions that can enable live video analytics, with edge computing playing a central role. The workshop will be held in conjunction with ACM SIGCOMM 2020.
Paper Submissions Deadline: May 1, 2020
Rocket—which we’re glad to announce is now open source on GitHub—enables the easy construction of video pipelines for efficiently processing live video streams. You can build, for example, a video pipeline that includes a cascade of DNNs in which a decoded frame is first passed through a relatively inexpensive “light” DNN like ResNet-18 or Tiny YOLO and a “heavy” DNN such as ResNet-152 or YOLOv3 is invoked only when required. With Rocket, you can plug in any TensorFlow or Darknet DNN model. You can also augment the above pipeline with, let’s say, a simpler motion filter based on OpenCV background subtraction.
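The cascade described above can be sketched in a few lines. This is a minimal illustration of the cascade idea, not Rocket's actual API: `light_dnn` and `heavy_dnn` are hypothetical stand-ins for models such as Tiny YOLO and YOLOv3, and the confidence threshold is an assumed parameter.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Detection:
    label: str
    confidence: float

def cascade(frame,
            light_dnn: Callable[[object], List[Detection]],
            heavy_dnn: Callable[[object], List[Detection]],
            threshold: float = 0.6) -> List[Detection]:
    """Run the inexpensive model first; invoke the expensive model
    only when the light model's detections are low-confidence."""
    detections = light_dnn(frame)
    if detections and all(d.confidence >= threshold for d in detections):
        return detections          # light model is confident enough
    return heavy_dnn(frame)        # fall back to the heavy model

# Usage with stubbed models (a real pipeline would plug in
# TensorFlow or Darknet models here):
light = lambda f: [Detection("car", 0.4)]   # low confidence -> triggers heavy DNN
heavy = lambda f: [Detection("car", 0.95)]
result = cascade(frame=None, light_dnn=light, heavy_dnn=heavy)
```

The same structure extends naturally to a front-end motion filter: a background-subtraction stage can gate whether the light DNN runs at all, so stationary scenes cost almost nothing.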
Project Rocket, an extensible software stack that leverages the edge and cloud, is designed with maximum functionality in mind, capable of meeting the needs of varying video analytics applications. In this webinar, Microsoft researchers Ganesh Ananthanarayanan and Yuanchao Shu explain how Rocket uses approximation to run scalable analytics across the edge and cloud and how efficient live video analysis advances the interactive querying of stored video. The researchers also provide a tutorial on how to get started with the stack and how to construct and execute video analytics pipelines.
Path Guide is a completely plug-and-play indoor navigation service that requires no maps or additional equipment. Using Path Guide, users can create routes by recording sensory data with their smartphones while walking indoors, and others can simply follow those routes to the same destination in real time.
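The leader/follower idea can be illustrated with a toy sketch. This is a hypothetical simplification, not Path Guide's actual algorithm: a leader records one sensor signature per step (here, a single magnetometer magnitude), and a follower matches live readings against the trace to estimate progress along the route; the search window size is an assumed parameter.

```python
def record_trace(readings):
    """Leader: store one sensor signature per detected step."""
    return list(readings)

def estimate_progress(trace, live_reading, window=3, last_idx=0):
    """Follower: find the trace index (near the last known position)
    whose signature is closest to the current live reading."""
    lo = max(0, last_idx - window)
    hi = min(len(trace), last_idx + window + 1)
    return min(range(lo, hi), key=lambda i: abs(trace[i] - live_reading))

# Leader records magnetic magnitudes along the route, one per step:
trace = record_trace([42.0, 43.5, 47.2, 51.0, 49.3])
# Follower at step 1 observes 46.9, which best matches trace index 2:
idx = estimate_progress(trace, live_reading=46.9, last_idx=1)
```

A real system would use richer multi-sensor signatures and more robust matching, but the record-then-follow structure is the same.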