About
I am a Senior Principal Researcher in Microsoft’s Azure for Operators group in its Office of the CTO. Earlier I was in the Mobility and Networking group at Microsoft Research. I finished my Ph.D. in the AMP Lab at UC Berkeley in 2014, advised by Ion Stoica.
[Projects | Publications | Talks | Students | Bio]
Featured content
Microsoft’s vision for 5G, brought to life with Ferrovial
We’re opening the door for developers to seize the 5G opportunity, and at Build 2022 we shared a great real-world example: a partnership between Microsoft and Ferrovial, a Spanish multinational, that uses 5G to build smart highways. Ferrovial built an AI solution for object recognition that identifies safety hazards such as debris or broken-down vehicles. Ferrovial can offer these services to drivers or expose the information as APIs to connected and autonomous vehicles. For example, the system can identify traffic congestion and automatically respond by updating digital highway signs.
Microsoft and AT&T demonstrate 5G-powered video analytics
Working with AT&T, Microsoft demonstrated the value of Edge Video Services on the Azure public MEC connected to AT&T’s 5G network in Atlanta. To light up compelling new applications on the Azure public MEC that benefit from low-latency 5G connectivity, we are making a video analytics library available under the umbrella of Edge Video Services.
Don’t let data drift derail edge compute machine learning models
Ekya enables retraining and inference to co-exist on the edge box. We also point to the raw video datasets released by the City of Bellevue: 101 hours of video from five traffic intersections, all labeled with our golden YOLOv3 model. We hope that the City of Bellevue videos, along with the other datasets included in the Ekya repository, will help in building new edge models and in improving our pre-trained specialized models to significantly advance the state of the art.
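To make the co-existence idea concrete, here is a minimal sketch of the general pattern, not Ekya’s actual scheduler or API: a lightweight edge model serves inference continuously while frames labeled by a larger “golden” model are buffered, and the edge model is periodically fine-tuned on that recent window to keep up with data drift. The model, the labeling function, and the random stand-in frames below are all hypothetical.

```python
import torch
import torch.nn as nn

# Illustrative sketch only (not Ekya's scheduler): interleave inference with
# short retraining windows on frames recently labeled by a golden model.

class TinyClassifier(nn.Module):
    """Stand-in for a specialized, lightweight per-camera edge model."""
    def __init__(self, num_classes=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, num_classes))
    def forward(self, x):
        return self.net(x)

def edge_loop(frames, golden_label, model, retrain_every=64, window=256):
    """Serve inference on every frame; fine-tune on a sliding labeled window."""
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    buffer = []  # (frame, label) pairs labeled by the golden model
    for i, frame in enumerate(frames):
        model.eval()
        with torch.no_grad():
            pred = model(frame.unsqueeze(0)).argmax(1).item()   # inference
        buffer.append((frame, golden_label(frame)))
        buffer = buffer[-window:]                               # keep recent data only
        if (i + 1) % retrain_every == 0:                        # periodic retraining slot
            model.train()
            x = torch.stack([f for f, _ in buffer])
            y = torch.tensor([lab for _, lab in buffer])
            loss = nn.functional.cross_entropy(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        yield pred

# Toy usage: random tensors stand in for decoded video frames, and a trivial
# rule stands in for labels produced by the golden YOLOv3 model.
model = TinyClassifier()
frames = (torch.rand(3, 64, 64) for _ in range(256))
golden = lambda f: int(f.mean() > 0.5)
for _ in edge_loop(frames, golden, model):
    pass
```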
Telstra creates innovative AI solutions for the 5G era with Azure
Telstra’s purpose is to build a connected future so that everyone can thrive. As part of that mission, the Australian telecom created video analytics offerings for commercial customers using AI. Telstra adopted Microsoft Azure Video Analyzer and Microsoft Rocket along with Azure Stack Edge and Azure Percept Preview to build different edge computing zones. Together, Telstra and Microsoft developed algorithms to intelligently distribute AI across these edge zones to better utilize Telstra’s 5G network.
Azure Live Video Analytics with Microsoft Rocket for reducing edge compute costs
Microsoft Rocket, an open-source project from Microsoft Research, provides cascaded video pipelines that, combined with Live Video Analytics from Azure Media Services, make it easy and affordable for developers to build video analytics applications in their IoT solutions.
Project Rocket platform—designed for easy, customizable live video analytics—is open source
Rocket—which we’re glad to announce is now open source on GitHub—enables the easy construction of video pipelines for efficiently processing live video streams. You can build, for example, a video pipeline that includes a cascade of DNNs in which a decoded frame is first passed through a relatively inexpensive “light” DNN like ResNet-18 or Tiny YOLO and a “heavy” DNN such as ResNet-152 or YOLOv3 is invoked only when required. With Rocket, you can plug in any TensorFlow or Darknet DNN model. You can also augment the above pipeline with, let’s say, a simpler motion filter based on OpenCV background subtraction.
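To illustrate the cascade described above, here is a minimal sketch in Python with OpenCV; it is not Rocket’s actual API. A background-subtraction motion filter gates a cheap detector, and an expensive detector runs only on frames where the cheap stage is not confident. The model files, thresholds, video path, and the objectness-based confidence check are placeholders.

```python
import cv2

# Illustrative cascade only (not Rocket's API):
#   motion filter -> "light" DNN -> "heavy" DNN invoked only when required.

def has_motion(subtractor, frame, min_fg_ratio=0.01):
    """Cheap OpenCV background-subtraction gate: skip frames with no motion."""
    mask = subtractor.apply(frame)
    return (mask > 0).mean() > min_fg_ratio

def run_dnn(net, frame, size):
    """One forward pass through an OpenCV-DNN network; returns all output layers."""
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, size, swapRB=True, crop=False)
    net.setInput(blob)
    return net.forward(net.getUnconnectedOutLayersNames())

def confident(outputs, threshold=0.6):
    """Placeholder check: highest objectness score across YOLO-style outputs."""
    return max(float(o[..., 4].max()) for o in outputs) > threshold

def analyze(video_path, light_net, heavy_net):
    """Light DNN screens frames with motion; heavy DNN runs only when needed."""
    subtractor = cv2.createBackgroundSubtractorMOG2()
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if not has_motion(subtractor, frame):
            continue                      # motion filter: drop static frames
        light_out = run_dnn(light_net, frame, (416, 416))
        if confident(light_out):
            yield "light", light_out      # the cheap model was enough
        else:
            yield "heavy", run_dnn(heavy_net, frame, (608, 608))  # escalate
    cap.release()

# Hypothetical Darknet model files; Rocket itself accepts TensorFlow or Darknet models.
light = cv2.dnn.readNetFromDarknet("tiny-yolo.cfg", "tiny-yolo.weights")
heavy = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
for stage, outputs in analyze("traffic.mp4", light, heavy):
    print(stage, [o.shape for o in outputs])
```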
Microsoft Rocket: Hybrid Edge + Cloud Video Analytics Platform Webinar
Project Rocket, an extensible software stack that leverages the edge and cloud, is designed with maximum functionality in mind, capable of meeting the needs of a variety of video analytics applications. In this webinar, Microsoft researchers Ganesh Ananthanarayanan and Yuanchao Shu explain how Rocket uses approximation to run scalable analytics across the edge and cloud, and how efficient live video analysis advances the interactive querying of stored video. They also provide a tutorial on how to get started with the stack and how to construct and execute video analytics pipelines.
Who’s to blame? Debugging Internet performance for Azure users with BlameIt
When inevitable slow-downs occur in the network, we must be able to identify the problem and recover as quickly as possible. This is where BlameIt comes in. In real time, BlameIt pinpoints which of the individual autonomous systems (ASes) along the path from client to cloud and back is responsible for the issue.
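As a rough illustration of the localization idea, and not BlameIt’s actual algorithm, the sketch below blames the path segment whose measured latency deviates most from its historical baseline; the segment names, thresholds, and numbers are made up for the example.

```python
# Rough illustration only, not BlameIt's algorithm: attribute a slow
# client-to-cloud round trip to the segment whose latency deviates most
# from its historical baseline.

def blame(segment_rtts_ms, baselines_ms, min_excess_ms=5.0):
    """Return the most suspicious segment, or None if nothing stands out."""
    excess = {seg: segment_rtts_ms[seg] - baselines_ms.get(seg, 0.0)
              for seg in segment_rtts_ms}
    worst = max(excess, key=excess.get)
    return worst if excess[worst] >= min_excess_ms else None

# Example: the middle (transit) AS is well above its usual latency.
measured = {"client-ISP": 12.0, "transit-AS": 48.0, "cloud": 6.0}
baseline = {"client-ISP": 10.0, "transit-AS": 15.0, "cloud": 5.0}
print(blame(measured, baseline))  # -> "transit-AS"
```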
Live video analytics and research as Test Cricket with Dr. Ganesh Ananthanarayanan
In an era of unprecedented advances in AI and machine learning, current-generation systems and networks are being challenged by ever-growing complexity and cost. Fortunately, Dr. Ganesh Ananthanarayanan, a researcher in the Mobility and Networking group at MSR, is up for a challenge. And, it seems, the more computationally intractable the better! A prolific researcher who’s interested in all aspects of systems and networking, he’s on a particular quest to extract value from live video feeds and develop “killer apps” that will have a practical impact on the world.
On Video Analytics for Smart Cities. Interview with Ganesh Ananthanarayanan
“Cameras are now everywhere. Large-scale video processing is a grand challenge representing an important frontier for analytics, what with videos from factory floors, traffic intersections, police vehicles, and retail shops. It’s the golden era for computer vision, AI, and machine learning – it’s a great time now to extract value from videos to impact science, society, and business!”