Microsoft Research Blog

The Microsoft Research blog provides in-depth views and perspectives from our researchers, scientists and engineers, plus information about noteworthy events and conferences, scholarships, and fellowships designed for academic and scientific communities.

Microsoft Research bringing its best to SIGCOMM 2018

August 21, 2018 | By Yibo Zhu, Researcher

Microsoft Research is actively developing technologies to make our network and online services among the most performant and efficient on the planet, and that includes openly sharing our progress with the research community. At the upcoming SIGCOMM 2018, the annual flagship conference of the Association for Computing Machinery's Special Interest Group on Data Communication, held this year in Budapest, August 20-24, Microsoft Research will present multiple papers spanning a wide spectrum of our network: data center networks, wide area networks, and the large-scale services that analyze video streams. We are particularly proud of this year's accomplishments and look forward to sharing our knowledge and experience with you in person.

This post previews several papers that we will be presenting at SIGCOMM.

Low-latency Storage System Enabled by Our Novel RDMA Networks

Let’s begin with data center networks. Storage systems in data centers are an important component of large-scale online services. They typically perform replicated transactional operations for high data availability and integrity. Today, however, such operations suffer from high tail latency even with recent kernel bypass and storage optimizations and thus affect the predictability of end-to-end performance of these services. We observe that the root cause of the problem is the involvement of the CPU, a precious commodity in multi-tenant settings, in the critical path of replicated transactions.

In our paper, Group-Based NIC-Offloading to Accelerate Replicated Transactions in Multi-Tenant Storage Systems, we present HyperLoop, a new framework that removes the CPU from the critical path of replicated transactions in storage systems by offloading them to commodity RDMA NICs, with non-volatile memory as the storage medium. To achieve this, we develop new, general NIC-offloading primitives that can perform memory operations on all nodes in a replication group while guaranteeing ACID properties, without CPU involvement. We demonstrate that popular storage applications can be easily optimized using our primitives. Our evaluation shows that HyperLoop can reduce 99th percentile latency by up to 800×, with close to 0% CPU consumption on replicas.

Figure 1: HyperLoop reduces 99th percentile latency of group-based storage replication by up to 800×.
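To make the idea of removing the CPU from the replication critical path concrete, here is a minimal sketch, not the HyperLoop implementation. Each "NIC" is modeled as a function that durably applies a write to its local memory and then forwards the write to its successor in the replication group; the names `make_nic` and `group_write` are hypothetical, and real HyperLoop uses RDMA work requests rather than Python callbacks.

```python
# Illustrative sketch (assumed names, not the HyperLoop API): chaining a
# write through every replica so no application CPU sits on the critical path.

def make_nic(memory: dict, successor=None):
    """Model an RDMA NIC that applies a write locally, then forwards it."""
    def nic_write(key, value):
        memory[key] = value          # durable write to local NVM (modeled as a dict)
        if successor is not None:
            successor(key, value)    # forward to the next replica, no CPU involved
    return nic_write

def group_write(replicas, key, value):
    """Issue one write that propagates through the whole replication group."""
    # Build the forwarding chain tail-to-head: the last replica has no successor.
    chain = None
    for memory in reversed(replicas):
        chain = make_nic(memory, chain)
    chain(key, value)

replicas = [{}, {}, {}]              # three replicas' memory regions
group_write(replicas, "txn:42", "committed")
assert all(m["txn:42"] == "committed" for m in replicas)
```

The point of the chain structure is that once the head NIC receives the operation, every subsequent hop is NIC-to-NIC; the replicas' CPUs never appear in the data path.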

Effective Traffic Engineering for Optical Links in Wide Area Networks

Outside the data center, Microsoft researchers have also achieved a breakthrough. Fiber optic cables connecting different data centers are an expensive resource, acquired by large organizations at significant monetary cost. Their importance has driven a conservative deployment approach, with redundancy and reliability baked in at multiple layers.

In our paper, RADWAN: Rate Adaptive Wide Area Network, we take a more aggressive approach and argue for adapting the capacity of fiber optic links based on their signal-to-noise ratio (SNR). We investigated this idea by analyzing the SNR of over 2,000 links in an optical backbone over a period of three years. We found that the capacity of 64% of IP links can be augmented by at least 75 Gbps, for an overall capacity gain of over 134 Tbps. Moreover, adapting link capacity to a lower rate can prevent 25% of link failures. In other words, using the same links, we get both higher capacity and better availability.
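The core mechanism can be sketched as a lookup from observed SNR to the highest modulation format the link can sustain. This is only an illustration of rate adaptation, not RADWAN itself: the SNR thresholds, modulation formats, and capacities below are assumed round numbers.

```python
# Illustrative sketch (assumed numbers, not RADWAN's actual rate table):
# pick a link's data rate from its observed SNR instead of running every
# link at one fixed, conservatively provisioned rate.

# (modulation, required SNR in dB, capacity in Gbps) -- hypothetical values
RATE_TABLE = [
    ("16QAM", 14.0, 200),
    ("8QAM",  11.0, 150),
    ("QPSK",   7.0, 100),
    ("BPSK",   4.0,  50),
]

def adaptive_capacity(snr_db: float) -> int:
    """Return the highest capacity whose SNR requirement is met, or 0
    if the link cannot run at any rate (treated as a link failure)."""
    for _mod, required_snr, gbps in RATE_TABLE:
        if snr_db >= required_snr:
            return gbps
    return 0

# A fixed-rate link provisioned for QPSK fails entirely below 7 dB, while
# the adaptive link degrades to BPSK and stays up at reduced capacity.
for snr in (15.2, 9.5, 5.0):
    print(snr, "dB ->", adaptive_capacity(snr), "Gbps")
```

This captures both halves of the result above: links with headroom run faster than the fixed rate, and links whose SNR degrades step down instead of failing outright.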

We propose RADWAN, a traffic engineering system that allows optical links to adapt their rate based on the observed SNR, achieving higher throughput and availability while minimizing churn during capacity reconfigurations. We evaluated RADWAN using a testbed consisting of 1,540 km of fiber with 16 amplifiers and attenuators, and then simulated the throughput gains of RADWAN at scale against the state of the art. Our results show that with realistic traffic matrices and conservative churn control, RADWAN improves overall network throughput by 40%. The service provider we studied has invested in this idea and is rolling out the infrastructure needed to deploy the first capacity-variable link between Canada and Europe this year.

Scalable Deep Neural Network Adaptation for Video Analytics

Finally, we will share our experience in applying deep convolutional neural networks (NNs) to video data at scale. Doing so poses a substantial systems challenge, as improving inference accuracy often incurs prohibitive computational cost. While it is promising to balance resources and accuracy by selecting a suitable NN configuration (for example, the resolution and frame rate of the input video), one must also address the fact that a configuration's impact on video analytics accuracy varies significantly over time.

In our paper Scalable Adaptation of Video Analytics, we present Chameleon, a controller that dynamically picks the best configurations for existing NN-based video analytics pipelines. The key challenge in Chameleon is that in theory, adapting configurations frequently can reduce resource consumption with little degradation in accuracy, but searching a large space of configurations periodically incurs an overwhelming resource overhead that negates the gains of adaptation. The insight behind Chameleon is that the underlying characteristics (for example, the velocity and sizes of objects) that affect the best configuration have enough temporal and spatial correlation to allow the search cost to be amortized over time and across multiple video feeds. For example, using the video feeds of five traffic cameras, we demonstrate that compared to a baseline that picks a single optimal configuration offline, Chameleon can achieve 20-50% higher accuracy with the same amount of resources, or achieve the same accuracy with only 30-50% of the resources (a 2-3× speedup).

Figure 2: Chameleon (red) consistently outperforms the baseline across different metrics. The graphs also include 1-σ ellipses to mark the performance variance of each solution.
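The amortization insight behind Chameleon can be sketched as follows: profile the full configuration space only occasionally, then reuse the winning configuration for many subsequent windows on the assumption that the underlying video characteristics change slowly. This is a toy illustration, not Chameleon's algorithm; the `profile` function, its accuracy/cost model, and the re-profiling interval are all made up.

```python
# Illustrative sketch (assumed model, not the Chameleon implementation):
# exhaustive profiling is expensive, so do it rarely and reuse the result,
# amortizing the search cost over many windows of video.

import itertools

RESOLUTIONS = [480, 720, 1080]
FRAME_RATES = [5, 15, 30]
CONFIGS = list(itertools.product(RESOLUTIONS, FRAME_RATES))

def profile(config, window):
    """Stand-in for running the NN pipeline on one window of video and
    measuring (accuracy, resource cost). Here: a toy synthetic model."""
    res, fps = config
    accuracy = min(1.0, 0.4 + res / 2000 + fps / 100)
    cost = res * fps / 1000
    return accuracy, cost

def pick_config(window, budget):
    """Exhaustively profile once; keep the most accurate config in budget."""
    scored = [(profile(c, window), c) for c in CONFIGS]
    feasible = [(acc, c) for (acc, cost), c in scored if cost <= budget]
    _best_acc, best = max(feasible)
    return best

# Re-profile only every PROFILE_EVERY windows; in between, trust temporal
# correlation and reuse the last winner at negligible search cost.
PROFILE_EVERY = 10
best = None
for window in range(30):
    if window % PROFILE_EVERY == 0:
        best = pick_config(window, budget=20.0)
    # ... run the pipeline on this window with configuration `best` ...
```

Chameleon additionally shares profiling results across cameras with similar scenes (spatial correlation), which this single-feed sketch omits.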

This is just a glimpse of the exciting stories we look forward to sharing with you in more detail. See you at SIGCOMM 2018!
