Cambridge Systems and Networking

Established: July 1, 1997

The systems and networking group comprises approximately 20 researchers and post-docs. The group has existed since the lab opened, and over the last decade and a half we have covered a broad range of topics including operating systems, networking, distributed systems, file and storage systems, cloud and data centre computing, social networking, security, network management, computer architecture, programming languages, and databases. We design and build systems that address significant real-world problems and demonstrate novel underlying principles.

Many projects that researchers from the group have been involved in have had significant impact in the academic community and resulted in widely cited papers. We are also very proud that many of our projects have had internal Microsoft impact or have been licensed. A recent example of our impact on Microsoft is the Storage QoS feature in the new Windows Server Technical Preview. This feature enables data centre hosters to control the bandwidth of traffic from VMs to storage on a per-class basis, and is one of the outputs of our research on Predictable Data Centers. You can find out more about this work in a recent blog post we wrote.

We also transferred our Control Flow Guard compiler security improvements, and today all of Windows builds with this modified compiler, adding safety checks to all indirect function calls.  A rebuild of Windows 8.1 was released on Windows Update, and the compiler is available for anyone to use with Visual Studio 2015 (preview).  Read the compiler team’s blog post to learn more.

Other public examples of our impact on Microsoft include the System Center Capacity Planner and the Windows Network Map.

Research Themes


Networking

The group has a long tradition of research in most aspects of computer communications and networks. Our interests and contributions are quite diverse, with projects over the years in areas such as peer-to-peer networks, resource allocation, congestion control and transport protocols, performance modelling, epidemics, network coding, routing, mobility models, social networks, and enterprise and home network management. Currently, there is a strong focus on data centre networks (private and cloud), where traditional assumptions that have underpinned the design of the Internet are challenged.


Storage

The systems and networking team has a long history of research in file and storage systems. We engage in storage research at all levels of the storage stack, from user-experience-driven file system designs to scalable, predictable, and efficient data-centre-scale storage systems. Recently we have been working on a software-defined storage (SDS) architecture that opens up the storage stack and makes it more controllable and programmable. We have also released the Microsoft Research Storage Toolkit to allow others to experiment with the architecture. We also build cold-storage systems such as Pelican, optimised for capacity and low cost. Pelican spins down disks to limit peak power draw, allowing the disks to be packed more densely and so reducing costs.


Operating Systems

Our systems research spans hardware, programming languages, compilers, and applications. Our mission is to advance the state of the art in these areas to build cloud, mobile, and desktop platforms that are secure, high-performance, and cost- and energy-efficient. Our current focus is on providing memory safety, strong security, and high performance at the same time. This requires us to define security properties at the programming-language level and then enforce them using the compiler, runtime, OS, and hardware.

We are also investigating the impact of new hardware technologies such as on-chip customization and distributed integrated fabrics. These will enable building “rack-scale computers” with terabytes of memory, hundreds of cores, and low-latency communication between them. In the rack-scale computing project, we are looking at how to leverage these new technologies and at the implications for the software stack at large (OS, networking, and applications).

Distributed Systems

Our research group focuses on a vision of how data centre systems will look years ahead, and reasons about the fundamental changes, both practical and theoretical, to distributed algorithms, distributed architectures, and networked hardware that are needed to realize such a vision. Core to our vision are components and mechanisms such as systems for distributed coordination (à la ZooKeeper), replicated storage (such as BookKeeper for logging), and new forms of communication between distributed processes and servers, for example with RDMA in the FaRM project. Innovating the distributed systems that form the foundation of our services and that boost the productivity of our developers is an integral part of this group's mission. We also conduct research in the area of algorithms and systems for processing massive amounts of data, aiming to push the boundary of computer science in large-scale computation. To find out more, please visit our project page.


Wireless

Our mission is to invent new wireless architectures and technologies that will support the ever-growing traffic demands of mobile devices. We are particularly interested in new spectrum access models (such as white spaces) that will yield more flexible and efficient use of spectrum. Our research focuses on wireless network protocols, the underlying signal-processing algorithms, and the systems that will run them. We are equally engaged in advancing state-of-the-art algorithmic research and in building network prototypes and tools to prove our concepts. To find out more, please visit our project page.

Empirical Software Engineering

We develop empirically driven strategies, techniques, and tools to optimize software development. We base our analysis on development-process data: changes and tests, bug reports and patches, organizational structure and team management. This historical data allows us to characterize and model existing development processes with respect to efficiency and effectiveness, and to simulate the impact of optimization strategies on the overall development process and on quality, speed, and cost goals.

Analysing development-process data has already proved useful to software development organizations as they seek to manage scope, quality, cost, and time in software development projects. To find out more, please visit the project page.

Intern Projects

Please find below a selection of our interns and the projects they have worked on.

Sebastian Angel, University of Texas at Austin, Host: Hitesh Ballani

End-to-end Performance Isolation through Virtual Datacenters

The aim of this project is to provide end-to-end performance isolation in multi-tenant data centres in the presence of middleboxes such as shared storage servers, load balancers, and intrusion detection systems. We offer tenants a virtual datacenter with their virtual machines (VMs) connected to “private” virtual middleboxes through “private” virtual links. This ensures tenants get aggregate bandwidth guarantees both for VM-to-VM traffic and for VM-through-middlebox traffic, irrespective of whether the network or the physical middlebox is the bottleneck resource.

To achieve this, we have designed a software-defined architecture comprising two key components: a logically centralised controller and a multi-resource rate limiter. We use a virtual currency as a common performance metric. At the controller, we have developed a mechanism for dynamically estimating the capacity of a middlebox (our current focus is on storage servers) and have adapted existing multi-resource allocation algorithms to work in the presence of distributed middleboxes. At the data plane, our multi-resource rate limiter ensures performance isolation irrespective of the bottleneck resource. We plan to show that this design can ensure end-to-end performance guarantees in shared data centres. The internship led to the following OSDI '14 paper, which Sebastian presented at the conference: End-to-end Performance Isolation through Virtual Datacenters.
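The multi-resource rate limiter described above can be sketched as one token bucket per resource, admitting a request only if every resource it touches has capacity. This is a minimal illustration under assumptions of our own, not the actual data-plane implementation; all class names, rates, and costs are invented.

```python
class TokenBucket:
    """One bucket per resource; tokens refill at a fixed rate per tick."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.tokens = capacity
        self.capacity = capacity

    def tick(self):
        self.tokens = min(self.capacity, self.tokens + self.rate)


class MultiResourceLimiter:
    """Admit a request only if ALL resources it consumes have enough tokens."""
    def __init__(self, buckets):
        self.buckets = buckets  # resource name -> TokenBucket

    def admit(self, costs):
        # costs: resource name -> tokens the request would consume
        if all(self.buckets[r].tokens >= c for r, c in costs.items()):
            for r, c in costs.items():
                self.buckets[r].tokens -= c
            return True
        return False


limiter = MultiResourceLimiter({
    "net_mb": TokenBucket(rate=10, capacity=100),   # illustrative budgets
    "disk_io": TokenBucket(rate=5, capacity=50),
})
assert limiter.admit({"net_mb": 60, "disk_io": 20})      # fits both budgets
assert not limiter.admit({"net_mb": 60, "disk_io": 20})  # network tokens exhausted
```

The point of the joint check is that a storage-heavy request can be throttled by its network cost and vice versa, so isolation holds whichever resource is the bottleneck.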

Stanko Novakovic, École Polytechnique Fédérale de Lausanne, Host: Aleksandar Dragojevic

A Transactional System for Modern Networks

During his internship, Stanko built a new distributed transactional engine for FaRM, based on multi-version concurrency control (MVCC). MVCC maintains multiple versions of each object to make validation of the read set unnecessary and to improve the performance of read-only transactions by up to two times. It also makes it possible to use snapshot isolation for transactions instead of serializability, which weakens guarantees but improves the performance of read-write transactions. The main issue with implementing MVCC is obtaining globally consistent timestamps across servers without introducing scalability bottlenecks or significantly increasing latency. Modern cluster networks make this possible because of their low latency and high throughput. The proposed solution designates a single server to generate commit timestamps, and batches requests to it. In addition, transactions estimate their read timestamps instead of fetching them from the server, which further alleviates the scalability bottleneck of relying on a single timestamp server. Our measurements show that the system can scale up to several hundred machines with a negligible increase in latency.
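The timestamp scheme can be sketched as follows. This is an illustrative toy, not the FaRM engine: the batch size, class names, and the way the read timestamp is estimated (here a direct peek, where a real client would use a conservative local estimate) are all assumptions.

```python
import itertools


class TimestampServer:
    """Single server that issues monotonically increasing commit timestamps."""
    def __init__(self):
        self._counter = itertools.count(1)
        self.last_issued = 0

    def next_batch(self, n):
        # Hand out n consecutive timestamps in one request, so clients
        # amortise one round trip over many transactions.
        batch = [next(self._counter) for _ in range(n)]
        self.last_issued = batch[-1]
        return batch


class Client:
    def __init__(self, server, batch_size=4):
        self.server = server
        self.batch_size = batch_size
        self._pool = []

    def commit_timestamp(self):
        if not self._pool:  # one RPC per batch, not per transaction
            self._pool = self.server.next_batch(self.batch_size)
        return self._pool.pop(0)

    def estimated_read_timestamp(self):
        # Read-only transactions use an estimate rather than an RPC;
        # peeking at last_issued stands in for that estimate here.
        return self.server.last_issued


server = TimestampServer()
c = Client(server)
ts = [c.commit_timestamp() for _ in range(3)]
assert ts == [1, 2, 3]
```

Batching trades a small amount of timestamp freshness for far fewer requests to the single server, which is what keeps it from becoming a scalability bottleneck.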

Matthew Grosvenor, University of Cambridge, Host: Christos Gkantsidis

SWEDEN: Software Defined Edges

Software Defined Networking (SDN) has recently gained a lot of popularity. However, current SDNs focus on simple management tasks (namely routing and reachability). SWEDEN proposes a software-defined edge architecture for enforcing management policies in closed environments like datacenters. Unlike SDNs, SWEDEN leverages the end-hosts to capture richer context about network flows and to enforce policies. Moreover, end-hosts can typically enforce more complicated and expensive policies than network switches. SWEDEN exposes a programmable match-action API to the network administrator and decomposes user policy programs into fragments that (a) require global visibility and hence run in the centralized controller, (b) need to be responsive and hence run on end-hosts, and (c) are in the data path and hence run on programmable network interface cards (NICs) in the end-hosts.

Matt focused on demonstrating the feasibility and benefits of programmability in the end-hosts as well as exploring mechanisms to decompose policies subject to the underlying architecture constraints.
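The three-way decomposition can be illustrated with a toy placement function. The rule fields and placement logic below are invented for illustration; they are not SWEDEN's actual API.

```python
# Placement targets for policy fragments, mirroring the (a)/(b)/(c) split
# in the text: controller, end-host, or programmable NIC.
CONTROLLER, HOST, NIC = "controller", "end-host", "nic"


def place(rule):
    """Assign a match-action fragment to the tier that can run it."""
    if rule.get("global_visibility"):
        return CONTROLLER  # e.g. network-wide accounting needs a global view
    if rule.get("per_packet"):
        return NIC         # data-path work runs on programmable NICs
    return HOST            # responsive per-flow logic runs on end-hosts


policy = [
    {"match": "tenant == A", "action": "cap 1Gbps", "global_visibility": True},
    {"match": "dst_port == 80", "action": "count", "per_packet": True},
    {"match": "flow.bytes > 1e9", "action": "mark elephant"},
]
placements = [place(r) for r in policy]
assert placements == [CONTROLLER, NIC, HOST]
```

A real decomposer would analyse the program rather than read explicit flags, but the key design point survives: the constraint (visibility, responsiveness, data-path speed) determines where each fragment runs.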

Kaveh Razavi, Vrije Universiteit, Host: Paolo Costa

Designing and evaluating a new network stack for rack-scale computers

During the internship, Kaveh designed and implemented Maze, an emulation platform for rack-scale computers. Maze runs on top of RDMA-based clusters and enables emulating arbitrary virtual topologies (e.g., torus, mesh) with configurable link bandwidth. Its stack is fully customizable and can support different routing and transport protocols. Maze leverages RDMA primitives for communication and adopts zero-copy forwarding and task-based cooperative scheduling to achieve high performance. In our micro-benchmarks, Maze can sustain a throughput of up to 37 Gbps with 8 KB packets and a latency of 2.5 µs per hop for small packets. Maze is also compatible with the simulation environment used within the group, enabling the same workloads to be rerun and the results to be cross-validated.
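As a back-of-the-envelope illustration of the kind of virtual topology Maze emulates, hop counts on a 2D torus combine with the ~2.5 µs per-hop figure from the micro-benchmarks to give an end-to-end latency estimate. The grid size and helper function are assumptions for illustration, not Maze's code.

```python
def torus_hops(a, b, n):
    """Minimal hop count between nodes a and b on an n x n 2D torus.

    Each coordinate can route either directly or via the wrap-around
    link, so we take the shorter direction on each axis.
    """
    (ax, ay), (bx, by) = a, b
    dx = min(abs(ax - bx), n - abs(ax - bx))
    dy = min(abs(ay - by), n - abs(ay - by))
    return dx + dy


HOP_LATENCY_US = 2.5  # small-packet per-hop latency from the micro-benchmarks

hops = torus_hops((0, 0), (3, 3), n=4)  # wrap-around: one hop on each axis
assert hops == 2
assert hops * HOP_LATENCY_US == 5.0     # estimated one-way latency in us
```

The wrap-around links are what make a torus attractive for rack-scale fabrics: worst-case path length grows with n/2 per axis rather than n.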

Xiaozhou Li, Princeton University, Host: Ant Rowstron & Sergey Legtchenko

Synthesizing software stack for rack-scale cold data storage

In this project, we explore new approaches to synthesizing the software stack for cost-effective rack-scale HDD-based cold-data storage units. To achieve high density and low cost, the cold-data storage rack is right-provisioned: designed to have just enough resources (power, cooling, bandwidth, etc.) to handle the expected workload. The operating states of the rack are restricted by a complex set of hardware constraints, which requires complex resource management by a dedicated software stack. We are building the first tool to auto-generate the storage software stack from a description of the right-provisioned hardware. Our tool will capture the constraints and the desired properties, and automatically generate the data layout and IO scheduling schemes that optimize performance while adhering to all the constraints.
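One right-provisioning constraint can be sketched as an admission check that a generated scheduler must make before spinning up a disk group: the rack's power budget must never be exceeded. The class, the power numbers, and the one-constraint model are invented for illustration; the real tool handles many interacting constraints.

```python
class RackScheduler:
    """Toy scheduler enforcing a single right-provisioning constraint:
    total spin-up power of active disk groups must stay within budget."""

    def __init__(self, power_budget_w, spinup_cost_w):
        self.power_budget_w = power_budget_w
        self.spinup_cost_w = spinup_cost_w
        self.active = set()

    def can_activate(self, group):
        if group in self.active:
            return True  # already spinning, no extra power needed
        needed = (len(self.active) + 1) * self.spinup_cost_w
        return needed <= self.power_budget_w

    def activate(self, group):
        if not self.can_activate(group):
            return False  # IO to this group must be queued instead
        self.active.add(group)
        return True


sched = RackScheduler(power_budget_w=200, spinup_cost_w=100)
assert sched.activate("g0")
assert sched.activate("g1")
assert not sched.activate("g2")  # a third active group would exceed the budget
```

The auto-generation problem is then to derive such checks, plus the layout and scheduling that work well under them, directly from the hardware description rather than writing them by hand.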

Cole Schlesinger, Princeton University, Host: Hitesh Ballani & Thomas Karagiannis

Quality of Service Abstractions for Software-defined Networks

Software-defined networking (SDN) provides a means of configuring the packet-forwarding behavior of a network from a logically-centralized controller. Expressive, high-level languages have emerged for expressing data-plane configurations, and new tools allow for verifying packet reachability properties in real time. But SDN largely ignores quality of service (QoS) primitives, such as queues, queuing disciplines, and rate limiters, leaving configuration of these elements to be performed out of band in an ad-hoc manner. Not only does this make QoS elements difficult to configure, it also leads to a “try it and see” approach to analysis and verification of QoS properties. We propose a new language for configuring SDNs with quality of service primitives, which comes equipped with a well-defined semantics drawn from the network calculus. Although still early-stage work, we believe this approach will yield decision procedures for verifying QoS properties as well as an equational theory for guiding (proved correct) network refactoring and reconfiguration.
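The network-calculus semantics gives, for example, a closed-form worst-case delay: a token-bucket flow (rate r, burst b) served by a rate-latency server (rate R, latency T) waits at most T + b/R, provided R ≥ r. A sketch with illustrative numbers (the figures below are assumptions, not from the project):

```python
def delay_bound(r, b, R, T):
    """Network-calculus delay bound for a token-bucket arrival curve
    (rate r, burst b) through a rate-latency service curve (rate R,
    latency T). Valid only when the server keeps up, i.e. R >= r."""
    assert R >= r, "server rate must be at least the sustained arrival rate"
    return T + b / R


# A 10 Mbit burst arriving at 50 Mbit/s sustained, served by a 100 Mbit/s
# queue with 1 ms scheduling latency:
d = delay_bound(r=50e6, b=10e6, R=100e6, T=1e-3)
assert abs(d - 0.101) < 1e-9  # 1 ms latency + 100 ms to drain the burst
```

A verifier built on the proposed language could discharge QoS properties by computing such bounds from the configured queues and rate limiters, instead of the "try it and see" approach.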

PhD Scholarships

Launched in 2004, the PhD Scholarship Programme in EMEA (Europe, Middle East and Africa) has so far supported more than 200 PhD students from more than 18 countries and 51 institutions.

The full list of current PhD students under the PhD Scholarship Programme can be found on the Selected PhD Projects and Scholars page; below is the list specific to the Systems and Networking group:

Andrey Rodchenko, University of Manchester: Virtualization and High-Productivity for Many-Cores

Istvan Haller, Vrije Universiteit Amsterdam: Security for Legacy Binaries

Karthik Nilakant, University of Cambridge: D³N: Data Driven Declarative Networking in Dynamic Mobile Networks

Nooshin Sadat Mirzadeh, École Polytechnique Fédérale de Lausanne (EPFL): 3D Smart Memory for Scale-Out Servers

Paul-Jules Micolet, University of Edinburgh: Performance Portability for Large-Scale Heterogeneous Systems

Tomasz Kuchta, Imperial College London: Incremental and Adaptive Symbolic Execution









Publications

NaaS: Network-as-a-Service in the Cloud
Paolo Costa, Matteo Migliavacca, Peter Pietzuch, Alexander L. Wolf, in Proceedings of the 2nd USENIX Workshop on Hot Topics in Management of Internet, Cloud, and Enterprise Networks and Services (Hot-ICE '12), co-located with USENIX NSDI '12, USENIX, April 1, 2012







Discovering Dependencies for Network Management
Victor Bahl, Paul Barham, Richard Black, Ranveer Chandra, Moises Goldszmidt, Rebecca Isaacs, Srikanth Kandula, Lun Li, John MacCormick, Dave Maltz, Richard Mortier, Mike Wawrzoniak, Ming Zhang, in Workshop on Hot Topics in Networks (HotNets-V), Association for Computing Machinery, Inc., November 1, 2006




Xen 2002
Paul Barham, Boris Dragovic, Keir A. Fraser, Steven M. Hand, Tim Harris, Alex C. Ho, Evangelos Kotsovinos, Anil V.S. Madhavapeddy, Rolf Neugebauer, Ian A. Pratt, Andrew K. Warfield, University of Cambridge Computer Laboratory, January 1, 2003




A Network Based Replay Portal
Jacobus E. van der Merwe, Cormac J. Sreenan, Austin Donnelly, Andrea Basso, Charles R. Kalmanek, in IFIP Conference on Intelligence in Networks (SmartNet 2000), September 1, 2000







Downloads

Understanding Rack-Scale Disaggregated Storage

May 2017

Disaggregation of resources in the data center, especially at the rack-scale, offers the opportunity to use valuable resources more efficiently. It is common that mass storage racks in large-scale clouds are filled with servers with Hard Disk Drives (HDDs) attached directly to each of them, either using SATA or SAS depending on the number of…


Microsoft Research Storage Toolkit

November 2014

The Microsoft Research Storage Toolkit enables effective and accessible research in Software Defined Storage by adding I/O classification functions to the Windows 8.1 storage stack and exposing selected flows of I/O requests to a user-supplied program written in C# which can easily inspect or modify them. Parts of the Toolkit have supported our own recent…

Size: 45 MB



Videos

A video on /Guard (July 14, 2015): Manuel Costa, Austin Donnelly, and Richard Black

Big Data Analytics (July 14, 2015): Ant Rowstron and Milan Vojnovic


Workshop on Algorithms and Data Science

Cambridge UK | May 2014

The goal of this workshop is to bring together researchers working on algorithms across different application domains, to discuss the most interesting challenges, and to provide an overview of ongoing activities within MSR Cambridge, external collaborations, and technology transfer success stories.


Projects

Trusted Cloud

Established: August 31, 2015

The Trusted Cloud project at Microsoft Research aims to provide customers of cloud computing complete control over their data: no one should be able to access the data without the customer’s permission. Even if there are malicious employees in the cloud service provider, or hackers break into the data center, they still should not be able to get access to customer data. Trust model: We use a non-hierarchical trust model. That is, we…

Ziria – Wireless Programming for Hardware Dummies

Established: June 16, 2014

Software-defined radios (SDR) have the potential to bring major innovation to wireless networking design. However, their impact so far has been limited by complex programming tools. Ziria addresses this problem. It consists of a novel programming language and an optimizing compiler, and is able to synthesize very efficient SDR code from a high-level PHY description written in the Ziria language. Code: Ziria@GitHub. Slides: tutorial slides, covering a WiFi case study in Ziria…

Software-Defined Storage (SDS) Architectures

In data centers, the IO path to storage is long and complex. It comprises many layers or “stages” with opaque interfaces between them. This makes it hard to enforce end-to-end policies that dictate a storage IO flow’s performance (e.g., guarantee a tenant’s IO bandwidth) and routing (e.g., route an untrusted VM’s traffic through a sanitization middlebox). We are researching architectures that decouple control from data flow to enable such policies.

Rack-scale Computing

Established: January 1, 2013

New hardware technologies such as systems- and networks-on-chip (SoCs and NoCs), switchless network fabrics, silicon photonics, and RDMA are redefining the landscape of data center computing, enabling thousands of cores to be interconnected at high speed at the scale of today's racks. We refer to this new class of hardware as rack-scale computers (RSCs) because the rack is increasingly replacing the individual server as the basic building block of modern data centers. Early examples of RSCs…

Big Data Analytics

Established: October 18, 2012

We conduct research in the area of algorithms and systems for processing massive amounts of data. Our work aims at pushing the boundary of computer science in the area of algorithms and systems for large-scale computations. Our mission is to achieve major technological breakthroughs in order to facilitate new systems and services relying on efficient processing of big data. Research Areas Database queries - How can we efficiently resolve database queries on massive amounts of input data? Here the input data may be…

Predictable Data Centers (PDC)

Established: September 1, 2010

Performance predictability is a key requirement for high-performance applications in today's multi-tenant datacenters. Online services running in infrastructure datacenters need such predictability to satisfy application SLAs. Cloud datacenters require guaranteed performance to bound customer costs and spur adoption. However, the network and storage stacks used in today's datacenters are unaware of such application requirements. This project examines how to enable predictable datacenters.


Barrelfish

Established: August 27, 2009

Barrelfish is a new operating system being built from scratch in a collaboration between researchers at ETH Zurich and Microsoft Research, Cambridge. We are exploring how to structure an OS for future multi- and many-core systems. The motivation is two closely related hardware trends: first, the rapidly growing number of cores, which leads to scalability challenges, and second, the increasing diversity in computer hardware, requiring the OS to manage and exploit heterogeneous hardware resources. For…

Project CamCube

Established: January 1, 2009

Why do we build data center clusters the way we build them? We started this project in 2008 wondering if there was a better hardware platform, with a co-designed software stack, that would make it easier and more performant to write the sort of applications that run in data centers. We wanted to address the many issues, which we largely saw as being a function of the hard separation between the network and the processing…

Avalanche: File Swarming with Network Coding

Established: December 9, 2008

The code-named research project "Avalanche" studies how to enable a cost-effective, Internet-scalable, and very fast file distribution solution (e.g., for TV on demand, patches, software distribution). This approach leverages desktop PCs to aid in the distribution process, relieving congested servers and network links of most of the traffic. Details: Existing peer-assisted file delivery systems use swarming techniques to simultaneously obtain different pieces of a file from multiple nodes. One problem of such…

HomeMaestro: A distributed system for the monitoring and instrumentation of home networks

HomeMaestro strives to put order in the chaos of home networks through an end-host distributed solution that requires no additional assistance from network equipment such as routers or access points, and no modification of network applications. HomeMaestro performs extensive measurements at the host level to infer application network requirements, identifies network-related problems through time-series analysis, and automatically detects and resolves contention over network resources.

Virtual Ring Routing

Established: August 27, 2007

Virtual Ring Routing (VRR) is a new network routing protocol that occupies a unique point in the design space; version 1.1 is now available. VRR is a clean-slate design inspired by overlay routing algorithms in Distributed Hash Tables (DHTs). Unlike DHTs, VRR is implemented directly on top of the link layer and does not rely on an underlying network routing protocol. VRR provides both traditional point-to-point network routing and DHT…

Network Inference

Established: February 25, 2004

Network Inference is a research project theme in which the network end-system (i.e., the computer) infers properties about the behaviour of the network and other end-systems in order to get a better experience. Such improvements might be better sharing, or improved latency through reduced queueing and the like. Past contributors: Laurent Massoulié.


Singularity

Established: July 9, 2003

OS and tools for building dependable systems. The Singularity research codebase and design evolved to become the Midori advanced-development OS project. While never reaching commercial release, at one time Midori powered all of Microsoft's natural language search service for the West Coast and Asia. "It is impossible to predict how a singularity will affect objects in its causal future." (NCSA Cyberia Glossary) What's New? The Singularity Research Development Kit (RDK) 2.0 is available for…


Pastry

Established: February 15, 2001

A substrate for peer-to-peer applications. Pastry is a generic, scalable, and efficient substrate for peer-to-peer applications. Pastry nodes form a decentralized, self-organizing, and fault-tolerant overlay network within the Internet. Pastry provides efficient request routing, deterministic object location, and load balancing in an application-independent manner. Furthermore, Pastry provides mechanisms that support and facilitate application-specific object replication, caching, and fault recovery. Short overview of Pastry (please don't cite): Pastry provides the following capabilities. First, each node in…

Microsoft Research blog

Location, Location, Location

By Suzanne Ross, Writer, Microsoft Research

Isn't it always about location, dahling? The Internet tribe might say, 'not in our backyard, the Web is a wild frontier without borders and laughs at geography.' But if you've ever searched the Web for a service available in your local area, or wanted to get local news and weather on your portal site without having to wade through the personalization feature, or thought it would be nice if…

July 2001
