
Richard Black

Principal Research Software Development Engineer


Richard is a Principal Research Software Development Engineer at Microsoft Research Cambridge, where he is part of the Systems and Networking group. His research interests include performance analysis of distributed systems, operating systems, and networking. He has also worked on computer security mitigations, and he is currently particularly interested in data-centre-scale storage.

Richard enjoys the mix of academic publishing and product impact which MSR provides. You can work out the timeline of his particular mix by looking at the dates of his publications below. One of his product contributions is the set of algorithms and protocols behind the Network Map feature of Windows. Underlying the Network Map feature is the LLTD protocol, which is licensed as part of the Windows Rally program.

Richard occasionally hosts interns, especially those excited by contributing to working prototypes; if you are interested please apply using the standard intern tool.

Richard received his bachelor's and doctoral degrees from the University of Cambridge. He began his career as a Research Fellow at the University of Cambridge Computer Laboratory and then became a faculty member in the University of Glasgow Department of Computing Science. He returned to Cambridge in January 2000 to join the Microsoft Research laboratory, initially as a Researcher and, from 2006, as a Principal Research Software Development Engineer.


Software-Defined Storage (SDS) Architectures

Established: August 14, 2013

In data centers, the IO path to storage is long and complex. It comprises many layers or “stages” with opaque interfaces between them. This makes it hard to enforce end-to-end policies that dictate a storage IO flow’s performance (e.g., guarantee a tenant’s IO bandwidth) and routing (e.g., route an untrusted VM’s traffic through a sanitization middlebox). We are researching architectures that decouple control from data flow to enable such policies.
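To illustrate the kind of end-to-end policy involved, a per-tenant IO bandwidth guarantee can be approximated at a single stage with a token bucket. The sketch below is a minimal illustration under that assumption, not the project's actual mechanism; the class name and parameters are invented for the example.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: a minimal sketch of how one stage on the
    IO path might enforce a per-tenant bandwidth cap set by a control plane."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s      # sustained rate the tenant is allowed
        self.capacity = burst_bytes       # maximum burst the tenant may absorb
        self.tokens = burst_bytes         # start with a full bucket
        self.last = time.monotonic()

    def try_consume(self, nbytes):
        """Admit an IO of nbytes if enough tokens have accrued; else defer it."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False
```

A control plane could install one such bucket per tenant at each stage and adjust `rate_bytes_per_s` centrally, which is one simple way of decoupling control from data flow.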

Rack-scale Computing

Established: January 1, 2013

New hardware technologies such as systems- and networks-on-chip (SoCs and NoCs), switchless network fabrics, silicon photonics, and RDMA are redefining the landscape of data center computing, making it possible to interconnect thousands of cores at high speed at the scale of today's racks. We refer to this new class of hardware as rack-scale computers (RSCs) because the rack is increasingly replacing the individual server as the basic building block of modern data centers. Early examples of RSCs…

Project Pelican

Established: November 1, 2012

Pelican: a building block for exascale cold data storage

Pelican aims to store infrequently accessed (cold) data as inexpensively as possible. The amount of data stored is growing at a huge rate, but not all of it is "hot," i.e., frequently accessed. There is little reason to store cold data in the same high-performance, high-cost systems as hot data. Our goal was to design a storage system—called Pelican—specifically to take advantage of the needs of…

Predictable Data Centers (PDC)

Established: September 1, 2010

Performance predictability is a key requirement for high-performance applications in today's multi-tenant datacenters. Online services running in infrastructure datacenters need such predictability to satisfy their applications' SLAs. Cloud datacenters require guaranteed performance to bound customer costs and spur adoption. However, the network and storage stacks used in today's datacenters are unaware of such application requirements. This project examines how to enable predictable datacenters.
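One simple way a shared stack could honour per-tenant requirements is to dispatch IO by weighted share rather than FIFO. Below is a minimal deficit-round-robin sketch; the class and method names are invented for the example and this is not the project's actual mechanism.

```python
from collections import deque

class WeightedIoScheduler:
    """Deficit round-robin across tenants: a minimal sketch of giving each
    tenant a weighted share of IO bandwidth instead of FIFO service."""

    def __init__(self, quantum=64 * 1024):
        self.quantum = quantum   # bytes of credit granted per round, per weight unit
        self.queues = {}         # tenant -> deque of (cost_in_bytes, request)
        self.deficit = {}        # tenant -> accumulated unspent credit
        self.weight = {}         # tenant -> share weight

    def submit(self, tenant, cost, request, weight=1):
        self.queues.setdefault(tenant, deque()).append((cost, request))
        self.deficit.setdefault(tenant, 0)
        self.weight[tenant] = weight

    def dispatch_round(self):
        """One DRR round: serve each backlogged tenant up to its credit."""
        served = []
        for tenant, q in self.queues.items():
            if not q:
                continue
            self.deficit[tenant] += self.quantum * self.weight[tenant]
            while q and q[0][0] <= self.deficit[tenant]:
                cost, request = q.popleft()
                self.deficit[tenant] -= cost
                served.append(request)
            if not q:
                self.deficit[tenant] = 0  # idle tenants must not hoard credit
        return served
```

With a weight of 2 a tenant drains its queue roughly twice as fast as a weight-1 tenant, which is the kind of proportional guarantee the paragraph above refers to.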


Barrelfish

Established: August 27, 2009

Barrelfish is a new operating system being built from scratch in a collaboration between researchers at ETH Zurich and Microsoft Research, Cambridge. We are exploring how to structure an OS for future multi- and many-core systems. The motivation is two closely related hardware trends: first, the rapidly growing number of cores, which leads to scalability challenges, and second, the increasing diversity in computer hardware, requiring the OS to manage and exploit heterogeneous hardware resources. For…

Avalanche: File Swarming with Network Coding

Established: December 9, 2008

The code-named research project "Avalanche" studies how to enable a cost-effective, Internet-scalable, and very fast file distribution solution (e.g., for TV on demand, patches, software distribution). Such an approach leverages desktop PCs to aid in the distribution process, relieving congested servers and network links of most of the traffic.

Details

Existing peer-assisted file delivery systems use swarming techniques to simultaneously obtain different pieces of a file from multiple nodes. One problem of such systems is…
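The core idea of file swarming with network coding can be sketched in a few lines: instead of forwarding original pieces, a peer forwards random linear combinations of the blocks it holds, and a downloader decodes once it has collected enough independent combinations, so it no longer matters *which* pieces it receives. The sketch below works over GF(2) (plain XOR) for brevity; practical systems use larger finite fields, and the function names here are invented for the example.

```python
import random

def encode(blocks, rng):
    """Produce one coded block: a random GF(2) linear combination (XOR)
    of the source blocks, tagged with its coefficient vector."""
    n = len(blocks)
    coeffs = [rng.randint(0, 1) for _ in range(n)]
    if not any(coeffs):                     # avoid the useless all-zero combination
        coeffs[rng.randrange(n)] = 1
    payload = bytes(len(blocks[0]))
    for c, b in zip(coeffs, blocks):
        if c:
            payload = bytes(x ^ y for x, y in zip(payload, b))
    return coeffs, payload

def decode(coded, n):
    """Recover the n source blocks from coded blocks whose coefficient
    vectors span GF(2)^n, by Gauss-Jordan elimination over GF(2)."""
    rows = [(list(c), bytearray(p)) for c, p in coded]
    for col in range(n):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            raise ValueError("coded blocks not yet full rank")
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           bytearray(x ^ y for x, y in zip(rows[r][1], rows[col][1])))
    return [bytes(rows[i][1]) for i in range(n)]
```

A downloader simply keeps accumulating coded blocks from any peers until `decode` succeeds, which is exactly the property that relieves swarming systems of piece-selection problems.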

Network Inference

Established: February 25, 2004

Network Inference is a research project theme in which the network end-system (i.e., the computer) infers properties about the behaviour of the network and other end-systems in order to get a better experience. Such improvements might be better sharing or improved latency through reduced queueing and the like.

Past Contributors: Laurent Massoulie
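A classic example of an end-system inferring a network property from its own observations is TCP's smoothed round-trip-time estimate, which drives the retransmission timeout (RFC 6298). A minimal sketch, with class and method names invented for the example:

```python
class RttEstimator:
    """Jacobson/Karels-style smoothed RTT estimation, as standardised in
    RFC 6298: the end-system infers path latency from its own samples."""

    ALPHA = 1 / 8   # gain for the smoothed RTT
    BETA = 1 / 4    # gain for the RTT variance

    def __init__(self):
        self.srtt = None
        self.rttvar = None

    def observe(self, rtt):
        if self.srtt is None:               # first sample (RFC 6298, section 2.2)
            self.srtt, self.rttvar = rtt, rtt / 2
        else:                               # subsequent samples (section 2.3)
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - rtt)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt

    def rto(self):
        """Retransmission timeout derived from the current estimate."""
        return self.srtt + 4 * self.rttvar
```

The same pattern, observing one's own traffic and maintaining a running estimate, generalises to inferring queueing delay or available bandwidth without any help from the network.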


Singularity

Established: July 9, 2003

OS and tools for building dependable systems. The Singularity research codebase and design evolved to become the Midori advanced-development OS project. While never reaching commercial release, at one time Midori powered all of Microsoft's natural-language search service for the West Coast and Asia.

"…it is impossible to predict how a singularity will affect objects in its causal future." - NCSA Cyberia Glossary

What's New?

The Singularity Research Development Kit (RDK) 2.0 is available for…











Discovering Dependencies for Network Management
Victor Bahl, Paul Barham, Richard Black, Ranveer Chandra, Moises Goldszmidt, Rebecca Isaacs, Srikanth Kandula, Lun Li, John MacCormick, Dave Maltz, Richard Mortier, Mike Wawrzoniak, Ming Zhang, in Workshop on Hot Topics in Networks (HotNets-V), Association for Computing Machinery, Inc., November 1, 2006











Understanding Rack-Scale Disaggregated Storage

May 2017

Disaggregation of resources in the data center, especially at rack scale, offers the opportunity to use valuable resources more efficiently. Mass-storage racks in large-scale clouds are commonly filled with servers that have Hard Disk Drives (HDDs) attached directly to them, using either SATA or SAS depending on the number of…

