Transactional Memory – Semantics and Performance


June 4, 2012


Tim Harris




Writing concurrent programs is notoriously difficult, and is of increasing practical importance. In this series of lectures, I introduce “Transactional Memory” (TM) as a technique for building shared memory data structures. As I illustrate, it can be much easier to build shared memory data structures using TM than it is to use conventional abstractions such as locks, or the atomic compare-and-swap instruction. In this lecture, I look at the semantics of programming language constructs built over TM—for example, how programs using TM interact with the existing concurrency-control features of a language—and some initial ideas for what it means for a program to use TM correctly.
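To make the contrast with lock-based programming concrete, here is a minimal sketch of the atomic-block style of programming that TM supports. The `atomic()` context manager below is a hypothetical stand-in: it is implemented with a single global lock purely for illustration, whereas a real TM runtime would track read/write sets and roll back conflicting transactions.

```python
import threading
from contextlib import contextmanager

# Hypothetical stand-in for a TM "atomic" block. A real TM
# implementation would execute the block speculatively and abort
# on conflict; here a single global lock merely provides the same
# atomicity guarantee for the sake of the example.
_global_lock = threading.Lock()

@contextmanager
def atomic():
    with _global_lock:
        yield

class Account:
    def __init__(self, balance):
        self.balance = balance

def transfer(src, dst, amount):
    # With an atomic block the programmer states *what* must be
    # atomic; with explicit locks they must also decide *which*
    # locks to acquire and in what order to avoid deadlock.
    with atomic():
        src.balance -= amount
        dst.balance += amount

a, b = Account(100), Account(0)
threads = [threading.Thread(target=transfer, args=(a, b, 10))
           for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(a.balance, b.balance)  # → 0 100
```

The point of the sketch is the programming model, not the implementation: the body of `transfer` names only the data it touches, and composing two such operations into a larger atomic step does not require reasoning about lock ordering.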


Tim Harris

Tim Harris received his PhD from the University of Cambridge, UK, where he is now a Lecturer in the Systems Research Group. His research interests are in practical concurrent and distributed systems and the programming languages and tools to support them. Current research topics include non-blocking concurrency-control primitives, distributed debugging for e-Science applications, and the XenoServers public distributed computing project. He is co-author of the undergraduate textbook “Operating Systems”, published by Pearson, and is a Teaching Fellow at Churchill College, Cambridge, UK.


Portrait of Tim Harris