Optimistic concurrency relies on speculative execution, read-write conflict detection, and checkpoint-rollback techniques to provide a programming model that replaces locks with the abstraction of atomic, isolated execution of critical sections. Previous research has shown that on chip multiprocessors, a class of workloads featuring irregular parallelism and rare read-write conflicts can reap significant benefits from the transactional memory (TM) model, because complex synchronization code can be avoided without the scalability sacrifice that is the hallmark of coarse-grain synchronization.

In a distributed setting, however, with current technological parameters, this class of workloads becomes vanishingly small. Moreover, this class does not include the workloads currently used to evaluate TM and distributed TM systems. We construct a model that predicts the performance of a distributed software transactional memory (DSTM) executing a given workload. The model assumes optimal pipelining, batching, and locality, and predicts performance by finding the critical path induced by read-write sharing. We validate the model against real executions of TM benchmarks from the literature, finding that it tracks observed scalability to within 17%. We apply the model to popular TM benchmark applications, observing that none scale in a distributed context because transactions are too short relative to network latencies. Traditional latency-hiding techniques such as prefetching, batching, and speculation do not help, and in fact sometimes make performance worse. We conclude that current TM benchmarks are not appropriate workloads for a distributed system using optimistic concurrency.
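The core idea of a critical-path performance model can be illustrated with a small sketch. This is not the paper's actual model, just a hedged toy version of the intuition: under the optimistic assumptions of perfect pipelining, batching, and locality, runtime is bounded only by chains of read-write-conflicting transactions, so the best-case speedup is total work divided by the length of the longest conflict chain. The data shapes (`durations`, `conflicts`) are illustrative assumptions, not the model's real inputs.

```python
# Toy critical-path model (illustrative sketch, not the paper's model).
# durations: {txn_id: execution time}
# conflicts: {txn_id: [txn_ids it must serialize after, due to
#                      read-write sharing]}  -- assumed to form a DAG.

def critical_path(durations, conflicts):
    """Length of the longest chain of conflicting transactions."""
    memo = {}

    def finish_time(t):
        # Earliest finish of t: its own duration plus the latest
        # finish among the transactions it conflicts with.
        if t not in memo:
            memo[t] = durations[t] + max(
                (finish_time(p) for p in conflicts.get(t, [])),
                default=0.0)
        return memo[t]

    return max(finish_time(t) for t in durations)

def predicted_speedup(durations, conflicts):
    """Best-case speedup: total work over the critical path."""
    total_work = sum(durations.values())
    return total_work / critical_path(durations, conflicts)

# Four unit-length transactions; only b conflicts with a, so the
# critical path has length 2 and the best-case speedup is 4 / 2 = 2.
durations = {"a": 1.0, "b": 1.0, "c": 1.0, "d": 1.0}
conflicts = {"b": ["a"]}
print(predicted_speedup(durations, conflicts))
```

In this sketch, short transactions and long conflict chains shrink the predicted speedup, which mirrors the paper's observation that short transactions relative to network latency leave little exploitable parallelism.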