LLAMA: A Cache/Storage Subsystem for Modern Hardware

  • Justin Levandoski
  • David Lomet
  • Sudipta Sengupta

Proceedings of the International Conference on Very Large Databases, VLDB 2013


LLAMA is a subsystem designed for new hardware environments that supports an API for page-oriented access methods, providing both cache and storage management. The caching (CL) and storage (SL) layers use a common mapping table that separates a page's logical identity from its physical location. CL supports data updates and management updates (e.g., for index re-organization) via latch-free compare-and-swap atomic state changes on its mapping table. SL uses the same mapping table to cope with page location changes produced by log structuring on every page flush. To demonstrate LLAMA's suitability, we tailored our latch-free Bw-tree implementation to use LLAMA. The Bw-tree is a B-tree style index. Layered on LLAMA, it achieves higher performance and scalability on real workloads than Berkeley DB's B-tree, which is known for good performance.