Panthera: Holistic Memory Management for Big Data Processing over Hybrid Memories

  • Chenxi Wang,
  • Huimin Cui,
  • Ting Cao,
  • John Zigman,
  • Haris Volos,
  • Onur Mutlu,
  • Fang Lv,
  • Xiaobing Feng,
  • Guoqing Harry Xu

The 40th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI'19)

Published by ACM

To process real-world datasets, modern data-parallel systems often require extremely large amounts of memory, which are both costly and energy-inefficient. Emerging non-volatile memory (NVM) technologies offer high capacity compared to DRAM and low energy compared to SSDs. Hence, NVMs have the potential to fundamentally change the dichotomy between DRAM and durable storage in Big Data processing. However, most Big Data applications are written in managed languages and executed on top of a managed runtime that already performs various dimensions of memory management. Supporting hybrid physical memories adds a new dimension, creating unique challenges in data replacement.

This paper proposes Panthera, a semantics-aware, fully automated memory management technique for Big Data processing over hybrid memories. Panthera analyzes user programs on a Big Data system to infer their coarse-grained access patterns, which are then passed to the Panthera runtime for efficient data placement and migration. For Big Data applications, this coarse-grained data-division information is accurate enough to guide the garbage collector (GC) in laying out data, while incurring little overhead for data monitoring and movement. We implemented Panthera in OpenJDK and Apache Spark. Our extensive evaluation demonstrates that Panthera reduces energy by 32–52% with only a 1–9% execution time overhead.
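
For context (and as an assumption going beyond the abstract), the sketch below shows the kind of Spark program whose coarse-grained access pattern such an analysis would examine: an RDD that is persisted and joined in every loop iteration is a natural DRAM candidate, whereas an RDD scanned only once could reside in NVM. The program, paths, and placement comments are illustrative only and do not reflect Panthera's actual interfaces.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

// Minimal sketch (not Panthera's API): the coarse-grained pattern here is
// that `edges` is cached and reused across iterations (a "hot" DRAM
// candidate), while the seed RDD is scanned once (a "cold" NVM candidate).
object AccessPatternSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("AccessPatternSketch").getOrCreate()
    val sc = spark.sparkContext

    // Reused in every iteration of the loop below; frequent access suggests
    // keeping its backing objects in DRAM.
    val edges = sc.textFile("hdfs:///graph/edges")
      .map { line => val p = line.split("\\s+"); (p(0).toLong, p(1).toLong) }
      .persist(StorageLevel.MEMORY_ONLY)

    // Read once to seed the traversal; infrequent access makes it a
    // reasonable candidate for placement in NVM.
    var frontier = sc.textFile("hdfs:///graph/seeds").map(_.toLong)

    for (_ <- 1 to 10) {
      frontier = edges
        .join(frontier.map(v => (v, ())))      // (src, (dst, ()))
        .map { case (_, (dst, _)) => dst }
        .distinct()
    }

    println(s"frontier size after 10 expansion steps: ${frontier.count()}")
    spark.stop()
  }
}
```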