SIPT: Speculatively Indexed, Physically Tagged Caches

  • Tianhao Zheng,
  • Haishan Zhu,
  • Mattan Erez

2018 IEEE International Symposium on High Performance Computer Architecture (HPCA)


The L1 cache is the most frequently accessed structure in the memory hierarchy and therefore should have low expected access latency. As such, the L1 cache presents challenging tradeoffs between hit rate and access latency. Access latency includes the virtual memory address translation latency (TLB lookup), tag array access and matching, and the data access itself. To push latency down, all three components are ideally overlapped. Tag and data accesses are overlapped by accessing all ways simultaneously and delivering only tag-matching data. Overlapping those two accesses with address translation is more challenging because an access cannot start before the address is known. The simplest cache design indeed performs translation before the L1 access begins. This design is called a physically-indexed, physically-tagged (PIPT) cache because virtual addresses (VAs) are not used at all in the L1. While simple, the translation overhead is not hidden, and the resulting access latency is often considered too high.
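The serialization the paragraph describes can be made concrete with a small sketch (not from the paper; all parameters are assumed for illustration): with a 64 KiB, 8-way cache with 64 B lines and 4 KiB pages, the set-index field extends one bit above the page offset, so the index depends on translation and a PIPT cache must finish the TLB lookup before it can even select a set.

```python
# Illustrative sketch of PIPT indexing; cache geometry and the page
# mapping below are assumed, not taken from the paper.
LINE_BYTES  = 64
WAYS        = 8
CACHE_BYTES = 64 * 1024
PAGE_BYTES  = 4 * 1024

SETS             = CACHE_BYTES // (WAYS * LINE_BYTES)  # 128 sets
OFFSET_BITS      = LINE_BYTES.bit_length() - 1         # 6
INDEX_BITS       = SETS.bit_length() - 1               # 7
PAGE_OFFSET_BITS = PAGE_BYTES.bit_length() - 1         # 12

# The top index bit (bit 12) lies above the page offset, so it can
# change under translation: the VA alone cannot select the set.
assert OFFSET_BITS + INDEX_BITS > PAGE_OFFSET_BITS

def set_index(pa):
    """Set index taken from a *physical* address (PIPT)."""
    return (pa >> OFFSET_BITS) & (SETS - 1)

def translate(va, page_table):
    """Toy TLB/page-table lookup: virtual page -> physical page."""
    vpn = va >> PAGE_OFFSET_BITS
    off = va & (PAGE_BYTES - 1)
    return (page_table[vpn] << PAGE_OFFSET_BITS) | off

# PIPT access order: translate first, then index and tag-match.
page_table = {0x5: 0x8}                     # one assumed mapping
va = (0x5 << PAGE_OFFSET_BITS) | 0xA40     # VA 0x5A40
pa = translate(va, page_table)             # PA 0x8A40

# With this mapping, VA and PA differ in bit 12, so indexing with
# the untranslated VA would pick the wrong set.
assert set_index(va) != set_index(pa)
print(hex(pa), set_index(pa))
```

Removing this dependency on the translated bits is exactly what alternative designs (virtually indexed caches, or speculative indexing as in SIPT) try to exploit.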