Compression in cache design.

ICS 2007

Abstract
Increasing cache capacity via compression enables designers to improve the performance of existing designs at small incremental cost, further leveraging the large die area invested in last-level caches. This paper explores the compressed cache design space with a focus on implementation feasibility. Our compression schemes use companion line pairs -- cache lines whose addresses differ by a single bit -- as candidates for compression. We propose two novel compressed cache organizations: the companion bit remapped cache and the pseudo-associative cache. Our cache organizations use a fixed-width physical cache line implementation while providing a variable-length logical cache line organization, without changing the number of sets or ways and with minimal increase in state per tag. We evaluate banked and pairwise schemes as two alternatives for storing compressed companion pairs within a physical cache line. We evaluate companion line prefetching (CLP), a simple yet effective prefetching mechanism that works in conjunction with our compression scheme. CLP is nearly pollution free since it only prefetches lines that are compression candidates. Using a detailed cycle-accurate IA-32 simulator, we measure the performance of several third-level compressed cache designs simulating a representative collection of workloads. Our experiments show that our cache compression designs improve IPC for all cache-sensitive workloads, even those with modest data compressibility. The pairwise pseudo-associative compressed cache organization with companion line prefetching is the best configuration, providing a mean IPC improvement of 19% for cache-sensitive workloads, and a best-case IPC improvement of 84%. Finally, our cache designs exhibit negligible overall IPC degradation for cache-insensitive workloads.
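The companion-pair relation described above (two cache lines whose line addresses differ in exactly one bit) can be sketched as follows. The line size and the choice of which address bit distinguishes companions are illustrative assumptions, not details taken from the paper:

```python
# Sketch of companion line pairing, under assumed parameters:
LINE_SIZE = 64       # bytes per cache line (assumption)
COMPANION_BIT = 0    # line-address bit that distinguishes a pair (assumption)

def line_address(byte_addr: int) -> int:
    """Strip the byte-offset bits to obtain the cache line address."""
    return byte_addr // LINE_SIZE

def companion_of(line_addr: int) -> int:
    """A line's companion differs from it in a single address bit."""
    return line_addr ^ (1 << COMPANION_BIT)

def are_companions(a: int, b: int) -> bool:
    """True if two line addresses differ in exactly one bit."""
    diff = a ^ b
    return diff != 0 and (diff & (diff - 1)) == 0  # power-of-two check
```

Because a line's companion is computed by flipping one address bit, the pair maps naturally onto one physical line slot, which is what makes the fixed-width physical / variable-length logical organization cheap to implement.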