Athena: An Early-Fetch Architecture to Reduce On-Chip Page Walk Latencies

PACT 2022

Abstract
Large-scale applications from various domains are becoming increasingly irregular, placing significant strain on virtual memory performance. At the same time, enlarging on-chip SRAM structures such as the TLB is becoming difficult due to the technology scaling constraints imposed by the limitations of Moore's law. This emerging trend in applications, coupled with the lack of technology scaling in hardware, calls for hardware-level innovations that avoid expensive memory accesses for traversing page tables and keep page walk latencies in check. In this work, we introduce and evaluate Athena, an early-fetch architecture that reduces the on-chip latency of page walk requests. More specifically, Athena reduces page walk latency by issuing an early fetch without waiting for the Memory Management Unit to initiate the fetch. Athena improves performance by 6.5% in native non-virtualized environments and by 15.6% in virtualized environments. Moreover, combining Athena with a recent complementary prior work leads to further improvements of 16.5% and 23.4% in the native and virtualized environments, respectively.
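
To make the early-fetch idea concrete, the following is a minimal latency-model sketch in C. It assumes a standard x86-64 radix walk (4 dependent page-table accesses natively, up to 24 for a 2D nested walk under virtualization); the cycle constants (PTE_ACCESS_CYCLES, PRE_WALK_ON_CHIP_CYCLES) and the simple overlap model are illustrative assumptions based on our reading of the abstract, not values or mechanisms taken from the paper.

/* Toy latency model: overlapping the first page-table access with the
 * on-chip delay that elapses before the MMU formally starts the walk.
 * All cycle counts are illustrative assumptions, not values from the paper. */
#include <stdio.h>

#define LEVELS_NATIVE      4   /* x86-64 radix walk: 4 dependent accesses           */
#define LEVELS_VIRTUALIZED 24  /* 2D nested walk on x86-64: up to 24 accesses       */

/* Assumed latency of one page-table-entry access (cycles). */
static const int PTE_ACCESS_CYCLES = 40;
/* Assumed on-chip delay between the TLB miss and the MMU issuing
 * its first page-table access (cycles). */
static const int PRE_WALK_ON_CHIP_CYCLES = 30;

/* Baseline: the walk starts only after the on-chip pre-walk delay. */
static int walk_baseline(int levels) {
    return PRE_WALK_ON_CHIP_CYCLES + levels * PTE_ACCESS_CYCLES;
}

/* Early fetch: the first page-table access is issued at TLB-miss time,
 * so it overlaps with the on-chip pre-walk delay. */
static int walk_early_fetch(int levels) {
    int overlapped = PTE_ACCESS_CYCLES > PRE_WALK_ON_CHIP_CYCLES
                         ? PTE_ACCESS_CYCLES
                         : PRE_WALK_ON_CHIP_CYCLES;   /* max of the two overlapped phases */
    return overlapped + (levels - 1) * PTE_ACCESS_CYCLES;
}

int main(void) {
    printf("native:      baseline %d cycles, early fetch %d cycles\n",
           walk_baseline(LEVELS_NATIVE), walk_early_fetch(LEVELS_NATIVE));
    printf("virtualized: baseline %d cycles, early fetch %d cycles\n",
           walk_baseline(LEVELS_VIRTUALIZED), walk_early_fetch(LEVELS_VIRTUALIZED));
    return 0;
}

Under these assumed numbers the saving per walk equals the hidden pre-walk delay, and it matters more in virtualized walks simply because they issue many more dependent accesses; the paper's reported 6.5% (native) and 15.6% (virtualized) gains come from its own evaluation, not from this toy model.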
Keywords
Translation Lookaside Buffer, Virtual Address, Address Translation, Cache Management