Reconciling Predictability and Coherent Caching

2020 9th Mediterranean Conference on Embedded Computing (MECO), 2020

Abstract
Real-time systems are required to respond to their physical environment within predictable time bounds. While multi-core platforms provide tremendous computational power and throughput, they also introduce new sources of unpredictability. For parallel applications that share data across multiple cores, the overhead of maintaining data coherence is a major cause of execution-time variability. This source of variability can be eliminated through application-level control that limits the caching of selected data at particular levels of the cache hierarchy, removing the need for explicit coherence machinery for that data. We show that such control can reduce the worst-case write-request latency on shared data by 52%. Benchmark evaluations show that the proposed technique has minimal impact on average performance.
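The abstract describes application-level control that keeps selected shared data out of (parts of) the cache hierarchy so that no coherence actions are required for it. The paper's actual mechanism is a hardware/software co-design and is not detailed here; as a loose commodity-hardware analogue only, the sketch below (assuming x86 with GCC/Clang SSE intrinsics) writes a shared buffer with non-temporal stores so the producing core does not install those lines in its own caches, which can reduce the invalidation traffic a later writer on another core would otherwise trigger.

```c
/*
 * Illustrative sketch only: the paper's co-designed mechanism for selecting
 * which cache levels may hold shared data is platform-specific.  This x86
 * analogue merely shows application-level control over caching behaviour:
 * non-temporal stores keep the written lines out of the producer's caches.
 */
#include <immintrin.h>
#include <stdint.h>
#include <stdlib.h>

static int32_t *shared_buf;   /* buffer shared with consumers on other cores */

void producer_write(const int32_t *src, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        /* Non-temporal store: written data bypasses this core's caches. */
        _mm_stream_si32((int *)&shared_buf[i], src[i]);
    }
    _mm_sfence();   /* order the streamed stores before any signalling */
}

int main(void)
{
    const size_t n = 1024;
    /* Cache-line (64-byte) alignment helps the write-combining buffers
     * drain in whole lines. */
    shared_buf = aligned_alloc(64, n * sizeof(int32_t));

    int32_t src[1024];
    for (size_t i = 0; i < n; i++)
        src[i] = (int32_t)i;

    producer_write(src, n);
    free(shared_buf);
    return 0;
}
```

A real deployment of the paper's technique would rely on platform support for restricting which cache levels may hold the selected data; the intrinsics above only approximate that behaviour on commodity hardware.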
Keywords
hardware/software co-design,worst-case execution time,cache coherence,memory contention