TTLCache: Taming Latency in Erasure-Coded Storage Through TTL Caching

IEEE Transactions on Network and Service Management (2020)

Abstract
Distributed storage systems are known to be susceptible to long response times, and higher latency leads to reduced customer satisfaction. An elegant way to reduce latency in such systems combines two methods: adding redundancy to the contents at the storage nodes, and placing a cache close to end-users. Redundancy can be added using an erasure code, which offers high resiliency with low storage overhead. It is important to quantify the performance of distributed storage systems in the presence of redundancy and caching, which is the focus of this work. This paper proposes a framework for quantifying and jointly optimizing mean and tail latency in erasure-coded storage systems with edge-caching capabilities. A novel policy for caching contents in erasure-coded storage systems, called time-to-live caching (TTLCache), is proposed. Using the TTLCache policy and probabilistic server-selection techniques, bounds on mean latency and latency tail probability (LTP) are characterized. A convex combination of both metrics is optimized over the choices of probabilistic scheduling and TTLCache parameters using an efficient algorithm. In all tested cases, the experimental results show the superiority of our approach compared to state-of-the-art algorithms and competitive baselines. Implementation in a real cloud environment further validates the results.
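The abstract names two building blocks, time-to-live caching and probabilistic server selection, without detail. As a rough illustration of the general concepts only (hypothetical names and interfaces, not the paper's actual TTLCache policy or scheduling scheme), a minimal sketch might look like:

```python
import random
import time

class SimpleTTLCache:
    """Minimal TTL cache sketch: an entry expires `ttl` seconds after insertion."""

    def __init__(self, ttl):
        self.ttl = ttl
        self.store = {}  # key -> (value, expiry timestamp)

    def put(self, key, value):
        self.store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None  # cache miss
        value, expiry = entry
        if time.monotonic() >= expiry:
            del self.store[key]  # expired: evict and report a miss
            return None
        return value

def pick_servers(probabilities, k):
    """Probabilistic server selection sketch: draw k distinct servers,
    weighting each draw by the given access probabilities."""
    remaining = list(probabilities)
    chosen = []
    for _ in range(k):
        weights = [probabilities[s] for s in remaining]
        s = random.choices(remaining, weights=weights)[0]
        chosen.append(s)
        remaining.remove(s)  # without replacement: erasure-coded chunks sit on distinct nodes
    return chosen
```

In an erasure-coded system a read needs k distinct coded chunks, hence the sampling without replacement; the paper optimizes the selection probabilities and TTL values jointly, which this sketch does not attempt.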
Keywords
Alternating optimization, caching, distributed storage systems, erasure coding, mean latency, tail latency