TOFEC: Achieving optimal throughput-delay trade-off of cloud storage using erasure codes
INFOCOM(2014)
Abstract
Our paper presents solutions that combine erasure coding, parallel connections to the storage cloud, and limited chunking (i.e., dividing an object into a few smaller segments) to significantly improve the delay of uploading data to and downloading data from cloud storage. TOFEC is a strategy that helps a front-end proxy adapt to the workload level by treating scalable cloud storage (e.g., Amazon S3) as a shared resource requiring admission control. Under light workloads, TOFEC splits each file into more, smaller chunks and uses more parallel connections per file, minimizing service delay. Under heavy workloads, TOFEC automatically reduces the level of chunking (fewer chunks of larger size) and uses fewer parallel connections to reduce overhead, yielding higher throughput and avoiding excessive queueing delay. Our trace-driven simulation results show that TOFEC's adaptation mechanism converges to an appropriate code that provides the optimal delay-throughput trade-off without reducing system capacity. Compared to a non-adaptive strategy optimized for throughput, TOFEC delivers 2.5x lower latency under light workloads; compared to a non-adaptive strategy optimized for latency, TOFEC can scale to support over 3x as many requests.
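The delay benefit of chunking with erasure codes can be illustrated with a toy simulation (not the paper's actual model): with an (n, k) code, a file is recoverable once the fastest k of n parallel chunk downloads finish, which trims the latency tail relative to a single full-file download. The exponential per-chunk delay distribution and the parameter names below are illustrative assumptions.

```python
import random

def service_delay(n, k, mean_chunk_delay, trials=10000, seed=0):
    """Average completion time of an (n, k) erasure-coded download:
    n chunk requests run in parallel, and the request completes when
    the k-th fastest chunk arrives. Per-chunk delays are drawn from a
    hypothetical exponential distribution with the given mean."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        delays = sorted(rng.expovariate(1.0 / mean_chunk_delay)
                        for _ in range(n))
        total += delays[k - 1]  # k-th order statistic finishes the job
    return total / trials

# Baseline: one connection, one chunk (n=1, k=1).
baseline = service_delay(1, 1, mean_chunk_delay=1.0)
# Chunked: four parallel requests, any two suffice (n=4, k=2).
chunked = service_delay(4, 2, mean_chunk_delay=1.0)
```

In this toy model the (4, 2) code waits only for the 2nd fastest of 4 downloads, so its mean delay is well below the single-download baseline; the paper's point is that under heavy load the extra requests' overhead erodes this gain, which is why TOFEC adapts the chunking level.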
Keywords
scalable cloud storage, front-end proxy, trace-driven simulation, latency reduction, workload level, FEC, cloud storage, light workloads, chunking level, erasure coding, queueing, TOFEC, admission control, queueing theory, resource allocation, overhead reduction, heavy workloads, data uploading, delay performance improvement, TOFEC adaptation mechanism, resource sharing, Amazon S3, optimal throughput-delay trade-off, service delay minimization, cloud computing, throughput, queueing delay prevention, delay, data downloading, parallel connections