Meeting Latency Target in Transient Burst: A Case on Spark Streaming

2017 IEEE International Conference on Cloud Engineering (IC2E)

Abstract
Real-time processing of big data has become a core operation in many business domains, such as extracting value from live social-network data. Big data workloads in the wild show strong temporal variability, which not only poses the risk of slow responsiveness in data analysis but also leads to a high risk of service outage. Recently developed batch streaming systems based on the MapReduce framework have proven effective on non-overloaded systems; however, little is known about how to sustain their performance under bursty workloads. In this paper, we propose a latency-driven data controller, Dslash, which aims to process as much data as possible while processing it as fast as the application's target latency and the system's capacity allow. In particular, we implement Dslash on Spark Streaming, an emerging and complex batch streaming system. Dslash features include (i) placing data in an augmented distributed memory, (ii) shedding out-of-date data, (iii) improving the processing locality of Map tasks, and (iv) delaying data processing during transient overloads. Extensive evaluation on a large number of workloads shows that Dslash ensures stable and fast responsiveness compared to vanilla Spark Streaming.
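The shedding and delaying features described above amount to a per-cycle admission policy: drop data too stale to be worth processing, then admit only as much of the remaining data as the latency target and system capacity allow, deferring the rest. The sketch below illustrates one such policy in plain Python; it is a hypothetical simulation of the control idea, not the paper's Spark Streaming implementation, and all names (`control_cycle`, `staleness_limit`, etc.) are assumptions for illustration.

```python
import collections

def control_cycle(queue, now, target_latency, capacity, staleness_limit):
    """One decision cycle of a hypothetical latency-driven controller
    in the spirit of Dslash (illustrative policy, not the paper's code).

    queue           : deque of (arrival_time, size) batches, oldest first
    target_latency  : latency budget per cycle (time units)
    capacity        : records the system can process per unit time
    staleness_limit : batches older than this are shed as out-of-date

    Returns (admitted, shed, delayed) lists of batches.
    """
    shed = []
    # (ii) shedding: drop data too stale to meet the latency target
    while queue and now - queue[0][0] > staleness_limit:
        shed.append(queue.popleft())

    # (iv) delaying: admit only what fits in this cycle's latency budget,
    # expressed as a record budget = target_latency * capacity
    admitted, budget = [], target_latency * capacity
    while queue and queue[0][1] <= budget:
        batch = queue.popleft()
        budget -= batch[1]
        admitted.append(batch)

    delayed = list(queue)  # left for a later, less-loaded cycle
    return admitted, shed, delayed

# Usage: three batches arrive; the oldest is stale, one fits the budget,
# one is delayed to the next cycle.
q = collections.deque([(0.0, 50), (9.5, 40), (9.8, 40)])
admitted, shed, delayed = control_cycle(
    q, now=10.0, target_latency=1.0, capacity=60, staleness_limit=5.0)
```

Under transient bursts, this trades completeness for bounded latency: shedding sacrifices stale data outright, while delaying preserves fresh data at the cost of deferred processing.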
Keywords
batch streaming system, latency, shedding, delaying, data placement, overload