A Case of Multi-Resource Fairness for Serverless Workflows (Work In Progress Paper)

ICPE '23 Companion: Companion of the 2023 ACM/SPEC International Conference on Performance Engineering (2023)

Abstract
Serverless platforms have exploded in popularity in recent years, but, today, these platforms are still unsuitable for large classes of applications. They perform well for batch-oriented workloads that perform coarse transformations over data asynchronously, but their lack of clear service level agreements (SLAs), high per-invocation overheads, and interference make deploying online applications with stringent response time demands impractical. Our assertion is that beyond the glaring issues like cold start costs, a more fundamental shift is needed in how serverless function invocations are provisioned and scheduled in order to support these more demanding applications. Specifically, we propose a platform that leverages the observability and predictability of serverless functions to enforce multi-resource fairness. We explain why we believe interference across a spectrum of resources (CPU, network, and storage) contributes to lower resource utilization and poor response times for latency-sensitive and high-fanout serverless application patterns. Finally, we propose a new distributed and hierarchical function scheduling architecture that combines lessons from multi-resource fair scheduling, hierarchical scheduling, batch-analytics resource scheduling, and statistics to create an approach that we believe will enable tighter SLAs on serverless platforms than has been possible in the past.
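To make the "multi-resource fairness" idea concrete, the sketch below shows a dominant-resource-fairness (DRF)-style allocation loop over CPU, network, and storage shares. This is an illustrative assumption about what multi-resource fair scheduling could look like, not the scheduler the paper proposes; the tenant names, demand vectors, and the greedy progressive-filling loop are all hypothetical.

```python
# Minimal DRF-style sketch: repeatedly grant an invocation's worth of resources
# to the tenant with the smallest dominant share. Resource names, demands, and
# the scheduling loop are illustrative assumptions, not the paper's design.
from dataclasses import dataclass, field


@dataclass
class Tenant:
    name: str
    # Per-invocation demand for each resource, as a fraction of cluster capacity.
    demand: dict
    allocated: dict = field(default_factory=dict)

    def dominant_share(self) -> float:
        # A tenant's dominant share is its largest allocated share across resources.
        return max(self.allocated.values(), default=0.0)


def drf_schedule(tenants, capacity, rounds):
    """Greedily admit invocations for the tenant with the lowest dominant share."""
    used = {r: 0.0 for r in capacity}
    for _ in range(rounds):
        tenant = min(tenants, key=lambda t: t.dominant_share())
        # Stop if the cluster cannot fit one more invocation from this tenant.
        if any(used[r] + tenant.demand[r] > capacity[r] for r in capacity):
            break
        for r, d in tenant.demand.items():
            used[r] += d
            tenant.allocated[r] = tenant.allocated.get(r, 0.0) + d


if __name__ == "__main__":
    capacity = {"cpu": 1.0, "network": 1.0, "storage": 1.0}
    tenants = [
        Tenant("latency-sensitive", {"cpu": 0.01, "network": 0.04, "storage": 0.01}),
        Tenant("batch-analytics", {"cpu": 0.05, "network": 0.01, "storage": 0.02}),
    ]
    drf_schedule(tenants, capacity, rounds=100)
    for t in tenants:
        print(t.name, round(t.dominant_share(), 2))
```

Under this toy policy, a network-heavy latency-sensitive tenant and a CPU-heavy batch tenant converge toward equal dominant shares, which is the kind of cross-resource isolation the abstract argues is missing from today's serverless platforms.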