A Demonstration of Willump: A Statistically-Aware End-to-End Optimizer for Machine Learning Inference

2020

Abstract
Systems for ML inference are widely deployed today, but they typically optimize ML inference workloads using techniques designed for conventional data serving workloads and miss critical opportunities to leverage the statistical nature of ML. In this demo, we present Willump, an optimizer for ML inference that introduces statistically-motivated optimizations targeting ML applications whose performance bottleneck is feature computation. Willump automatically cascades feature computation for classification queries: Willump classifies most data inputs using only high-value, low-cost features selected by a cost model, improving query performance by up to 5x without statistically significant accuracy loss. In this demo, we use interactive and easily-downloadable Jupyter notebooks to show VLDB attendees which applications Willump can speed up, how to use Willump, and how Willump produces such large performance gains.
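The cascading idea described above can be illustrated with a short sketch. This is not Willump's actual API: Willump compiles the application's pipeline and selects the low-cost features with its cost model automatically, whereas the feature functions, model names, and the 0.9 confidence threshold below are illustrative assumptions.

```python
# Minimal sketch of a feature-computation cascade, assuming hypothetical
# cheap_features / expensive_features functions and a fixed threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression

def cheap_features(inputs):
    # Stand-in for the low-cost, high-value features a cost model would select.
    return np.array([[len(s), s.count(" ")] for s in inputs], dtype=float)

def expensive_features(inputs):
    # Stand-in for the costly features that dominate end-to-end latency.
    return np.array([[len(set(s)), sum(map(ord, s)) / max(len(s), 1)]
                     for s in inputs], dtype=float)

def cascaded_classify(inputs, approx_model, full_model, threshold=0.9):
    """Classify confident inputs with cheap features only; compute the
    expensive features just for the uncertain remainder."""
    cheap = cheap_features(inputs)
    probs = approx_model.predict_proba(cheap)
    confident = probs.max(axis=1) >= threshold
    preds = approx_model.classes_[probs.argmax(axis=1)]

    uncertain = [s for s, ok in zip(inputs, confident) if not ok]
    if uncertain:
        full = np.hstack([cheap_features(uncertain), expensive_features(uncertain)])
        preds[~confident] = full_model.predict(full)
    return preds

# Toy usage: train an approximate model on cheap features and a full model
# on all features, then serve queries through the cascade.
train_inputs = ["short", "a much longer example sentence",
                "tiny", "another fairly long input string"] * 25
labels = np.array([0, 1, 0, 1] * 25)

approx = LogisticRegression().fit(cheap_features(train_inputs), labels)
full = LogisticRegression().fit(
    np.hstack([cheap_features(train_inputs), expensive_features(train_inputs)]), labels)

print(cascaded_classify(["tiny", "a brand new rather lengthy query string"], approx, full))
```

Because most inputs clear the confidence threshold, the expensive feature computation runs on only a small fraction of queries, which is where the reported up-to-5x speedup comes from.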