Optimizing CNN Model Inference on CPUs

USENIX ATC '19: Proceedings of the 2019 USENIX Annual Technical Conference (2019)

Cited by 10 | Views 123
Abstract
The popularity of Convolutional Neural Network (CNN) models and the ubiquity of CPUs imply that better performance of CNN model inference on CPUs can deliver significant gains to a large number of users. To improve the performance of CNN inference on CPUs, current approaches that treat the model as a graph mostly rely on high-performance libraries such as Intel MKL-DNN together with some basic graph-level optimizations, which is restrictive and misses the opportunity to optimize the end-to-end inference pipeline as a whole. This paper presents a more comprehensive approach to CNN model inference on CPUs that employs a full-stack, systematic scheme of optimizations. The proposed solution implements the operations as templates, which enables further performance improvement via joint operation- and graph-level optimization. Experiments show that the proposed solution achieves up to 3.45x lower latency for CNN model inference than current state-of-the-art implementations on various popular CPUs.
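To make the idea of an "operation template" concrete, below is a minimal, hypothetical sketch written against the open-source TVM tensor-expression API (the paper's system is built on top of TVM). The operator choice (a matrix multiply standing in for convolution), the knob names tile_m and tile_n, and the specific schedule are illustrative assumptions, not the authors' actual templates. The point it shows is that the template leaves tuning knobs open (tile sizes, loop order, parallelization, vectorization), so operation-level tuning can be searched jointly with graph-level decisions rather than fixed inside a library call.

```python
import tvm
from tvm import te

def matmul_template(M, N, K, tile_m=32, tile_n=32):
    """Hypothetical operation template: tile_m and tile_n are tunable knobs."""
    A = te.placeholder((M, K), name="A")
    B = te.placeholder((K, N), name="B")
    k = te.reduce_axis((0, K), name="k")
    C = te.compute(
        (M, N),
        lambda i, j: te.sum(A[i, k] * B[k, j], axis=k),
        name="C",
    )

    s = te.create_schedule(C.op)
    # Tile the two spatial loops; the split factors are left open for tuning.
    mo, mi = s[C].split(C.op.axis[0], factor=tile_m)
    no, ni = s[C].split(C.op.axis[1], factor=tile_n)
    s[C].reorder(mo, no, mi, ni)
    s[C].parallel(mo)     # multi-core parallelism on the outer tile loop
    s[C].vectorize(ni)    # SIMD vectorization on the innermost loop
    return s, [A, B, C]

# Instantiate the template with one concrete knob setting and compile for CPU.
s, args = matmul_template(1024, 1024, 1024, tile_m=32, tile_n=64)
func = tvm.build(s, args, target="llvm")
```

A joint optimizer in the spirit of the abstract would search over such per-operation knobs for every operation in the graph together with graph-level choices (e.g., data-layout transformations between operations), instead of tuning each operation in isolation against a fixed library interface.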