TinyIREE: An ML Execution Environment for Embedded Systems From Compilation to Deployment

IEEE Micro (2022)

Abstract
Machine learning model deployment for training and execution has been an important topic for industry and academic research in the last decade. Much of the attention has been focused on developing specific toolchains to support acceleration hardware. In this article, we present the Intermediate Representation Execution Environment (IREE), a unified compiler and runtime stack with the explicit goal of scaling machine learning programs down to the smallest footprints for mobile and edge devices, while maintaining the ability to scale up to larger deployment targets. IREE adopts a compiler-based approach and optimizes for heterogeneous hardware accelerators through the Multi-Level IR (MLIR) compiler infrastructure, which provides the means to quickly design and implement multilevel compiler intermediate representations (IRs). More specifically, this article focuses on TinyIREE, a set of deployment options in IREE that accommodate the limited memory and computation resources of embedded systems and bare-metal platforms, while also demonstrating IREE’s intuitive workflow, which generates workloads for different instruction set architecture (ISA) extensions and application binary interfaces (ABIs) through LLVM.
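
A minimal sketch of the compile-and-run flow the abstract describes, using IREE's Python bindings (the iree-compiler and iree-runtime packages). The tiny MLIR module, the "llvm-cpu" backend name, and the "local-task" driver are illustrative assumptions, and exact API names vary across IREE releases; a TinyIREE embedded deployment would instead cross-compile with an LLVM target triple and run the generated artifact on the bare-metal runtime rather than through Python.

    import numpy as np
    import iree.compiler as ireec
    import iree.runtime as ireert

    # A tiny MLIR module: element-wise multiply of two 4-element tensors.
    MLIR_MODULE = """
    func.func @simple_mul(%a: tensor<4xf32>, %b: tensor<4xf32>) -> tensor<4xf32> {
      %0 = arith.mulf %a, %b : tensor<4xf32>
      return %0 : tensor<4xf32>
    }
    """

    # Compile to an IREE VM flatbuffer; "llvm-cpu" lowers through LLVM for the host CPU.
    flatbuffer = ireec.compile_str(MLIR_MODULE, target_backends=["llvm-cpu"])

    # Load the compiled module into the runtime and invoke the exported function.
    config = ireert.Config("local-task")
    ctx = ireert.SystemContext(config=config)
    vm_module = ireert.VmModule.copy_buffer(ctx.instance, flatbuffer)
    ctx.add_vm_module(vm_module)

    a = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
    b = np.array([2.0, 2.0, 2.0, 2.0], dtype=np.float32)
    print(ctx.modules.module["simple_mul"](a, b).to_host())  # expected: [2. 4. 6. 8.]
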
Keywords
ML execution environment, embedded systems, compilation, deployment