Backward Graph Construction and Lowering in DL Compiler for Model Training on AI Accelerators

2022 19th International SoC Design Conference (ISOCC), 2022

Abstract
A deep learning (DL) compiler is required to accelerate model inference and training on AI accelerators. In this work, we propose a novel approach to constructing a backward graph from a PyTorch model and lowering it to machine code. The backward graph is constructed using information from PyTorch's autograd engine. A newly proposed lexer and parser convert the generated graph into an abstract syntax tree (AST). The AST is converted to GIR, an intermediate representation within the MLIR framework. IR lowering is then applied to the GIR, producing LLVM IR for the LLVM backend. Operators that can be accelerated by the DL accelerator are offloaded to it through an LLVM IR call function that invokes the accelerator's backend. In the experiment, the proposed compiler estimated the training loss with an average error of 1.46%, completing 100 epochs within 6.7 seconds.
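As a concrete illustration of the first stage, the sketch below walks the backward graph that PyTorch's autograd engine exposes through the public grad_fn and next_functions attributes. The helper name walk_backward_graph and the toy model are illustrative assumptions; the abstract does not specify the compiler's actual graph format or how it is serialized before lexing.

```python
# Minimal sketch: reading backward-graph structure from PyTorch's autograd
# engine, in the spirit of the paper's graph-construction stage. Only the
# public grad_fn / next_functions attributes are used; everything else
# (helper name, toy model) is an illustrative assumption.
import torch

def walk_backward_graph(tensor):
    """Breadth-first walk over the autograd graph rooted at `tensor`,
    yielding (node_name, successor_names) pairs."""
    seen = set()
    queue = [tensor.grad_fn]              # root node of the backward graph
    while queue:
        fn = queue.pop(0)
        if fn is None or fn in seen:
            continue
        seen.add(fn)
        succs = [nxt for nxt, _ in fn.next_functions if nxt is not None]
        yield type(fn).__name__, [type(s).__name__ for s in succs]
        queue.extend(succs)

x = torch.randn(4, 3, requires_grad=True)
w = torch.randn(3, 2)                     # no grad: pruned from the graph
y = (x @ w).relu().sum()                  # scalar loss with a grad_fn chain

for name, succs in walk_backward_graph(y):
    print(name, "->", succs)
# SumBackward0 -> ['ReluBackward0']
# ReluBackward0 -> ['MmBackward0']
# MmBackward0 -> ['AccumulateGrad']
# AccumulateGrad -> []
```

In the pipeline described above, a textual dump of such a graph would then be tokenized by the proposed lexer, parsed into an AST, and lowered through GIR to LLVM IR.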
Keywords
AI accelerator, compiler, deep learning, training