An Efficient Hardware Design for Accelerating Sparse CNNs With NAS-Based Models

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2022)

Abstract
Deep convolutional neural networks (CNNs) achieve remarkable performance at the cost of enormous computation. As CNN models grow deeper and more complex, compressing them into sparse models by pruning redundant connections has emerged as an attractive approach to reducing computation and memory requirements. Meanwhile, FPGAs have been demonstrated to be an effective hardware platform for accelerating CNN inference. However, most existing FPGA accelerators target dense CNN models and are inefficient when executing sparse models, since most of their arithmetic operations become additions and multiplications with zero operands. In this work, we propose an accelerator with software–hardware co-design for sparse CNNs on FPGAs. To efficiently handle the irregular connections in sparse convolutional layers, we propose a weight-oriented dataflow that exploits element–matrix multiplication as the key operation: each nonzero weight is processed individually, which yields low decoding overhead. We then design an FPGA accelerator featuring a tile look-up table (TLUT) and a channel multiplexer (CMUX). The TLUT matches indices between sparse weights and input pixels, so the runtime decoding overhead is mitigated by an efficient indexing operation. Moreover, we propose a weight layout that enables conflict-free on-chip memory access; to cooperate with this layout, a CMUX is inserted to locate addresses. Finally, we build a neural architecture search (NAS) engine that leverages the reconfigurability of FPGAs to generate an efficient CNN model and choose optimal hardware design parameters. Experiments demonstrate that our accelerator achieves 223.4–309.0 GOP/s for modern CNNs on a Xilinx ZCU102, a $2.4\times$–$12.9\times$ speedup over previous dense CNN accelerators on FPGAs. Our FPGA-aware NAS approach shows a $2\times$ speedup over MobileNetV2 with only 1.5% accuracy loss.
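To make the weight-oriented dataflow concrete, below is a minimal NumPy sketch of sparse convolution in which each surviving (nonzero) weight is processed individually and multiplied against the matrix of input pixels it touches, i.e., the element–matrix multiplication the abstract describes. The COO-style nonzero list, the function name, and all parameter names are illustrative assumptions, not the authors' implementation; the tile-slicing step only plays the role that the TLUT performs in hardware.

```python
import numpy as np

def sparse_conv_weight_oriented(x, nonzeros, out_ch, K, stride=1):
    """Illustrative weight-oriented sparse convolution (sketch, not the paper's code).

    x        : input feature map of shape (C, H, W)
    nonzeros : list of (value, oc, ic, kh, kw) for each surviving weight
               of a pruned out_ch x C x K x K kernel
    """
    C, H, W = x.shape
    Ho = (H - K) // stride + 1
    Wo = (W - K) // stride + 1
    y = np.zeros((out_ch, Ho, Wo), dtype=x.dtype)
    for w, oc, ic, kh, kw in nonzeros:
        # Gather the tile of input pixels this single weight touches.
        # In hardware, a tile look-up table (TLUT) would resolve this
        # weight-to-pixel index matching instead of a strided slice.
        tile = x[ic, kh:kh + stride * Ho:stride, kw:kw + stride * Wo:stride]
        # Element-matrix multiplication: one scalar weight times a whole
        # tile, accumulated into its output channel.
        y[oc] += w * tile
    return y

# Usage: a pruned 3x3 kernel with only two surviving weights.
x = np.arange(2 * 5 * 5, dtype=np.float32).reshape(2, 5, 5)
nz = [(0.5, 0, 0, 0, 0), (-1.0, 1, 1, 2, 2)]
y = sparse_conv_weight_oriented(x, nz, out_ch=2, K=3)
print(y.shape)  # (2, 3, 3)
```

Because the loop runs only over surviving weights, the work scales with the number of nonzeros rather than the dense kernel size, which is why zero operands cost nothing under this dataflow.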
Keywords
Accelerator, convolutional neural network (CNN), FPGA, neural architecture search (NAS), sparse