Efficient Approximate Floating-Point Multiplier With Runtime Reconfigurable Frequency and Precision

Zhenhao Li, Zhaojun Lu, Wei Jia, Runze Yu, Haichun Zhang, Gefei Zhou, Zhenglin Liu, Gang Qu

IEEE Transactions on Circuits and Systems II: Express Briefs (2024)

Abstract
Deep Neural Networks (DNNs) perform intensive matrix multiplications but can tolerate some inaccuracy in intermediate results. This makes them an ideal target for energy reduction through approximate computing. However, existing work in this direction requires redesigning the DNN and does not give users the flexibility to trade accuracy for energy savings. In this paper, we propose a runtime-reconfigurable approximate floating-point multiplier and present the details of its hardware implementation. Flexible computation precision is provided by an error correction module that is controlled by reconfigurable clock signals, and the circuit design resolves the resulting glitch and metastability problems. The proposed approximate multiplier, with three precision levels, is evaluated with Synopsys Design Compiler and on Xilinx FPGA platforms. Experimental results demonstrate the advantages of our approach in terms of speed, hardware overhead, and power consumption, while keeping the accuracy loss of DNN inference controllable.
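The abstract does not reveal the circuit itself, but the idea of an approximate floating-point multiply with a selectable-width error correction stage can be illustrated in software. The sketch below is a minimal C model under stated assumptions: it uses the classic Mitchell-style bit-pattern trick for the cheap path and recovers the dropped mantissa cross term for the corrected paths. The enum names, the correction widths k = 8 and k = 23, and the restriction to positive normal operands are all illustrative assumptions, not the authors' design, which implements the correction in hardware under reconfigurable clocks.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Three precision levels, loosely mirroring the paper's three levels. */
typedef enum { PREC_LOW = 0, PREC_MID = 1, PREC_HIGH = 2 } prec_t;

static uint32_t f2u(float f) { uint32_t u; memcpy(&u, &f, sizeof u); return u; }
static float u2f(uint32_t u) { float f; memcpy(&f, &u, sizeof f); return f; }

/*
 * Approximate float32 multiply for positive, normal operands (no
 * overflow/underflow handling).  With a = (1+ma)*2^ea, b = (1+mb)*2^eb,
 * the exact product is (1 + ma + mb + ma*mb) * 2^(ea+eb).  The cheap
 * path drops the ma*mb cross term; the correction path recomputes it
 * from the top k mantissa bits (wider k = costlier, more accurate).
 */
static float approx_mul(float a, float b, prec_t p) {
    uint32_t ua = f2u(a), ub = f2u(b);

    if (p == PREC_LOW) {
        /* Mitchell-style base case: adding the raw bit patterns adds
         * the exponents and the mantissa fractions; subtracting the
         * bit pattern of 1.0f removes the duplicated exponent bias. */
        return u2f(ua + ub - 0x3F800000u);
    }

    uint32_t Ma = ua & 0x7FFFFFu, Mb = ub & 0x7FFFFFu;  /* 2^-23 units */
    uint32_t e  = (ua >> 23) + (ub >> 23) - 127u;       /* biased exponent */

    /* Cross term ma*mb from the top k bits of each mantissa
     * (k values are assumptions for illustration). */
    unsigned k = (p == PREC_MID) ? 8u : 23u;
    uint64_t t = (uint64_t)(Ma >> (23 - k)) * (Mb >> (23 - k));
    uint32_t cross = (uint32_t)((2 * k >= 23) ? (t >> (2 * k - 23))
                                              : (t << (23 - 2 * k)));

    uint32_t S = Ma + Mb + cross;       /* mantissa is 1 + S * 2^-23 */
    if (S >> 23) {                      /* mantissa in [2,4): renormalize */
        S = (S - (1u << 23)) >> 1;
        e += 1;
    }
    return u2f((e << 23) | S);
}

int main(void) {
    const float a = 1.9f, b = 2.7f;
    printf("exact     : %.6f\n", a * b);
    printf("PREC_LOW  : %.6f\n", approx_mul(a, b, PREC_LOW));
    printf("PREC_MID  : %.6f\n", approx_mul(a, b, PREC_MID));
    printf("PREC_HIGH : %.6f\n", approx_mul(a, b, PREC_HIGH));
    return 0;
}
```

In the actual circuit, switching between levels would presumably correspond to gating the error correction module's reconfigurable clock rather than a software branch, which is what makes the accuracy/energy trade-off adjustable at runtime.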
Keywords
Approximate Computing, Deep Neural Networks, Floating-Point Multiplier, Error Correction, Reconfiguration