VADF: Versatile Approximate Data Formats for Energy-Efficient Computing

ACM Transactions on Embedded Computing Systems (2023)

Abstract
Approximate computing (AC) techniques provide overall performance gains in terms of power and energy savings at the cost of a minor loss in application accuracy. For this reason, AC has emerged as a viable method for efficiently supporting several compute-intensive applications, e.g., machine learning, deep learning, and image processing, that can tolerate bounded errors in computations. However, most prior techniques do not consider the possibility of soft errors or malicious bit-flips in AC systems. These errors may interact with approximation-introduced errors in unforeseen ways, leading to disastrous consequences such as the failure of computing systems. A recent research effort, FTApprox (DATE'21), proposes an error-resilient approximate data format. FTApprox stores two blocks, starting from the one containing the most significant valid (MSV) bit. It also stores the location of the MSV block and protects these fields using error-correcting bits (ECBs). However, FTApprox has crucial limitations, such as a lack of flexibility and redundantly storing zeros in the MSV block. In this paper, we propose a novel storage format named Versatile Approximate Data Format (VADF) for storing approximate integer numbers while providing resilience to soft errors. VADF prescribes rules for storing, for example, a 32-bit number as an 8-bit, 12-bit, or 16-bit number. VADF identifies the MSV bit and stores a certain number of bits following it. It also stores the location of the MSV bit and protects it with ECBs. VADF does not explicitly store the MSV bit itself, which prevents VADF from accruing significant errors. VADF incurs lower error than both truncation methodologies and FTApprox. We further evaluate five image-processing and machine-learning applications and confirm that VADF provides higher application quality than FTApprox both in the presence and in the absence of soft errors. Finally, VADF allows the use of narrow arithmetic units. For example, instead of using a 32-bit multiplier/adder, one can first use VADF (or FTApprox) to compress the data and then use an 8-bit multiplier/adder. Through this approach, VADF facilitates 95.97% and 79.3% energy savings in multiplication and addition, respectively. However, the subsequent re-conversion of the 8-bit output data to 32-bit data using Inv-VADF(16,3,32) diminishes the energy savings by 9.6% for addition and 0.56% for multiplication. The code is available at https://github.com/CandleLabAI/VADF-ApproximateDataFormat-TECS .
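
For readers who want a concrete picture of the format, the Python sketch below illustrates the general encode/decode idea outlined in the abstract: find the MSV bit, store its location plus a fixed number of the bits that follow it, and leave the MSV bit itself implicit. The function names, the 10-bit trailing-bit width, the zero handling, and the omission of ECBs are assumptions made for illustration and do not reproduce the exact VADF(·,·,·) parameterization; the reference implementation is in the linked repository.

```python
# Minimal sketch of the encode/decode idea described in the abstract.
# Field widths, zero handling, and the omission of error-correcting bits
# (ECBs) are illustrative assumptions, not the authors' exact VADF layout.

def vadf_encode(value: int, trailing_bits: int = 10) -> tuple[int, int]:
    """Compress a 32-bit unsigned integer into (MSV position, trailing bits).

    The MSV (most significant valid, i.e., set) bit itself is not stored;
    its stored position implies that it is 1.
    """
    assert 0 <= value < (1 << 32)
    if value == 0:
        return 0, 0  # 0 and 1 collide in this toy sketch (no reserved zero code)
    msv = value.bit_length() - 1                     # position of the MSV bit, 0..31
    mask = (1 << trailing_bits) - 1
    if msv >= trailing_bits:
        trailing = (value >> (msv - trailing_bits)) & mask
    else:
        trailing = (value << (trailing_bits - msv)) & mask
    return msv, trailing


def vadf_decode(msv: int, trailing: int, trailing_bits: int = 10) -> int:
    """Approximately reconstruct the original value (the inverse conversion)."""
    if msv == 0 and trailing == 0:
        return 0
    value = (1 << trailing_bits) | trailing          # re-insert the implicit MSV bit
    if msv >= trailing_bits:
        return value << (msv - trailing_bits)
    return value >> (trailing_bits - msv)


if __name__ == "__main__":
    x = 0x00ABCDEF
    msv, tr = vadf_encode(x)
    print(hex(x), "->", hex(vadf_decode(msv, tr)))   # 0xabcdef -> 0xabc000
```

Leaving the MSV bit implicit frees a stored bit for an additional trailing bit, which is roughly the intuition behind the abstract's claim that VADF incurs lower error than plain truncation.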
Keywords
Approximate computing, approximate data formats, soft-error resilience