
Pianissimo: A Sub-mW Class DNN Accelerator with Progressive Bit-by-Bit Datapath Architecture for Adaptive Inference at Edge.

Symposium on VLSI Circuits (2023)

Abstract
Pianissimo is a sub-mW class inference accelerator that adaptively responds to changing edge environmental conditions with a progressive bit-by-bit datapath architecture. SW-HW cooperative control with the custom RISC core and the HW counters enables Pianissimo's adaptive/mixed-precision and block-skip operation, providing a better accuracy-computation tradeoff for low-power edge AI. The 40 nm chip, with 1104 KB of memory, dissipates 793-1032 $\mu$W at 0.7 V on MobileNetV1, achieving 0.49-1.25 TOPS/W in this ultra-low power range.
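The "progressive bit-by-bit datapath" described in the abstract suggests a bit-serial style of computation in which precision is refined one weight bit-plane at a time. The sketch below is a minimal Python illustration of that general idea only; the function name `progressive_dot`, its parameters, and the MSB-first truncation policy are assumptions for intuition, not the chip's actual datapath or control scheme.

```python
# Illustrative sketch (not the Pianissimo design): a bit-serial dot product
# where runtime precision is chosen by how many weight bit-planes are
# processed, MSB first. Fewer planes => coarser result at lower cost.
import numpy as np

def progressive_dot(acts, weights, total_bits=8, planes=8):
    """Accumulate dot(acts, weights) one weight bit-plane at a time.

    acts       : non-negative integer activations (1-D array)
    weights    : unsigned integer weights fitting in total_bits bits
    total_bits : weight bit width
    planes     : number of MSB bit-planes to process (<= total_bits)
    """
    acc = 0
    for b in range(total_bits - 1, total_bits - 1 - planes, -1):
        bit_plane = (weights >> b) & 1            # 0/1 slice of every weight
        acc += int(np.dot(acts, bit_plane)) << b  # cheap 1-bit partial, scaled by 2^b
    return acc

rng = np.random.default_rng(0)
a = rng.integers(0, 16, size=64)
w = rng.integers(0, 256, size=64)

# Processing all 8 planes reproduces the exact result; trimming planes
# trades accuracy for work, mimicking adaptive/mixed precision.
assert progressive_dot(a, w, total_bits=8, planes=8) == int(np.dot(a, w))
approx = progressive_dot(a, w, total_bits=8, planes=4)  # top 4 bit-planes only
```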
Keywords
adaptive inference,block skip,changing edge environmental conditions,low-power edge AI,memory size 1104.0 KByte,Pianissimo,progressive bit-by-bit datapath architecture,size 40.0 nm,sub-mW class DNN accelerator,sub-mW class inference accelerator,voltage 0.7 V