A Multi-Mode 8k-MAC HW-Utilization-Aware Neural Processing Unit With a Unified Multi-Precision Datapath in 4-nm Flagship Mobile SoC

IEEE Journal of Solid-State Circuits (2023)

Cited by 14
Abstract
This article presents an 8k-multiply-accumulate (MAC) neural processing unit (NPU) in a 4-nm mobile system-on-chip (SoC). The unified multi-precision MACs support data types from integer (INT) 4/8/16 to floating-point (FP) 16 with high area and energy efficiency. When the NPU encounters layers with low hardware (HW) utilization, such as depthwise convolutions or shallow layers with few input channels, it reconfigures the computational flow to raise utilization by up to four times, using basic tensor information supplied by the compiler, such as operation types and shapes. The NPU also supports a dynamic operation mode that spans requirements from extremely low power to low latency. It achieves 4.26 tera FP operations per second (TFLOPS)/W and 11.59 tera operations per second (TOPS)/W on DeepLabV3 (FP16) and MobileNetEdgeTPU (INT8), respectively, as well as high area efficiency (1.72 TFLOPS/mm² and 3.45 TOPS/mm²).
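The utilization problem the abstract describes can be made concrete with a toy model. The sketch below assumes an illustrative array geometry (64 input-channel lanes × 128 output-channel lanes = 8,192 MACs) and a hypothetical `spatial_remap` factor standing in for the paper's reconfigured computational flow; none of these numbers or names come from the paper itself, which does not disclose its dataflow in the abstract.

```python
# Minimal utilization model, NOT the paper's actual dataflow.
# Assumed geometry: 64 input-channel lanes x 128 output-channel lanes = 8,192 MACs.

def utilization(c_in, c_out, ic_lanes=64, oc_lanes=128, spatial_remap=1):
    """Fraction of MACs doing useful work for one convolution layer.

    spatial_remap > 1 models the reconfigured flow described in the
    abstract: input-channel lanes that would sit idle are reassigned
    to additional output pixels, boosting utilization up to 4x.
    """
    eff_ic = min(c_in * spatial_remap, ic_lanes)
    eff_oc = min(c_out, oc_lanes)
    return (eff_ic * eff_oc) / (ic_lanes * oc_lanes)

# A depthwise convolution behaves like c_in = 1 per output channel,
# so most input-channel lanes idle in the baseline mapping:
base = utilization(c_in=1, c_out=128)                        # ~1.6% busy
remapped = utilization(c_in=1, c_out=128, spatial_remap=4)   # 4x higher
print(f"depthwise: {base:.1%} -> {remapped:.1%} with 4-way remap")
```

Under these assumptions a depthwise layer keeps only 1/64 of the array busy, and a 4-way spatial remap recovers exactly the factor-of-four improvement the abstract claims for such layers.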
Keywords
Compute utilization, deep neural networks (DNNs), domain-specific architecture (DSA), inference accelerator, sparsity-aware zero skipping, unified multiply-accumulate (MAC)