A Scalable, Configurable and Programmable Vector Dot-Product Unit for Edge AI.

MBMV 2022; 25th Workshop (2022)

Abstract
Second-generation artificial intelligence (AI) migrates inference-related computations from the cloud towards edge devices [19]. Due to increasingly sophisticated neural network (NN) architecture search, ever more complex applications come into range for execution on the edge. This enables a significant drop in latency, power consumption and bandwidth, since data transmission to the cloud becomes obsolete. We address the challenge of equipping edge platforms with computation hardware capable of handling more complex applications locally. For NN inference in general, the dot-product operation is the most commonly and intensively used; defining a unit that supports this operation efficiently therefore has a huge impact. Across different applications the network hyperparameters may change, including the data formats of kernels and activations. Supporting a wide variety of data formats in the dot-product unit, while keeping the area increase low, is therefore appealing. Additionally, the computational load varies depending on the particular application, so a scalable solution is desirable. Next to configurability and programmability, area and power efficiency play an important role. We propose a scalable, configurable and programmable vector dot-product unit, targeting an optimized footprint for low-power applications, to overcome the challenges of second-generation AI on edge devices. The proposed solution is supported by a Python-based HW generator, which enables the derivation of featured dot-product units optimized for particular applications. It is designed to be utilized as a standalone component as well as a loosely or closely coupled component attached to a CPU via an instruction-set extension.
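
The abstract describes a dot-product unit whose data formats and degree of parallelism are parameters of a Python-based HW generator. The sketch below is a minimal, hypothetical Python reference model of such a configurable dot product; the names (DotProductConfig, vector_dot_product), the parameter set, and the saturating fixed-point behavior are illustrative assumptions and are not taken from the paper's generator.

```python
# Minimal reference-model sketch of a configurable vector dot-product unit.
# All names and the exact saturation behavior are illustrative assumptions,
# not the paper's Python-based HW generator.

from dataclasses import dataclass


@dataclass
class DotProductConfig:
    lanes: int = 8          # number of parallel multiply lanes (scalability)
    act_bits: int = 8       # activation word width, signed (configurability)
    wgt_bits: int = 8       # kernel/weight word width, signed
    acc_bits: int = 32      # accumulator width, signed


def _clamp_signed(value: int, bits: int) -> int:
    """Saturate a signed integer to the given bit width."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, value))


def vector_dot_product(acts, wgts, cfg: DotProductConfig) -> int:
    """Accumulate lane-wise products, processing cfg.lanes elements per step."""
    assert len(acts) == len(wgts)
    acc = 0
    for i in range(0, len(acts), cfg.lanes):
        for a, w in zip(acts[i:i + cfg.lanes], wgts[i:i + cfg.lanes]):
            a = _clamp_signed(a, cfg.act_bits)
            w = _clamp_signed(w, cfg.wgt_bits)
            acc = _clamp_signed(acc + a * w, cfg.acc_bits)
    return acc


if __name__ == "__main__":
    cfg = DotProductConfig(lanes=4, act_bits=8, wgt_bits=4, acc_bits=24)
    print(vector_dot_product([1, -2, 3, 4, 5], [7, 7, -1, 2, 0], cfg))  # -> -2
```

In an actual generator flow, such a bit-accurate model would typically serve as the golden reference against which the generated RTL of each derived dot-product configuration is verified.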