Invited Paper: Algorithm/Hardware Co-design for Few-Shot Learning at the Edge

2023 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)

Abstract
On-device learning is essential to achieve intelligence at the edge, where it is desirable to learn from few samples or even just a single sample. Memory-augmented neural networks (MANNs), which augment neural networks with an attentional memory, can draw on already learnt knowledge patterns and adapt to new but similar tasks. Implementing MANNs on conventional architectures can require a significant amount of costly data transfer, thereby limiting the practical use of MANNs at the edge. In this paper, we introduce algorithm/hardware co-design solutions which exploit compact designs of content addressable memories (CAMs) based on emerging non-volatile memories (e.g., FeFETs) to implement energy-efficient MANN accelerators. The design space of MANN accelerators is systematically analyzed by considering different circuit, architecture, and algorithm options. We further discuss how hyper-dimensional representations of data can be combined with MANNs to overcome the negative effect of device/circuit variabilities on learning quality, thus achieving not only energy-efficient but also accuracy-competitive on-device learning at the edge. We also investigate modeling of device-to-device (D2D) variation in FeFETs using the write-with-verify approach and detail its impact on the energy, delay, and accuracy of the MANN application.
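The associative search that the abstract describes — a query vector is matched against stored memory keys, as a CAM would do in hardware — can be sketched in a few lines. The sketch below is illustrative only, not the paper's implementation: `attentional_read` models the nearest-neighbor lookup of a MANN's attentional memory with cosine similarity, and `hd_read` models the Hamming-distance search over binary hyper-dimensional vectors that the paper combines with MANNs to tolerate device/circuit variability. All function and variable names are hypothetical.

```python
import math

def cosine(u, v):
    # Cosine similarity between two real-valued vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def attentional_read(memory, query):
    # MANN-style attentional read: return the value whose key is
    # most similar to the query (the CAM's best-match search).
    best_key = max(memory, key=lambda k: cosine(k, query))
    return memory[best_key]

def hd_read(memory, query):
    # Hyper-dimensional variant: binary hypervectors compared by
    # Hamming distance, which degrades gracefully under bit flips
    # caused by device variation.
    best_key = min(memory, key=lambda k: sum(a != b for a, b in zip(k, query)))
    return memory[best_key]

# Keys are tuples so they can index the dict.
memory = {
    (1.0, 0.0, 0.0): "class_A",
    (0.0, 1.0, 0.0): "class_B",
}
print(attentional_read(memory, (0.9, 0.1, 0.0)))  # class_A

hd_memory = {
    (1, 1, 0, 0, 1, 0): "class_A",
    (0, 0, 1, 1, 0, 1): "class_B",
}
print(hd_read(hd_memory, (1, 1, 0, 1, 1, 0)))  # class_A (1 bit flipped)
```

In a few-shot setting, `memory` would hold one key per support example; learning a new class is just writing a new key-value pair, with no weight updates, which is what makes the CAM-based lookup the energy-critical operation.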
Keywords
Computing-In-Memory, Memory-Augmented Neural Networks, attention, associative search