BNN: An Ideal Architecture for Acceleration With Resistive In-Memory Computation

IEEE Transactions on Emerging Topics in Computing (2023)

Abstract
Binary Neural Networks (BNNs) binarize both weights and feature maps to -1 and 1. Their small model size and computational simplicity make them well suited to edge-AI systems with power and hardware constraints. Recently, memristive crossbar arrays have gained considerable attention from researchers for performing analog in-memory vector-matrix multiplications in machine learning accelerators, with low power and constant computational time. Crossbar arrays suffer from many non-ideal characteristics, such as memristor device imperfections, weight noise, device drift, input/output noise, and DAC/ADC overhead. Thus, for analog AI acceleration to become viable, model architectures must be robust against these non-idealities. We propose that BNNs, whose binarized weights map to fewer memristive devices, suffer fewer electrical-characteristic issues, and tolerate more computational noise, are a promising architecture for analog computation. In this work, we examine the viability of deploying state-of-the-art BNNs, with features such as real-valued residual connections and parametric activations with biases, on analog in-memory computing accelerators. Our simulations show that BNNs are significantly more robust to crossbar non-idealities than full-precision networks, require less chip area, and consume less power on memristive crossbar architectures.
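To make the crossbar mapping concrete, the following is a minimal NumPy sketch of how a binarized weight matrix can be programmed onto a memristive crossbar as differential conductance pairs and read out as an analog vector-matrix multiplication under Gaussian device read noise. The differential-pair mapping, normalized conductance range, and noise level are illustrative assumptions for this sketch, not the paper's actual simulation setup.

```python
import numpy as np

rng = np.random.default_rng(0)

G_ON, G_OFF = 1.0, 0.0   # normalized device conductance range (assumed)
NOISE_STD = 0.05         # assumed Gaussian read-noise std per device


def binarize(w):
    """Sign-binarize real-valued weights to {-1, +1}."""
    return np.where(w >= 0, 1.0, -1.0)


def crossbar_vmm(x, w, noise_std=NOISE_STD):
    """Analog VMM with one differential conductance pair per weight.

    A weight w in [-1, 1] is encoded as G+ - G-: +1 -> (G_ON, G_OFF),
    -1 -> (G_OFF, G_ON); intermediate values fall between the two
    states. Gaussian read noise perturbs every device conductance.
    """
    w_norm = (w + 1.0) / 2.0                      # map [-1, 1] -> [0, 1]
    g_pos = G_OFF + (G_ON - G_OFF) * w_norm
    g_neg = G_OFF + (G_ON - G_OFF) * (1.0 - w_norm)
    g_pos = g_pos + rng.normal(0.0, noise_std, g_pos.shape)
    g_neg = g_neg + rng.normal(0.0, noise_std, g_neg.shape)
    # Summed column currents give the analog dot product: I = x @ (G+ - G-).
    return x @ (g_pos - g_neg)


# Compare relative output error under read noise for full-precision
# weights vs. their binarized counterparts on the same crossbar model.
x = rng.normal(size=(1, 256))
w_fp = rng.normal(size=(256, 64)).clip(-1, 1)
w_bin = binarize(w_fp)

ideal_fp, ideal_bin = x @ w_fp, x @ w_bin
rel_fp = np.abs(crossbar_vmm(x, w_fp) - ideal_fp).mean() / np.abs(ideal_fp).mean()
rel_bin = np.abs(crossbar_vmm(x, w_bin) - ideal_bin).mean() / np.abs(ideal_bin).mean()
print(f"relative error  full-precision: {rel_fp:.3f}  binary: {rel_bin:.3f}")
```

Because binarized weights sit at the extreme conductance states, a fixed amount of device noise is a smaller fraction of each weight's signal, which is one plausible intuition behind the robustness result reported in the abstract.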
Keywords
Computer architecture, Memristors, Neural networks, Hardware, Computational modeling, Programming, Performance evaluation, Binary neural network, deep learning, ECRAM, in-memory processing, PCM, RRAM