Long Live TIME: Improving Lifetime and Security for NVM-Based Training-in-Memory Systems

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2020)

Cited by 12 | Views 223
Abstract
Nonvolatile memory (NVM)-based training-in-memory (TIME) systems have emerged that can process neural network (NN) training in an energy-efficient manner. However, the endurance of NVM cells is limited, raising concerns about the lifetime of TIME systems, because the weights of NN models need to be updated thousands to millions of times during training. Gradient sparsification (GS) can alleviate this problem by preserving only a small portion of the gradients to update the weights. However, conventional GS introduces nonuniform writes on different cells across the NVM crossbars, which significantly reduces the expected available lifetime. Moreover, an adversary can easily launch malicious training tasks that wear out exactly the targeted cells and quickly break down the system. In this article, we propose an efficient and effective framework, referred to as SGS-ARS, to improve the lifetime and security of TIME systems. The framework mainly contains a structured GS (SGS) scheme for reducing the write frequency and an aging-aware row swapping (ARS) scheme to make the writes uniform. Meanwhile, we show that the back-propagation mechanism allows an attacker to localize and repeatedly update fixed memory locations and wear them out. Therefore, we introduce Random-ARS and Refresh techniques to thwart adversarial training attacks, preventing the systems from being broken in an extremely short time. Our experiments show that when TIME is programmed to train ResNet-50 on the ImageNet dataset, a $356\times$ lifetime extension can be achieved without sacrificing much accuracy or incurring much hardware overhead. Under an adversarial environment, the available lifetime of TIME systems can still be improved by $84\times$.
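As a rough illustration of the gradient-sparsification idea summarized above, the following NumPy sketch keeps only the largest-magnitude fraction of the gradients and zeroes the rest, so only the retained entries trigger NVM writes in a given step. The function name, the `ratio` parameter, and the plain top-k selection are illustrative assumptions, not the authors' code; the paper's SGS variant additionally structures which gradients are retained so that writes stay aligned across crossbar rows.

```python
import numpy as np

def topk_sparsify(grad: np.ndarray, ratio: float = 0.01) -> np.ndarray:
    """Zero out all but the largest-magnitude `ratio` fraction of
    gradient entries; only the surviving entries would be written
    back to the NVM crossbar this update step."""
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    # Indices of the k largest-magnitude gradient entries.
    keep = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[keep] = flat[keep]
    return sparse.reshape(grad.shape)

# Example: a 4x4 gradient tile with ratio=0.0625 keeps a single entry.
g = np.random.randn(4, 4)
print(topk_sparsify(g, ratio=0.0625))
```

Note that with plain top-k selection the surviving indices tend to recur across steps, which is precisely the nonuniform-write problem the abstract attributes to conventional GS and which ARS is designed to even out.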
Keywords
Gradient sparsification, lifetime, neural networks, training-in-memory, wear-leveling