Towards Efficient Neural Networks On-A-Chip: Joint Hardware-Algorithm Approaches

2019 China Semiconductor Technology International Conference (CSTIC)(2019)

Abstract
Machine learning algorithms have made significant advances in many applications. However, their hardware implementation on state-of-the-art platforms still faces several challenges and is limited by factors such as memory volume, memory bandwidth, and interconnection overhead. Adopting the crossbar architecture with emerging memory technology partially solves the problem but introduces process variation and other concerns. In this paper, we present novel solutions to two fundamental issues in crossbar implementation of Artificial Intelligence (AI) algorithms: device variation and insufficient interconnections. These solutions are inspired by the statistical properties of the algorithms themselves, especially the redundancy in neural network nodes and connections. Using Random Sparse Adaptation and by pruning connections following the Small-World model, we demonstrate robust and efficient performance on representative datasets such as MNIST and CIFAR-10. Moreover, we present the Continuous Growth and Pruning algorithm for future learning and adaptation on hardware.
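The idea of pruning connections following the Small-World model can be illustrated with a minimal sketch: each output unit keeps a few "local" input connections on a ring-lattice neighborhood, and each local edge is rewired to a random long-range input with some probability, as in the Watts-Strogatz model. This is a hypothetical construction for illustration only, not the paper's exact algorithm; the function name and parameters are assumptions.

```python
import numpy as np

def small_world_mask(n_in, n_out, k=4, p=0.1, rng=None):
    """Build a sparse binary connectivity mask inspired by the
    Watts-Strogatz small-world model (illustrative sketch, not the
    authors' construction). Each output unit connects to its k nearest
    inputs on a ring; each such edge is rewired to a uniformly random
    input with probability p, creating long-range shortcuts."""
    rng = np.random.default_rng(rng)
    mask = np.zeros((n_out, n_in), dtype=bool)
    for j in range(n_out):
        # Map output j onto the input ring and connect its k nearest inputs.
        center = int(round(j * n_in / n_out)) % n_in
        for d in range(1, k // 2 + 1):
            for i in ((center - d) % n_in, (center + d) % n_in):
                if rng.random() < p:
                    i = rng.integers(n_in)  # rewire: random long-range edge
                mask[j, i] = True
    return mask

# Example: a sparse mask for a hypothetical 784 -> 128 layer (MNIST-sized input).
mask = small_world_mask(n_in=784, n_out=128, k=6, p=0.1, rng=0)
print(f"kept {mask.sum()} of {mask.size} weights (sparsity {1.0 - mask.mean():.2%})")
```

The mask would be applied elementwise to a layer's weight matrix, so only the small-world edges carry signal, which reduces the interconnection demand on a crossbar while preserving short average path lengths between units.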
Keywords
machine learning algorithms,memory volume,memory bandwidth,crossbar architecture,memory technology,crossbar implementation,neural network nodes,future learning,artificial intelligence algorithms,random sparse adaptation,neural networks on-a-chip,MNIST dataset,CIFAR-10 dataset,continuous growth and pruning algorithm