Evaluating the Impact of Spiking Neural Network Traffic on Extreme-Scale Hybrid Systems

2018 IEEE/ACM Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS), 2018

Abstract
As we approach the limits of Moore's law, there is increasing interest in non-von Neumann architectures, such as neuromorphic computing, that take advantage of improved compute and low-power capabilities. Spiking neural network (SNN) applications have so far shown very promising results running on a number of processors, motivating the desire to scale to even larger systems with hundreds or even thousands of neuromorphic processors. Since these architectures do not currently exist in large configurations, we use simulation to scale real neuromorphic applications from a single neuromorphic chip to thousands of chips in an HPC-class system. Furthermore, we use a novel simulation workflow to perform a full-scale systems analysis of network performance and of the interaction of neuromorphic workloads with traditional CPU workloads in a hybrid supercomputer environment. On average, we find that Slim Fly, Fat-Tree, Dragonfly-1D, and Dragonfly-2D are 45%, 46%, 76%, and 83% faster, respectively, than the worst-performing topology for both convolutional and Hopfield NN workloads running alongside CPU workloads. Running in parallel with CPU workloads translates to an average slowdown of 21% for Hopfield-type workloads and 184% for convolutional NN workloads across all HPC network topologies.
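The abstract does not detail the simulation workflow, but the core idea of replaying spike traffic over candidate interconnect topologies with a discrete-event simulator can be illustrated with a minimal sketch. The Python code below is an illustrative assumption, not the authors' actual toolchain: it models spike packets as timestamped events crossing a fixed number of hops (a stand-in for topology diameter) and reports mean delivery latency for two hypothetical topology shapes.

```python
import heapq
import random

# Minimal discrete-event sketch: spike packets traverse a network with a
# fixed hop count and per-hop latency; we measure mean delivery latency.
# All parameters and topology labels below are illustrative placeholders,
# not values or tools from the paper.

def simulate(num_spikes, hops, hop_latency_ns, inject_interval_ns):
    """Return mean delivery latency (ns) for spikes injected at a fixed rate."""
    events = []  # heap of (event_time_ns, spike_id, hops_remaining)
    for i in range(num_spikes):
        heapq.heappush(events, (i * inject_interval_ns, i, hops))
    latencies = []
    while events:
        t, sid, remaining = heapq.heappop(events)
        if remaining == 0:
            # Spike delivered: latency is delivery time minus injection time.
            latencies.append(t - sid * inject_interval_ns)
        else:
            # Advance the spike one hop; add small random queuing jitter.
            jitter = random.uniform(0, 0.1 * hop_latency_ns)
            heapq.heappush(events, (t + hop_latency_ns + jitter, sid, remaining - 1))
    return sum(latencies) / len(latencies)

if __name__ == "__main__":
    random.seed(0)
    # Hypothetical diameters: a low-diameter topology vs. a taller multi-stage one.
    for name, hops in [("low-diameter topology", 3), ("multi-stage tree-like topology", 6)]:
        mean_ns = simulate(num_spikes=10_000, hops=hops,
                           hop_latency_ns=100, inject_interval_ns=50)
        print(f"{name}: mean spike latency ~ {mean_ns:.1f} ns")
```

Running the sketch simply shows how a lower-diameter network delivers spikes with lower mean latency under the same injection rate, which is the kind of topology comparison the full-scale simulations in the paper perform with far more fidelity.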
Keywords
Neuromorphic computing, Interconnection networks, Discrete-event simulation, Large-scale