
ENLARGE: an Efficient SNN Simulation Framework on GPU Clusters

IEEE Transactions on Parallel and Distributed Systems (2023)

Abstract
Spiking Neural Networks (SNNs) are currently the computing model most widely used by the neuroscience community. There is also growing research interest in exploring the potential of SNNs in brain-inspired computing, artificial intelligence, and other areas. Because SNNs possess distinctive characteristics rooted in biological authenticity, they require dedicated simulation frameworks to achieve both usability and efficiency. However, there is no widely used, easily accessible, high-performance SNN simulation framework for GPU clusters. In this paper, we propose ENLARGE, an efficient SNN simulation framework for GPU clusters. ENLARGE provides a multi-level architecture that handles computation, communication, and synchronization hierarchically. We also propose an efficient communication method based on an all-to-all communication pattern. To handle the delay of spike delivery, the most distinctive characteristic of SNNs, we propose several delay-aware optimization methods, together with a multi-level workload management method. Various experiments demonstrate the performance and scalability of the framework as well as the effects of the optimization methods. Test results show that ENLARGE achieves a 3.17×–28.12× speedup over the most widely used NEST simulator and a 3.26×–13.57× speedup over the widely used NEST GPU simulator on GPU clusters.
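The abstract's core idea, exchanging spikes all-to-all between partitions while exploiting the synaptic delay to defer delivery, can be sketched as follows. This is a minimal single-process illustration of the pattern, not ENLARGE's actual implementation: the names (`Partition`, `DELAY_SLOTS`), the broadcast routing, and the fixed delay bound are all assumptions for the sketch; a real GPU-cluster framework would route spikes through a synapse table and exchange the per-destination messages with collective communication (e.g. an MPI all-to-all).

```python
DELAY_SLOTS = 4  # assumed maximum synaptic delay, in timesteps


class Partition:
    """One simulated node holding a slice of the network (illustrative)."""

    def __init__(self, pid, num_partitions):
        self.pid = pid
        self.num_partitions = num_partitions
        # Circular buffer: one inbox of spike ids per future timestep slot.
        # Spikes queued with delay d become visible only d steps later.
        self.inbox = [[] for _ in range(DELAY_SLOTS)]

    def outgoing(self, fired):
        """Build one message per destination partition (all-to-all pattern)."""
        msgs = [[] for _ in range(self.num_partitions)]
        for neuron_id in fired:
            # Broadcast to every partition here; real code would consult a
            # synapse table and send only to partitions with target neurons.
            for dst in range(self.num_partitions):
                msgs[dst].append(neuron_id)
        return msgs

    def deliver(self, spikes, delay, t):
        """Queue received spikes into the slot for timestep t + delay."""
        self.inbox[(t + delay) % DELAY_SLOTS].extend(spikes)

    def arrived(self, t):
        """Pop and return the spikes that become visible at timestep t."""
        slot = t % DELAY_SLOTS
        due, self.inbox[slot] = self.inbox[slot], []
        return due


# One exchange round: partition 0 fires neurons 3 and 7 at t=0 with delay 2.
parts = [Partition(i, 2) for i in range(2)]
msgs = parts[0].outgoing([3, 7])
for dst, spikes in enumerate(msgs):
    parts[dst].deliver(spikes, delay=2, t=0)
print(parts[1].arrived(2))  # → [3, 7]: spikes surface only at t = 0 + delay
```

Because delivery is deferred by the delay, partitions need to synchronize only once per delay window rather than every timestep, which is the kind of delay-aware optimization the abstract alludes to.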
Key words
Spiking neural network, computing framework, GPU cluster, brain-inspired computing, high performance computing