L4-Norm Weight Adjustments for Converted Spiking Neural Networks

arXiv (2021)

Abstract
Spiking Neural Networks (SNNs) are being explored for their potential energy-efficiency benefits due to sparse, event-driven computation. Non-spiking artificial neural networks are typically trained with stochastic gradient descent using backpropagation. The calculation of true gradients for backpropagation in spiking neural networks is impeded by the non-differentiable firing events of spiking neurons. Using approximate gradients instead is effective, but computationally expensive over many time steps. One common technique for training a spiking neural network is therefore to train a topologically equivalent non-spiking network and then convert it to a spiking network, replacing real-valued inputs with proportionally rate-encoded Poisson spike trains. Converted SNNs function sufficiently well because the mean pre-firing membrane potential of a spiking neuron is proportional to the dot product of the input rate vector and the neuron weight vector, mirroring the functionality of a non-spiking network. However, this conversion only considers the mean and not the temporal variance of the membrane potential. As the standard deviation of the pre-firing membrane potential is proportional to the L4-norm of the neuron weight vector, we propose a weight adjustment based on the L4-norm during the conversion process in order to improve classification accuracy of the converted network.
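The abstract identifies two per-neuron quantities: the mean pre-firing membrane potential (the dot product of the input rate vector and the weight vector) and its standard deviation, which it states scales with the L4-norm of the weight vector. The NumPy sketch below computes both quantities for a dense weight matrix and applies an illustrative per-neuron rescaling at conversion time. The scaling rule, constant alpha, and the helper names l4_norm, membrane_stats, and adjust_weights_l4 are assumptions for illustration only; the abstract does not give the paper's exact adjustment formula.

```python
import numpy as np

def l4_norm(W, axis=0):
    """Per-neuron L4-norm of the incoming weight vectors.

    W has shape (n_inputs, n_neurons); axis=0 reduces over inputs."""
    return np.sum(np.abs(W) ** 4, axis=axis) ** 0.25

def membrane_stats(W, rates):
    """Mean per-step pre-firing potential and the L4-norm that,
    per the abstract, governs its temporal standard deviation."""
    mean = rates @ W               # dot product of rate vector and weights
    return mean, l4_norm(W)

def adjust_weights_l4(W, alpha=0.1):
    """Hypothetical conversion-time rescaling (not the paper's formula):
    shrink each neuron's weights in proportion to its relative L4-norm,
    damping neurons whose membrane potential has high temporal variance."""
    norms = l4_norm(W)
    scale = 1.0 / (1.0 + alpha * norms / (norms.max() + 1e-12))
    return W * scale               # broadcasts scale over the input axis

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(784, 100))          # one dense layer of a trained ANN
    rates = rng.uniform(0.0, 1.0, size=784)  # rate-encoded input in [0, 1]
    mean_v, l4 = membrane_stats(W, rates)
    W_adj = adjust_weights_l4(W, alpha=0.2)
    print(mean_v[:3], l4[:3])
```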
Keywords
neural networks