Dataset Augmentation for Robust Spiking Neural Networks.

2023 IEEE International Conference on Autonomic Computing and Self-Organizing Systems Companion (ACSOS-C)

Abstract
In spiking neural networks (SNNs), neurons are connected in layers but, unlike in artificial neural networks (ANNs), they transmit output signals only after their input signals exceed the activation threshold. By eliding unnecessary transmissions, processors designed for SNNs can consume much less power than processors designed for ANNs, making SNNs a promising architecture for energy-constrained datacenters and Internet of Things (IoT) devices. However, training SNNs to perform machine learning tasks as well as ANNs is challenging because backpropagation, the technique most widely used to train ANNs, cannot infer the changing subset of transmitting neurons and the duration of their transmissions for each input. State-of-the-art SNN platforms provide platform-specific, mechanistic models to characterize neuron activations; however, these models are often heavily tied to a specific spike distribution. In this paper, we show that SNNs trained on state-of-the-art platforms perform poorly when presented with different spike distributions. We present a platform-agnostic approach that automatically learns neuron activations from observations. Specifically, we combine established approximations with a generative adversarial network (GAN) to augment the training dataset with data drawn from a broader range of spike distributions. Our approach achieved 54.57% accuracy on the CIFAR-10 dataset, an average improvement of 1.80% over existing state-of-the-art SNNs when evaluated on differing spike distributions. These preliminary results validate our approach and lay the groundwork for future research on strengthening SNN models.
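To make the GAN-based augmentation idea concrete, the sketch below shows one possible way to generate additional spike-train data: a small generator maps latent noise to per-pixel firing rates, which are then Bernoulli-sampled into binary spike trains and appended to the training set. This is only an illustrative sketch; the abstract does not specify the authors' GAN architecture or coding scheme, and the class and parameter names (e.g., SpikeRateGenerator, latent_dim, timesteps) are hypothetical.

```python
# Minimal sketch (not the authors' implementation): a GAN generator producing
# firing-rate maps that are rate-coded into spike trains for dataset augmentation.
import torch
import torch.nn as nn


class SpikeRateGenerator(nn.Module):
    """Maps latent noise to per-pixel firing rates in [0, 1] (hypothetical architecture)."""

    def __init__(self, latent_dim=100, n_pixels=32 * 32 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_pixels),
            nn.Sigmoid(),  # firing rates constrained to [0, 1]
        )

    def forward(self, z):
        return self.net(z)


def augment_with_generated_spikes(generator, n_samples=64, timesteps=25, latent_dim=100):
    """Sample firing-rate maps from the generator and Bernoulli rate-code them into spike trains."""
    with torch.no_grad():
        rates = generator(torch.randn(n_samples, latent_dim))          # (N, pixels)
        # One binary spike map per timestep, drawn independently from the rates.
        spikes = torch.bernoulli(rates.unsqueeze(1).expand(-1, timesteps, -1))
    return spikes  # (N, T, pixels), to be concatenated with the original training spikes


# Usage: augmented = augment_with_generated_spikes(SpikeRateGenerator())
```

In this sketch the generator would be trained adversarially against a discriminator on real spike data; only the sampling step that broadens the spike distribution of the training set is shown here.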
Keywords
spiking neural networks,generative adversarial networks,dataset augmentation