BrainLeaks: On the Privacy-Preserving Properties of Neuromorphic Architectures against Model Inversion Attacks
CoRR (2024)
Abstract
With the mainstream integration of machine learning into security-sensitive
domains such as healthcare and finance, concerns about data privacy have
intensified. Conventional artificial neural networks (ANNs) have been found
vulnerable to several attacks that can leak sensitive data. Particularly, model
inversion (MI) attacks enable the reconstruction of data samples that have been
used to train the model. Neuromorphic architectures have emerged as a paradigm
shift in neural computing, enabling asynchronous and energy-efficient
computation. However, little to no existing work has investigated the privacy
of neuromorphic architectures against model inversion. Our study is motivated
by the intuition that the non-differentiable aspect of spiking neural networks
(SNNs) might result in inherent privacy-preserving properties, especially
against gradient-based attacks. To investigate this hypothesis, we propose a
thorough exploration of SNNs' privacy-preserving capabilities. Specifically, we
develop novel inversion attack strategies that are comprehensively designed to
target SNNs, offering a comparative analysis with their conventional ANN
counterparts. Our experiments, conducted on diverse event-based and static
datasets, demonstrate the effectiveness of the proposed attack strategies and
therefore question the assumption of inherent privacy preservation in
neuromorphic architectures.
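
The abstract's key technical intuition is that the spiking nonlinearity is non-differentiable, which would seem to block gradient-based model inversion. A standard workaround in the SNN literature is a surrogate gradient: the Heaviside spike's undefined derivative is replaced with a smooth approximation during backpropagation. The sketch below illustrates that idea in a gradient-based inversion loop against a toy spiking classifier. It is a minimal, hypothetical illustration, not the authors' attack: all names (SpikeFn, LIFClassifier, invert_class) and hyperparameters are assumptions introduced here.

# Minimal sketch (illustrative assumptions, not the paper's implementation):
# gradient-based model inversion against a toy spiking classifier. The spike
# is non-differentiable, so the backward pass substitutes a surrogate
# gradient, which restores a usable gradient path for the attack.
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a fast-sigmoid surrogate gradient."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Surrogate derivative: 1 / (1 + |v|)^2 instead of the true (zero a.e.) one.
        return grad_out / (1.0 + v.abs()) ** 2

class LIFClassifier(nn.Module):
    """Toy leaky integrate-and-fire classifier over T time steps (hypothetical)."""
    def __init__(self, in_dim=784, hidden=256, classes=10, T=20, beta=0.9):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, classes)
        self.T, self.beta = T, beta

    def forward(self, x):
        mem = torch.zeros(x.size(0), self.fc1.out_features, device=x.device)
        logits = 0.0
        for _ in range(self.T):
            mem = self.beta * mem + self.fc1(x)   # leaky membrane integration
            spk = SpikeFn.apply(mem - 1.0)        # fire when membrane crosses 1.0
            mem = mem - spk                       # soft reset after a spike
            logits = logits + self.fc2(spk)
        return logits / self.T                    # rate-coded class scores

def invert_class(model, target, in_dim=784, steps=500, lr=0.1):
    """Classic MI template: gradient ascent on the target class score,
    starting from a blank input, with a small norm prior."""
    x = torch.zeros(1, in_dim, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -model(x)[0, target] + 1e-3 * x.norm()
        loss.backward()                # gradients flow through the surrogate
        opt.step()
        x.data.clamp_(0.0, 1.0)        # keep the reconstruction in pixel range
    return x.detach()

model = LIFClassifier().eval()         # stands in for a trained victim model
reconstruction = invert_class(model, target=3)

The point of the sketch is the one the abstract argues: once the spike's undefined derivative is swapped for a surrogate, the inversion loop runs exactly as it would against an ANN, so non-differentiability alone is a weak basis for assuming privacy.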