Efficient and persistent backdoor attack by boundary trigger set constructing against federated learning

Deshan Yang, Senlin Luo, Jinjie Zhou, Limin Pan, Xiaonan Yang, Jiyuan Xing

Information Sciences (2023)

Abstract
Federated learning systems encounter various security risks, including backdoor, inference, and adversarial attacks. Backdoor attacks in this context generally require careful trigger sample design involving candidate selection and automated optimization. Previous methods randomly selected trigger candidates from the training dataset, disrupting the sample distribution and blurring class boundaries, which adversely affected main-task accuracy. Moreover, these methods employed non-optimized handcrafted triggers, resulting in a weakened backdoor mapping relationship and lower attack success rates. In this work, we propose a flexible backdoor attack approach, Trigger Sample Selection and Optimization (TSSO), motivated by neural network classification patterns. TSSO employs autoencoders and locality-sensitive hashing to select trigger candidates at class boundaries for precise injection. Furthermore, it iteratively refines trigger representations via the global model and historical outcomes, establishing a robust mapping relationship. TSSO is evaluated on four classical datasets with non-IID settings, outperforming state-of-the-art methods by achieving a higher attack success rate in fewer rounds and prolonging the backdoor effect. In scalability tests, even with a defense deployed, TSSO achieved an attack success rate of over 80% with only 4% malicious clients (a poisoning rate of 1/640).
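The abstract names autoencoder embeddings and locality-sensitive hashing as the tools for picking trigger candidates near class boundaries, but gives no implementation details. The following is a minimal, hypothetical Python sketch of one way such a selection step could look: a small autoencoder produces latent codes, a random-projection LSH groups nearby codes into buckets, and samples whose bucket mixes classes are treated as boundary candidates. The names (Autoencoder, lsh_bucket_ids, select_boundary_candidates), the bucket-mixing criterion, and all shapes are illustrative assumptions, not the authors' TSSO implementation.

# Minimal sketch (not the authors' code) of boundary trigger candidate selection,
# assuming an autoencoder embedding and a random-projection LSH as stand-ins for
# the components named in the abstract.
import numpy as np
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Small autoencoder; the latent code serves as a compact sample embedding."""
    def __init__(self, in_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def lsh_bucket_ids(embeddings, n_planes=16, seed=0):
    """Random-projection LSH: samples with the same sign pattern share a bucket."""
    rng = np.random.default_rng(seed)
    planes = rng.normal(size=(embeddings.shape[1], n_planes))
    bits = (embeddings @ planes > 0).astype(int)
    # Interpret each sample's sign pattern as an integer bucket id.
    return bits @ (1 << np.arange(n_planes))

def select_boundary_candidates(embeddings, labels, n_planes=16):
    """Return indices of samples whose LSH bucket contains more than one class,
    a simple proxy for 'lying near a class boundary'."""
    buckets = lsh_bucket_ids(embeddings, n_planes)
    candidates = []
    for b in np.unique(buckets):
        idx = np.where(buckets == b)[0]
        if len(np.unique(labels[idx])) > 1:  # bucket spans multiple classes
            candidates.extend(idx.tolist())
    return np.array(candidates)

# Usage with toy data (shapes and sizes are illustrative only).
if __name__ == "__main__":
    x = torch.rand(1024, 784)
    y = np.random.randint(0, 10, size=1024)
    ae = Autoencoder()
    with torch.no_grad():
        _, z = ae(x)
    cand = select_boundary_candidates(z.numpy(), y)
    print(f"{len(cand)} boundary trigger candidates out of {len(y)} samples")

In the paper's setting these candidates would then carry an optimized trigger pattern, refined over rounds against the global model; that optimization loop is not sketched here.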
Key words
Deep learning, Federated learning, Poisoning attack, Backdoor attack, Sample selection, Trigger optimization