PnA: Robust Aggregation Against Poisoning Attacks to Federated Learning for Edge Intelligence
Description
Federated learning (FL), which holds promise for edge intelligence applications in smart cities, enables smart devices to collaborate in training a global model by exchanging local model updates instead of sharing local training data. However, the global model can be corrupted by malicious clients conducting poisoning attacks, resulting in a global model that fails to converge, produces incorrect predictions on the test set, or contains an embedded backdoor. Although some aggregation algorithms can enhance the robustness of FL against malicious clients, our work demonstrates that existing stealthy poisoning attacks can still bypass these defenses. In this work, we propose a robust aggregation mechanism, called Parts and All (PnA), that protects the global model of FL by filtering out malicious local model updates, detecting poisoning attacks at the individual layers of local model updates. We conduct comprehensive experiments on three representative datasets. The experimental results demonstrate that our proposed PnA is more effective against state-of-the-art poisoning attacks than existing robust aggregation algorithms. Moreover, PnA performs stably across different poisoning settings.
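The abstract describes PnA only at a high level: local model updates are screened layer by layer ("parts") before the surviving updates are aggregated into the global model ("all"). As a rough illustration of that layer-wise filtering idea, and not a reproduction of the paper's actual algorithm, the sketch below discards any client whose update on some layer diverges (by cosine similarity) from that layer's coordinate-wise median across all submitted updates. The median reference, the threshold value, and all function names here are assumptions made for illustration.

```python
# Illustrative sketch only: not the paper's PnA algorithm. Assumes a
# hypothetical layer-wise rule that drops a client whenever any of its
# layers has low cosine similarity to that layer's coordinate-wise median.
import numpy as np

def cosine(a, b):
    """Cosine similarity between two flattened layer updates."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def layerwise_filter_aggregate(updates, threshold=0.0):
    """Aggregate per-client updates (each a dict: layer name -> flat array).

    "Parts": a client survives only if every layer of its update stays
    above `threshold` cosine similarity with that layer's median.
    "All": the surviving clients' updates are then averaged.
    """
    layer_names = list(updates[0].keys())
    # Per-layer coordinate-wise median across clients as a robust reference.
    medians = {name: np.median(np.stack([u[name] for u in updates]), axis=0)
               for name in layer_names}
    survivors = [u for u in updates
                 if all(cosine(u[name], medians[name]) > threshold
                        for name in layer_names)]
    if not survivors:  # fall back to the median itself if all are filtered
        return medians
    return {name: np.mean(np.stack([u[name] for u in survivors]), axis=0)
            for name in layer_names}

# Toy usage: three benign clients and one sign-flipping attacker.
rng = np.random.default_rng(0)
benign = [{"conv": rng.normal(1.0, 0.1, 8), "fc": rng.normal(0.5, 0.1, 4)}
          for _ in range(3)]
attacker = {"conv": -benign[0]["conv"], "fc": -benign[0]["fc"]}
agg = layerwise_filter_aggregate(benign + [attacker])
print({k: np.round(v, 2) for k, v in agg.items()})
```

In this toy run the sign-flipping attacker's layers point opposite to the per-layer medians, so the attacker is filtered out and only the three benign clients are averaged.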
Published in
- ACM Transactions on Sensor Networks, 2024-06-01
- Association for Computing Machinery (ACM)
Details
- CRID: 1871710641499603456
- DOI: 10.1145/3669902
- ISSN: 1550-4867, 1550-4859
- Data source: OpenAIRE