Presentation + Paper
12 June 2023 Confident federated learning to tackle label flipped data poisoning attacks
Abstract
Federated Learning (FL) enables collaborative model building among a large number of participants without revealing sensitive data to the central server. However, because of its distributed nature, FL has limited control over local data and the corresponding training process, which makes it susceptible to data poisoning attacks in which malicious workers train the model on corrupted data. Attackers on the worker side can initiate such attacks simply by swapping the labels of training instances; workers compromised in this way send incorrect information to the server, poison the global model, and cause misclassifications. Detecting and excluding poisoned training samples from local training is therefore crucial in federated training. To address this, we propose a federated learning framework, Confident Federated Learning, to prevent data poisoning attacks on local workers. We first validate the label quality of training samples by characterizing and identifying label errors in the training data, and then exclude the detected mislabeled samples from local training. We evaluate the proposed approach on the MNIST, Fashion-MNIST, and CIFAR-10 datasets; the experimental results validate the robustness of the framework against data poisoning attacks, with mislabeled samples detected at above 85% accuracy.
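The abstract does not reproduce implementation details, but the per-worker workflow it describes (validate label quality, then drop detected mislabeled samples before local training) can be sketched with a confident-learning-style filter. The sketch below is illustrative only: the function names (`find_label_issues`, `filter_local_data`), the per-class thresholding rule, and the assumption that `pred_probs` holds out-of-sample predicted probabilities (e.g., from cross-validation on the worker's local data) are assumptions, not the authors' published code.

```python
import numpy as np

def find_label_issues(pred_probs, labels):
    """Flag likely label-flipped samples with a confident-learning-style rule.

    pred_probs: (n_samples, n_classes) out-of-sample predicted probabilities.
    labels:     (n_samples,) given (possibly flipped) labels.
    A sample is suspect if some class clears its per-class confidence
    threshold while the sample's given label does not.
    """
    labels = np.asarray(labels)
    n_classes = pred_probs.shape[1]
    # Per-class threshold: mean self-confidence of samples labeled with that class.
    thresholds = np.array([
        pred_probs[labels == c, c].mean() if np.any(labels == c) else 1.0
        for c in range(n_classes)
    ])
    suspect = np.zeros(len(labels), dtype=bool)
    for i, (p, y) in enumerate(zip(pred_probs, labels)):
        confident = np.flatnonzero(p >= thresholds)  # classes above their threshold
        if confident.size and y not in confident:
            suspect[i] = True
    return suspect

def filter_local_data(X, y, pred_probs):
    """Exclude detected mislabeled samples before a worker's local training round."""
    keep = ~find_label_issues(pred_probs, np.asarray(y))
    return X[keep], np.asarray(y)[keep]
```

Under this reading, each worker would filter its local dataset once per round (or once before training) and run the standard FL local update only on the retained samples, so that label-flipped instances never contribute to the update sent to the server.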
Conference Presentation
© (2023) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Pretom Roy Ovi, Aryya Gangopadhyay, Robert F. Erbacher, and Carl Busart "Confident federated learning to tackle label flipped data poisoning attacks", Proc. SPIE 12538, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications V, 125380Z (12 June 2023); https://doi.org/10.1117/12.2663911
KEYWORDS
Machine learning
Data privacy
Adversarial training
