Auto-Weighted Robust Federated Learning with Corrupted Data Sources
Uppsala University, Sweden.
University of Hong Kong, China.
University College London, UK.
RISE Research Institutes of Sweden, Digital Systems, Computer Science. Uppsala University, Sweden. ORCID iD: 0000-0002-2586-8573
2022 (English). In: ACM Transactions on Intelligent Systems and Technology, ISSN 2157-6904, E-ISSN 2157-6912, Vol. 13, no. 5. Journal article (peer-reviewed). Published.
Abstract [en]

Federated learning provides a communication-efficient and privacy-preserving training process by enabling statistical models to be learned across massive numbers of participants without accessing their local data. Standard federated learning techniques that naively minimize an average loss function are vulnerable to data corruption from outliers, systematic mislabeling, or even adversaries. In this article, we address this challenge by proposing Auto-weighted Robust Federated Learning (ARFL), a novel approach that jointly learns the global model and the weights of local updates to provide robustness against corrupted data sources. We prove a learning bound on the expected loss with respect to the predictor and the weights of clients, which guides the definition of the objective for robust federated learning. We present an objective that minimizes the weighted sum of the clients' empirical risks plus a regularization term, where the weights are allocated by comparing each client's empirical risk with the average empirical risk of the best p clients. This method downweights clients with significantly higher losses, thereby lowering their contributions to the global model. We show that this approach achieves robustness when the data of corrupted clients is distributed differently from that of benign ones. To optimize the objective function, we propose a communication-efficient algorithm based on the blockwise minimization paradigm. We conduct extensive experiments on multiple benchmark datasets, including CIFAR-10, FEMNIST, and Shakespeare, considering different neural network models. The results show that our solution is robust against different scenarios, including label shuffling, label flipping, and noisy features, and outperforms state-of-the-art methods in most scenarios.
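The weighting scheme described in the abstract can be illustrated with a short sketch. This is not the paper's exact closed-form solution (which follows from the learning bound and the regularized objective); it is a minimal illustration of the stated idea, assuming a hypothetical linear downweighting rule with a `slack` margin: each client's empirical risk is compared with the average risk of the best (lowest-loss) p clients, and clients whose loss exceeds that reference by more than the slack receive zero weight.

```python
def arfl_weights(client_losses, p, slack=0.0):
    """Illustrative ARFL-style weighting (hypothetical rule, not the
    paper's exact formula): compare each client's empirical risk with
    the average risk of the best p clients; weights shrink linearly
    with the gap and hit zero for clients far above the reference."""
    best = sorted(client_losses)[:p]
    ref = sum(best) / p
    raw = [max(0.0, ref + slack - loss) for loss in client_losses]
    total = sum(raw)
    if total == 0.0:
        # Degenerate case (all clients equally bad): fall back to uniform.
        n = len(client_losses)
        return [1.0 / n] * n
    return [r / total for r in raw]

def aggregate(local_models, weights):
    """Weighted average of local model parameters (flat float lists),
    i.e. the global update under the learned client weights."""
    dim = len(local_models[0])
    return [sum(w * m[i] for w, m in zip(weights, local_models))
            for i in range(dim)]

# A corrupted client with a much higher loss is zeroed out:
w = arfl_weights([0.1, 0.2, 5.0], p=2, slack=0.5)
# w == [0.55, 0.45, 0.0]
global_update = aggregate([[1.0], [1.0], [100.0]], w)
# global_update == [1.0]  (the outlier contributes nothing)
```

In the paper, the weights and the global model are optimized jointly via blockwise minimization: the server alternates between updating the model given the weights and re-allocating the weights given the clients' current losses.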

Place, publisher, year, edition, pages
Association for Computing Machinery, 2022. Vol. 13, no. 5
Keywords [en]
Federated learning, robustness, auto-weighted, distributed learning, neural networks
National subject category
Electrical and Electronic Engineering
Identifiers
URN: urn:nbn:se:ri:diva-59799
DOI: 10.1145/3517821
OAI: oai:DiVA.org:ri-59799
DiVA, id: diva2:1683126
Available from: 2022-07-14. Created: 2022-07-14. Last updated: 2023-06-08. Bibliographically reviewed.

Open Access in DiVA

Full text is not available in DiVA

Other links

Publisher's full text: https://doi.org/10.1145/3517821

Person

Voigt, Thiemo
