Split Ways: Privacy-Preserving Training of Encrypted Data Using Split Learning
Tampere University, Finland.
Tampere University, Finland.
RISE Research Institutes of Sweden, Digital Systems, Data Science. Tampere University, Finland.
2023 (English). In: CEUR Workshop Proceedings, CEUR-WS, 2023, Vol. 3379. Conference paper, Published paper (Refereed)
Abstract [en]

Split Learning (SL) is a recent collaborative learning technique that allows participants, e.g. a client and a server, to train machine learning models without the client sharing raw data. In this setting, the client first applies its part of the machine learning model to the raw data to generate activation maps and then sends them to the server, which continues the training process. Previous work in the field demonstrated that reconstructing activation maps can leak private client data. Moreover, existing mitigation techniques that address this privacy leakage in SL incur a significant loss of accuracy. In this paper, we improve upon previous work by constructing a protocol based on U-shaped SL that can operate on homomorphically encrypted data. More precisely, in our approach the client applies Homomorphic Encryption (HE) to the activation maps before sending them to the server, thus protecting user privacy. This is an important improvement that reduces privacy leakage in comparison to other SL-based works. Finally, our results show that, with the optimal set of parameters, training on HE data in the U-shaped SL setting reduces accuracy by only 2.65% compared to training on plaintext, while the privacy of the raw training data is preserved. © 2023 Copyright for this paper by its authors.
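The U-shaped flow described in the abstract can be illustrated with a minimal sketch: the client runs the first layers on raw data, encrypts the activation maps, the server applies only its (linear) middle layers on ciphertexts, and the client decrypts and finishes the forward pass. Real HE (the paper operates on homomorphically encrypted data) is stood in for here by insecure multiplicative blinding, which merely preserves homomorphism under linear maps; all function names, weights, and split points are illustrative assumptions, not the paper's implementation.

```python
import random

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def linear(W, x):
    # matrix-vector product, one row per output unit
    return [dot(row, x) for row in W]

def client_head(x, W1):
    # client-side first layers: produce activation maps from raw data
    return [max(0.0, v) for v in linear(W1, x)]  # ReLU

def encrypt(a, s):
    # toy stand-in for HE: multiplicative blinding by a secret scalar.
    # NOT secure -- it only mimics the property that linear server-side
    # layers can be evaluated without seeing plaintext activations.
    return [s * v for v in a]

def decrypt(c, s):
    return [v / s for v in c]

def server_body(c, W2):
    # server sees only "encrypted" activations and applies linear layers
    return linear(W2, c)

def client_tail(c, s, W3):
    # client decrypts the server output and finishes the forward pass
    return linear(W3, decrypt(c, s))

random.seed(0)
x  = [1.0, -2.0, 0.5]                         # raw data stays on the client
W1 = [[0.2, -0.1, 0.4], [0.5, 0.3, -0.2]]     # client head weights
W2 = [[1.0, -1.0], [0.5, 0.5]]                # server body weights (linear)
W3 = [[0.3, 0.7]]                             # client tail weights
s  = random.uniform(2.0, 10.0)                # secret blinding factor

a = client_head(x, W1)                        # activation maps
c = encrypt(a, s)                             # encrypted before leaving client
y = client_tail(server_body(c, W2), s, W3)    # U-shaped round trip

# plaintext reference: identical computation without the blinding step
y_ref = linear(W3, linear(W2, client_head(x, W1)))
```

Because the server's layers are linear, the blinded round trip reproduces the plaintext forward pass up to floating-point error, which is the property a genuinely homomorphic scheme would provide securely.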

Place, publisher, year, edition, pages
CEUR-WS, 2023. Vol. 3379
Keywords [en]
Homomorphic Encryption, Privacy-preserving Machine Learning, Split Learning, Privacy-preserving techniques, Activation maps, Encrypted data, Machine learning models, Privacy leakages, Machine learning
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:ri:diva-65583
Scopus ID: 2-s2.0-85158972033
OAI: oai:DiVA.org:ri-65583
DiVA, id: diva2:1777843
Conference
2023 Workshops of the EDBT/ICDT Joint Conference (EDBT/ICDT-WS 2023), Ioannina, 28 March 2023
Available from: 2023-06-30. Created: 2023-06-30. Last updated: 2023-06-30. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Scopus (full text)
