Distributed additive encryption and quantization for privacy preserving federated deep learning

Zhu H, Wang R, Jin Y, Liang K, Ning J (2021)
Neurocomputing 463: 309-327.

Journal article | Published | English
 
Download
No files have been uploaded. Publication record only.
Author(s)
Zhu, Hangyu; Wang, Rui; Jin, Yaochu; Liang, Kaitai; Ning, Jianting
Abstract
Homomorphic encryption is a widely used gradient-protection technique in privacy-preserving federated learning. However, existing encrypted federated learning systems require a trusted third party to generate and distribute key pairs to the connected participants, making them ill-suited for federated learning and vulnerable to security risks. Moreover, encrypting all model parameters is computationally intensive, especially for large machine learning models such as deep neural networks. To mitigate these issues, we develop a practical, computationally efficient encryption-based protocol for federated deep learning, in which the key pairs are collaboratively generated without the help of a trusted third party. By quantizing the model parameters on the clients and performing an approximated aggregation on the server, the proposed method avoids encrypting and decrypting the entire model. In addition, a threshold-based secret sharing technique is designed so that no single party holds the global private key for decryption, while the aggregated ciphertexts can still be successfully decrypted by a threshold number of clients even if some clients are offline. Our experimental results confirm that the proposed method significantly reduces the communication costs and computational complexity compared to existing encrypted federated learning approaches, without compromising performance or security.
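The threshold property described above — that any t of n clients can recover the decryption capability even when the remaining clients are offline — is commonly realized with Shamir secret sharing over a prime field. The following is a minimal, self-contained sketch of that idea (not the paper's actual protocol; the field prime and the toy "key" value are illustrative assumptions):

```python
import random

P = 2**61 - 1  # a Mersenne prime; arithmetic is done in GF(P)

def share(secret, n, t):
    """Split `secret` into n Shamir shares; any t of them can reconstruct it."""
    # Random polynomial of degree t-1 with f(0) = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Recover f(0) by Lagrange interpolation at x = 0 over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat's little theorem).
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

key = 123456789               # toy stand-in for a private-key share
shares = share(key, n=5, t=3)
# Any 3 of the 5 clients suffice, e.g. clients 1, 3, and 5 are online:
recovered = reconstruct([shares[0], shares[2], shares[4]])
assert recovered == key
```

With fewer than t shares, every candidate secret remains equally likely, which is what lets the scheme tolerate offline clients without ever placing the full private key in one party's hands.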
Year of publication
2021
Journal title
Neurocomputing
Volume
463
Page(s)
309-327
ISSN
0925-2312
Page URI
https://pub.uni-bielefeld.de/record/2978384

Cite

Zhu, H., Wang, R., Jin, Y., Liang, K., & Ning, J. (2021). Distributed additive encryption and quantization for privacy preserving federated deep learning. Neurocomputing, 463, 309-327. https://doi.org/10.1016/j.neucom.2021.08.062