Ternary Compression for Communication-Efficient Federated Learning

Xu J, Du W, Jin Y, He W, Cheng R (2022)
IEEE Transactions on Neural Networks and Learning Systems 33(3): 1162-1176.

Journal Article | Published | English
 
Download
No files have been uploaded. Publication record only!
Author(s)
Xu, Jinjin; Du, Wenli; Jin, Yaochu; He, Wangli; Cheng, Ran
Abstract / Note
Learning over massive data stored in different locations is essential in many real-world applications. However, sharing data is full of challenges due to the increasing demands for privacy and security with the growing use of smart mobile and Internet of Things (IoT) devices. Federated learning provides a potential solution to privacy-preserving and secure machine learning by jointly training a global model without uploading the data distributed across multiple devices to a central server. However, most existing work on federated learning adopts machine learning models with full-precision weights, and almost all these models contain a large number of redundant parameters that do not need to be transmitted to the server, incurring excessive communication costs. To address this issue, we propose a federated trained ternary quantization (FTTQ) algorithm, which optimizes the quantized networks on the clients through a self-learning quantization factor. Theoretical proofs of the convergence of the quantization factors, the unbiasedness of FTTQ, and a reduced weight divergence are given. On the basis of FTTQ, we propose a ternary federated averaging protocol (T-FedAvg) to reduce the upstream and downstream communication of federated learning systems. Empirical experiments are conducted to train widely used deep learning models on publicly available data sets, and our results demonstrate that the proposed T-FedAvg is effective in reducing communication costs and can even achieve slightly better performance on non-IID data than the canonical federated learning algorithms.
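
To illustrate the idea of ternary weight compression described in the abstract, below is a minimal Python/NumPy sketch of quantizing a weight tensor to three values before transmission. The thresholding rule, the threshold_ratio parameter, and the use of the mean magnitude as a stand-in for FTTQ's self-learned quantization factor are illustrative assumptions, not details taken from the paper.

import numpy as np

def ternarize(weights, threshold_ratio=0.05):
    # Illustrative threshold: a fixed fraction of the largest weight magnitude.
    delta = threshold_ratio * np.max(np.abs(weights))
    mask = np.abs(weights) > delta                    # weights kept as nonzero
    # Stand-in for the self-learned quantization factor in FTTQ:
    # here simply the mean magnitude of the retained weights.
    scale = np.abs(weights[mask]).mean()
    return scale * np.sign(weights) * mask            # values in {-scale, 0, +scale}

# Toy usage: compress one layer's weights before a client uploads its update.
w = np.random.randn(4, 4).astype(np.float32)
w_ternary = ternarize(w)
print(np.unique(np.round(w_ternary, 4)))              # at most three distinct values

Because each ternarized layer can be encoded with two bits per weight plus one scaling factor, the per-round upload (and, with T-FedAvg, the download) shrinks substantially compared with sending full-precision weights.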
Publication Year
2022
Journal Title
IEEE Transactions on Neural Networks and Learning Systems
Volume
33
Issue
3
Page(s)
1162-1176
ISSN
2162-237X
eISSN
2162-2388
Page URI
https://pub.uni-bielefeld.de/record/2978344

Cite

Xu J, Du W, Jin Y, He W, Cheng R. Ternary Compression for Communication-Efficient Federated Learning. IEEE Transactions on Neural Networks and Learning Systems. 2022;33(3):1162-1176.
Xu, J., Du, W., Jin, Y., He, W., & Cheng, R. (2022). Ternary Compression for Communication-Efficient Federated Learning. IEEE Transactions on Neural Networks and Learning Systems, 33(3), 1162-1176. https://doi.org/10.1109/TNNLS.2020.3041185
Xu, Jinjin, Du, Wenli, Jin, Yaochu, He, Wangli, and Cheng, Ran. 2022. “Ternary Compression for Communication-Efficient Federated Learning”. IEEE Transactions on Neural Networks and Learning Systems 33 (3): 1162-1176.
Xu, J., Du, W., Jin, Y., He, W., and Cheng, R. (2022). Ternary Compression for Communication-Efficient Federated Learning. IEEE Transactions on Neural Networks and Learning Systems 33, 1162-1176.
Xu, J., et al., 2022. Ternary Compression for Communication-Efficient Federated Learning. IEEE Transactions on Neural Networks and Learning Systems, 33(3), p 1162-1176.
J. Xu, et al., “Ternary Compression for Communication-Efficient Federated Learning”, IEEE Transactions on Neural Networks and Learning Systems, vol. 33, 2022, pp. 1162-1176.
Xu, J., Du, W., Jin, Y., He, W., Cheng, R.: Ternary Compression for Communication-Efficient Federated Learning. IEEE Transactions on Neural Networks and Learning Systems. 33, 1162-1176 (2022).
Xu, Jinjin, Du, Wenli, Jin, Yaochu, He, Wangli, and Cheng, Ran. “Ternary Compression for Communication-Efficient Federated Learning”. IEEE Transactions on Neural Networks and Learning Systems 33.3 (2022): 1162-1176.

Link(s) to Full Text(s)
Access Level
Restricted Closed Access
