This letter proposes a novel communication-efficient and privacy-preserving distributed machine learning framework, coined Mix2FLD. To address uplink-downlink capacity asymmetry, local model outputs are uploaded to a server in the uplink as in federated distillation (FD), whereas global model parameters are downloaded in the downlink as in federated learning (FL). This requires a model output-to-parameter conversion at the server, after collecting additional data samples from devices. To preserve privacy while not compromising accuracy, linearly mixed-up local samples are uploaded, and inversely mixed up across different devices at the server. Numerical evaluations show that Mix2FLD achieves up to 16.7% higher test accuracy while reducing convergence time by up to 18.8% under asymmetric uplink-downlink channels compared to FL.
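The uplink mechanism above rests on linear Mixup at the devices followed by an inverse mixup across devices at the server. A minimal NumPy sketch of one plausible simplified form is below: the mixing ratio `lam`, the toy 4-dimensional samples, and the specific cross-device recombination formula are illustrative assumptions, not the paper's exact two-way Mixup, which differs in its details.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 0.2  # mixing ratio (assumed shared with the server); must not equal 0.5

# Two devices, each holding private raw samples (toy 4-dim features).
a1, a2 = rng.normal(size=4), rng.normal(size=4)  # device A
b1, b2 = rng.normal(size=4), rng.normal(size=4)  # device B

# Uplink: each device uploads only a linear Mixup of its own samples,
# so no raw sample ever leaves the device.
m_a = lam * a1 + (1 - lam) * a2
m_b = lam * b1 + (1 - lam) * b2

# Server side, simplified illustration of "inverse mixup across devices":
# an affine recombination (coefficients sum to 1) that blends the two
# devices' uploads into new synthetic samples without isolating any
# single raw sample.
u = ((1 - lam) * m_a - lam * m_b) / (1 - 2 * lam)
v = ((1 - lam) * m_b - lam * m_a) / (1 - 2 * lam)

# Sanity check: the recombination preserves the affine structure.
assert np.allclose(u + v, m_a + m_b)
```

Because the server only ever sees the mixed uploads `m_a` and `m_b`, the recombined samples `u` and `v` contain contributions from all four raw samples, which is the privacy property the abstract alludes to.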

Seungeun Oh, Jihong Park, Eunjeong Jeong, Hyesung Kim, Mehdi Bennis, Seong-Lyun Kim

Publication type:
A1 Journal article – refereed

Keywords:
Distributed machine learning, federated distillation, federated learning, on-device learning, uplink-downlink asymmetry


Full citation:
S. Oh, J. Park, E. Jeong, H. Kim, M. Bennis and S.-L. Kim, “Mix2FLD: Downlink Federated Learning After Uplink Federated Distillation With Two-Way Mixup,” in IEEE Communications Letters, vol. 24, no. 10, pp. 2211-2215, Oct. 2020, doi: 10.1109/LCOMM.2020.3003693.
