Federated learning (FL) rests on the notion of training a global model in a decentralized manner. Under this setting, mobile devices perform computations on their local data before uploading the required updates to the central aggregator for improving the global model. However, a key challenge is to maintain communication efficiency (i.e., the number of communications per iteration) when participating clients implement uncoordinated computation strategies during aggregation of model parameters. We formulate a utility maximization problem to tackle this difficulty, and propose a novel crowdsourcing framework, involving a number of participating clients with local training data, to leverage FL. We show the incentive-based interaction between the crowdsourcing platform and the participating clients' independent strategies for training a global learning model, where each side maximizes its own benefit. We formulate a two-stage Stackelberg game to analyze such a scenario and find the game's equilibria. Further, we illustrate the efficacy of our proposed framework with simulation results. Results show that the proposed mechanism outperforms the heuristic approach with up to 22% gain in the offered reward to attain a level of target accuracy.
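The two-stage Stackelberg interaction described above can be sketched by backward induction: the platform (leader) announces a reward anticipating that each client (follower) will respond with its utility-maximizing effort. The utility forms below (linear reward minus quadratic computation cost for the client, logarithmic accuracy benefit for the platform, and the constants `C` and `A`) are hypothetical stand-ins chosen for illustration, not the paper's actual model.

```python
import math

C = 2.0   # hypothetical client's unit computation cost
A = 10.0  # hypothetical platform's valuation of accuracy gain

def client_best_response(r: float) -> float:
    """Stage 2 (follower): the client picks effort x maximizing
    r*x - C*x**2; the first-order condition gives x* = r / (2*C)."""
    return r / (2.0 * C)

def platform_utility(r: float) -> float:
    """Stage 1 (leader): the platform anticipates the client's best
    response, valuing accuracy as A*log(1 + x) and paying r per unit
    of effort."""
    x = client_best_response(r)
    return A * math.log(1.0 + x) - r * x

def stackelberg_equilibrium(r_grid):
    """Backward induction: search the leader's reward grid, with each
    candidate reward already incorporating the follower's response."""
    r_star = max(r_grid, key=platform_utility)
    return r_star, client_best_response(r_star)

r_grid = [i / 100.0 for i in range(1, 1001)]  # candidate rewards in (0, 10]
r_star, x_star = stackelberg_equilibrium(r_grid)
```

The pair `(r_star, x_star)` is a Stackelberg equilibrium of this toy game: neither side can improve its utility by unilaterally deviating, given the leader-follower order of play.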
Shashi Raj Pandey, Nguyen H. Tran, Mehdi Bennis, Yan Kyaw Tun, Zhu Han, Choong Seon Hong
A4 Article in conference proceedings
Place of publication:
2019 IEEE Global Communications Conference, GLOBECOM 2019
S. R. Pandey, N. H. Tran, M. Bennis, Y. K. Tun, Z. Han and C. S. Hong, “Incentivize to Build: A Crowdsourcing Framework for Federated Learning,” 2019 IEEE Global Communications Conference (GLOBECOM), Waikoloa, HI, USA, 2019, pp. 1-6, https://doi.org/10.1109/GLOBECOM38437.2019.9014329