This paper addresses the problem of radio resource management for the expected long-term delay-power tradeoff in vehicular communications. At each decision epoch, the road side unit observes the global network state, allocates channels, and schedules data packets for all vehicle user equipment pairs (VUE-pairs). The decision-making procedure is modelled as a discrete-time Markov decision process (MDP). The technical challenges in deriving an optimal control policy originate from the high spatial mobility of vehicles and the temporal variations in data traffic. To simplify the decision-making process, we first decompose the MDP into a series of per-VUE-pair MDPs. We then propose an online long short-term memory (LSTM) based deep reinforcement learning algorithm to break the curse of high dimensionality in the state space faced by each per-VUE-pair MDP. With the proposed algorithm, the optimal channel allocation and packet scheduling decisions at each epoch can be made in a decentralized way, in accordance with the partial observations of the global network state at the VUE-pairs. Numerical simulations validate the theoretical analysis and show the effectiveness of the proposed online learning algorithm.
Chen Xianfu, Wu Celimuge, Zhang Honggang, Zhang Yan, Bennis Mehdi, Vuojala Heli
A4 Article in conference proceedings
ICC 2019 – 2019 IEEE International Conference on Communications (ICC)
X. Chen, C. Wu, H. Zhang, Y. Zhang, M. Bennis and H. Vuojala, “Decentralized Deep Reinforcement Learning for Delay-Power Tradeoff in Vehicular Communications,” ICC 2019 – 2019 IEEE International Conference on Communications (ICC), Shanghai, China, 2019, pp. 1-6. doi: 10.1109/ICC.2019.8761949