On Optimal Power Control for URLLC over a Non-stationary Wireless Channel using Contextual Reinforcement Learning

IEEE International Conference on Communications (ICC)(2022)

Abstract
In this work, we investigate the design of energy-optimal policies for ultra-reliable low-latency communications (URLLC) over a non-stationary wireless channel, using a contextual reinforcement learning (RL) framework. We consider a point-to-point communication system over a piecewise-stationary wireless channel in which the Doppler frequency switches between two distinct values, depending on the underlying state of the channel. To benchmark performance, we first consider an oracle agent that has perfect but causal information about the switching instants and consists of two deep RL (DRL) agents, each tasked with optimal decision making in one of the stationary regimes. Comparing the oracle agent with a conventional DRL agent reveals that the performance gain of the oracle agent depends on the dynamics of the non-stationary channel. In particular, for a non-stationary channel with a faster switching rate, the oracle agent achieves approximately 15-20% less energy consumption, whereas for a channel with a slower switching rate its performance is similar to that of the conventional DRL agent. Next, for the more realistic scenario where information about the switching instants of the channel's Doppler frequency is not available, we model the non-stationary channel as a regime-switching process modulated by a Markov process, and adapt the oracle agent by augmenting it with a state-tracking algorithm proposed for the regime-switching process. Our simulation results show that the proposed algorithm yields better performance than the conventional DRL agent.
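The channel model underlying the abstract can be illustrated with a short sketch: a two-state Markov chain selects the active regime in each time slot, each regime has its own Doppler frequency, and the oracle agent (which knows the true regime causally) dispatches to the regime-specific policy. The Doppler values, stay probabilities, and function names below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative parameters (NOT from the paper): one Doppler frequency per
# regime and the per-slot probability of remaining in each regime.
DOPPLERS = (10.0, 100.0)          # Hz, assumed regime-specific Doppler values
P_STAY = (0.95, 0.95)             # assumed Markov-chain self-transition probs

def simulate_regimes(n_slots, p_stay=P_STAY, seed=None):
    """Sample the regime index (0 or 1) for each slot from a 2-state Markov chain."""
    rng = np.random.default_rng(seed)
    states = np.empty(n_slots, dtype=int)
    s = 0                          # start in regime 0
    for t in range(n_slots):
        states[t] = s
        if rng.random() > p_stay[s]:
            s = 1 - s              # regime switch
    return states

def oracle_doppler(states):
    """Oracle mapping: the true (causally known) regime selects its Doppler value.

    In the paper's setup this selection would choose between two trained DRL
    policies; here it just returns the regime's Doppler frequency.
    """
    return np.asarray(DOPPLERS)[states]

states = simulate_regimes(1000, seed=0)
fd = oracle_doppler(states)        # Doppler frequency seen in each slot
```

A conventional (non-contextual) DRL agent would see only the channel observations, not `states`; the gap between it and this oracle dispatch is what the paper quantifies for fast and slow switching rates.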
Keywords
Energy minimization, non-stationary wireless channel, reinforcement learning, URLLC