Decentralized Computation Offloading with Cooperative UAVs: Multi-Agent Deep Reinforcement Learning Perspective

IEEE Wireless Communications (2022)

Abstract
Limited computing resources of internet-of-things (IoT) nodes incur prohibitive latency when processing input data. This opens new research opportunities in task offloading systems, where edge servers handle the intensive computations of IoT devices. Deploying computing servers at existing base stations may not suffice to support IoT nodes operating in harsh environments, which calls for mobile edge servers mounted on unmanned aerial vehicles (UAVs) that provide on-demand mobile edge computing (MEC) services. Time-varying offloading demands and UAV mobility require a joint design of the optimization variables across all time instances; an online decision mechanism is therefore essential for UAV-aided MEC networks. This article presents an overview of recent deep reinforcement learning (DRL) approaches in which decisions for UAVs and IoT nodes are made online. Specifically, joint optimization over task offloading, resource allocation, and UAV mobility is addressed from the DRL perspective. For decentralized implementation, a multi-agent DRL method is proposed in which multiple intelligent UAVs cooperatively determine their computation and communication policies without central coordination. Numerical results demonstrate that the proposed decentralized learning strategy outperforms existing DRL solutions. The proposed framework sheds light on the viability of decentralized DRL techniques for designing self-organizing IoT networks.
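The decentralized setting described in the abstract, where each UAV learns its own offloading policy from local observations and a shared reward without central coordination, can be illustrated in miniature. The sketch below is a toy tabular Q-learning stand-in, not the deep networks or system model of the article: the two-cluster environment, backlog-based latency proxy, and all parameter values are illustrative assumptions.

```python
import random

class ToyOffloadEnv:
    """Toy environment (illustrative only): two IoT clusters accumulate task
    backlogs; each UAV picks one cluster to serve per step. The shared reward
    is the negative total backlog, a crude proxy for processing latency."""
    def __init__(self, n_uavs=2, seed=0):
        self.n_uavs = n_uavs
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.backlog = [self.rng.randint(1, 5) for _ in range(2)]
        return self._obs()

    def _obs(self):
        # local observation: index of the cluster with the larger backlog
        return 0 if self.backlog[0] >= self.backlog[1] else 1

    def step(self, actions):
        # each UAV's service reduces its chosen cluster's backlog by one task
        for a in actions:
            self.backlog[a] = max(0, self.backlog[a] - 1)
        reward = -sum(self.backlog)  # shared (cooperative) reward
        # new tasks arrive at both clusters
        for c in range(2):
            self.backlog[c] += self.rng.randint(0, 2)
        return self._obs(), reward

def train_decentralized(episodes=300, steps=20, eps=0.1, alpha=0.5, gamma=0.9):
    """Each UAV keeps an independent Q-table over its local observation and
    updates it from the shared reward; no agent sees the others' tables."""
    env = ToyOffloadEnv()
    q = [[[0.0, 0.0] for _ in range(2)] for _ in range(env.n_uavs)]
    rng = random.Random(1)
    for _ in range(episodes):
        obs = env.reset()
        for _ in range(steps):
            actions = []
            for i in range(env.n_uavs):
                if rng.random() < eps:  # epsilon-greedy exploration
                    actions.append(rng.randint(0, 1))
                else:
                    actions.append(0 if q[i][obs][0] >= q[i][obs][1] else 1)
            nxt, r = env.step(actions)
            for i, a in enumerate(actions):
                # independent Q-learning update per agent
                q[i][obs][a] += alpha * (r + gamma * max(q[i][nxt]) - q[i][obs][a])
            obs = nxt
    return q
```

This "independent learners" scheme is the simplest decentralized baseline; the article's multi-agent DRL method replaces the tables with neural networks and handles continuous decisions such as UAV trajectories and resource allocation.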
Keywords
Base stations, Multi-access edge computing, Reinforcement learning, Autonomous aerial vehicles, Internet of Things, Servers, Resource management, Multi-agent systems