The sixth-generation (6G) wireless communication aims to enable ubiquitous intelligent connectivity in future space–air–ground–ocean-integrated networks, with extremely low latency and enhanced global coverage. However, the explosive growth of Internet of Things devices poses new challenges for smart devices, which must process tremendous volumes of generated data with limited resources. In 6G networks, conventional mobile edge computing (MEC) systems struggle to satisfy the requirements of ubiquitous computing and intelligence under extremely high mobility, resource limitations, and time variability. In this article, we propose the model of wireless computing power networks (WCPNs), which jointly unifies the computing resources of both end devices and MEC servers. Furthermore, we formulate the new problem of task transfer, which optimizes the allocation of computation and communication resources in a WCPN. The main objective of task transfer is to minimize execution latency and energy consumption subject to resource limitations and task requirements. To solve the formulated problem, we propose a multiagent deep reinforcement learning (DRL) algorithm that finds the optimal task transfer and resource allocation strategies. The DRL agents collaborate with one another to train a global strategy model through the proposed asynchronous federated aggregation scheme. Numerical results show that the proposed scheme improves computation efficiency, accelerates convergence, and enhances utility performance.
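The abstract does not specify how the asynchronous federated aggregation combines the agents' models. A common pattern for asynchronous aggregation is staleness-weighted averaging, and the sketch below illustrates that idea only as a plausible interpretation: each agent pushes its locally trained policy parameters at its own pace, and the server blends each update into the global model with a weight that decays with the update's staleness. All function names, weights, and dimensions here are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def staleness_weight(staleness, base=0.5):
    """Mixing weight that shrinks as an agent's update grows stale.

    `base` and the 1/(1+s) decay are hypothetical choices, not taken
    from the article.
    """
    return base / (1.0 + staleness)

def aggregate(global_params, local_params, staleness):
    """Blend one agent's (possibly stale) update into the global model."""
    alpha = staleness_weight(staleness)
    return (1.0 - alpha) * global_params + alpha * local_params

# Toy run: three agents report updates with different staleness values.
rng = np.random.default_rng(0)
global_params = np.zeros(4)  # stand-in for flattened policy parameters
for staleness in (0, 2, 5):
    # Simulate a local DRL training step as a small parameter perturbation.
    local_params = global_params + rng.normal(0.0, 0.1, size=4)
    global_params = aggregate(global_params, local_params, staleness)

print(global_params.shape)  # the global model keeps its parameter shape
```

The key design property shown here is that fresher updates (low staleness) move the global model more, while stale updates are damped, which is one standard way asynchronous schemes avoid being dragged backward by slow agents.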