The coming B5G/6G era will bring a multitude of challenging applications generating zettabytes of information. The computing power network (CPN) offers a promising solution by accelerating the proliferation of computing power from a handful of data centers to a multitude of network edges. However, when dealing with resource-hungry, real-time applications, most existing research neither makes good use of idle computing and caching resources nor provides a way to evaluate the contribution of the individuals supplying those resources. We therefore propose an in-network pooling framework built on a novel modified deep reinforcement learning (DRL) scheme: a dynamic resource pool (RP) is first modeled to make full use of idle network resources, the joint computing-and-caching problem is then formulated as the maximization of long-term system utility, and finally Attention-based Proximal Policy Optimization (APPO) is employed to solve it. In particular, the integrated attention mechanism quantifies each RP's contribution to the learning process. Experimental results show the superiority of the proposed algorithm, which outperforms existing alternatives.
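The abstract does not detail the attention mechanism, but the idea of scoring each resource pool's contribution can be illustrated with a minimal sketch. The sketch below is an assumption, not the paper's implementation: it uses scaled dot-product attention over per-RP feature vectors, with a query vector standing in for a learned parameter, so that the softmax weights can be read as each RP's relative contribution to the aggregated state embedding fed to a policy network.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax
    e = np.exp(x - np.max(x))
    return e / e.sum()

def rp_attention(rp_features, query):
    """Score each resource pool (RP) against a query vector and
    aggregate the RP features into a single state embedding.

    rp_features: (num_rps, dim) array, one feature row per RP
    query:       (dim,) vector (hypothetical learned parameter)
    Returns (weights, embedding); the weights sum to 1 and can be
    interpreted as each RP's relative contribution.
    """
    scores = rp_features @ query / np.sqrt(len(query))  # scaled dot-product
    weights = softmax(scores)                           # one weight per RP
    embedding = weights @ rp_features                   # attention-weighted sum
    return weights, embedding

# Toy example: 3 RPs with 4-dimensional feature vectors.
rng = np.random.default_rng(0)
rp_feats = rng.normal(size=(3, 4))
q = rng.normal(size=4)
w, emb = rp_attention(rp_feats, q)
```

In an actual APPO-style agent, `query` would be trained jointly with the PPO actor and critic, and `w` would be logged per step to attribute utility gains to individual RPs.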