The soaring mobile data traffic demands have spawned the innovative concept of mobile edge caching in ultra-dense next-generation networks, which mitigates their heavy traffic burden. We conceive cooperative content sharing between base stations (BSs) for improving the exploitation of the limited storage of a single edge cache. We formulate the cooperative caching problem as a partially observable Markov decision process (POMDP)-based multi-agent decision problem, which jointly optimizes the costs of fetching contents from the local BS, from the nearby BSs, and from the remote servers. To solve this problem, we devise a multi-agent actor-critic framework, in which a communication module is introduced for extracting and sharing the variability of the actions and observations of all BSs. To beneficially exploit the spatio-temporal differences of the content popularity, we harness a variational recurrent neural network (VRNN) for estimating the time-variant popularity distribution in each BS. Based on multi-agent deep reinforcement learning, we conceive a cooperative edge caching algorithm whose BSs operate cooperatively, since the distributed decision-making of each agent depends on both the local and the global states. Our experiments conducted in a large-scale cellular network having numerous BSs reveal that the proposed algorithm, relying on the collaboration of BSs, substantially improves the benefits of edge caching.
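To make the three-tier fetching cost concrete, the following is a minimal sketch, not the paper's actual cost model: it assumes fixed relative costs per tier and a hypothetical neighbor topology, and simply prices one request according to where the content can be served from.

```python
# Illustrative sketch of the tiered fetch cost (assumed constants; the
# paper's formulation defines the actual cost model).
C_LOCAL, C_NEIGHBOR, C_REMOTE = 0.0, 1.0, 5.0  # assumed relative costs

def fetch_cost(content: int, bs_id: int, caches: dict, neighbors: dict) -> float:
    """Cost of serving one content request arriving at base station bs_id."""
    if content in caches[bs_id]:                        # hit in the local edge cache
        return C_LOCAL
    if any(content in caches[n] for n in neighbors[bs_id]):
        return C_NEIGHBOR                               # fetched from a nearby BS
    return C_REMOTE                                     # fallback: remote server

# Example: BS 0 caches contents {1, 2}; its neighbor BS 1 caches {3}.
caches = {0: {1, 2}, 1: {3}}
neighbors = {0: [1], 1: [0]}
assert fetch_cost(2, 0, caches, neighbors) == C_LOCAL
assert fetch_cost(3, 0, caches, neighbors) == C_NEIGHBOR
assert fetch_cost(9, 0, caches, neighbors) == C_REMOTE
```

The cooperative caching policy then chooses which contents each BS stores so as to minimize the expected sum of such per-request costs across all BSs.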
               
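The abstract describes the communication module only at a high level. The sketch below assumes one simple realization in PyTorch: each BS agent encodes its local observation and last action into a message, the messages of all agents are mean-aggregated, and each actor conditions on its local observation plus the aggregate, so that distributed decisions depend on both local and global state. The layer sizes and the mean aggregation are assumptions, not necessarily the paper's design.

```python
import torch
import torch.nn as nn

class CommActor(nn.Module):
    """Per-BS actor with a shared communication module (illustrative)."""
    def __init__(self, obs_dim: int, act_dim: int, msg_dim: int = 32):
        super().__init__()
        self.comm = nn.Linear(obs_dim + act_dim, msg_dim)  # message encoder
        self.actor = nn.Sequential(
            nn.Linear(obs_dim + msg_dim, 64), nn.ReLU(),
            nn.Linear(64, act_dim),                        # caching-action logits
        )

    def forward(self, obs: torch.Tensor, last_act: torch.Tensor) -> torch.Tensor:
        # obs: (n_agents, obs_dim); last_act: (n_agents, act_dim)
        msgs = torch.tanh(self.comm(torch.cat([obs, last_act], -1)))
        shared = msgs.mean(0, keepdim=True).expand_as(msgs)  # global aggregate
        return self.actor(torch.cat([obs, shared], -1))      # per-agent logits
```

A centralized critic, as is usual in multi-agent actor-critic training, would score the joint caching actions against the tiered fetch cost above.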
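For the popularity estimator, a minimal VRNN-style sketch in PyTorch is given below, following the standard variational RNN recipe (per-step Gaussian prior, encoder, and decoder around a GRU recurrence); the paper's exact architecture and hyper-parameters are not given in the abstract, so every dimension here is an assumption. The input x_t is the observed request-frequency vector over the content library at one BS in time slot t, and softmax over the decoder logits approximates the time-variant popularity distribution.

```python
import torch
import torch.nn as nn

class PopularityVRNN(nn.Module):
    """Sketch of a VRNN popularity estimator (assumed sizes)."""
    def __init__(self, n_contents: int, h_dim: int = 64, z_dim: int = 16):
        super().__init__()
        self.enc = nn.Linear(n_contents + h_dim, 2 * z_dim)  # q(z_t | x_t, h_{t-1})
        self.prior = nn.Linear(h_dim, 2 * z_dim)              # p(z_t | h_{t-1})
        self.dec = nn.Linear(z_dim + h_dim, n_contents)       # popularity logits
        self.rnn = nn.GRUCell(n_contents + z_dim, h_dim)      # h_t recurrence

    def forward(self, x: torch.Tensor):
        # x: (T, batch, n_contents) per-slot request frequencies
        h = x.new_zeros(x.size(1), self.rnn.hidden_size)
        logits, kl = [], 0.0
        for x_t in x:
            mu_q, logvar_q = self.enc(torch.cat([x_t, h], -1)).chunk(2, -1)
            mu_p, logvar_p = self.prior(h).chunk(2, -1)
            z = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()  # reparameterize
            logits.append(self.dec(torch.cat([z, h], -1)))
            # KL divergence between the two diagonal Gaussians q and p
            kl = kl + 0.5 * (logvar_p - logvar_q - 1
                             + (logvar_q.exp() + (mu_q - mu_p) ** 2)
                             / logvar_p.exp()).sum(-1).mean()
            h = self.rnn(torch.cat([x_t, z], -1), h)
        return torch.stack(logits), kl  # softmax(logits) ~ popularity distribution
```

Training such a model would minimize the reconstruction loss of the observed requests plus the KL term, i.e., the usual VRNN evidence lower bound; the estimated distribution then informs each BS's caching decisions.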