To meet the wide range of 5G use cases in a cost-efficient way, network slicing has been advocated as a key enabler. Unlike core network slicing in a virtualized environment, radio access network (RAN) slicing is still in its infancy, and realizing it is challenging. In this paper, we investigate a realization approach for fog RAN slicing, in which two network slice instances, one for hotspot scenarios and one for vehicle-to-infrastructure scenarios, are considered and orchestrated. In particular, the RAN slicing framework is formulated as an optimization problem that jointly tackles content caching and mode selection, capturing both the time-varying channel and the unknown content popularity distribution. Owing to heterogeneous user demands and limited resources, the complexity of the original optimization problem is significantly high, which makes traditional optimization approaches difficult to apply directly. To address this difficulty, a deep reinforcement learning algorithm is proposed, whose core idea is that the cloud server makes appropriate content caching and mode selection decisions to maximize the reward under the dynamic channel state and cache status. Simulation results demonstrate that the proposed scheme significantly improves performance in terms of both cache hit ratio and sum transmit rate.
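The abstract does not give the algorithm's details, but the joint caching-and-mode-selection decision loop it describes can be illustrated with a toy reinforcement learning sketch. The sketch below uses tabular Q-learning as a simple stand-in for the paper's deep RL agent; the state, action, reward, and popularity definitions are all illustrative assumptions, not the paper's model.

```python
import random

# Toy sketch (not the paper's algorithm): a tabular Q-learning agent that
# jointly picks which content to cache and which transmission mode to use.
# State = currently cached content; all numbers below are assumptions.

N_CONTENTS = 4        # content library size (assumed)
MODES = (0, 1)        # 0 = fog-node transmission, 1 = direct cloud (assumed)
ACTIONS = [(c, m) for c in range(N_CONTENTS) for m in MODES]

def step(action, popularity, rng):
    """Cache action[0], serve one random request, return (reward, new_state)."""
    cache_choice, mode = action
    request = rng.choices(range(N_CONTENTS), weights=popularity)[0]
    hit = 1.0 if request == cache_choice else 0.0
    rate = 1.0 if mode == 0 else 0.5   # fog mode assumed to yield a higher rate
    # Reward blends hit ratio and transmit rate, echoing the paper's metrics.
    return hit + rate * hit, cache_choice

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    popularity = [0.5, 0.3, 0.15, 0.05]   # unknown in the paper; assumed here
    q = {(s, a): 0.0 for s in range(N_CONTENTS) for a in ACTIONS}
    cached = 0
    for _ in range(episodes):
        # Epsilon-greedy choice over joint (content, mode) actions.
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: q[(cached, x)])
        r, nxt = step(a, popularity, rng)
        best_next = max(q[(nxt, x)] for x in ACTIONS)
        q[(cached, a)] += alpha * (r + gamma * best_next - q[(cached, a)])
        cached = nxt
    return q
```

Under these assumptions the agent learns to cache the most popular content and prefer the fog transmission mode, since that pair maximizes the combined hit-plus-rate reward; the paper replaces the Q-table with a deep network to cope with the much larger real state space.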