To support the diversified Quality of Service (QoS) requirements of different application scenarios, network slicing has been introduced into mobile cellular networks. It allows operators to flexibly create multiple logically isolated networks on common network infrastructure according to specified demands. Meanwhile, in the Internet of Vehicles (IoV), providing stable QoS to vehicles is very difficult, especially in highly dynamic vehicular environments. In this article, we therefore investigate the IoV slicing problem and propose a QoS-guaranteed network slicing orchestration, namely the long short-term memory-based deep deterministic policy gradient algorithm (LSTM-DDPG), to ensure stable performance for the slices. Specifically, we first decouple the resource allocation problem into two subproblems. Deep learning and reinforcement learning (RL) are then combined to solve them collaboratively: an LSTM network tracks long-term changes in the vehicular environment, while the RL algorithm DDPG performs online resource tuning. Extensive simulations demonstrate the effectiveness of LSTM-DDPG, which offers stable QoS to vehicles with a probability greater than 92%. We also show that the proposed orchestration adapts to different slicing environments and consistently outperforms the compared algorithms.
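The two-timescale structure described in the abstract (a long-term demand predictor feeding a short-term online resource tuner) can be illustrated with a toy sketch. This is not the paper's algorithm: an exponential moving average stands in for the LSTM predictor, and a simple proportional allocator stands in for the DDPG actor; all class names, slice counts, and traffic values are hypothetical.

```python
import numpy as np

class SlicePredictor:
    """Long-timescale demand tracker. An exponential moving average
    stands in here for the paper's LSTM (hypothetical simplification)."""
    def __init__(self, n_slices, alpha=0.1):
        self.estimate = np.zeros(n_slices)
        self.alpha = alpha

    def update(self, observed_demand):
        # Smooth noisy per-slice demand to expose the long-term trend.
        self.estimate = (1 - self.alpha) * self.estimate + self.alpha * observed_demand
        return self.estimate

class ResourceTuner:
    """Short-timescale online tuner. A proportional allocator stands
    in for the DDPG actor (hypothetical simplification)."""
    def __init__(self, total_bandwidth):
        self.total = total_bandwidth

    def allocate(self, predicted_demand):
        # Split the shared bandwidth in proportion to predicted demand,
        # so each logically isolated slice keeps a stable QoS margin.
        weights = predicted_demand / predicted_demand.sum()
        return self.total * weights

# Two-slice IoV example: one slice for safety messages, one for infotainment.
rng = np.random.default_rng(0)
predictor = SlicePredictor(n_slices=2)
tuner = ResourceTuner(total_bandwidth=100.0)
allocation = None
for t in range(200):
    demand = np.array([30.0, 70.0]) + rng.normal(0, 5, size=2)  # noisy traffic
    allocation = tuner.allocate(predictor.update(demand))
print(allocation.round(1))
```

The point of the decoupling is visible even in this toy: the predictor absorbs short-term traffic noise, so the tuner reallocates smoothly instead of chasing every fluctuation.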