For vehicle-to-network communications, handover (HO) management enables vehicles to maintain their connection to the network while transiting through the coverage areas of different base stations (BSs). However, the high mobility of vehicles means shorter connection periods with each BS, which leads to frequent HOs and raises the necessity of optimal HO decision making for high-quality infotainment services. Machine learning is capable of capturing underlying patterns via data-driven methods to find optimal solutions to complex problems, and much learning-based HO optimization research has been conducted focusing on specific network setups. However, attention still needs to be paid to the actual deployment aspect and to standardized datasets or simulation environments for evaluation. This paper proposes a deep reinforcement learning-based HO algorithm whose input parameters are configurable in the existing measurement report of cellular networks. The performance of the proposed algorithm is evaluated using the well-known network simulator ns-3 with its official LTE module. A realistic network setup in the city center of Glasgow (U.K.) is configured, with vehicle trajectories generated by the routes mobility model using the Google Maps Directions API. Evaluation results reveal that the proposed algorithm significantly outperforms the A3 RSRP baseline, with an average packet loss reduction of 25.72% per HO, suggesting a significant improvement in the quality of service of applications such as voice calls and video streaming. The proposed algorithm also has a small implementation cost compared to some state-of-the-art approaches and can be deployed via a software update to a local BS controller.
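To make the decision problem concrete, the sketch below illustrates, in highly simplified form, how an RL agent can learn an HO policy from a measurement-report quantity (the target-minus-serving RSRP difference in dB). This is not the paper's algorithm: the paper uses deep reinforcement learning evaluated in ns-3, whereas this toy uses tabular learning on a contrived reward, and the 3 dB margin, state quantization, and reward shape are all illustrative assumptions.

```python
import random

# Illustrative sketch only (NOT the paper's method): a tabular agent learns
# when to hand over, given the quantized RSRP difference
#   state = RSRP(target) - RSRP(serving), in dB, clamped to [-10, 10].
ACTIONS = (0, 1)                 # 0 = stay on serving cell, 1 = hand over
STATES = range(-10, 11)          # quantized RSRP difference in dB

def reward(state, action):
    # Toy reward: handing over pays off only when the target cell is
    # clearly stronger (assumed 3 dB margin, loosely mimicking an A3 offset).
    if action == 1:
        return 1.0 if state > 3 else -1.0
    return 1.0 if state <= 3 else -1.0

def train(episodes=5000, alpha=0.1, seed=0):
    """One-step (contextual-bandit) Q-learning over the toy reward."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(list(STATES))
        a = rng.choice(ACTIONS)                       # pure exploration
        q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])
    return q

def decide(q, rsrp_diff_db):
    """Greedy HO decision for an observed RSRP difference."""
    s = max(-10, min(10, round(rsrp_diff_db)))
    return max(ACTIONS, key=lambda a: q[(s, a)])

if __name__ == "__main__":
    q = train()
    print(decide(q, 8))    # strong target cell -> hand over (1)
    print(decide(q, -5))   # weak target cell   -> stay (0)
```

In the paper's setting the state would instead be the full set of configurable measurement-report inputs, the policy a deep network, and the reward derived from packet loss observed in the ns-3 LTE simulation.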