
Computation Offloading and Resource Allocation in MEC-Enabled Integrated Aerial-Terrestrial Vehicular Networks: A Reinforcement Learning Approach

As important services of the future sixth-generation (6G) wireless networks, vehicular communication and mobile edge computing (MEC) have received considerable interest in recent years for their significant potential applications in intelligent transportation systems. However, MEC-enabled vehicular networks depend heavily on network access and communication infrastructure, which are often unavailable in remote areas, leaving computation offloading prone to failure. To address this issue, we propose an MEC-enabled vehicular network assisted through aerial-terrestrial connectivity to provide network access and high-data-rate entertainment services to a vehicular network. We present a time-varying, dynamic system model in which high-altitude platforms (HAPs) equipped with MEC servers, connected to a backhaul system of low-earth-orbit (LEO) satellites, provide computation offloading capability to the vehicles as well as network access for vehicle-to-vehicle (V2V) communications. Our main objective is to minimize the total computation and communication overhead of the joint computation offloading and resource allocation strategies for the system of vehicles. Since our formulated optimization problem is a mixed-integer non-linear programming (MINLP) problem, which is NP-hard, we propose a decentralized value-iteration-based reinforcement learning (RL) approach as a solution. In our Q-learning-assisted analysis, each vehicle acts as an intelligent agent that learns optimal strategies for offloading and resource allocation. We further extend our solution to deep Q-learning (DQL) and double deep Q-learning to overcome the curse of dimensionality and the over-estimation of value functions that affect Q-learning. Simulation results demonstrate the effectiveness of our solution in reducing system costs compared to baseline schemes.
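To illustrate the decentralized tabular Q-learning idea the abstract describes (each vehicle as an agent choosing whether to offload), here is a minimal sketch. The state space, overhead function, channel model, and all parameter values are hypothetical stand-ins for illustration only, not the paper's actual system model; the agent simply learns a cost-minimizing offload-or-local policy over toy channel-quality states.

```python
import random

# Illustrative sketch only: one vehicle-agent with a tabular Q-table,
# choosing local execution (action 0) vs. offloading to a HAP-mounted
# MEC server (action 1). All costs and states below are hypothetical.

LOCAL, OFFLOAD = 0, 1
ACTIONS = [LOCAL, OFFLOAD]
STATES = [0, 1, 2]  # toy channel-quality levels: poor / fair / good


def overhead(state, action):
    """Toy computation-plus-communication cost: local cost is fixed,
    while offloading cost falls as channel quality improves."""
    if action == LOCAL:
        return 5.0
    return 8.0 - 3.0 * state  # 8, 5, 2 for poor / fair / good channels


def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice(STATES)
        # epsilon-greedy selection; we MINIMIZE cost rather than
        # maximize reward, so greedy = argmin over Q-values
        if rng.random() < eps:
            a = rng.choice(ACTIONS)
        else:
            a = min(ACTIONS, key=lambda act: q[(s, act)])
        cost = overhead(s, a)
        s_next = rng.choice(STATES)  # toy i.i.d. channel transitions
        best_next = min(q[(s_next, act)] for act in ACTIONS)
        # standard Q-learning update, written for cost minimization
        q[(s, a)] += alpha * (cost + gamma * best_next - q[(s, a)])
    return q


q = train()
policy = {s: min(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

Under these toy costs the learned policy keeps computation local on a poor channel and offloads on a good one; the paper's DQL and double-DQL extensions replace the Q-table with neural approximators to handle large state spaces and value over-estimation.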

Keywords: computation; network; offloading; resource; resource allocation; computation offloading; MEC-enabled

Journal Title: IEEE Transactions on Intelligent Transportation Systems
Year Published: 2022

