Mobile Edge Computing (MEC) has been widely employed to support various Internet of Things (IoT) and mobile applications. By leveraging the easy deployment and flexibility of Unmanned Aerial Vehicles (UAVs), one primary use of MEC is to employ UAVs equipped with MEC servers to provide computation support for tasks offloaded by mobile users in temporary hotspot areas or emergency scenarios, such as sports venues or areas devastated by natural disasters. Despite the numerous advantages of a UAV carrying a MEC server, it is constrained by limited computational resources and sensitive energy consumption. Moreover, owing to the complexity of a UAV-assisted MEC system, its computational resources and energy consumption cannot be optimized well by traditional optimization methods. Furthermore, the computational cost of optimizing the MEC system often grows exponentially with the number of MEC servers and mobile users. It is therefore considerably challenging to control the UAV positions and schedule the task offloading ratio. In this paper, we propose a novel Deep Reinforcement Learning (DRL) method to jointly optimize UAV trajectory control and users' offloaded-task-ratio scheduling, thereby improving the performance of the UAV-assisted MEC system. We maximize system stability while minimizing the energy consumption and computation latency of the UAV-assisted MEC system. Simulation results show that the proposed method outperforms existing work and scales better.
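To make the problem setup concrete, the following is a minimal sketch of the kind of Markov decision process such a DRL formulation operates over: the state holds the UAV position and per-user task backlogs, the action combines a UAV movement step with an offload ratio, and the reward penalizes propulsion energy, computation energy, and queueing latency. The environment class, its dynamics, and every constant here are illustrative assumptions, not the paper's actual system model.

```python
import random


class UavMecEnv:
    """Toy UAV-assisted MEC environment (illustrative assumptions only)."""

    def __init__(self, num_users=3, seed=0):
        self.rng = random.Random(seed)
        self.num_users = num_users
        self.reset()

    def reset(self):
        self.uav_pos = (0.0, 0.0)
        # Each user's pending task backlog (normalized units, assumed).
        self.queues = [self.rng.uniform(0.1, 1.0) for _ in range(self.num_users)]
        return self._state()

    def _state(self):
        return (*self.uav_pos, *self.queues)

    def step(self, action):
        dx, dy, offload_ratio = action  # offload_ratio in [0, 1]
        x, y = self.uav_pos
        self.uav_pos = (x + dx, y + dy)
        fly_energy = 0.5 * (dx * dx + dy * dy)       # propulsion cost (assumed quadratic)
        compute_energy = 0.1 * offload_ratio         # MEC server computation cost (assumed)
        latency = sum(self.queues) / self.num_users  # proxy for average task delay
        served = 0.3 * offload_ratio                 # backlog drained by the MEC server (assumed)
        # Drain served work, then new tasks arrive randomly.
        self.queues = [max(0.0, q - served) + self.rng.uniform(0.0, 0.2)
                       for q in self.queues]
        # Reward = negative weighted cost; a DRL agent learns actions maximizing it.
        reward = -(fly_energy + compute_energy + latency)
        return self._state(), reward


env = UavMecEnv()
state = env.reset()
policy_rng = random.Random(1)
total = 0.0
for _ in range(10):  # random policy baseline; a trained agent would choose actions instead
    action = (policy_rng.uniform(-1, 1),
              policy_rng.uniform(-1, 1),
              policy_rng.uniform(0, 1))
    state, reward = env.step(action)
    total += reward
print(total)  # cumulative cost is non-positive by construction
```

A trained DRL agent would replace the random action choice with a learned policy network, which is where the joint trajectory/offload-ratio optimization described in the abstract takes place.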