Energy Minimization for Cellular-Connected UAV: From Optimization to Deep Reinforcement Learning

Cellular-connected unmanned aerial vehicles (UAVs) are expected to become integral components of future cellular networks. To this end, one important problem to address is how to support energy-efficient UAV operation while maintaining reliable connectivity between these aerial users and cellular networks. In this paper, we aim to minimize the energy consumption of a cellular-connected UAV by jointly designing the mission completion time, the UAV trajectory, and the communication base station (BS) associations, while ensuring satisfactory communication connectivity with the ground cellular network throughout the UAV's flight. An optimization problem is formulated that accounts for the UAV's flight energy consumption and various practical aspects of the air-ground communication model, including the BS antenna pattern, interference from non-associated BSs, and the local environment. The formulated problem is difficult to tackle due to the lack of closed-form expressions and its non-convex nature. To this end, we first assume that a channel knowledge map (CKM), or radio map, is available for the considered area, which contains rich information about the relatively stable (large-scale) channel parameters. By applying a path discretization technique, we obtain a discretized equivalent problem and develop an efficient graph-based solution that combines convex optimization with a dynamic-weight shortest-path algorithm over the graph. Next, we study the more practical case in which the CKM is initially unavailable. By transforming the optimization problem into a Markov decision process (MDP), we develop a deep reinforcement learning (DRL) algorithm based on multi-step learning and double Q-learning over a dueling deep Q-network (DQN) architecture, where the UAV acts as an agent that explores and learns its movement policy from local observations of measured signal samples.
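The DRL design described above combines three standard ingredients: a dueling architecture, double Q-learning, and multi-step returns. As a rough, framework-free sketch of the two core computations (function names and toy values are illustrative, not taken from the paper), the dueling aggregation and the n-step double-Q bootstrap target can be written as:

```python
def dueling_q(value, advantages):
    """Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a' A(s, a').

    Subtracting the mean advantage makes the value/advantage split identifiable.
    """
    mean_adv = sum(advantages) / len(advantages)
    return [value + adv - mean_adv for adv in advantages]


def multi_step_double_q_target(rewards, gamma, next_q_online, next_q_target):
    """n-step double-Q target:
    sum_{k<n} gamma^k * r_k + gamma^n * Q_target(s_{t+n}, argmax_a Q_online(s_{t+n}, a)).

    The greedy action is *selected* by the online network but *evaluated* by the
    target network, which mitigates the overestimation bias of vanilla Q-learning.
    """
    n = len(rewards)
    n_step_return = sum((gamma ** k) * r for k, r in enumerate(rewards))
    a_star = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    return n_step_return + (gamma ** n) * next_q_target[a_star]
```

In the actual algorithm these quantities would be computed from the outputs of the online and target DQNs over the UAV's action set (e.g., candidate flight directions), with the reward reflecting both energy consumption and connectivity.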
Extensive simulations show that the proposed designs significantly outperform baseline schemes. Furthermore, the results reveal new insights into energy-efficient UAV flight under connectivity requirements and unveil the tradeoff between UAV energy consumption and time duration along line segments.

Keywords: reinforcement learning; deep reinforcement learning; energy; cellular-connected UAV; optimization

Journal Title: IEEE Transactions on Wireless Communications
Year Published: 2022

