Unmanned aerial vehicles (UAVs) can be employed as aerial base stations to support communication for ground users (GUs). However, because of the high flying altitude, the aerial-to-ground (A2G) channel is dominated by the line-of-sight (LoS) component, which makes it easy for ground eavesdroppers (GEs) to wiretap. In this case, a single UAV has limited maneuvering capability to achieve the desired secrecy rate in the presence of multiple eavesdroppers. In this paper, we propose a cooperative jamming approach in which UAV jammers help the UAV transmitter defend against the GEs. Specifically, the UAV transmitter sends confidential information to the GUs, while the UAV jammers direct artificial-noise signals at the GEs via 3D beamforming. We propose a multi-agent deep reinforcement learning (MADRL) approach, namely multi-agent deep deterministic policy gradient (MADDPG), to maximize the secrecy capacity by jointly optimizing the UAV trajectories, the transmit power of the UAV transmitter, and the jamming power of the UAV jammers. The MADDPG algorithm adopts centralized training with distributed execution. Simulation results show that the MADRL method can realize joint trajectory design of the UAVs and achieve good performance. To improve learning efficiency and convergence, we further propose a continuous action attention MADDPG (CAA-MADDPG) method, in which each agent learns to pay attention to the actions and observations of the other agents that are most relevant to it. Simulation results show that the reward performance of CAA-MADDPG is better than that of MADDPG without attention.
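The abstract only names the ingredients of CAA-MADDPG (a centralized critic per agent, plus attention over the other agents' observations and actions), so the following is a minimal sketch of how such a critic could look. It assumes PyTorch and standard scaled dot-product attention; the class name AttentionCritic, the layer sizes, and the hidden dimension are illustrative assumptions and are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionCritic(nn.Module):
    """Illustrative centralized critic for one UAV agent.

    The critic scores the joint state-action value Q(o_1..o_N, a_1..a_N)
    while attending over the other agents' (observation, action) pairs,
    mirroring the continuous-action attention idea described in the
    abstract. Dimensions and the dot-product attention form are assumed.
    """

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 128):
        super().__init__()
        self.embed_self = nn.Linear(obs_dim + act_dim, hidden)   # this agent's (o, a)
        self.embed_other = nn.Linear(obs_dim + act_dim, hidden)  # other agents' (o, a)
        self.q_head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, own_oa: torch.Tensor, other_oa: torch.Tensor) -> torch.Tensor:
        # own_oa:   (batch, obs_dim + act_dim) for this agent
        # other_oa: (batch, n_agents - 1, obs_dim + act_dim) for the others
        query = self.embed_self(own_oa)                              # (batch, hidden)
        keys = self.embed_other(other_oa)                            # (batch, n-1, hidden)
        scores = torch.bmm(keys, query.unsqueeze(-1)).squeeze(-1)    # (batch, n-1)
        weights = F.softmax(scores / keys.size(-1) ** 0.5, dim=-1)   # attention weights
        context = (weights.unsqueeze(-1) * keys).sum(dim=1)          # weighted summary of others
        return self.q_head(torch.cat([query, context], dim=-1))      # estimated Q-value
```

In a centralized-training, distributed-execution setup like the one the abstract describes, a critic of this kind would be used only during training; at execution time each UAV's actor would map its own local observation to a continuous action (e.g., a trajectory step and a transmit or jamming power level) without access to the other agents' observations.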