
Distributed Beamforming Techniques for Cell-Free Wireless Networks Using Deep Reinforcement Learning



In a cell-free network, a large number of mobile devices are served simultaneously by several base stations (BSs)/access points (APs) using the same time/frequency resources. However, this creates high signal-processing demands (e.g., for beamforming) at the transmitters and receivers. In this work, we develop centralized and distributed deep reinforcement learning (DRL)-based methods to optimize beamforming in the uplink of a cell-free network. First, we propose a fully centralized uplink beamforming method (i.e., centralized learning) that uses the Deep Deterministic Policy Gradient (DDPG) algorithm to train a DRL model offline. We then enhance this method, in terms of both convergence and performance, by using distributed experiences collected from different APs via the Distributed Distributional Deterministic Policy Gradients (D4PG) algorithm, in which the APs act as the distributed agents of the DRL model. To reduce the signal-processing complexity at the central processing unit (CPU), we also propose a fully distributed DRL-based uplink beamforming scheme that divides the beamforming computations among the distributed APs. The proposed schemes are benchmarked against two common linear beamforming schemes, namely minimum mean square error (MMSE) combining and the simplified conjugate symmetric scheme. The results show that the D4PG scheme with distributed experience achieves the best performance irrespective of the network size. Furthermore, although the proposed fully distributed beamforming technique reduces the complexity of the centralized learning in the DDPG algorithm, it outperforms the DDPG algorithm only for small-scale networks; the performance advantage of the fully centralized DDPG model becomes more evident as the number of APs and/or user equipments (UEs) increases. The code for all of our DRL implementations is available at https://github.com/RayRedd/Distributed_beamforming_rl.
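To make the MMSE baseline mentioned in the abstract concrete, here is a minimal NumPy sketch of uplink MMSE receive combining for a toy setup. The antenna/UE counts, noise variance, and i.i.d. Rayleigh channel model are illustrative assumptions only and are not taken from the paper; the paper's actual simulation parameters and DRL code live in the linked repository.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy uplink: M total receive antennas (across APs), K single-antenna UEs.
M, K = 8, 4
noise_var = 0.1  # receiver noise variance (assumed)

# i.i.d. Rayleigh channel matrix H (M x K); column k is UE k's channel.
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)

def mmse_combiner(H, noise_var):
    """MMSE receive beamformers, one column per UE:
    W = (H H^H + sigma^2 I)^{-1} H."""
    M = H.shape[0]
    return np.linalg.solve(H @ H.conj().T + noise_var * np.eye(M), H)

def sinr(W, H, noise_var):
    """Per-UE SINR after linear combining with the columns of W."""
    G = W.conj().T @ H                       # effective K x K channel
    sig = np.abs(np.diag(G)) ** 2            # desired-signal power
    interf = np.sum(np.abs(G) ** 2, axis=1) - sig
    noise = noise_var * np.sum(np.abs(W) ** 2, axis=0)
    return sig / (interf + noise)

W = mmse_combiner(H, noise_var)
rates = np.log2(1 + sinr(W, H, noise_var))   # per-UE spectral efficiency
print(rates.sum())
```

Because the MMSE combiner maximizes each UE's SINR over all linear receive filters, its per-UE SINR is never below that of conjugate (matched-filter) combining, i.e., using `W = H` directly; this is the kind of linear baseline the learned DDPG/D4PG schemes are compared against.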

Keywords: reinforcement learning; cell free; deep reinforcement; ddpg; distributed beamforming

Journal Title: IEEE Transactions on Cognitive Communications and Networking
Year Published: 2022


