
Power Allocation in Ultra-Dense Networks Through Deep Deterministic Policy Gradient



Existing reinforcement learning-based downlink power allocation (PA) schemes mostly treat the power optimization space as a discrete value space; however, their results deviate from the optimum in ultra-dense networks, and the deviation grows as the network size increases. This letter proposes a PA model based on deep deterministic policy gradient (DDPG), in which policy-based power selection, assisted by value-based evaluation, explores the optimum over a continuous power space. Specifically, the model uses two CNNs, named actors, to formulate a continuous deterministic PA strategy function instead of sampling from a discrete power distribution, and designs two further CNNs, named critics, for PA strategy evaluation and for supervising the actor CNNs' updates. Additionally, to reduce interference, a tunable serving base station set is designed for each user and is taken into account during model training. Experiments demonstrate that the proposed DDPG-based PA model achieves 116.2% and 95.9% of the iterative algorithm's sum-rate in small-scale and large-scale networks, respectively.
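To illustrate the quantization deviation the letter targets, the sketch below evaluates the downlink sum-rate of a toy 3-cell interference network over a coarse discrete power codebook (the analogue of a discrete-action RL scheme) and over a much finer grid approximating the continuous power space a DDPG actor can output. The channel-gain matrix, noise level, and grid sizes are hypothetical illustration values, not the letter's CNN-based model or experimental setup.

```python
import itertools

import numpy as np

# Toy 3-cell downlink: G[i, j] is the channel gain from BS j to user i.
# Hypothetical values chosen for illustration only.
G = np.array([[1.00, 0.12, 0.08],
              [0.10, 0.90, 0.15],
              [0.07, 0.11, 1.10]])
P_MAX, NOISE = 1.0, 0.05  # per-BS power budget and noise power (assumed)

def sum_rate(p):
    """Shannon sum-rate (bit/s/Hz) for a per-BS transmit power vector p."""
    signal = np.diag(G) * p              # desired-link received powers
    interference = G @ p - signal        # cross-cell interference at each user
    return float(np.sum(np.log2(1.0 + signal / (interference + NOISE))))

def best_on_grid(levels):
    """Exhaustive search for the best power tuple from a discrete codebook."""
    return max(sum_rate(np.array(c)) for c in itertools.product(levels, repeat=3))

# 4 discrete levels per BS (discrete-action analogue) vs. a 64-level grid
# that approximates the continuous power space explored by a DDPG actor.
coarse = best_on_grid(np.linspace(0.0, P_MAX, 4))
fine = best_on_grid(np.linspace(0.0, P_MAX, 64))

print(f"best sum-rate, 4-level codebook : {coarse:.4f}")
print(f"best sum-rate, 64-level grid    : {fine:.4f}")
```

The fine grid never does worse than the coarse codebook, and any gap between the two is exactly the quantization loss a continuous-action policy avoids; in larger networks the joint codebook grows exponentially, which is consistent with the letter's observation that the deviation grows with network size.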

Keywords: deep deterministic; power; power allocation; policy; ultra dense; dense networks

Journal Title: IEEE Wireless Communications Letters
Year Published: 2022


