Deep Multiagent Reinforcement-Learning-Based Resource Allocation for Internet of Controllable Things

Ultrareliable and low-latency communication (URLLC) is a prerequisite for the successful implementation of the Internet of Controllable Things. In this article, we investigate the potential of deep reinforcement learning (DRL) for joint subcarrier-power allocation to achieve low latency and high reliability in a general form of device-to-device (D2D) networks, where each subcarrier can be allocated to multiple D2D pairs and each D2D pair is permitted to utilize multiple subcarriers. We first formulate the above problem as a Markov decision process and then propose a double deep $Q$-network (DQN)-based resource allocation algorithm to learn the optimal policy in the absence of full instantaneous channel state information (CSI). Specifically, each D2D pair acts as a learning agent that adjusts its own subcarrier-power allocation strategy iteratively through interactions with the operating environment in a trial-and-error fashion. Simulation results demonstrate that the proposed algorithm achieves near-optimal performance in real time. It is worth mentioning that the proposed algorithm is especially suitable for cases where the environmental dynamics are not accurate and the CSI delay cannot be ignored.
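To make the double-DQN mechanism concrete, the following is a minimal sketch in PyTorch of the update each learning agent (a D2D pair) would perform on a batch of replayed transitions. The state dimension, the discretised subcarrier-power action set, the reward, and the network sizes are all hypothetical placeholders, not the paper's exact design; the sketch only illustrates the double-DQN decoupling in which the online network selects the next action and the target network evaluates it.

import torch
import torch.nn as nn

N_SUBCARRIERS, N_POWER_LEVELS = 4, 3          # assumed discretisation of the action space
N_ACTIONS = N_SUBCARRIERS * N_POWER_LEVELS    # one (subcarrier, power) choice per step
STATE_DIM = 16                                # assumed per-agent observation size (e.g. delayed CSI)
GAMMA = 0.95                                  # assumed discount factor

def make_q_net():
    # Small fully connected Q-network mapping an agent's observation to action values.
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, N_ACTIONS))

online_net, target_net = make_q_net(), make_q_net()
target_net.load_state_dict(online_net.state_dict())
optimizer = torch.optim.Adam(online_net.parameters(), lr=1e-3)

def double_dqn_update(batch):
    """One gradient step on a batch of (s, a, r, s_next, done) transitions."""
    s, a, r, s_next, done = batch
    # Online network picks the greedy next action ...
    next_a = online_net(s_next).argmax(dim=1, keepdim=True)
    # ... while the target network evaluates it (the double-DQN decoupling).
    next_q = target_net(s_next).gather(1, next_a).squeeze(1).detach()
    target = r + GAMMA * (1.0 - done) * next_q
    q = online_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random tensors standing in for replayed D2D-pair transitions.
B = 32
batch = (torch.randn(B, STATE_DIM), torch.randint(0, N_ACTIONS, (B,)),
         torch.rand(B), torch.randn(B, STATE_DIM), torch.zeros(B))
print(double_dqn_update(batch))

In a multiagent setting of the kind the abstract describes, each D2D pair would run such an update on its own networks, with the shared wireless environment coupling the agents through their rewards.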

Keywords: resource allocation; reinforcement learning; Internet of Controllable Things

Journal Title: IEEE Internet of Things Journal
Year Published: 2021
