
Reinforcement Learning-Based Resource Management Model for Fog Radio Access Network Architectures in 5G

The need to cope with the continuously growing number of connected users and the increased demand for mobile broadband services in the Internet of Things has led to the notion of introducing the fog computing paradigm into fifth generation (5G) mobile networks in the form of the fog radio access network (F-RAN). The F-RAN approach emphasises bringing computation capability to the edge of the network so as to relieve network bottlenecks and reduce latency. Despite this potential, however, the management of computational resources remains a challenge in F-RAN architectures. This paper therefore aims to overcome the shortcomings of conventional approaches to computational resource allocation in F-RANs. Reinforcement learning (RL) is presented as a method for dynamic and autonomous resource allocation, and an algorithm based on Q-learning is proposed. RL offers several benefits in resource allocation problems, and the simulations carried out show that it outperforms reactive methods. Furthermore, the results show that the proposed algorithm reduces latency and thus has the potential for major impact in 5G applications, particularly the Internet of Things.
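The abstract names Q-learning as the basis of the proposed allocator but does not give its state, action, or reward design. As a rough illustration only, the sketch below shows what tabular Q-learning looks like for a toy version of the problem: tasks arrive and must be assigned to one of several fog nodes, with reward penalising the chosen node's load as a proxy for queueing latency. The environment, state encoding, and all constants here are hypothetical, not taken from the paper.

```python
import random
from collections import defaultdict

# Hypothetical toy setting: NUM_NODES fog nodes, each with a discretised
# load level 0 (idle) .. LOAD_LEVELS-1 (saturated). These values are
# illustrative assumptions, not the paper's actual model.
NUM_NODES = 3
LOAD_LEVELS = 4

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Assign a task to fog node `action`; the reward penalises that node's
    current load (a proxy for latency), then loads drift randomly."""
    reward = -state[action]                      # busier node -> worse latency
    loads = list(state)
    loads[action] = min(loads[action] + 1, LOAD_LEVELS - 1)
    drift = random.randrange(NUM_NODES)          # one node finishes some work
    loads[drift] = max(loads[drift] - 1, 0)
    return tuple(loads), reward

def train(episodes=2000, horizon=50, seed=0):
    """Standard tabular Q-learning with an epsilon-greedy policy."""
    random.seed(seed)
    Q = defaultdict(float)                       # Q[(state, action)] -> value
    for _ in range(episodes):
        state = tuple(random.randrange(LOAD_LEVELS) for _ in range(NUM_NODES))
        for _ in range(horizon):
            if random.random() < EPSILON:        # explore
                action = random.randrange(NUM_NODES)
            else:                                # exploit current estimates
                action = max(range(NUM_NODES), key=lambda a: Q[(state, a)])
            nxt, reward = step(state, action)
            best_next = max(Q[(nxt, a)] for a in range(NUM_NODES))
            # Q-learning update: Q <- Q + alpha * (r + gamma * max_a' Q' - Q)
            Q[(state, action)] += ALPHA * (
                reward + GAMMA * best_next - Q[(state, action)]
            )
            state = nxt
    return Q

if __name__ == "__main__":
    Q = train()
    # After training, the greedy policy should tend to route new tasks
    # toward the least-loaded node.
    state = (0, 3, 3)
    print(max(range(NUM_NODES), key=lambda a: Q[(state, a)]))
```

The "reactive methods" the abstract compares against would correspond here to assigning each task by current load alone; the learned Q-table additionally accounts for how an assignment shifts future load, which is where the latency gains the paper reports would come from in a setting like this.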

Keywords: radio access; access network; access; resource; fog radio; network

Journal Title: IEEE Access
Year Published: 2021
