
Learning-Based Flexible Cross-Layer Optimization for Ultrareliable and Low-Latency Applications in IoT Scenarios


With the continuous popularization and deepening of Internet-of-Things (IoT) technologies, trillions of IoT devices (IoTDs) are connected to the network. The huge growth of wireless communication traffic and the surge in energy consumption make it a great challenge to support the various requirements of IoTDs, such as ultrareliability and low latency. The sixth-generation (6G) network has put forward new goals and visions for green communication, network flexibility, and intelligence, which are expected to address these key challenges. In this article, we propose a cross-layer optimization scheme to achieve a trade-off between energy efficiency (EE) and spectral efficiency (SE) in 6G-enabled IoT networks, where ultrareliable and low-latency applications are considered. Flexible self-organization of three parameters is realized, namely, the transmission time interval (TTI), packet duplication (PD), and resource block (RB) allocation. Flexible TTI scheduling guarantees the reduction of latency, and PD transmission effectively improves reliability. Furthermore, based on a machine learning (ML) method, we propose the transfer asynchronous advantage actor–critic (TA3C) algorithm to realize parameter configuration and resource allocation. The simulation results show that the EE–SE trade-off performance of our proposed flexible scheme improves by at least 39.29% compared with fixed parameter configurations. In addition, the TA3C algorithm has better convergence performance and reduces algorithm complexity by up to 91.23% compared with other ML algorithms.
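The TA3C algorithm builds on the asynchronous advantage actor–critic framework to pick TTI/PD/RB configurations. As a rough illustration of the underlying actor–critic update only (not the paper's TA3C, which adds transfer learning and asynchronous parallel workers), the sketch below learns a softmax policy over a toy set of (TTI, PD) configurations. The configuration space and the reward model are invented placeholders, not the paper's EE/SE objective:

```python
import math
import random

random.seed(0)

# Hypothetical discrete configuration space: (TTI length in slots, packet duplication on/off).
CONFIGS = [(1, False), (1, True), (4, False), (4, True)]

def reward(config):
    """Toy stand-in reward: short TTI favors low latency, duplication favors
    reliability at an energy cost. (1, True) is made the best by construction."""
    base = {(1, False): 0.6, (1, True): 1.0, (4, False): 0.2, (4, True): 0.4}
    return base[config] + random.gauss(0, 0.05)

def softmax(prefs):
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    s = sum(exps)
    return [e / s for e in exps]

def train(steps=3000, alpha_pi=0.1, alpha_v=0.1):
    prefs = [0.0] * len(CONFIGS)   # actor: action preferences
    v = 0.0                        # critic: baseline value estimate
    for _ in range(steps):
        probs = softmax(prefs)
        a = random.choices(range(len(CONFIGS)), weights=probs)[0]
        r = reward(CONFIGS[a])
        adv = r - v                # advantage = reward minus critic baseline
        # Actor update: policy-gradient step on the softmax preferences.
        for i in range(len(CONFIGS)):
            grad = (1.0 if i == a else 0.0) - probs[i]
            prefs[i] += alpha_pi * adv * grad
        # Critic update: move the baseline toward the observed reward.
        v += alpha_v * adv
    return prefs

prefs = train()
best = max(range(len(CONFIGS)), key=lambda i: prefs[i])
print(CONFIGS[best])
```

After training, the policy concentrates on the configuration with the highest expected reward; in the paper's setting the critic would instead score EE/SE trade-off states and multiple asynchronous workers would share gradients.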

Keywords: ultrareliable and low-latency applications; cross-layer optimization

Journal Title: IEEE Internet of Things Journal
Year Published: 2022
