Fifth-generation (5G) and beyond networks are envisioned to support multiple industrial Internet of Things (IIoT) applications with diverse quality-of-service (QoS) requirements. Network slicing is recognized as a flagship technology for serving IIoT networks with heterogeneous services and resource requirements, enabling the transition from network-as-infrastructure to network-as-a-service. Motivated by the growing computational capacity of IIoT devices, and considering the challenges of QoS satisfaction and private data sharing, federated reinforcement learning (RL) has emerged as a promising approach that distributes data acquisition and computation tasks across network agents, exploiting local computation capacity and each agent's self-learning experience. This article proposes a novel deep RL scheme that provides federated, dynamic network management and resource allocation for differentiated QoS services in future IIoT networks. The scheme allocates resources to IIoT slices in terms of transmission power (TP) and spreading factor (SF) according to each slice's QoS requirements. Toward this goal, the proposed deep federated Q-learning (DFQL) proceeds in two main steps. First, we propose a multiagent deep Q-learning process that dynamically adjusts each slice's TP and SF to maximize its own QoS requirements in terms of throughput and delay. Second, deep federated learning aggregates the agents' local models, enabling them to find optimal TP and SF decisions that satisfy the IIoT virtual network slice's QoS reward by exploiting experiences shared among agents. Simulation results show that the proposed DFQL framework outperforms traditional approaches.
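The two-step structure of the abstract (per-agent Q-learning over TP/SF actions, followed by federated aggregation of the agents' models) can be illustrated with a minimal tabular sketch. This is not the paper's actual DFQL implementation: the QoS reward, state discretization, TP/SF grids, and FedAvg-style averaging below are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of the two DFQL steps described in the abstract:
# (1) each slice agent runs Q-learning over joint (TP, SF) actions,
# (2) a federated step averages the agents' models (FedAvg-style).
# All parameter grids and the random-reward environment are assumptions.

TP_LEVELS = [2, 8, 14]        # transmission power levels in dBm (assumed)
SF_LEVELS = [7, 9, 12]        # spreading factors (assumed)
N_ACTIONS = len(TP_LEVELS) * len(SF_LEVELS)
N_STATES = 4                  # coarse QoS states, e.g. load buckets (assumed)

rng = np.random.default_rng(0)

def local_q_update(q, alpha=0.5, gamma=0.9, steps=200):
    """Step 1: one agent's local Q-learning pass in a toy environment.

    A random reward stands in for the throughput/delay QoS reward.
    """
    for _ in range(steps):
        s = rng.integers(N_STATES)
        a = rng.integers(N_ACTIONS)       # pure exploration, for brevity
        r = rng.random()                  # stand-in for the QoS reward
        s_next = rng.integers(N_STATES)
        q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
    return q

def federated_average(q_tables):
    """Step 2: aggregate the agents' local models by averaging."""
    return np.mean(q_tables, axis=0)

# Three slice agents, five federated rounds of train -> average -> broadcast.
agents = [np.zeros((N_STATES, N_ACTIONS)) for _ in range(3)]
for _ in range(5):
    agents = [local_q_update(q) for q in agents]
    global_q = federated_average(agents)
    agents = [global_q.copy() for _ in agents]   # broadcast global model

# Greedy (TP, SF) decision for one state from the shared model.
best_action = int(global_q[0].argmax())
tp = TP_LEVELS[best_action // len(SF_LEVELS)]
sf = SF_LEVELS[best_action % len(SF_LEVELS)]
```

The averaging step is what lets agents benefit from each other's experience without exchanging raw (private) slice data, which is the motivation the abstract gives for the federated design.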