Unplanned breakdown of critical equipment interrupts production throughput in the Industrial IoT (IIoT), and data-driven predictive maintenance (PdM) is becoming increasingly important for companies seeking a competitive business advantage. Manufacturers, however, constantly face the onerous challenge of manually allocating suitably competent manpower in the event of an unexpected machine breakdown, and human error has a rippling negative impact on both overall equipment downtime and production schedules. In this article, we formulate this complex resource management problem as a resource optimization problem to determine whether a model-free deep reinforcement learning (DRL)-based PdM framework can automatically learn an optimal decision policy from a stochastic environment. Unlike existing PdM frameworks, our approach incorporates PdM sensor information and both physical equipment and human resources into the optimization problem. The proposed DRL-based framework and its proximal policy optimization with long short-term memory (PPO-LSTM) model are evaluated against baseline results from human participants using a maintenance repair simulator. Empirical results indicate that PPO-LSTM efficiently learns the optimal decision policy for the resource management problem, outperforming comparable DRL methods and human participants by 53% and 65%, respectively. Overall, the simulation results corroborate the proposed DRL-based PdM framework's superiority in terms of convergence efficiency, simulation performance, and flexibility.
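To make the PPO-LSTM idea concrete, the sketch below shows a minimal recurrent actor-critic with a PPO clipped-surrogate update in PyTorch. It is only an illustration of the general technique named in the abstract, not the authors' implementation: the class and variable names, observation/action dimensions, hyperparameters, and the random stand-in data for the maintenance repair simulator are all assumptions.

```python
# Minimal PPO-LSTM sketch (illustrative; all names, shapes, and
# hyperparameters are assumptions, not taken from the paper).
import torch
import torch.nn as nn

class PPOLSTMPolicy(nn.Module):
    """Recurrent actor-critic: an LSTM encodes the history of PdM sensor
    readings and resource states; separate heads output discrete action
    logits (e.g., which maintenance resource to dispatch) and a value."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.actor = nn.Linear(hidden, n_actions)
        self.critic = nn.Linear(hidden, 1)

    def forward(self, obs_seq, state=None):
        out, state = self.lstm(obs_seq, state)        # (B, T, hidden)
        return self.actor(out), self.critic(out).squeeze(-1), state

def ppo_loss(policy, obs_seq, actions, old_logp, advantages, returns,
             clip_eps=0.2, vf_coef=0.5, ent_coef=0.01):
    """Clipped-surrogate PPO objective evaluated over whole sequences,
    so the LSTM replays the same history seen during data collection."""
    logits, values, _ = policy(obs_seq)
    dist = torch.distributions.Categorical(logits=logits)
    logp = dist.log_prob(actions)
    ratio = torch.exp(logp - old_logp)
    clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    policy_loss = -torch.min(ratio * advantages, clipped).mean()
    value_loss = (returns - values).pow(2).mean()
    return policy_loss + vf_coef * value_loss - ent_coef * dist.entropy().mean()

if __name__ == "__main__":
    # Random tensors stand in for rollouts from the simulator environment.
    B, T, obs_dim, n_actions = 4, 16, 12, 5           # assumed shapes
    policy = PPOLSTMPolicy(obs_dim, n_actions)
    opt = torch.optim.Adam(policy.parameters(), lr=3e-4)
    obs = torch.randn(B, T, obs_dim)
    with torch.no_grad():                             # "old" policy rollout
        logits, _, _ = policy(obs)
        dist = torch.distributions.Categorical(logits=logits)
        actions = dist.sample()
        old_logp = dist.log_prob(actions)
    adv = torch.randn(B, T)                           # placeholder advantages (e.g., GAE)
    ret = torch.randn(B, T)                           # placeholder discounted returns
    loss = ppo_loss(policy, obs, actions, old_logp, adv, ret)
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"PPO-LSTM update loss: {loss.item():.3f}")
```

The recurrent encoder is the design choice of interest here: because an unexpected breakdown unfolds as a sequence of sensor readings and resource states, conditioning the policy on an LSTM hidden state (rather than a single observation) lets it act under the partial observability a stochastic maintenance environment implies.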
               