Neural-network-based soft sensors are widely employed in industrial processes, and such models are of great significance to smart manufacturing. Given the strict requirements of industrial production, it is vital to ensure the safety and robustness of these models in actual deployment. However, recent research has shown that neural networks are quite vulnerable to adversarial attacks: by imposing a tiny perturbation on an original sample, a fabricated adversarial sample can deceive these models into making wrong decisions. This phenomenon may cause serious trouble in the practical application of soft sensors. This article focuses on adversarial attacks on industrial soft sensors. For the first time, we verify and analyze the effectiveness and deficiencies of existing attack methods in the industrial soft-sensor scenario. To address these deficiencies, this article proposes a novel perspective for attacking soft sensors. We analyze the optimization mechanism behind this new idea and then design two algorithms to perform the attacks. The proposed methods conform better to real industrial conditions. Moreover, compared with existing approaches, they have the potential to cause more severe damage, since their attacks are not only better concealed but also more likely to mislead technicians into executing wrong operations. The research and analysis of the proposed methods lay a solid foundation for more thorough defenses against various attacks, which is necessary for making deployed soft sensors more robust and secure.
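
To make the attack mechanism concrete, the sketch below illustrates the classic fast gradient sign method (FGSM), one of the pre-existing attack methods of the kind the abstract alludes to, applied to a toy regression soft sensor. The SoftSensor architecture, the epsilon step size, and the synthetic data are illustrative assumptions for this sketch, not details from the paper; the paper's own two proposed algorithms are not reproduced here.

```python
import torch
import torch.nn as nn

# Hypothetical soft-sensor model: maps process measurements to a quality estimate.
class SoftSensor(nn.Module):
    def __init__(self, n_inputs: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.01) -> torch.Tensor:
    """Craft an adversarial sample with one signed-gradient step that
    increases the regression (MSE) loss, keeping the perturbation tiny."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.mse_loss(model(x_adv), y)
    loss.backward()
    # Step in the direction that worsens the prediction the fastest.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Usage: the perturbed input is nearly identical, yet the output shifts.
model = SoftSensor()
x = torch.randn(1, 8)         # one vector of process measurements (synthetic)
y = model(x).detach()         # stand-in target for the illustration
x_adv = fgsm_attack(model, x, y, epsilon=0.01)
print(model(x).item(), model(x_adv).item())
```

Unlike a classifier, where an attack only needs to flip a discrete label, a soft sensor outputs a continuous estimate that technicians act on, which is why a small, concealed shift in the prediction can translate into wrong operations on the plant floor.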