Although deep neural networks (DNNs) have been increasingly applied in industrial cyber-physical systems (ICPSs), they are vulnerable to security attacks due to the tight interaction between cyber and physical elements. In this article, we aim to protect the core intellectual property of DNNs, i.e., the model weights, against such attacks. Different from conventional approaches, a layerwise protection framework is proposed to ensure the confidentiality of DNN model weights during inference, maximizing the quality of security while satisfying the latency constraint of the DNN task. Exploiting the layerwise execution characteristics of DNN tasks, the encrypted weights of each layer are decrypted and fed to the next DNN layer in plaintext. CPU-field-programmable gate array (FPGA) coscheduling is adopted to accelerate confidentiality protection, where the CPU decrypts the weights and the FPGA performs the layer execution of the DNN. To provide optimal confidentiality protection for each layer, the problem is formulated as a quality-of-security maximization problem subject to the layerwise execution constraint and the deadline constraint of the DNN application. Because the problem is NP-hard, a fast approximation algorithm is proposed to obtain a near-optimal solution under the given real-time and security constraints. Extensive experiments, including a real-life ICPS application, demonstrate the efficiency of the proposed techniques.
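To make the layerwise decrypt-then-execute idea concrete, the following is a minimal Python sketch, not the authors' implementation. It assumes Fernet (AES-based, from the `cryptography` package) as a stand-in per-layer cipher, a toy fully connected ReLU network, a single shared key, and NumPy matrix multiplication emulating the FPGA layer execution; the function names (`encrypt_layer_weights`, `layerwise_inference`) are illustrative only.

```python
# Sketch: layerwise confidentiality protection during DNN inference.
# Assumptions (not from the paper): Fernet as the cipher, a toy ReLU MLP,
# NumPy matmul standing in for the FPGA-side layer execution.
import numpy as np
from cryptography.fernet import Fernet

def encrypt_layer_weights(weights, key):
    """Encrypt each layer's weight matrix so only ciphertext is stored at rest."""
    f = Fernet(key)
    return [(f.encrypt(w.tobytes()), w.shape, w.dtype) for w in weights]

def layerwise_inference(x, enc_weights, key):
    """Decrypt one layer at a time (CPU role in the paper), execute that layer
    (FPGA role in the paper), then drop the plaintext before the next layer."""
    f = Fernet(key)
    for token, shape, dtype in enc_weights:
        w = np.frombuffer(f.decrypt(token), dtype=dtype).reshape(shape)  # CPU: decrypt this layer only
        x = np.maximum(x @ w, 0.0)                                       # "FPGA": layer execution (ReLU layer)
        del w                                                            # plaintext weights never persist
    return x

# Usage: a 3-layer toy network protected with a single key.
key = Fernet.generate_key()
weights = [np.random.randn(16, 32).astype(np.float32),
           np.random.randn(32, 32).astype(np.float32),
           np.random.randn(32, 8).astype(np.float32)]
enc = encrypt_layer_weights(weights, key)
out = layerwise_inference(np.random.randn(1, 16).astype(np.float32), enc, key)
print(out.shape)  # (1, 8)
```

In the framework described above, the per-layer decryption cost (CPU) would overlap with layer execution (FPGA), and the approximation algorithm would choose how strongly to protect each layer so that the total pipeline still meets the application deadline.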
               