Artificial Intelligence of Things (AIoT), the fusion of AI and the Internet of Things (IoT), has become a new trend for realizing the intelligentization of Industry 4.0, and data privacy and security are key to its successful implementation. To enhance data privacy protection, federated learning has been introduced into AIoT, allowing participants to jointly train AI models without sharing private data. However, in federated learning, malicious participants may launch poisoning attacks by uploading poisoned models, which jeopardizes the convergence and accuracy of the global model. To solve this problem, we propose a malicious model detection mechanism based on the isolation forest, named D2MIF, for federated learning empowered AIoT. In D2MIF, an isolation forest is constructed to compute a malicious score for each model uploaded by a participant; models whose malicious scores exceed a threshold are then filtered out, and the threshold is dynamically adjusted using reinforcement learning (RL). Validation experiments are conducted on two public datasets, MNIST and Fashion-MNIST. The results show that the proposed D2MIF can effectively detect malicious models and significantly improve the global model accuracy in federated learning empowered AIoT.
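The scoring-and-filtering step can be illustrated with a minimal sketch. The isolation forest is fit over the flattened parameter vectors uploaded by participants, each update receives an anomaly score (higher = more anomalous), and updates above a threshold are discarded. The synthetic "honest" and "poisoned" updates, the 50-dimensional vectors, and the fixed statistical threshold are all illustrative assumptions; the paper's D2MIF adjusts its threshold with RL rather than a fixed rule.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical flattened model updates: 20 honest participants cluster near
# zero, while 3 poisoned updates drift far from the rest (illustrative data).
honest = rng.normal(0.0, 0.1, size=(20, 50))
poisoned = rng.normal(5.0, 0.1, size=(3, 50))
updates = np.vstack([honest, poisoned])

# Fit an isolation forest over all uploaded updates and derive a malicious
# score per update; negating score_samples makes higher mean more anomalous.
forest = IsolationForest(n_estimators=100, random_state=0).fit(updates)
malicious_score = -forest.score_samples(updates)

# A fixed statistical threshold stands in for the RL-adjusted one in D2MIF:
# updates scoring above it are excluded from aggregation.
threshold = np.median(malicious_score) + 2 * malicious_score.std()
kept = updates[malicious_score <= threshold]
print(f"kept {len(kept)} of {len(updates)} updates")
```

With well-separated poisoned updates like these, the isolation forest assigns them clearly higher malicious scores, so the aggregation step only averages the surviving honest updates.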