DeepEC: Adversarial attacks against graph structure prediction models

Abstract: Inspired by the practical importance of graph-structured data, link prediction, one of the most frequently applied tasks on graph data, has garnered considerable attention in recent years, and it has been widely applied in item recommendation, privacy inference attacks, knowledge graph completion, fraud detection, and other fields. However, recent studies show that machine learning-based intelligent systems are vulnerable to adversarial attacks, which has inspired much research on the security of machine learning in the contexts of computer vision, natural language processing, the physical world, etc. Nonetheless, the vulnerability of link prediction methods in the face of adversarial attacks remains poorly understood. To unveil these weaknesses and aid the development of robust link prediction methods, we propose a deep architecture-based adversarial attack method against link prediction, called Deep Ensemble Coding. In particular, based on the assumption that links play different structural roles in the organization of graph structure, we propose a deep linear coding-based structure enhancement mechanism to generate adversarial examples. We also empirically investigate other adversarial attack methods for graph data, including heuristic and evolutionary perturbation methods. Based on comprehensive experiments conducted on various real-world networks, we conclude that the proposed adversarial attack method performs well against link prediction. Moreover, we observe that state-of-the-art link prediction algorithms are vulnerable to adversarial attacks and that, for adversarial defense, the attack can be viewed as a robustness evaluation tool for the construction of robust link prediction methods.
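To make the idea of a structural perturbation attack concrete, the sketch below shows a generic heuristic baseline of the kind the abstract mentions alongside Deep Ensemble Coding: greedily deleting edges so that a simple common-neighbors link predictor scores a target link lower. This is an illustrative toy, not the paper's method; the graph, the `heuristic_attack` function, and the greedy tie-breaking rule are all assumptions for the sketch.

```python
def common_neighbors_score(adj, u, v):
    """Common-neighbors link-prediction score: |N(u) ∩ N(v)|."""
    return len(adj[u] & adj[v])

def heuristic_attack(adj, target, budget):
    """Greedy heuristic perturbation (illustrative only, not the
    paper's Deep Ensemble Coding): repeatedly delete an edge between
    the target pair's endpoint u and one of their common neighbors,
    lowering the common-neighbors score of the target link.

    adj    : dict mapping node -> set of neighbor nodes
    target : (u, v) pair whose predicted link we want to hide
    budget : maximum number of edge deletions allowed
    """
    u, v = target
    adj = {n: set(nbrs) for n, nbrs in adj.items()}  # perturb a copy
    removed = []
    for _ in range(budget):
        common = adj[u] & adj[v]
        if not common:
            break
        w = min(common)                 # deterministic choice for the sketch
        adj[u].discard(w)
        adj[w].discard(u)               # delete undirected edge (u, w)
        removed.append((u, w))
    return adj, removed

# Tiny example graph: edges (0,1), (0,2), (1,2), (1,3), (2,3)
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
print(common_neighbors_score(adj, 0, 3))        # score before the attack
new_adj, removed = heuristic_attack(adj, (0, 3), budget=1)
print(common_neighbors_score(new_adj, 0, 3))    # score after one deletion
```

A single edge deletion already halves the target link's score here, which mirrors the abstract's observation that structure-based link predictors can be sensitive to small adversarial perturbations.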

Keywords: attack; adversarial attacks; structure; link prediction; prediction

Journal Title: Neurocomputing
Year Published: 2021

