Many applications require identifying nodes that perform similar functions in a graph. For instance, identifying structurally equivalent nodes can provide insight into the structure of complex networks. Learning latent representations that capture such structural role information about nodes has recently attracted considerable attention. Existing techniques for learning such representations typically rely on manually engineered features or are very expensive in terms of time and memory. In this paper, we propose SEGK, a powerful framework for computing structural node representations. SEGK learns node representations by generating (or approximating) and decomposing a kernel matrix that captures the structural similarity between nodes. To compute the similarity between two nodes, the proposed framework builds on well-established concepts from graph mining: it compares neighborhood subgraphs of increasing size around the two nodes using graph kernels. SEGK is very flexible, and besides unlabeled graphs, it can also handle node-labeled and node-attributed graphs. We evaluate the proposed framework on several synthetic and real-world datasets and compare its performance to state-of-the-art techniques for learning structural node embeddings. In almost all cases, the instances of the proposed framework outperform the competing methods, while their time complexity remains very attractive.
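To make the pipeline described above concrete, the following is a minimal sketch of the general recipe for the unlabeled case, not the authors' implementation: extract neighborhood subgraphs of increasing radius around each node, compare them with a graph kernel, assemble a node-by-node kernel matrix, and factorize it into embeddings. The shortest-path-histogram kernel, the averaging over radii, the ego-graph neighborhoods, and the full eigendecomposition used here are simplifying assumptions made for illustration; networkx and numpy are assumed to be available, and the exact kernel and combination scheme of SEGK may differ.

# Illustrative sketch of SEGK-style structural node embeddings
# (simplified; not the paper's exact formulation).
import numpy as np
import networkx as nx

def sp_histogram(g, max_len=10):
    # Histogram of shortest-path lengths inside a (small) subgraph.
    hist = np.zeros(max_len + 1)
    for _, lengths in nx.shortest_path_length(g):
        for d in lengths.values():
            if 0 < d <= max_len:
                hist[d] += 1
    return hist

def sp_kernel(g1, g2):
    # A basic shortest-path kernel: dot product of path-length histograms.
    return float(np.dot(sp_histogram(g1), sp_histogram(g2)))

def structural_embeddings(G, max_radius=2, dim=8):
    nodes = list(G.nodes())
    n = len(nodes)
    K = np.zeros((n, n))

    # Compare nodes through their neighborhood subgraphs of increasing radius.
    for r in range(1, max_radius + 1):
        subgraphs = [nx.ego_graph(G, v, radius=r) for v in nodes]
        K_r = np.array([[sp_kernel(subgraphs[i], subgraphs[j])
                         for j in range(n)] for i in range(n)])
        # Normalize so each radius contributes comparable values.
        diag = np.sqrt(np.outer(np.diag(K_r), np.diag(K_r)))
        diag[diag == 0] = 1.0
        K += K_r / diag
    K /= max_radius

    # Decompose the symmetric kernel matrix into low-dimensional embeddings;
    # a Nystrom-style low-rank approximation could replace this step on
    # large graphs (the kind of approximation the abstract alludes to).
    eigvals, eigvecs = np.linalg.eigh(K)
    idx = np.argsort(eigvals)[::-1][:dim]
    eigvals = np.clip(eigvals[idx], 0, None)
    return eigvecs[:, idx] * np.sqrt(eigvals)

if __name__ == "__main__":
    G = nx.barbell_graph(5, 2)   # two cliques joined by a short path
    X = structural_embeddings(G)
    print(X.shape)               # (number of nodes, dim)

In this toy example, nodes playing the same structural role (e.g., the clique members on either side of the barbell) receive near-identical embeddings even though they are far apart in the graph, which is the behavior structural node representations are meant to capture.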
               