In recommendation tasks, we model user preferences by learning node representations (i.e., user and item embeddings) from the observed user-item interactions, which form a bipartite graph. Graph Neural Networks (GNNs) are widely used to refine these representations by exploiting the topology of the graph: the embeddings of a node's neighbors are propagated to it to reconstruct its embedding. However, the propagation strategy in existing GNNs is empirical and defective: (1) a substantial proportion of links are missing from the sparse observed graph, which leads to ineffective and biased propagation; and (2) the propagation weights are determined by a coarse pre-defined rule that takes only node degrees into consideration. In this paper, we propose a dense, data-driven propagation mechanism for GNNs. Since the graph used for propagating embeddings in recommendation tasks is extremely sparse, we complete it and use the predicted graph for propagation. We learn the propagation matrix from the data and propose a Self-propagation Graph Neural Network (SGNN). Because maintaining a large, dense propagation matrix is very space- and time-consuming, we factorize it for efficient storage and updating. We propose three methods to complete the sparse graph and construct the propagation matrix: (1) completing the graph with a recommendation model; (2) measuring node distances via spectral clustering; and (3) predicting missing links of the graph from predictive embeddings. In SGNN, embeddings are propagated not only to observed neighbors but also to potential yet unobserved ones, and the propagation weights are learned from the connection strengths. Comprehensive experiments on three real-world datasets demonstrate the effectiveness and efficiency of the proposed model: SGNN significantly outperforms recent state-of-the-art GNNs. Code is available at https://github.com/Wenhui-Yu/LCFN.
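To make the space argument behind the factorization concrete, the sketch below (my own illustration under stated assumptions, not code from the LCFN repository) shows why writing the dense propagation matrix as a product of two low-rank factors avoids ever materializing it: propagation then reduces to two small matrix multiplications. The names (E, P, Q) and the sizes (n, d, k) are hypothetical.

    import numpy as np

    # Assumed sizes: n nodes (users + items), d-dimensional embeddings,
    # rank-k factorization of the dense n x n propagation matrix.
    n, d, k = 10_000, 64, 32
    rng = np.random.default_rng(0)

    E = rng.normal(size=(n, d))                # node embeddings
    P = rng.normal(size=(n, k)) / np.sqrt(k)   # left factor of the propagation matrix
    Q = rng.normal(size=(n, k)) / np.sqrt(k)   # right factor

    # Materializing the dense propagation matrix S = P @ Q.T would cost
    # O(n^2) memory. Reordering the products yields the same propagated
    # embeddings in O(nk + kd) memory:
    E_prop = P @ (Q.T @ E)                     # identical to (P @ Q.T) @ E

The same reordering applies however the factors are obtained (e.g., from a recommendation model, spectral clustering, or predictive embeddings, as the abstract enumerates), so long as the predicted graph admits a low-rank parameterization.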