Graph Convolutional Networks (GCNs) derive inspiration from recent advances in computer vision, stacking layers of first-order filters followed by a nonlinear activation function to learn entity or graph embeddings. Although GCNs have been shown to boost the performance of many network analysis tasks, they still face tremendous challenges in learning from Heterogeneous Information Networks (HINs), where relations play a decisive role in knowledge reasoning. Moreover, entities in HINs have multi-aspect representations, and a filter learned in one aspect does not necessarily apply to another. We address these challenges by proposing the Aspect-Aware Graph Attention Network (AGAT), a model that extends GCNs with alternative learnable filters to incorporate entity and relational information. Instead of learning general entity embeddings, AGAT learns adaptive entity embeddings conditioned on the prediction scenario. Experiments on link prediction and semi-supervised classification verify the effectiveness of our algorithm.
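The abstract does not spell out AGAT's exact formulation, so the following is only a minimal sketch of the general idea it describes: an attention layer whose edge scores depend on both the endpoint entities and a learnable relation-specific filter. The class name, signature, and aggregation scheme are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelationAwareAttentionLayer(nn.Module):
    """Hypothetical sketch: one attention layer that scores each edge using the
    source entity, the target entity, and the embedding of the edge's relation type."""

    def __init__(self, in_dim, out_dim, num_relations):
        super().__init__()
        self.entity_proj = nn.Linear(in_dim, out_dim, bias=False)   # first-order filter on entities
        self.relation_emb = nn.Embedding(num_relations, out_dim)    # learnable per-relation filters
        self.attn = nn.Linear(3 * out_dim, 1, bias=False)           # scores (source, relation, target)

    def forward(self, x, edge_index, edge_type):
        # x: [N, in_dim] entity features; edge_index: [2, E]; edge_type: [E]
        h = self.entity_proj(x)
        src, dst = edge_index
        r = self.relation_emb(edge_type)

        # unnormalized attention logit per edge
        e = F.leaky_relu(self.attn(torch.cat([h[src], r, h[dst]], dim=-1))).squeeze(-1)

        # softmax over the incoming edges of each target node
        e = e - e.max()  # numerical stability
        alpha = torch.exp(e)
        denom = torch.zeros(x.size(0), device=x.device).index_add_(0, dst, alpha) + 1e-16
        alpha = alpha / denom[dst]

        # aggregate relation-modulated messages, then apply the nonlinearity
        msg = alpha.unsqueeze(-1) * (h[src] + r)
        out = torch.zeros_like(h).index_add_(0, dst, msg)
        return F.relu(out)


# Toy usage: 4 entities, 2 relation types (shapes only, random data)
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
edge_type = torch.tensor([0, 1, 0, 1])
layer = RelationAwareAttentionLayer(in_dim=8, out_dim=16, num_relations=2)
print(layer(x, edge_index, edge_type).shape)  # torch.Size([4, 16])
```

Stacking several such layers, each with its own relation filters, mirrors the abstract's description of extending GCN-style first-order filtering with relational information; how AGAT conditions the embeddings on the prediction scenario is not specified here.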