Modeling sequential behaviors is the core of sequential recommendation. Because users visit items in chronological order, existing methods typically capture a user's present interests from his/her past-to-present behaviors, i.e., they make recommendations using only unidirectional past information. This article argues that future information is another critical factor for sequential recommendation. However, directly learning from future-to-present behaviors inevitably causes data leakage. We point out that future information can instead be learned from users' collaborative behaviors. To this end, this article introduces sequential graphs to depict item transition relationships: which items each item transits from and which items it will transit to. By analogy with special and general relativity, this temporal evolution information is referred to as a light cone. A bidirectional sequential graph convolutional network (BiSGCN) is then proposed to learn item representations by encoding past and future light cones. Finally, a manifold translating embedding (MTE) method is proposed to model item transition patterns in Riemannian manifolds, which helps to better capture the geometric structures of light cones and item transition patterns. Experimental comparisons and ablation studies verify the outstanding performance of BiSGCN, the benefits of learning from the future, and the improvements of learning in Riemannian manifolds.
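To make the abstract's pipeline concrete, below is a minimal sketch, not the paper's actual formulation: a directed item-transition graph built from user sequences, one bidirectional aggregation step that pools from predecessors (past cone) and successors (future cone), and a simplified translating-embedding score computed with geodesic distance on the unit hypersphere as a stand-in for the Riemannian MTE. All function names (build_transition_graph, bisgcn_layer, spherical_translation_score) and the toy data are hypothetical illustrations, not identifiers from the paper.

```python
import numpy as np

def build_transition_graph(sequences, num_items):
    """Directed item-transition counts: A[i, j] = how often item j follows item i
    across all user sequences (a dense stand-in for the paper's sequential graph)."""
    A = np.zeros((num_items, num_items))
    for seq in sequences:
        for src, dst in zip(seq[:-1], seq[1:]):
            A[src, dst] += 1.0
    return A

def bisgcn_layer(E, A):
    """One bidirectional aggregation step (illustrative, not the paper's exact layer):
    each item pools messages from the items it transits FROM (past cone, via A^T)
    and the items it transits TO (future cone, via A), then mixes them with its
    own embedding using row-normalised mean aggregation."""
    def row_norm(M):
        d = M.sum(axis=1, keepdims=True)
        return np.divide(M, d, out=np.zeros_like(M), where=d > 0)
    past = row_norm(A.T) @ E    # aggregate from predecessor items
    future = row_norm(A) @ E    # aggregate from successor items
    return (E + past + future) / 3.0

def spherical_translation_score(e_src, e_dst, r):
    """Translating-embedding score on the unit hypersphere: translate the source
    by a transition vector r, project back to the sphere, and use geodesic (arc)
    distance to the target. A simplified proxy for modelling transitions in a
    Riemannian manifold; the paper's MTE is not reproduced here."""
    def unit(v):
        return v / (np.linalg.norm(v) + 1e-12)
    cos = np.clip(unit(e_src + r) @ unit(e_dst), -1.0, 1.0)
    return -np.arccos(cos)      # higher (less negative) = more plausible transition

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sequences = [[0, 1, 2], [0, 1, 3], [2, 1, 3]]   # toy user histories
    num_items, dim = 4, 8
    A = build_transition_graph(sequences, num_items)
    E = rng.normal(size=(num_items, dim))
    E = bisgcn_layer(E, A)                          # encode past + future cones
    r = rng.normal(size=dim) * 0.1                  # shared transition vector
    print(spherical_translation_score(E[1], E[2], r))
```

In this sketch the past and future cones are encoded by transposing the same transition matrix, which is what lets future-oriented signals be learned from collaborative (other users') sequences without leaking a target user's own future interactions.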
               