Next location prediction aims to find the location that the user will visit next. It plays a fundamental role in location-based applications. However, the heterogeneity and sparsity of trajectory data pose great challenges to the task. Recently, RNN-based methods have shown promising performance in learning the spatio-temporal characteristics of trajectories. While the effectiveness of location prediction has been improved, computational efficiency and the modeling of long-term preferences still leave room for further research. The self-attention mechanism is viewed as a promising solution for parallel computation and for exploiting sequential regularities in sparse data. However, its huge memory cost and its neglect of temporal information make it infeasible to directly model human mobility regularities. In this paper, we propose a temporal-context-based self-attention network named TCSA-Net, which can simultaneously exploit long- and short-term movement preferences from sparse and long trajectories. In particular, we design a novel two-stage self-attention architecture that can learn long-term dependencies under a constrained memory budget. Further, we propose a multi-modal embedding layer that models two complementary temporal contexts and provides richer temporal and sequential information. Extensive experiments on two real-life datasets show that TCSA-Net significantly outperforms state-of-the-art methods in terms of standard evaluation metrics.
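The abstract does not give the internals of the two-stage architecture, but the general idea it names (reducing the quadratic memory of self-attention by first attending locally within windows, then attending globally over compact window summaries) can be sketched as follows. This is a hypothetical illustration of the pattern, not TCSA-Net itself; the window size, the mean-pooled summaries, and the residual combination are all assumptions made for the sketch.

```python
import numpy as np

def attention(q, k, v):
    # Scaled dot-product attention with a numerically stable softmax.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def two_stage_self_attention(x, window=4):
    """Hypothetical two-stage pattern: local windowed attention, then
    attention over per-window summaries, so no L x L score matrix is formed."""
    L, d = x.shape
    pad = (-L) % window                      # pad so the sequence splits evenly
    xp = np.vstack([x, np.zeros((pad, d))])
    blocks = np.split(xp, len(xp) // window)
    # Stage 1: self-attention restricted to each window (short-term preferences).
    local = np.vstack([attention(b, b, b) for b in blocks])[:L]
    # Stage 2: every position attends over mean-pooled window summaries,
    # giving a cheap global (long-term) view of the whole trajectory.
    summaries = np.array([b.mean(axis=0) for b in blocks])
    global_ = attention(local, summaries, summaries)
    return local + global_                   # combine short- and long-term views

rng = np.random.default_rng(0)
x = rng.normal(size=(10, 8))                 # 10 visits, 8-dim embeddings
out = two_stage_self_attention(x)            # shape (10, 8)
```

Under this scheme the attention score matrices have shapes (window, window) per block and (L, L/window) globally, rather than (L, L), which is the kind of constrained memory budget the abstract alludes to.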