Memory attention enhanced graph convolution long short‐term memory network for traffic forecasting

In recent years, traffic forecasting has attracted growing attention in data mining because of the increasing availability of large‐scale traffic data. However, it faces the substantial challenge of complex temporal‐spatial correlations in traffic. Recent studies mainly focus on modeling local spatial correlations with graph neural networks and neglect the influence of long‐distance spatial correlations. Moreover, most existing works use a recurrent neural network‐based encoder–decoder architecture to forecast multistep traffic volume and suffer from the accumulative errors inherent in recurrent neural networks. To address these issues, we propose the memory attention (MA) enhanced graph convolution long short‐term memory network (MAEGCLSTM), a novel deep learning model for traffic forecasting. Specifically, MAEGCLSTM combines the MA and the vanilla graph convolution long short‐term memory (GCLSTM) to capture global and local spatio‐temporal dependencies, respectively. MAEGCLSTM then uses a simplified GCLSTM to effectively fuse the global and local information. Moreover, we integrate MAEGCLSTM into an encoder–decoder architecture to forecast multistep traffic volume. Beyond MAEGCLSTM, we add a convolutional neural network and encoder–decoder attention to the decoder to ease the accumulative errors caused by iterative prediction and to access the full historical information from the encoder. Experiments on four real‐world traffic data sets show that our model significantly outperforms 14 baselines, with up to a 6.07% improvement in the $L1$ measure.
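To make the core building block concrete, the following is a minimal sketch of a graph convolution LSTM cell of the kind the abstract describes: each gate of a vanilla LSTM is replaced by a one-hop graph convolution over the road network's normalized adjacency matrix. All names (`GCLSTMCell`, `normalize_adj`) and the single-weight-per-gate parameterization are illustrative assumptions, not the paper's actual implementation, which additionally adds memory attention and an encoder–decoder around cells like this one.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalize adjacency with self-loops: D^-1/2 (A+I) D^-1/2."""
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return d_inv_sqrt @ A @ d_inv_sqrt

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GCLSTMCell:
    """LSTM cell whose gates use graph convolutions instead of dense products."""

    def __init__(self, in_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        # One weight matrix per gate (input, forget, output, candidate),
        # acting on the concatenation [x_t, h_{t-1}].
        shape = (in_dim + hidden_dim, hidden_dim)
        self.W = {g: rng.standard_normal(shape) * 0.1 for g in "ifoc"}

    def step(self, A_hat, x, h, c):
        z = np.concatenate([x, h], axis=1)        # (num_nodes, in+hidden)
        gc = lambda g: A_hat @ z @ self.W[g]      # one-hop graph convolution
        i = sigmoid(gc("i"))                      # input gate
        f = sigmoid(gc("f"))                      # forget gate
        o = sigmoid(gc("o"))                      # output gate
        g = np.tanh(gc("c"))                      # candidate cell state
        c_new = f * c + i * g
        h_new = o * np.tanh(c_new)
        return h_new, c_new

# Usage: 4 sensors on a line graph, 2 input features, 8 hidden units,
# rolled over a 12-step history as an encoder would.
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
A_hat = normalize_adj(A)
cell = GCLSTMCell(in_dim=2, hidden_dim=8)
h = np.zeros((4, 8))
c = np.zeros((4, 8))
for t in range(12):
    x_t = np.random.default_rng(t).standard_normal((4, 2))
    h, c = cell.step(A_hat, x_t, h, c)
print(h.shape)  # (4, 8): one hidden state per road-network node
```

Because the gates mix information along graph edges at every step, each node's hidden state reflects both its own history (the LSTM recurrence) and its neighbors' recent traffic (the graph convolution), which is the local spatio-temporal modeling the abstract attributes to the vanilla GCLSTM.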

Keywords: traffic forecasting; attention; traffic; graph convolution; memory

Journal Title: International Journal of Intelligent Systems
Year Published: 2022
