For safe and efficient navigation in driving environments, autonomous vehicles must dynamically predict the future trajectories of surrounding vehicles and cope with the uncertainty of the environment. Early works treat this uncertainty as distributions over different modals and tackle the task by enumerating a large set of candidate trajectories, but such enumeration can hardly cover all possible modal distributions. Recently, generative models have shown promise for modelling such distributions, and we build on them here. Targeting predefined modals of vehicle agents, this paper proposes the Spatial-Temporal Generative Model (STGM), a new learning algorithm that leverages a stochastic generative model to mitigate the modal-distribution problem. More specifically, a Conditional Variational Auto-Encoder (CVAE) forms the backbone of STGM, and its input is augmented with modal distributions to obtain modality-wise trajectories. In addition, to stabilize prediction, we propose a modal-wise sampling trick as an alternative to the traditional random sampling in CVAEs. Because a CVAE built only on multi-layer perceptrons (MLPs) cannot extract spatial and temporal features effectively, we place two dedicated encoders before the CVAE: a Spatial Encoder and a Temporal Encoder, and align them with a closed modal set. Additionally, inspired by the total probability formula, we adopt a Modal Prediction Model to refine the confidences of the modal-wise trajectories. Empirical evaluation on two public datasets shows that STGM outperforms baselines such as CoverNet and MTP in nearly all cases.
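The abstract does not come with code, but the pipeline it describes can be illustrated with a short sketch: a Temporal Encoder over each agent's past trajectory, a Spatial Encoder pooling neighbour features, and a CVAE decoder that draws one latent sample per predefined modal (the modal-wise sampling trick) instead of sampling modes at random. The PyTorch rendering below is a minimal, hypothetical sketch; all class names, dimensions, and design choices (GRU, max-pooling, one-hot modal codes) are our assumptions, not the authors' released implementation.

import torch
import torch.nn as nn

class TemporalEncoder(nn.Module):
    """Encodes each agent's past trajectory with a GRU (one plausible choice)."""
    def __init__(self, in_dim=2, hidden=64):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)
    def forward(self, past):                 # past: (B, T, 2) xy positions
        _, h = self.gru(past)
        return h.squeeze(0)                  # (B, hidden)

class SpatialEncoder(nn.Module):
    """Pools pre-encoded neighbour features into a fixed-size context vector."""
    def __init__(self, in_dim=64, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
    def forward(self, neighbour_feats):      # neighbour_feats: (B, N, in_dim)
        return self.mlp(neighbour_feats).max(dim=1).values   # (B, hidden)

class ModalCVAE(nn.Module):
    """CVAE decoder conditioned on context plus a one-hot modal code.
    The posterior (recognition) network used during training is omitted."""
    def __init__(self, ctx_dim=128, n_modes=6, z_dim=16, horizon=12):
        super().__init__()
        self.n_modes = n_modes
        self.prior = nn.Linear(ctx_dim + n_modes, 2 * z_dim)  # -> mu, logvar
        self.decoder = nn.Sequential(
            nn.Linear(ctx_dim + n_modes + z_dim, 128), nn.ReLU(),
            nn.Linear(128, horizon * 2))
    def forward(self, ctx):
        B, trajs = ctx.size(0), []
        for m in range(self.n_modes):         # modal-wise sampling: one z per modal
            onehot = torch.zeros(B, self.n_modes, device=ctx.device)
            onehot[:, m] = 1.0
            mu, logvar = self.prior(torch.cat([ctx, onehot], -1)).chunk(2, -1)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            out = self.decoder(torch.cat([ctx, onehot, z], -1))
            trajs.append(out.view(B, -1, 2))
        return torch.stack(trajs, dim=1)      # (B, n_modes, horizon, 2)

past = torch.randn(4, 8, 2)                  # 4 agents, 8 observed steps
neigh = torch.randn(4, 5, 64)                # 5 pre-encoded neighbours each
ctx = torch.cat([TemporalEncoder()(past), SpatialEncoder()(neigh)], -1)
print(ModalCVAE()(ctx).shape)                # torch.Size([4, 6, 12, 2])

Conditioning the prior and decoder on a fixed one-hot code per modal is one straightforward way to realize sampling over a closed modal set; the paper's actual mechanism may differ.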
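The confidence-refinement step can be read as the standard total-probability decomposition. Writing x for the observed context and \mathcal{M} for the closed modal set (our notation, not necessarily the paper's), the likelihood of a future trajectory \tau decomposes as

P(\tau \mid x) = \sum_{m \in \mathcal{M}} P(\tau \mid m, x)\, P(m \mid x)

where the Modal Prediction Model supplies the per-modal weights P(m \mid x) that rescale the CVAE's modality-wise trajectory likelihoods. This is only the textbook identity the abstract appeals to; the exact formulation in the paper may differ.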