We propose a novel method for modeling crowd video dynamics that adopts a two-stream convolutional architecture incorporating spatial and temporal networks. Our method copes with the key challenge of capturing complementary information: appearance from still frames and motion between frames. A motion flow field is obtained from the video through dense optical flow. We demonstrate that the method, trained on multi-frame dense optical flow, achieves a significant improvement in performance despite limited training data. We train and evaluate our method on a benchmark crowd video dataset, where the experimental results show that it outperforms five reference methods chosen as the most relevant to our work.
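To make the dense-optical-flow step concrete, the following is a minimal sketch of computing a flow field between two consecutive frames. It uses a toy exhaustive block-matching scheme in pure Python; this is an illustrative assumption, not the authors' implementation (production systems typically use methods such as Farneback or TV-L1), and the function name and parameters are hypothetical.

```python
def dense_flow_block_matching(prev, curr, block=4, search=2):
    """Toy dense optical flow via exhaustive block matching.

    For each block x block patch in `prev`, find the displacement
    (dy, dx) within +/- `search` pixels that minimizes the sum of
    absolute differences (SAD) against `curr`. Returns a grid of
    (dy, dx) tuples, one per block. Illustrative only: real two-stream
    pipelines use denser, sub-pixel flow estimators.
    """
    H, W = len(prev), len(prev[0])
    flow = []
    for y in range(0, H - block + 1, block):
        row = []
        for x in range(0, W - block + 1, block):
            best, best_d = float("inf"), (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    # Skip candidate patches that fall outside the frame.
                    if yy < 0 or xx < 0 or yy + block > H or xx + block > W:
                        continue
                    sad = sum(
                        abs(prev[y + i][x + j] - curr[yy + i][xx + j])
                        for i in range(block) for j in range(block)
                    )
                    if sad < best:
                        best, best_d = sad, (dy, dx)
            row.append(best_d)
        flow.append(row)
    return flow


import random

random.seed(0)
H = W = 16
prev = [[random.randrange(256) for _ in range(W)] for _ in range(H)]
# curr is prev shifted one pixel to the right (with wrap-around),
# so the recovered displacement away from the edges should be (0, 1).
curr = [[row[(x - 1) % W] for x in range(W)] for row in prev]
flow = dense_flow_block_matching(prev, curr)
```

In a two-stream setting, flow fields like this one, stacked over multiple consecutive frames, form the input to the temporal network, while raw frames feed the spatial (appearance) network.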