Significant achievements have been made in crowd detection and tracking due to the advancement of artificial intelligence in autonomous driving. However, image-based methods impose strict requirements on video collection conditions, while a new generation of flexible fabrics has emerged as a potential sensor for perceiving context. In this paper, an intelligent fabric space enabled by multiple sensing modalities is established to track moving objects. We propose a behavior analysis pipeline comprising data preparation, trajectory coupling, motion scenario segmentation, and motion pattern measurement modules to capture crowd information at both the micro and macro levels over the intelligent fabric space. After preprocessing the multi-sensing data, a coupling mechanism is formulated to fuse the video-based and fabric-based trajectories. An automatic motion scenario segmentation model then divides the surrounding scenario into main-crowd, sub-crowd, and background regions according to motion behavior. Further, we define measurement metrics to analyze the motion patterns of the different crowds. Extensive experiments demonstrate that the proposed methods effectively fuse multiple trajectories and achieve crowd segmentation and motion description. This will greatly help autonomous vehicles and control systems perceive surrounding pedestrians and the environment to make precise driving decisions.
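The abstract does not detail how the trajectory coupling is formulated, so the following is only a minimal sketch of one plausible fusion step: combining a camera-derived trajectory with a fabric-derived trajectory for the same pedestrian by a weighted average. The function name `couple_trajectories` and the `video_weight` parameter are hypothetical illustrations, not the authors' actual mechanism.

```python
# Hedged sketch only: fuses two time-aligned 2D trajectories of one pedestrian.
# The paper's real coupling mechanism is not specified in the abstract.
import numpy as np

def couple_trajectories(video_traj, fabric_traj, video_weight=0.6):
    """Fuse two (T, 2) arrays of x-y positions sampled at the same timestamps.

    video_traj  : positions estimated from the camera pipeline
    fabric_traj : positions estimated from the pressure-sensing fabric
    video_weight: assumed relative confidence in the video modality (hypothetical)
    """
    video_traj = np.asarray(video_traj, dtype=float)
    fabric_traj = np.asarray(fabric_traj, dtype=float)
    assert video_traj.shape == fabric_traj.shape, "trajectories must be time-aligned"
    # Simple convex combination; a real system would likely weight per-frame
    # by detection confidence or sensor noise models.
    return video_weight * video_traj + (1.0 - video_weight) * fabric_traj

if __name__ == "__main__":
    cam = [[0.0, 0.0], [1.0, 0.1], [2.1, 0.2]]   # camera-based track (toy data)
    fab = [[0.1, 0.0], [0.9, 0.0], [2.0, 0.3]]   # fabric-based track (toy data)
    print(couple_trajectories(cam, fab))
```

In practice the weighting would depend on the relative reliability of each modality under the current collection conditions, which is exactly the constraint the fabric sensing is meant to relax.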
               