Event cameras perceive pixel-level brightness changes and output asynchronous event streams, offering notable advantages in temporal resolution, dynamic range, and power consumption for challenging vision tasks. To apply existing learning models to event data, many researchers integrate the sparse events into dense frame-based representations that convolutional neural networks can process directly. Although these works achieve high performance on event-based classification, their models require large numbers of parameters to process dense event frames, which is at odds with the sparse nature of event data. To exploit this sparsity, we propose a voxel-wise graph learning model (VMV-GCN) for spatio-temporal feature learning on event streams. Specifically, we design a volumetric multi-view fusion module (VMVF) to extract spatial and temporal information from views of voxelized event data. We then take representative event voxels as vertices and use a novel dual-graph construction strategy to connect them. By aggregating neighborhood information based on the relationships of vertices, the proposed dynamic neighborhood feature learning module (DNFL) captures discriminative spatio-temporal features on dynamically updated graphs. Experiments show that our method achieves state-of-the-art performance with low model complexity on event-based classification tasks, such as object classification and action recognition.
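To make the pipeline described above concrete, the following is a minimal, illustrative sketch (not the authors' code) of the overall idea: voxelizing an event stream, treating occupied voxels as graph vertices, connecting them by k-nearest neighbors, and aggregating neighborhood features. The function names (`voxelize_events`, `knn_graph`, `aggregate_neighbors`), the feature choices, and all hyperparameters are hypothetical stand-ins for this example, not the paper's VMVF or DNFL modules.

```python
# Hypothetical sketch of voxel-based graph feature learning on events.
# Events are (x, y, t, polarity) tuples; voxels become graph vertices.
import torch


def voxelize_events(events, grid=(16, 16, 8)):
    """Bin events (N, 4) = (x, y, t, polarity) into a sparse voxel grid.

    Returns voxel centroids (V, 3) and per-voxel features (V, 2):
    event count and mean polarity, as simple stand-in features.
    """
    xyz = events[:, :3].clone()
    # Normalise coordinates to [0, 1) and map to integer voxel indices.
    xyz = (xyz - xyz.min(0).values) / (xyz.max(0).values - xyz.min(0).values + 1e-6)
    idx = (xyz * torch.tensor(grid, dtype=xyz.dtype)).long()
    flat = idx[:, 0] * grid[1] * grid[2] + idx[:, 1] * grid[2] + idx[:, 2]
    uniq, inv = torch.unique(flat, return_inverse=True)
    num_voxels = uniq.numel()
    centroids = torch.zeros(num_voxels, 3).index_add_(0, inv, xyz)
    count = torch.zeros(num_voxels).index_add_(0, inv, torch.ones(len(events)))
    polarity = torch.zeros(num_voxels).index_add_(0, inv, events[:, 3])
    centroids /= count.unsqueeze(1)
    feats = torch.stack([count, polarity / count], dim=1)
    return centroids, feats


def knn_graph(centroids, k=8):
    """Connect each voxel vertex to its k nearest neighbours in (x, y, t)."""
    dist = torch.cdist(centroids, centroids)               # (V, V) pairwise distances
    return dist.topk(k + 1, largest=False).indices[:, 1:]  # drop the self-loop


def aggregate_neighbors(feats, neighbors):
    """Max-aggregate neighbour features, a common choice in dynamic graph CNNs."""
    gathered = feats[neighbors]                 # (V, k, F)
    return torch.cat([feats, gathered.max(dim=1).values], dim=1)


# Toy usage: 1000 random events with (x, y, t, polarity).
events = torch.rand(1000, 4)
events[:, 3] = torch.randint(0, 2, (1000,)).float()
centroids, feats = voxelize_events(events)
neighbors = knn_graph(centroids)
out = aggregate_neighbors(feats, neighbors)
print(centroids.shape, out.shape)
```

In a learned model such as the one the abstract outlines, the neighbor graph would be rebuilt in feature space between layers (hence "dynamically updated graphs"), and the aggregation would use learned transformations rather than the fixed max-pooling shown here.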