Abstract. Event-based vision has grown rapidly in recent years, driven by unique sensor characteristics such as high temporal resolution (∼1 μs), high dynamic range (>120 dB), and an output latency of only a few microseconds. Our work further explores a hybrid, multimodal approach to object detection and tracking that leverages state-of-the-art frame-based detectors complemented by hand-crafted event-based methods to improve overall tracking performance with minimal computational overhead. The methods presented include event-based bounding box (BB) refinement, which improves the precision of the resulting BBs, and a continuous event-based object detection method that recovers missed detections and generates interframe detections, enabling a high-temporal-resolution tracking output. The advantages of these methods are quantitatively verified in an ablation study using the higher order tracking accuracy (HOTA) metric. Results show significant performance gains, reflected in an improvement in HOTA from 56.6%, using only frames, to 64.1% and 64.9% for the event- and edge-based mask configurations combined with the two proposed methods, at the baseline frame rate of 24 Hz. Likewise, incorporating these methods with the same configurations improves HOTA from 52.5% to 63.1% and from 51.3% to 60.2% at the high-temporal-resolution tracking rate of 384 Hz. Finally, a validation experiment analyzes real-world single-object tracking performance using high-speed LiDAR. Empirical evidence shows that our approaches provide significant advantages over frame-based object detectors alone, both at the baseline frame rate of 24 Hz and at higher tracking rates of up to 500 Hz.
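The abstract does not detail how the event-based BB refinement operates, so the following is only a minimal illustrative sketch of the general idea: tightening a frame-based detector's bounding box around the event activity observed in a short time window near the frame timestamp. The function name `refine_bbox_with_events` and the parameters `margin` and `density_thresh` are hypothetical and not taken from the paper.

```python
import numpy as np

def refine_bbox_with_events(bbox, events, margin=8, density_thresh=0.02):
    """Hypothetical sketch of event-based bounding-box refinement.

    bbox   : (x_min, y_min, x_max, y_max) from a frame-based detector.
    events : (N, 2) array of event pixel coordinates (x, y) accumulated in a
             short time window around the frame timestamp.
    Returns a tightened bounding box hugging the event activity, or the
    original box if too few events support the refinement.
    """
    x_min, y_min, x_max, y_max = bbox

    # Search slightly beyond the detector's box to recover clipped edges.
    sx0, sy0 = x_min - margin, y_min - margin
    sx1, sy1 = x_max + margin, y_max + margin

    # Keep only events inside the expanded search region.
    inside = (
        (events[:, 0] >= sx0) & (events[:, 0] <= sx1) &
        (events[:, 1] >= sy0) & (events[:, 1] <= sy1)
    )
    ev = events[inside]

    # Fall back to the frame-based box when event support is too sparse.
    area = max((sx1 - sx0) * (sy1 - sy0), 1)
    if len(ev) / area < density_thresh:
        return bbox

    # Tighten the box to the extent of the event activity; robust percentiles
    # suppress isolated noise events at the periphery.
    x_lo, x_hi = np.percentile(ev[:, 0], [2, 98])
    y_lo, y_hi = np.percentile(ev[:, 1], [2, 98])
    return (float(x_lo), float(y_lo), float(x_hi), float(y_hi))
```

In a hybrid pipeline of the kind described, a step like this would run between the frame-based detector and the tracker, with the same accumulated events also available for interframe detection at higher output rates.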