Loop closure can effectively correct the accumulated error in robot localization, which plays a critical role in long-term robot navigation. Traditional appearance-based methods rely on local features and are prone to failure in ambiguous environments. Object recognition, on the other hand, can infer an object's category, pose, and extent; such objects can serve as stable semantic landmarks for viewpoint-independent and unambiguous loop closure. However, object-level data association remains a critical problem due to the lack of efficient and robust algorithms. We introduce a novel object-level data association algorithm that incorporates intersection over union (IoU), instance-level embeddings, and detection uncertainty, and formulate it as a linear assignment problem. We then model the objects as truncated signed distance function (TSDF) volumes and represent the environment as a 3D graph with semantics and topology. Next, we propose a graph matching-based loop detection method that operates on the reconstructed 3D semantic graphs and corrects the accumulated error by aligning the matched objects. Finally, we refine the object poses and the camera trajectory in an object-level pose graph optimization. Experimental results show that the proposed object-level data association method significantly outperforms the commonly used nearest-neighbor method in accuracy. Our graph matching-based loop closure is more robust to environmental appearance changes than existing appearance-based methods.
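
The sketch below is a minimal, hedged illustration of the data-association step described above: matching current-frame detections to mapped objects is posed as a linear assignment problem over a cost that combines IoU, instance-embedding similarity, and detection confidence. The function names, cost weights, gating threshold, and the exact way the three terms are combined are assumptions made for illustration only; the paper's actual cost formulation may differ.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Illustrative sketch of object-level data association as a linear
# assignment problem. Weights and cost composition are assumptions,
# not the paper's exact formulation.

def iou(box_a, box_b):
    """Axis-aligned 2D IoU between boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(detections, landmarks, w_iou=0.4, w_emb=0.4, w_conf=0.2,
              max_cost=0.8):
    """Match current-frame detections to map objects (landmarks).

    Each detection / landmark is a dict with keys:
      'box'  : projected 2D bounding box (x1, y1, x2, y2)
      'emb'  : L2-normalized instance-level embedding vector
      'score': detection confidence in [0, 1] (detections only)
    Returns a list of (detection_idx, landmark_idx) pairs.
    """
    cost = np.zeros((len(detections), len(landmarks)))
    for i, det in enumerate(detections):
        for j, lm in enumerate(landmarks):
            c_iou = 1.0 - iou(det['box'], lm['box'])
            c_emb = 1.0 - float(np.dot(det['emb'], lm['emb']))
            c_conf = 1.0 - det['score']  # penalize uncertain detections
            cost[i, j] = w_iou * c_iou + w_emb * c_emb + w_conf * c_conf

    rows, cols = linear_sum_assignment(cost)
    # Gating: reject assignments whose combined cost is too high, so
    # spurious detections do not get fused into existing map objects.
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < max_cost]
```

SciPy's linear_sum_assignment solves the rectangular assignment in polynomial time, and the final gating step discards low-quality matches that would otherwise corrupt the object map; unmatched detections can then be used to instantiate new landmarks.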