In autonomous vehicles, accurate extrinsic calibration between LiDAR and camera is an essential prerequisite for multi-sensor information fusion. Automatic, targetless extrinsic calibration has become the mainstream of academic research in recent years. However, existing automatic calibration methods that rely on edge or semantic features are not robust, or require specific scene settings. In this paper, instance segmentation is used for automatic extrinsic calibration of LiDAR and camera for the first time. Key targets are extracted from the segmented instances and correlated across modalities. Treating extrinsic calibration as an optimization problem, a novel cost function is formulated based on how well the appearance and centroids of the key targets match between the point cloud and image pairs. Differential evolution is then used to minimize this cost function and obtain the optimal extrinsic parameters. Extensive experiments on the KITTI dataset and the Waymo Open Dataset demonstrate the accuracy and robustness of the proposed method. The MAE of rotation and translation is less than 0.3$^{\circ}$ and 0.05 m respectively, outperforming semantic-based and edge-based approaches in terms of accuracy.
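The optimization loop described above can be sketched in a few lines. The snippet below is a minimal, illustrative reduction of the idea, not the paper's implementation: it assumes known instance-centroid correspondences, uses a made-up camera intrinsic matrix `K` and synthetic centroids, and minimizes only a centroid-reprojection term (the paper's cost also includes an appearance-matching term, omitted here) with SciPy's `differential_evolution`.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.spatial.transform import Rotation

# Assumed camera intrinsics (hypothetical values for this sketch).
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Synthetic 3D centroids of segmented instances in the LiDAR frame.
rng = np.random.default_rng(0)
pts_lidar = rng.uniform([-2.0, -2.0, 4.0], [2.0, 2.0, 10.0], size=(12, 3))

# Ground-truth extrinsics, used here only to synthesize image centroids.
gt_rot = Rotation.from_euler("xyz", [2.0, -1.0, 0.5], degrees=True)
gt_t = np.array([0.10, -0.05, 0.20])

def project(points, rot, t):
    """Transform LiDAR points into the camera frame and project to pixels."""
    cam = rot.apply(points) + t
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]

uv_img = project(pts_lidar, gt_rot, gt_t)  # stand-in for image-instance centroids

def cost(params):
    # params: 3 Euler angles (degrees) + 3 translation components (meters).
    rot = Rotation.from_euler("xyz", params[:3], degrees=True)
    uv = project(pts_lidar, rot, params[3:])
    return np.mean(np.linalg.norm(uv - uv_img, axis=1))  # mean centroid error (px)

# Search bounds around a coarse initial guess of identity extrinsics.
bounds = [(-5.0, 5.0)] * 3 + [(-0.5, 0.5)] * 3
res = differential_evolution(cost, bounds, seed=1)
print("residual (px):", res.fun)
```

With real data, the image-side centroids would come from an instance-segmentation network rather than from projecting ground truth, and the cost would combine centroid distance with the appearance-matching degree described in the abstract.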