Despite the large amount of information multispectral imaging offers, multispectral visual odometry remains overlooked due to the dissimilarity between modalities. To tackle the challenging problem of feature matching between multispectral stereo images and to overcome the lack of robust multispectral visual localisation solutions, a novel approach is proposed in this paper. It consists of tracking features in each modality simultaneously, in a monocular manner, then estimating motion in a windowed bundle adjustment framework and using the geometry of the stereo setup to recover the missing scale. The estimation is made more robust by selecting adequate keyframes based on feature parallax and by maximising the mutual information between all the features reprojected in the stereo pair. Furthermore, the proposed multispectral visual odometry solution is integrated into an error-state Kalman filter framework to deal with challenging environments where image quality is reduced. Two measurement models, using absolute and relative camera poses, are presented. The superiority of relative poses is then demonstrated with a failure recovery algorithm that relies on inertial data when visual data are not available. The algorithm was tested on an innovative series of visible-thermal multispectral datasets acquired from a car under real driving conditions. An overall error of around 2% of the travelled distance was achieved on these datasets.
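The abstract does not give implementation details for the parallax-based keyframe selection. A minimal sketch of one common formulation, with entirely hypothetical function names and thresholds (the paper's actual criteria may differ), might look like:

```python
import numpy as np

def is_keyframe(prev_pts, curr_pts, parallax_thresh_px=10.0, min_tracked=50):
    """Decide whether the current frame should become a keyframe.

    prev_pts, curr_pts: (N, 2) arrays of matched feature locations in the
    last keyframe and the current frame, in pixel coordinates.

    Hypothetical policy: insert a keyframe when the median feature parallax
    exceeds `parallax_thresh_px` pixels (enough baseline for triangulation),
    or when too few features remain tracked (tracking is degrading).
    """
    if len(curr_pts) < min_tracked:
        return True  # too few tracked features; force a keyframe
    # Per-feature displacement since the last keyframe (pixels)
    parallax = np.linalg.norm(curr_pts - prev_pts, axis=1)
    return float(np.median(parallax)) > parallax_thresh_px
```

In a multispectral setting such a test would run independently per modality, since visible and thermal trackers can lose features at different rates.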