
Forest: A Lightweight Semantic Image Descriptor for Robust Visual Place Recognition


Visual place recognition (VPR) is the process of identifying previously visited places using visual information. It is crucial for a robot to achieve fast and accurate VPR under appearance and viewpoint changes. Inspired by human perceptual intuition, this letter proposes a lightweight semantic image descriptor called Forest for robust VPR. Specifically, for each semantic object in the image, the three most recognizable reference objects are first found and ranked in descending order according to a customized measure. Then, the category, position, and area of the semantic object itself, together with the categories of its three reference objects, are used to encode a local descriptor for the object. The set of these local descriptors forms the global descriptor of the image. The proposed descriptor is evaluated against six state-of-the-art VPR methods (iBoW-LCD, CoHOG, NetVLAD, LOST-X, Region-VLAD, and Patch-NetVLAD) over four public urban-scene datasets (Extended CMU Seasons, RobotCar Seasons v2, SYNTHIA, and KITTI), using three precision-recall-based metrics: AUC (area under curve), Recall@100%Precision, and Precision@100%Recall. The results show that our method maintains competitive accuracy and high robustness in scenes with appearance and viewpoint changes. In addition, it is robust to semantic-segmentation noise and achieves high real-time performance.
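The encoding described above can be sketched in a few lines. The sketch below is an illustrative assumption, not the paper's implementation: the abstract does not specify the "customized measure" used to rank reference objects, so the `recognizability` score here (area weighted by inverse distance) is a hypothetical stand-in, and the `SemObject` fields are assumed inputs from an upstream semantic-segmentation stage.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SemObject:
    category: int   # semantic class id from the segmentation
    cx: float       # normalized centroid x
    cy: float       # normalized centroid y
    area: float     # normalized pixel area

def recognizability(obj: SemObject, anchor: SemObject) -> float:
    # Hypothetical stand-in for the paper's customized measure:
    # larger, nearer objects score higher.
    dist = ((obj.cx - anchor.cx) ** 2 + (obj.cy - anchor.cy) ** 2) ** 0.5
    return obj.area / (dist + 1e-6)

def local_descriptor(anchor: SemObject, objects: list, k: int = 3) -> tuple:
    # Rank all other objects by the measure, in descending order,
    # and keep the categories of the top-k reference objects.
    refs = sorted(
        (o for o in objects if o is not anchor),
        key=lambda o: recognizability(o, anchor),
        reverse=True,
    )
    ref_cats = tuple(o.category for o in refs[:k])
    # Local descriptor: own category, position, area + reference categories.
    return (anchor.category, (anchor.cx, anchor.cy), anchor.area, ref_cats)

def global_descriptor(objects: list) -> set:
    # The image descriptor is the set of all local descriptors.
    return {local_descriptor(o, objects) for o in objects}
```

Matching two images then reduces to comparing their descriptor sets, which keeps the representation compact relative to dense CNN embeddings.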

Keywords: semantic image; lightweight semantic; descriptor; visual place; place recognition; image

Journal Title: IEEE Robotics and Automation Letters
Year Published: 2022
