
VISEL: A visual and magnetic fusion‐based large‐scale indoor localization system with improved high‐precision semantic maps



Multisource fusion localization is a mainstream approach for obtaining accurate locations in complex indoor scenes. To overcome the interference of indoor structures with radio signals and of illumination variation with visual features, semantic maps provide an effective basis for multisource fusion localization. However, because they lack visual depth information, existing indoor semantic maps suffer from large semantic segmentation errors on similar objects, which makes localization performance unstable. To address this issue in semantic and fusion localization, we develop VISEL, a localization system that demonstrates how a restudied semantic map and self-adapting fusion localization can achieve centimeter-level positioning accuracy. VISEL uses the proposed spatial attention-aware semantic model to enhance the discrimination of semantic features and thereby capture accurate semantic maps. On the basis of these high-precision semantic maps, VISEL applies an enhanced particle filter fusion localization module that adaptively reassigns weights to the different localization modules, improving accuracy by exploiting the complementary advantages of the different signals while compensating for the drawbacks of each signal and the interference of complex environments. Extensive experimental results show that VISEL outperforms current state-of-the-art positioning systems and achieves an average positioning accuracy of 0.4 m. By using semantic maps with depth features and the enhanced particle filter, VISEL reduces the fusion localization error by 38%, which suggests that high-precision semantic maps with depth features can provide a robust solution for fusion localization in complex indoor scenes.
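The abstract does not give implementation details of the enhanced particle filter or its adaptive weight reassignment. The sketch below is only a minimal illustration, under stated assumptions, of how a particle filter might fuse two position sources (e.g., a visual/semantic-map fix and a magnetic fix) while adaptively reweighting them by a per-source confidence score; all function names, the Gaussian likelihood model, and the confidence-based mixing are illustrative assumptions, not VISEL's actual method.

```python
# Illustrative sketch (NOT VISEL's implementation) of a particle filter that
# fuses two position estimates and adaptively reassigns the weight given to
# each source from externally supplied confidence scores.
import numpy as np

rng = np.random.default_rng(0)

def gaussian_likelihood(particles, observation, sigma):
    """Likelihood of each particle given one 2-D position observation."""
    d2 = np.sum((particles - observation) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fuse_step(particles, weights, visual_obs, magnetic_obs,
              visual_conf, magnetic_conf, sigma_v=0.3, sigma_m=0.8):
    """One predict/update/resample step with adaptive source weighting."""
    # Predict: random-walk motion model (placeholder for PDR/odometry).
    particles = particles + rng.normal(0.0, 0.1, particles.shape)

    # Adaptive reassignment: normalize per-source confidences into mixing weights.
    total_conf = visual_conf + magnetic_conf + 1e-12
    alpha_v, alpha_m = visual_conf / total_conf, magnetic_conf / total_conf

    # Update: combine both likelihoods using the adaptive mixing weights.
    lik = (alpha_v * gaussian_likelihood(particles, visual_obs, sigma_v)
           + alpha_m * gaussian_likelihood(particles, magnetic_obs, sigma_m))
    weights = weights * lik
    weights /= weights.sum()

    # Systematic resampling to avoid particle degeneracy.
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    particles, weights = particles[idx], np.full(n, 1.0 / n)

    # Position estimate is the mean of the resampled particle cloud.
    return particles, weights, particles.mean(axis=0)

# Usage: 500 particles in a 10 m x 10 m area, one fusion step.
N = 500
particles = rng.uniform(0.0, 10.0, size=(N, 2))
weights = np.full(N, 1.0 / N)
particles, weights, estimate = fuse_step(
    particles, weights,
    visual_obs=np.array([3.0, 4.2]), magnetic_obs=np.array([3.4, 4.0]),
    visual_conf=0.9, magnetic_conf=0.4)
print("fused position estimate:", estimate)
```

In this sketch, a source with higher confidence (here the visual fix) dominates the likelihood, so the fused estimate leans toward the signal that is currently more reliable, which mirrors the complementary-advantages idea described in the abstract.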

Keywords: fusion; localization; high precision; semantic maps; localization system; fusion localization

Journal Title: International Journal of Intelligent Systems
Year Published: 2022


