Much design effort is duplicated when building different hardware to implement different simultaneous localization and mapping (SLAM) algorithms. In this brief, a reconfigurable architecture with dedicated instruction sets allows the coprocessor to perform pose estimation for a representative class of SLAM algorithms, both feature-based and learning-based methods, which can be decomposed into basic common operations. Furthermore, a memory-reuse strategy built into the instructions avoids the need for temporary memory in complex operations. Finally, two parallel computing cores perform the matrix operations and specialized pose-estimation computations in floating-point and fixed-point arithmetic. Together, these choices yield the low hardware resource usage and memory requirements illustrated in the experimental results.
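To make the mixed-arithmetic idea concrete, the sketch below contrasts a floating-point matrix multiply with a fixed-point one, the kind of basic common operation the two compute cores would execute. This is an illustrative toy, not the brief's design: the Q16.16 format, word width, and rounding scheme are assumptions, since the abstract does not specify them.

```python
import numpy as np

FRAC_BITS = 16  # assumed Q16.16 format; the brief does not state the actual width

def to_fixed(x, frac_bits=FRAC_BITS):
    # Quantize a floating-point array into integer fixed-point representation.
    return np.round(x * (1 << frac_bits)).astype(np.int64)

def fixed_matmul(a_fx, b_fx, frac_bits=FRAC_BITS):
    # Integer multiply-accumulate followed by a rescaling shift:
    # the product of two Q16.16 values is Q32.32, so shift back by 16.
    return (a_fx @ b_fx) >> frac_bits

# Small example matrices (values chosen to be exactly representable).
A = np.array([[0.5, 1.0], [2.0, -0.25]])
B = np.array([[1.5, 0.0], [0.5, 1.0]])

ref = A @ B                                     # floating-point core result
fx = fixed_matmul(to_fixed(A), to_fixed(B))     # fixed-point core result
approx = fx / (1 << FRAC_BITS)                  # convert back for comparison

print(np.max(np.abs(ref - approx)))             # quantization error is small
```

In a hardware coprocessor, the fixed-point path trades a bounded quantization error for cheaper multiply-accumulate units, while the floating-point path handles the dynamic range needed elsewhere in pose estimation.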