PLPF‐VSLAM: An indoor visual SLAM with adaptive fusion of point‐line‐plane features

DOI: 10.1002/rob.22242 Publication Date: 2023-08-28T07:23:39Z
ABSTRACT
Simultaneous localization and mapping (SLAM) is required in many areas, and visual SLAM (VSLAM) in particular, owing to its low cost and strong scene-recognition capabilities. Conventional VSLAM relies primarily on scene features such as point features, which can make mapping challenging in scenarios with sparse texture. For instance, in environments with limited (low- or even non-) texture, such as certain indoor scenes, conventional VSLAM may fail due to a lack of sufficient features. To address this issue, this paper proposes a VSLAM system that adaptively fuses point-line-plane features (PLPF-VSLAM). As the name implies, it can adaptively employ different fusion strategies on the point, line, and plane features for tracking and mapping: in richly textured scenes it utilizes point features alone, while in non-/low-textured scenes it automatically selects a fusion of point, line, and/or plane features. PLPF-VSLAM is evaluated on two RGB-D benchmarks, the TUM data sets and the ICL_NUIM data sets. The results demonstrate the superiority of PLPF-VSLAM over other commonly used VSLAM systems: compared to ORB-SLAM2, PLPF-VSLAM achieves an accuracy improvement of approximately 11.29%, and its processing speed outperforms PL(P)-VSLAM by approximately 21.57%.
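The adaptive strategy described in the abstract (points alone when texture is rich, lines and/or planes fused in when points are scarce) can be sketched as a simple per-frame selection rule. This is an illustrative assumption of how such a switch might work, not the paper's actual algorithm; the thresholds and function names are hypothetical.

```python
def select_features(num_points, num_lines, num_planes,
                    point_rich_threshold=150, min_total=60):
    """Return the set of feature types to use for tracking the current frame.

    Hypothetical sketch of adaptive point-line-plane selection:
    - Rich texture (many point features): use points only (fast path).
    - Low/non-texture: progressively add line and plane features until
      the combined feature count is considered adequate.
    All thresholds are illustrative, not taken from PLPF-VSLAM.
    """
    if num_points >= point_rich_threshold:
        # Richly textured scene: point features alone suffice.
        return {"points"}
    selected = {"points"}
    total = num_points
    if total < min_total and num_lines > 0:
        # Too few points: fuse in line features (e.g., from an LSD detector).
        selected.add("lines")
        total += num_lines
    if total < min_total and num_planes > 0:
        # Still short: fuse in plane features extracted from the RGB-D depth map.
        selected.add("planes")
        total += num_planes
    return selected
```

For example, a textured frame with 300 keypoints would use points only, while a blank-wall frame with 10 points, 20 line segments, and 5 planes would fuse all three feature types. Skipping line/plane extraction on rich-texture frames is also one plausible explanation for the reported speed advantage over systems that always extract all feature types.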