SG-LPR: Semantic-Guided LiDAR-Based Place Recognition

DOI: 10.3390/electronics13224532
Publication Date: 2024-11-19
ABSTRACT
Place recognition plays a crucial role in tasks such as loop closure detection and re-localization in robotic navigation. As a high-level representation of scenes, semantics enables models to effectively distinguish geometrically similar places, thereby enhancing their robustness to environmental changes. Unlike most existing semantic-based LiDAR place recognition (LPR) methods, which adopt multi-stage and relatively segregated data-processing and storage pipelines, we propose a novel end-to-end LPR model guided by semantic information, SG-LPR. This model introduces a semantic segmentation auxiliary task that guides the model to autonomously capture semantic information from the scene and implicitly integrate these features into the main place recognition task, providing a unified “segmentation-while-describing” framework that avoids additional intermediate steps. Moreover, the auxiliary task operates only during training and adds no time overhead in the testing phase. The model also combines the advantages of Swin Transformer and U-Net to address the shortcomings of current methods in capturing global contextual information and extracting fine-grained features. Extensive experiments conducted on multiple sequences of the KITTI and NCLT datasets validate the effectiveness, robustness, and generalization ability of our proposed method. Our approach achieves notable performance improvements over state-of-the-art methods.
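No official code accompanies this page, but the following PyTorch sketch illustrates the general “segmentation-while-describing” idea from the abstract: a shared encoder feeds both a global-descriptor head (the main place recognition task) and a segmentation head that serves only as a training-time auxiliary loss. The class and function names, the placeholder convolutional encoder, the triplet loss, the loss weighting, and the range-image input shape are all assumptions made for illustration; SG-LPR itself builds on a Swin Transformer and U-Net backbone, which this toy stand-in does not reproduce.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PlaceRecognitionWithAuxSeg(nn.Module):
    """Toy "segmentation-while-describing" model (not the authors' implementation).

    A shared encoder (placeholder for a Swin-Transformer/U-Net backbone) feeds:
      * a segmentation head, used as an auxiliary task during training only,
      * a global-descriptor head, used for place recognition at train and test time.
    """

    def __init__(self, in_channels=1, num_classes=20, descriptor_dim=256):
        super().__init__()
        # Placeholder encoder; SG-LPR uses a Swin-Transformer-based encoder instead.
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Auxiliary segmentation head (a U-Net-style decoder in the paper).
        self.seg_head = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, num_classes, 2, stride=2),
        )
        # Main head: pool encoder features into a fixed-size global descriptor.
        self.desc_head = nn.Linear(128, descriptor_dim)

    def forward(self, x, with_segmentation=False):
        feats = self.encoder(x)                        # shared, semantics-aware features
        pooled = F.adaptive_avg_pool2d(feats, 1).flatten(1)
        descriptor = F.normalize(self.desc_head(pooled), dim=1)
        if with_segmentation:                          # training only
            return descriptor, self.seg_head(feats)
        return descriptor                              # inference: descriptor only


def training_step(model, range_image, seg_labels, pos_desc, neg_desc):
    """Joint loss: place-recognition (triplet) loss + auxiliary segmentation loss."""
    descriptor, seg_logits = model(range_image, with_segmentation=True)
    loss_pr = F.triplet_margin_loss(descriptor, pos_desc, neg_desc, margin=0.3)
    loss_seg = F.cross_entropy(seg_logits, seg_labels)
    return loss_pr + 0.5 * loss_seg                    # weighting is illustrative


if __name__ == "__main__":
    model = PlaceRecognitionWithAuxSeg()
    x = torch.randn(2, 1, 64, 1024)                    # e.g. a LiDAR range image
    labels = torch.randint(0, 20, (2, 64, 1024))       # per-pixel semantic labels
    pos = F.normalize(torch.randn(2, 256), dim=1)      # descriptors of positive/negative places
    neg = F.normalize(torch.randn(2, 256), dim=1)
    loss = training_step(model, x, labels, pos, neg)
    print(loss.item(), model(x).shape)                 # inference returns only the descriptor
```

Because the segmentation head is invoked only inside `training_step`, the auxiliary branch is simply skipped at inference, which is what keeps the testing-phase overhead unchanged, matching the behavior described in the abstract.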