Pixel to Elevation: Learning to Predict Elevation Maps at Long Range using Images for Autonomous Offroad Navigation
DOI:
10.48550/arxiv.2401.17484
Publication Date:
2024-01-30
AUTHORS (5)
ABSTRACT
Understanding terrain topology at long range is crucial for the success of off-road robotic missions, especially when navigating at high speeds. LiDAR sensors, which are currently heavily relied upon for geometric mapping, provide only sparse measurements when mapping at greater distances. To address this challenge, we present a novel learning-based approach capable of predicting terrain elevation maps at long range using only onboard egocentric images in real time. Our proposed method comprises three main elements. First, a transformer-based encoder is introduced that learns cross-view associations between the egocentric views and prior bird's-eye-view elevation map predictions. Second, an orientation-aware positional encoding is proposed to incorporate the 3D vehicle pose information over complex unstructured terrain with multi-view visual image features. Lastly, a history-augmented learnable map embedding is proposed to achieve better temporal consistency between elevation map predictions and facilitate downstream navigational tasks. We experimentally validate the applicability of our proposed approach for autonomous offroad navigation using real-world driving data. Furthermore, the method is qualitatively and quantitatively compared against current state-of-the-art methods. Extensive field experiments demonstrate that our method surpasses baseline models in accurately predicting terrain elevation while effectively capturing the overall terrain topology at long ranges. Finally, ablation studies are conducted to highlight and understand the effect of the key components and their suitability to improve offroad navigation capabilities.
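The abstract describes three architectural components: a cross-view transformer encoder, an orientation-aware positional encoding, and a history-augmented learnable map embedding. Below is a minimal, hypothetical PyTorch sketch of how such a pipeline could be wired together; all module names, tensor shapes, and dimensions are assumptions for illustration and are not the authors' implementation.

```python
# Hypothetical sketch: BEV map queries cross-attend to multi-view image tokens,
# with a pose-conditioned ("orientation-aware") encoding and a learnable
# history embedding for temporal consistency. Not the paper's actual code.
import torch
import torch.nn as nn


class OrientationAwarePE(nn.Module):
    """Adds a pose-conditioned bias to flattened image features (assumed design)."""

    def __init__(self, dim: int, pose_dim: int = 6):
        super().__init__()
        # Map the 3D vehicle pose (e.g. roll/pitch/yaw + translation) to a feature-wise bias.
        self.pose_mlp = nn.Sequential(nn.Linear(pose_dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, feats: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
        # feats: (B, N_tokens, dim) multi-view image features; pose: (B, pose_dim)
        return feats + self.pose_mlp(pose).unsqueeze(1)


class PixelToElevationSketch(nn.Module):
    """Cross-view transformer that regresses a BEV elevation map from image tokens."""

    def __init__(self, dim: int = 256, bev_size: int = 64, num_layers: int = 2):
        super().__init__()
        self.bev_size = bev_size
        # Learnable BEV map queries, one per elevation-map cell.
        self.map_queries = nn.Parameter(torch.randn(bev_size * bev_size, dim) * 0.02)
        # Embeds the previous elevation prediction (history augmentation).
        self.history_proj = nn.Linear(1, dim)
        self.pos_enc = OrientationAwarePE(dim)
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.elev_head = nn.Linear(dim, 1)  # per-cell elevation regression

    def forward(self, image_tokens, pose, prev_elevation=None):
        # image_tokens: (B, N_tokens, dim); pose: (B, 6);
        # prev_elevation: (B, bev_size, bev_size) or None on the first frame.
        B = image_tokens.shape[0]
        queries = self.map_queries.unsqueeze(0).expand(B, -1, -1)
        if prev_elevation is not None:
            # Fold the previous map prediction into the queries for temporal consistency.
            queries = queries + self.history_proj(prev_elevation.reshape(B, -1, 1))
        keys = self.pos_enc(image_tokens, pose)          # orientation-aware image features
        decoded = self.decoder(tgt=queries, memory=keys)  # cross-view attention
        return self.elev_head(decoded).reshape(B, self.bev_size, self.bev_size)


# Usage with random tensors standing in for image features and vehicle pose:
model = PixelToElevationSketch()
tokens = torch.randn(2, 400, 256)  # e.g. flattened multi-view CNN/ViT features
pose = torch.randn(2, 6)
elev_t0 = model(tokens, pose)                                    # first frame, no history
elev_t1 = model(tokens, pose, prev_elevation=elev_t0.detach())   # subsequent frame
```

The sketch only illustrates the data flow implied by the abstract (image tokens plus pose in, BEV elevation map out, with the previous prediction fed back in); the paper's actual encoder, losses, and map resolution may differ.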