Few-shot Semantic Learning for Robust Multi-Biome 3D Semantic Mapping in Off-Road Environments
DOI:
10.48550/arXiv.2411.06632
Publication Date:
2024-11-10
AUTHORS (10)
ABSTRACT
Off-road environments pose significant perception challenges for high-speed autonomous navigation due to unstructured terrain, degraded sensing conditions, and domain-shifts among biomes. Learning semantic information across these conditions and biomes can be challenging when a large amount of ground truth data is required. In this work, we propose an approach that leverages a pre-trained Vision Transformer (ViT) with fine-tuning on a small (<500 images), sparse and coarsely labeled (<30% of pixels) multi-biome dataset to predict 2D semantic segmentation classes. These classes are fused over time via a novel range-based metric and aggregated into a 3D semantic voxel map. We demonstrate zero-shot out-of-biome 2D semantic segmentation on the Yamaha (52.9 mIoU) and Rellis (55.5 mIoU) datasets, along with few-shot coarse sparse labeling with existing data for improved segmentation performance (66.6 mIoU and 67.2 mIoU, respectively). We further illustrate the feasibility of using the voxel map with range-based semantic fusion to handle common off-road hazards such as pop-up hazards, overhangs, and water features.
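The two technical ingredients the abstract describes, (i) fine-tuning a pre-trained ViT head on sparsely labeled images and (ii) fusing per-pixel class probabilities into a 3D voxel map with a range-dependent weight, can be illustrated with a short sketch. This is not the authors' code: the `VitSegmenter` wrapper, the `IGNORE_INDEX` convention, and the inverse-square range weighting in `fuse_into_voxels` are hypothetical stand-ins, and the paper's actual range-based metric is not reproduced here.

```python
import torch
import torch.nn as nn
import numpy as np
from collections import defaultdict

IGNORE_INDEX = 255  # hypothetical label value for the >70% of pixels left unlabeled


class VitSegmenter(nn.Module):
    """Frozen pre-trained ViT feature extractor plus a lightweight per-pixel head,
    matching the few-shot regime (small dataset, only the head is trained)."""

    def __init__(self, backbone: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # keep the pre-trained ViT frozen
        self.head = nn.Conv2d(feat_dim, num_classes, kernel_size=1)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # Assumes the backbone returns a (B, C, h, w) patch-feature grid.
        feats = self.backbone(images)
        logits = self.head(feats)  # (B, K, h, w)
        return nn.functional.interpolate(  # upsample to full label resolution
            logits, size=images.shape[-2:], mode="bilinear", align_corners=False)


# Sparse, coarse labels: the loss simply skips unlabeled pixels.
criterion = nn.CrossEntropyLoss(ignore_index=IGNORE_INDEX)


def fuse_into_voxels(points, ranges, class_probs, voxel_map=None,
                     voxel_size=0.2, r0=10.0):
    """Aggregate 2D class probabilities (projected onto 3D points) into voxels.

    points:      (N, 3) world coordinates of observed points
    ranges:      (N,)   sensor range of each observation
    class_probs: (N, K) softmax output of the 2D segmenter for each point
    """
    num_classes = class_probs.shape[1]
    if voxel_map is None:
        voxel_map = defaultdict(lambda: np.zeros(num_classes))
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Stand-in range weighting: nearer observations are trusted more.
    weights = 1.0 / (1.0 + (ranges / r0) ** 2)
    for key, w, p in zip(map(tuple, keys), weights, class_probs):
        # Weighted log-probability accumulation per voxel.
        voxel_map[key] += w * np.log(np.clip(p, 1e-9, 1.0))
    return voxel_map
```

For a voxel `v`, the fused class is `np.argmax(voxel_map[v])`; running the same accumulation over successive frames gives the temporal fusion described in the abstract, with far-range (less reliable) observations contributing less evidence.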