Semantically Safe Robot Manipulation: From Semantic Scene Understanding to Motion Safeguards
FOS: Computer and information sciences
Computer Science - Robotics
Robotics (cs.RO)
DOI: 10.48550/arxiv.2410.15185
Publication Date: 2024-10-19
AUTHORS (7)
ABSTRACT
Ensuring safe interactions in human-centric environments requires robots to understand and adhere to constraints recognized by humans as "common sense" (e.g., "moving a cup of water above a laptop is unsafe as the water may spill" or "rotating the cup can lead to pouring its content"). Recent advances in computer vision and machine learning have enabled robots to acquire a semantic understanding of, and reason about, their operating environments. While extensive literature on safe robot decision-making exists, semantic understanding is rarely integrated into these formulations. In this work, we propose a semantic safety filter framework that certifies robot inputs with respect to semantically defined constraints (e.g., unsafe spatial relationships, behaviours, and poses) and geometrically defined constraints (e.g., environment-collision and self-collision constraints). In our proposed approach, given perception inputs, we build a semantic map of the 3D environment and leverage the contextual reasoning capabilities of large language models to infer semantically unsafe conditions. These conditions are then mapped to safe actions through a control barrier certification formulation. We evaluated our approach in teleoperated tabletop manipulation tasks and pick-and-place tasks, demonstrating its effectiveness in incorporating semantic constraints to ensure safe operation beyond collision avoidance.
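To make the control barrier certification step concrete, the Python sketch below shows a generic control barrier function (CBF) safety filter for a single-integrator end-effector model. The barrier (a keep-out disk above a laptop, standing in for an LLM-inferred "cup above laptop is unsafe" condition), the function and variable names, the gain `alpha`, and all numerical values are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def cbf_safety_filter(p, u_nom, laptop_xy, radius, alpha=1.0):
    """Minimally modify a nominal end-effector velocity so the cup's
    horizontal position stays outside a disk above the laptop.

    Barrier: h(p) = ||p_xy - laptop_xy||^2 - radius^2, with h >= 0 safe.
    For single-integrator kinematics p_dot = u, the CBF condition
        grad_h(p) . u >= -alpha * h(p)
    is linear in u, so the minimally-invasive QP has a closed form.
    """
    diff_xy = p[:2] - laptop_xy
    h = diff_xy @ diff_xy - radius**2                          # barrier value
    grad_h = np.array([2.0 * diff_xy[0], 2.0 * diff_xy[1], 0.0])  # dh/dp
    b = -alpha * h                                             # lower bound on grad_h . u
    slack = grad_h @ u_nom - b
    if slack >= 0.0:
        return u_nom                                           # nominal input already certified
    # Project onto the half-space {u : grad_h . u >= b} (closed-form QP solution).
    return u_nom + (-slack / (grad_h @ grad_h)) * grad_h

# Usage: a teleoperation command pushing the cup toward the laptop is deflected.
p = np.array([0.30, 0.05, 0.20])        # current cup position (m), hypothetical
u_nom = np.array([-0.20, 0.0, 0.0])     # operator velocity command (m/s)
u_safe = cbf_safety_filter(p, u_nom, laptop_xy=np.array([0.0, 0.0]), radius=0.25)
print(u_safe)
```

Because the constraint is affine in the input, the projection above avoids a QP solver dependency; in a full system, several semantic and geometric constraints of this form would typically be stacked into a single quadratic program that is solved at each control step.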