Learning Adaptive Dexterous Grasping from Single Demonstrations

FOS: Computer and information sciences; Artificial Intelligence (cs.AI); Robotics (cs.RO); Machine Learning (cs.LG)
DOI: 10.48550/arxiv.2503.20208
Publication Date: 2025-03-26
ABSTRACT
How can robots learn dexterous grasping skills efficiently and apply them adaptively based on user instructions? This work tackles two key challenges: efficient skill acquisition from limited human demonstrations and context-driven skill selection. We introduce AdaDexGrasp, a framework that learns a library of grasping skills from a single human demonstration per skill and selects the most suitable one using a vision-language model (VLM). To improve sample efficiency, we propose a trajectory-following reward that guides reinforcement learning (RL) toward states close to the demonstration while allowing flexibility in exploration. To generalize beyond the single demonstration, we employ curriculum learning, progressively increasing object pose variations to enhance robustness. At deployment, a VLM retrieves the appropriate skill based on user instructions, bridging low-level learned skills with high-level user intent. We evaluate AdaDexGrasp in both simulation and real-world settings, showing that our approach significantly improves RL efficiency and enables human-like grasp strategies across varied object configurations. Finally, we demonstrate zero-shot transfer of the learned policies to the PSYONIC Ability Hand, achieving a 90% success rate across objects and outperforming the baseline.
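The abstract mentions two training ingredients, a trajectory-following reward and a pose-variation curriculum, without giving their exact form. The sketch below illustrates one plausible reading of each idea; the specific shapes (a Gaussian proximity term blended with the task reward via a weight beta, and a linearly growing pose-noise range indexed by a curriculum stage) and every parameter name (sigma, beta, stage, max_pos_noise, max_yaw_noise) are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def trajectory_following_reward(state, demo_states, sigma=0.1, task_reward=0.0, beta=0.5):
    """Reward proximity to the nearest state of a single demonstration,
    blended with the environment's task reward so exploration beyond the
    demonstration remains possible. (Assumed form, for illustration only.)"""
    dists = np.linalg.norm(demo_states - state, axis=1)       # distance to each demo state
    follow = np.exp(-np.min(dists) ** 2 / (2 * sigma ** 2))   # soft "stay near the demo" term
    return beta * follow + (1.0 - beta) * task_reward

def sample_object_pose(stage, max_stage=10, max_pos_noise=0.05, max_yaw_noise=np.pi / 4):
    """Curriculum sketch: widen the initial object-pose distribution as
    training progresses, from the demonstrated pose toward larger variations."""
    scale = min(stage, max_stage) / max_stage                  # 0 -> no variation, 1 -> full range
    xy_offset = np.random.uniform(-scale * max_pos_noise, scale * max_pos_noise, size=2)
    yaw_offset = np.random.uniform(-scale * max_yaw_noise, scale * max_yaw_noise)
    return xy_offset, yaw_offset
```

In this reading, the follow term keeps early RL exploration anchored to the demonstrated grasp trajectory, while the task-reward component and the gradually widening pose distribution let the policy extend beyond the single demonstration.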