Learning Agile Robotic Locomotion Skills by Imitating Animals
FOS: Computer and information sciences
Computer Science - Robotics
Computer Science - Machine Learning
0209 industrial biotechnology
02 engineering and technology
Robotics (cs.RO)
Machine Learning (cs.LG)
DOI:
10.15607/rss.2020.xvi.064
Publication Date:
2020-06-30T14:47:47Z
AUTHORS (6)
ABSTRACT
Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics. While manually-designed controllers have been able to emulate many complex behaviors, building such controllers involves a time-consuming and difficult development process, often requiring substantial expertise in the nuances of each skill. Reinforcement learning provides an appealing alternative for automating the manual effort involved in the development of controllers. However, designing learning objectives that elicit the desired behaviors from an agent can also require a great deal of skill-specific expertise. In this work, we present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals. We show that by leveraging reference motion data, a single learning-based approach is able to automatically synthesize controllers for a diverse repertoire of behaviors for legged robots. By incorporating sample-efficient domain adaptation techniques into the training process, our system is able to learn adaptive policies in simulation that can then be quickly adapted for real-world deployment. To demonstrate the effectiveness of our system, we train an 18-DoF quadruped robot to perform a variety of agile behaviors ranging from different locomotion gaits to dynamic hops and turns.
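The abstract describes a motion-imitation setup in which a policy is rewarded for tracking frames of a reference (retargeted animal) motion. The snippet below is a minimal sketch of one such pose-tracking reward; the specific terms, weights, and scale constants are illustrative assumptions, not the reward formulation published in the paper.

```python
# Sketch of a pose-tracking imitation reward, assuming an exponentiated
# tracking-error form. Weights and scales (w_pose, w_vel, k_pose, k_vel)
# are hypothetical placeholders for illustration only.
import numpy as np

def imitation_reward(joint_pos, joint_vel, ref_joint_pos, ref_joint_vel,
                     w_pose=0.6, w_vel=0.4, k_pose=5.0, k_vel=0.1):
    """Reward the robot for matching the current reference-motion frame.

    joint_pos, joint_vel:         current robot joint angles / velocities
    ref_joint_pos, ref_joint_vel: corresponding reference-frame targets
    """
    pose_err = np.sum((ref_joint_pos - joint_pos) ** 2)
    vel_err = np.sum((ref_joint_vel - joint_vel) ** 2)
    r_pose = np.exp(-k_pose * pose_err)  # approaches 1 as poses align
    r_vel = np.exp(-k_vel * vel_err)
    return w_pose * r_pose + w_vel * r_vel

# Example usage with a generic joint vector (dimension chosen arbitrarily):
n = 12
q, q_ref = np.zeros(n), 0.05 * np.ones(n)
print(imitation_reward(q, np.zeros(n), q_ref, np.zeros(n)))
```

In this style of reward, a per-timestep value near 1 indicates close tracking of the reference frame, and the reward is typically combined with additional terms (e.g., end-effector and root tracking) in the full system.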