Vision-based Robotic Arm Imitation by Human Gesture
Monocular
DOI: 10.48550/arxiv.1703.04906
Publication Date: 2017-01-01
AUTHORS (2)
ABSTRACT
One of the most efficient ways for a learning-based robotic arm to learn to perform complex tasks as a human does is to observe directly how a human completes those tasks, and then to imitate. Our idea builds on the success of the Deep Q-Learning (DQN) algorithm in reinforcement learning and extends it with the Deep Deterministic Policy Gradient (DDPG) algorithm. We developed a method that combines a modified DDPG with a visual imitation network. The approach acquires frames from only a monocular camera, with no need to either construct a 3D environment or generate actual points. The result we expected during training was that the robot would be able to move in almost the same way the human hands did.
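The abstract does not include the authors' implementation; as context for how DDPG extends DQN to continuous control, here is a minimal, hypothetical sketch (all network shapes, hyperparameters, and variable names are assumptions, with tiny linear stand-ins for the real networks) of two standard DDPG ingredients: the critic's Bellman target computed with target networks, and the soft (Polyak) update that slowly tracks the online weights.

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, action_dim, tau, gamma = 4, 2, 0.005, 0.99

# Linear stand-ins for the actor mu(s) and critic Q(s, a).
W_actor = rng.normal(size=(action_dim, state_dim))
W_critic = rng.normal(size=(state_dim + action_dim,))
# Target networks start as copies of the online networks.
W_actor_t = W_actor.copy()
W_critic_t = W_critic.copy()

def mu(s, W):
    """Deterministic policy: continuous action from a state."""
    return W @ s

def q(s, a, w):
    """State-action value estimate."""
    return w @ np.concatenate([s, a])

# Bellman target for one sampled transition (s, a, r, s'),
# using the *target* actor and critic for stability.
s, s_next = rng.normal(size=state_dim), rng.normal(size=state_dim)
r = 1.0
y = r + gamma * q(s_next, mu(s_next, W_actor_t), W_critic_t)

# Soft update: theta_target <- tau * theta + (1 - tau) * theta_target
W_actor_t = tau * W_actor + (1 - tau) * W_actor_t
W_critic_t = tau * W_critic + (1 - tau) * W_critic_t
```

Unlike DQN, which maximizes over a discrete action set, the deterministic actor `mu` outputs a continuous action directly, which is what makes the approach applicable to robotic arm joint control.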