Closing the Loop for Robotic Grasping: A Real-time, Generative Grasp Synthesis Approach
Generative model
Feedback loop
DOI: 10.15607/rss.2018.xiv.021
Publication Date: 2018-08-01
AUTHORS (3)
Douglas Morrison, Peter Corke, Jürgen Leitner
ABSTRACT
This paper presents a real-time, object-independent grasp synthesis method which can be used for closed-loop grasping. Our proposed Generative Grasping Convolutional Neural Network (GG-CNN) predicts the quality and pose of grasps at every pixel. This one-to-one mapping from a depth image overcomes limitations of current deep-learning grasping techniques by avoiding discrete sampling of grasp candidates and long computation times. Additionally, our GG-CNN is orders of magnitude smaller while detecting stable grasps with equivalent performance to current state-of-the-art techniques. The lightweight and single-pass generative nature of our GG-CNN allows for closed-loop control at up to 50Hz, enabling accurate grasping in non-static environments where objects move and in the presence of robot control inaccuracies. In real-world tests, we achieve an 83% grasp success rate on a set of previously unseen objects with adversarial geometry and 88% on a set of household objects that are moved during the grasp attempt. We also achieve 81% accuracy when grasping in dynamic clutter.
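ILLUSTRATIVE CODE SKETCH
To make the per-pixel, single-pass idea in the abstract concrete, below is a minimal PyTorch sketch of a small fully convolutional network that maps a depth image to dense grasp maps. The layer sizes, the 300x300 input, and the GraspNetSketch name are illustrative assumptions, not the authors' implementation; only the output parameterization (a quality map plus the grasp angle encoded as cos(2*angle) and sin(2*angle), and a gripper-width map) follows the paper.

    # Sketch only: assumed layer sizes, not the authors' GG-CNN.
    # A fully convolutional net mapping a depth image to per-pixel grasp maps.
    import torch
    import torch.nn as nn

    class GraspNetSketch(nn.Module):
        """Depth image -> per-pixel quality, cos(2*angle), sin(2*angle), width."""

        def __init__(self):
            super().__init__()
            self.encode = nn.Sequential(
                nn.Conv2d(1, 16, 9, stride=3, padding=4), nn.ReLU(),
                nn.Conv2d(16, 16, 5, stride=2, padding=2), nn.ReLU(),
            )
            self.decode = nn.Sequential(
                nn.ConvTranspose2d(16, 16, 5, stride=2, padding=2,
                                   output_padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 16, 9, stride=3, padding=4,
                                   output_padding=2), nn.ReLU(),
            )
            # One 1x1 head per output map, each at the input resolution.
            self.heads = nn.ModuleList(nn.Conv2d(16, 1, 1) for _ in range(4))

        def forward(self, depth):
            x = self.decode(self.encode(depth))
            quality, cos2, sin2, width = (h(x) for h in self.heads)
            return quality, cos2, sin2, width

    # Usage: the best grasp is the argmax of the quality map, with the angle
    # recovered from the two-component encoding via atan2.
    net = GraspNetSketch()
    q, c, s, w = net(torch.randn(1, 1, 300, 300))  # placeholder depth image
    angle = 0.5 * torch.atan2(s, c)                # per-pixel grasp angle
    row, col = divmod(int(q.flatten().argmax()), q.shape[-1])

Because one forward pass scores every pixel at once, re-running the network on each new depth frame is cheap, which is what lets a method of this kind close the visual feedback loop at the 50Hz rate the abstract cites.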