Keypoint Action Tokens Enable In-Context Imitation Learning in Robotics

FOS: Computer and information sciences
Subjects: Robotics (cs.RO); Machine Learning (cs.LG); Neural and Evolutionary Computing (cs.NE)
DOI: 10.48550/arxiv.2403.19578
Publication Date: 2024-03-28
ABSTRACT
We show that off-the-shelf text-based Transformers, with no additional training, can perform few-shot in-context visual imitation learning, mapping visual observations to action sequences that emulate the demonstrator's behaviour. We achieve this by transforming visual observations (inputs) and trajectories of actions (outputs) into sequences of tokens that a text-pretrained Transformer (GPT-4 Turbo) can ingest and generate, via a framework we call Keypoint Action Tokens (KAT). Despite being trained only on language, we show that these Transformers excel at translating tokenised visual keypoint observations into action trajectories, performing on par or better than state-of-the-art imitation learning (diffusion policies) in the low-data regime on a suite of real-world, everyday tasks. Rather than operating in the language domain as is typical, KAT leverages text-based Transformers to operate in the vision and action domains to learn general patterns in demonstration data for highly efficient few-shot imitation learning, indicating promising new avenues for repurposing natural language models for embodied tasks. Videos are available at https://www.robot-learning.uk/keypoint-action-tokens.
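The abstract outlines the KAT pipeline: serialise keypoint observations and action trajectories as token sequences, then let a text-pretrained Transformer continue the pattern in-context. Below is a minimal sketch of that scheme, assuming the official OpenAI Python client; the tokenisation format and the helpers tokenise and build_prompt are hypothetical illustrations, not the paper's actual prompting details.

    import numpy as np
    from openai import OpenAI  # assumes the official OpenAI Python client is installed

    def tokenise(array, decimals=2):
        """Flatten a numeric trajectory into a compact text token sequence."""
        flat = np.round(np.asarray(array, dtype=float).ravel(), decimals)
        return " ".join(f"{v:g}" for v in flat)

    def build_prompt(demos, query_keypoints):
        """Few-shot prompt: each demonstration maps keypoint tokens to action tokens."""
        lines = ["Map keypoint observations to action trajectories."]
        for keypoints, actions in demos:
            lines.append("Keypoints: " + tokenise(keypoints))
            lines.append("Actions: " + tokenise(actions))
        lines.append("Keypoints: " + tokenise(query_keypoints))
        lines.append("Actions:")  # the model completes the action token sequence
        return "\n".join(lines)

    # Toy stand-in data: a few (keypoints, actions) demonstrations and a new observation.
    rng = np.random.default_rng(0)
    demos = [(rng.random((4, 2)), rng.random((6, 3))) for _ in range(3)]
    query_keypoints = rng.random((4, 2))

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": build_prompt(demos, query_keypoints)}],
    )
    reply = response.choices[0].message.content

    # Parse the continuation back into numbers; real model output may need
    # more robust parsing, e.g. stripping non-numeric text before splitting.
    predicted = np.array([float(tok) for tok in reply.split()])
    print(predicted)  # reshape to (-1, 3) if a whole number of 3-D actions came back

The point of the sketch is that numeric trajectories become just another token sequence, so the language model's ordinary in-context pattern completion serves as the imitation-learning mechanism, with no gradient updates.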