MoConVQ: Unified Physics-Based Motion Control via Scalable Discrete Representations
KEYWORDS
Representation, Match moving
DOI:
10.48550/arXiv.2310.10198
Publication Date:
2023-10
AUTHORS (6)
Heyuan Yao, Zhenhua Song, Yuyang Zhou, Tenglong Ao, Baoquan Chen, Libin Liu
ABSTRACT
In this work, we present MoConVQ, a novel unified framework for physics-based motion control leveraging scalable discrete representations. Building upon vector quantized variational autoencoders (VQ-VAE) and model-based reinforcement learning, our approach effectively learns motion embeddings from a large, unstructured dataset spanning tens of hours of motion examples. The resultant motion representation not only captures diverse motion skills but also offers a robust and intuitive interface for various applications. We demonstrate the versatility of MoConVQ through several applications: universal tracking control from various motion sources, interactive character control with latent motion representations using supervised learning, physics-based motion generation from natural language descriptions using the GPT framework, and, most interestingly, seamless integration with large language models (LLMs) with in-context learning to tackle complex and abstract tasks.