Universal Facial Encoding of Codec Avatars from VR Headsets
Computer Vision and Pattern Recognition (cs.CV)
Machine Learning (cs.LG)
DOI: 10.1145/3658234
Publication Date: 2024-07-19
AUTHORS (10)
ABSTRACT
Faithful real-time facial animation is essential for avatar-mediated telepresence in Virtual Reality (VR). To emulate authentic communication, avatar animation needs to be both efficient and accurate: able to capture extreme as well as subtle expressions within a few milliseconds to sustain the rhythm of natural conversation. Oblique and incomplete views of the face, variability in how headsets are donned, and illumination changes due to the environment are some of the unique challenges in generalizing to unseen faces. In this paper, we present a method that can animate a photorealistic avatar in real time from head-mounted cameras (HMCs) on a consumer VR headset. We present a self-supervised learning approach, based on a cross-view reconstruction objective, that enables generalization to unseen users. We present a lightweight expression calibration mechanism that increases accuracy with minimal additional cost to run-time efficiency. We present an improved parameterization for precise ground-truth generation that provides robustness to environmental variation. The resulting system produces accurate facial animation in real time for unseen users wearing VR headsets. We compare our approach to prior face-encoding methods, demonstrating significant improvements in both quantitative metrics and qualitative results.
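The abstract's cross-view reconstruction objective can be illustrated with a minimal sketch: an encoder maps each head-mounted-camera view to a shared expression code, and that code must reconstruct the image seen from a *different* camera, discouraging view-specific shortcuts. All names, shapes, and the stand-in linear encoder/decoder below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

N_VIEWS, IMG_DIM, CODE_DIM = 3, 64, 8

# Stand-in encoder/decoder: one linear map per camera view
# (a real system would use convolutional networks and a neural renderer).
enc = [rng.standard_normal((CODE_DIM, IMG_DIM)) * 0.1 for _ in range(N_VIEWS)]
dec = [rng.standard_normal((IMG_DIM, CODE_DIM)) * 0.1 for _ in range(N_VIEWS)]

def cross_view_loss(images):
    """Mean squared error of reconstructing view j from view i's code, i != j."""
    total, pairs = 0.0, 0
    for i in range(N_VIEWS):
        code = enc[i] @ images[i]          # expression code from view i
        for j in range(N_VIEWS):
            if i == j:
                continue                   # cross-view pairs only
            recon = dec[j] @ code          # predict what view j should see
            total += np.mean((recon - images[j]) ** 2)
            pairs += 1
    return total / pairs

# Synthetic per-view frames standing in for simultaneous HMC captures.
frames = [rng.standard_normal(IMG_DIM) for _ in range(N_VIEWS)]
loss = cross_view_loss(frames)
print(f"cross-view reconstruction loss: {loss:.4f}")
```

Minimizing this loss over many users' captures pushes the per-view encoders toward a user- and view-independent expression code, which is the property the paper leverages for generalization to unseen users.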