AniFaceDiff: High-Fidelity Face Reenactment via Facial Parametric Conditioned Diffusion Models
KEYWORDS
High fidelity, Parametric model
DOI:
10.48550/arXiv.2406.13272
Publication Date:
2024-06-19
AUTHORS (10)
ABSTRACT
Face reenactment refers to the process of transferring the pose and facial expressions from a reference (driving) video onto a static facial (source) image while maintaining the original identity of the source image. Previous research in this domain has made significant progress by training controllable deep generative models to generate faces based on specific identity, pose, and expression conditions. However, the mechanisms these methods use to control pose and expression often inadvertently introduce identity information from the driving video, while also causing a loss of expression-related details. This paper proposes a new method based on Stable Diffusion, called AniFaceDiff, which incorporates a new conditioning module for high-fidelity face reenactment. First, we propose an enhanced 2D facial snapshot conditioning approach with facial shape alignment to prevent the inclusion of identity information from the driving video. Second, we introduce an expression adapter conditioning mechanism to address the potential loss of expression-related information. Our approach effectively preserves the pose and expression fidelity of the driving video while retaining the identity and fine details of the source image. Through experiments on the VoxCeleb dataset, we demonstrate that our method achieves state-of-the-art results in face reenactment, showcasing superior image quality, identity preservation, and expression accuracy, especially in cross-identity scenarios. Considering the ethical concerns surrounding potential misuse, we also analyze the implications of our method, evaluate current deepfake detectors, and identify their shortcomings to guide future research.
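The sketch below illustrates, at a very high level, the conditioning flow the abstract describes: a 2D facial snapshot rendered from the source shape combined with the driving pose and expression (so driving identity never enters the condition), plus an expression adapter that supplies extra expression tokens to a diffusion denoiser. It is a minimal, self-contained PyTorch sketch under assumed shapes and placeholder modules (FacialSnapshotRenderer, ExpressionAdapter, ToyConditionedUNet are hypothetical stand-ins), not the authors' released implementation or the actual Stable Diffusion architecture.

```python
# Illustrative sketch only: module names, dimensions, and wiring are assumptions.
import torch
import torch.nn as nn


class FacialSnapshotRenderer(nn.Module):
    """Placeholder for the enhanced 2D facial snapshot: produces a conditioning
    image from 3DMM-style parameters, combining the SOURCE shape with the
    DRIVING pose/expression so driving identity (shape) is excluded."""
    def __init__(self, param_dim=64, image_size=64):
        super().__init__()
        self.image_size = image_size
        self.decode = nn.Linear(param_dim * 3, image_size * image_size * 3)

    def forward(self, src_shape, drv_pose, drv_expr):
        params = torch.cat([src_shape, drv_pose, drv_expr], dim=-1)
        return self.decode(params).view(-1, 3, self.image_size, self.image_size)


class ExpressionAdapter(nn.Module):
    """Placeholder adapter: maps driving expression coefficients to extra
    conditioning tokens, compensating for expression detail the snapshot loses."""
    def __init__(self, expr_dim=64, token_dim=256, n_tokens=4):
        super().__init__()
        self.proj = nn.Linear(expr_dim, token_dim * n_tokens)
        self.n_tokens, self.token_dim = n_tokens, token_dim

    def forward(self, drv_expr):
        return self.proj(drv_expr).view(-1, self.n_tokens, self.token_dim)


class ToyConditionedUNet(nn.Module):
    """Stand-in for a Stable-Diffusion-style denoiser: takes the noisy latent,
    the snapshot condition (channel-concatenated) and the adapter tokens
    (pooled here for simplicity instead of cross-attention)."""
    def __init__(self, latent_ch=4, token_dim=256):
        super().__init__()
        self.body = nn.Conv2d(latent_ch + 3, latent_ch, kernel_size=3, padding=1)
        self.token_to_bias = nn.Linear(token_dim, latent_ch)

    def forward(self, noisy_latent, snapshot, expr_tokens):
        snapshot = nn.functional.interpolate(snapshot, size=noisy_latent.shape[-2:])
        x = torch.cat([noisy_latent, snapshot], dim=1)
        bias = self.token_to_bias(expr_tokens.mean(dim=1))[:, :, None, None]
        return self.body(x) + bias  # predicted noise


if __name__ == "__main__":
    B, param_dim = 2, 64
    src_shape = torch.randn(B, param_dim)   # shape params from the source image
    drv_pose = torch.randn(B, param_dim)    # pose params from the driving frame
    drv_expr = torch.randn(B, param_dim)    # expression params from the driving frame
    noisy_latent = torch.randn(B, 4, 32, 32)

    snapshot = FacialSnapshotRenderer(param_dim)(src_shape, drv_pose, drv_expr)
    tokens = ExpressionAdapter(param_dim)(drv_expr)
    noise_pred = ToyConditionedUNet()(noisy_latent, snapshot, tokens)
    print(noise_pred.shape)  # torch.Size([2, 4, 32, 32])
```

The key design point the sketch captures is the split of conditioning signals: the rendered snapshot carries geometry (source shape, driving pose/expression) while the adapter tokens carry finer expression information, both feeding the denoiser separately.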