Face Adapter for Pre-Trained Diffusion Models with Fine-Grained ID and Attribute Control

DOI: 10.48550/arxiv.2405.12970 Publication Date: 2024-05-21
ABSTRACT
Current face reenactment and swapping methods mainly rely on GAN frameworks, but recent focus has shifted to pre-trained diffusion models for their superior generation capabilities. However, training these models is resource-intensive, and the results have not yet achieved satisfactory performance levels. To address this issue, we introduce Face-Adapter, an efficient and effective adapter designed for high-precision and high-fidelity face editing with pre-trained diffusion models. We observe that both face reenactment and swapping tasks essentially involve combinations of target structure, ID, and attribute. We aim to sufficiently decouple the control of these factors to achieve both tasks in one model. Specifically, our method contains: 1) A Spatial Condition Generator that provides precise landmarks and background; 2) A Plug-and-play Identity Encoder that transfers face embeddings to the text space via a transformer decoder; 3) An Attribute Controller that integrates spatial conditions and detailed attributes. Face-Adapter achieves comparable or even superior performance in terms of motion control precision, ID retention capability, and generation quality compared with fully fine-tuned models. Additionally, Face-Adapter integrates seamlessly with various StableDiffusion models.
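The abstract describes a three-part conditioning pipeline: a Spatial Condition Generator for target structure and background, an Identity Encoder that projects face ID embeddings into the text-token space, and an Attribute Controller for ID-irrelevant details. A minimal sketch of how such decoupled conditioning signals might be assembled is shown below; all class names, function names, and the placeholder computations are illustrative assumptions, not the authors' released API.

```python
# Hypothetical sketch of Face-Adapter-style decoupled conditioning.
# Every name and computation here is a placeholder assumption.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Conditioning:
    landmarks: List[float]   # target structure from the Spatial Condition Generator
    background: List[float]  # preserved background region
    id_tokens: List[float]   # face ID embedding projected into the text-token space
    attributes: List[float]  # ID-irrelevant details from the Attribute Controller


def spatial_condition_generator(target_image: List[float]) -> Tuple[List[float], List[float]]:
    """Stand-in: predict target landmarks and a background signal."""
    landmarks = [round(v * 0.5, 3) for v in target_image]  # placeholder "landmarks"
    background = list(target_image)                        # placeholder background
    return landmarks, background


def identity_encoder(face_embedding: List[float], n_tokens: int = 4) -> List[float]:
    """Stand-in for the transformer decoder that maps an ID embedding
    into pseudo text tokens consumed by the frozen diffusion model."""
    return face_embedding * n_tokens  # trivial token expansion


def attribute_controller(source_image: List[float]) -> List[float]:
    """Stand-in: extract ID-irrelevant attributes (lighting, expression)."""
    return [round(v * 0.1, 3) for v in source_image]


def build_conditioning(source: List[float], target: List[float]) -> Conditioning:
    """Combine the three decoupled control signals into one conditioning set."""
    landmarks, background = spatial_condition_generator(target)
    return Conditioning(
        landmarks=landmarks,
        background=background,
        id_tokens=identity_encoder(source[:2]),
        attributes=attribute_controller(source),
    )


cond = build_conditioning(source=[0.9, 0.1, 0.5], target=[0.2, 0.8, 0.4])
print(len(cond.id_tokens))  # 2 embedding values x 4 tokens = 8
```

The point of the sketch is only the decoupling: structure, identity, and attributes are produced by separate modules and merged into a single conditioning object, so either reenactment or swapping can be expressed by choosing which image supplies which signal.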