FLNet: Landmark Driven Fetching and Learning Network for Faithful Talking Facial Animation Synthesis
Landmark
Image warping
View synthesis
DOI: 10.1609/aaai.v34i07.6717
Publication Date: 2020-06-29T18:43:38Z
AUTHORS (3)
ABSTRACT
Talking face synthesis has been widely studied in either appearance-based or warping-based methods. Previous works mostly utilize a single image as the source and generate novel facial animations by merging another person's facial features. However, some regions, such as the eyes and teeth, may be hidden in the source image and cannot be synthesized faithfully and stably. In this paper, we present a landmark driven two-stream network for faithful talking facial animation, in which more facial details are created, preserved, and transferred from multiple source images instead of one. Specifically, we propose a network consisting of a learning stream and a fetching stream. The fetching sub-net directly learns to attentively warp and merge facial regions from five source images with distinctive landmarks, while the learning pipeline renders facial organs from the training face space to compensate for regions hidden in the sources. Compared with baseline algorithms, extensive experiments demonstrate that the proposed method achieves higher performance both quantitatively and qualitatively. Code is available at https://github.com/kgu3/FLNet_AAAI2020.
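The abstract describes a two-stream design: a fetching stream that attentively warps and merges regions from several source images, and a learning stream whose rendering compensates for regions hidden in every source. The PyTorch sketch below is an illustration only, not the authors' FLNet implementation (that is in the linked repository); the tensor shapes, the use of grid_sample for warping, and the soft-mask blend are assumptions chosen to make the idea concrete. In the actual network, the flow fields, attention maps, and blending mask would be predicted from facial landmarks rather than supplied by hand.

# Illustrative sketch only (NOT the authors' FLNet code): attentive
# warp-and-merge of multiple sources plus a masked blend with a
# learned rendering. Shapes and helpers are assumptions.
import torch
import torch.nn.functional as F

def fetch_and_merge(sources, flows, attention):
    """Warp each source image with its sampling grid and merge attentively.

    sources:   (N, B, 3, H, W) stack of N source images
    flows:     (N, B, H, W, 2) sampling grids in [-1, 1] for grid_sample
    attention: (N, B, 1, H, W) per-source weights, softmax-normalized over N
    """
    warped = torch.stack(
        [F.grid_sample(img, grid, align_corners=True)
         for img, grid in zip(sources, flows)]
    )                                              # (N, B, 3, H, W)
    weights = torch.softmax(attention, dim=0)      # normalize across the N sources
    return (weights * warped).sum(dim=0)           # attentive merge -> (B, 3, H, W)

def combine_streams(fetched, learned, mask):
    """Blend the fetching-stream output with the learning-stream rendering.

    mask ~ 1 keeps fetched (visible) pixels; mask ~ 0 falls back to the
    learned rendering for regions hidden in every source (e.g. teeth, eyes).
    """
    return mask * fetched + (1.0 - mask) * learned

if __name__ == "__main__":
    N, B, H, W = 5, 1, 64, 64                      # five source images, as in the abstract
    sources = torch.rand(N, B, 3, H, W)
    # identity sampling grid for each source (stand-in for predicted flows)
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    identity = torch.stack([xs, ys], dim=-1).expand(B, H, W, 2)
    flows = identity.unsqueeze(0).expand(N, B, H, W, 2)
    attention = torch.rand(N, B, 1, H, W)          # stand-in for predicted attention
    learned = torch.rand(B, 3, H, W)               # stand-in for learning-stream output
    mask = torch.rand(B, 1, H, W)                  # stand-in for predicted blend mask
    out = combine_streams(fetch_and_merge(sources, flows, attention), learned, mask)
    print(out.shape)                               # torch.Size([1, 3, 64, 64])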