ActAnywhere: Subject-Aware Video Background Generation

Generative model
DOI: 10.48550/arXiv.2401.10822 Publication Date: 2024-01-01
ABSTRACT
Generating video background that tailors to foreground subject motion is an important problem for the movie industry and visual effects community. This task involves synthesizing background that aligns with the motion and appearance of the foreground subject, while also complying with the artist's creative intention. We introduce ActAnywhere, a generative model that automates this process, which traditionally requires tedious manual effort. Our model leverages the power of large-scale video diffusion models and is specifically tailored for this task. ActAnywhere takes a sequence of foreground subject segmentations as input and an image that describes the desired scene as the condition, and produces a coherent video with realistic foreground-background interactions while adhering to the condition frame. We train our model on a large-scale dataset of human-scene interaction videos. Extensive evaluations demonstrate the superior performance of our model, which significantly outperforms baselines. Moreover, we show that ActAnywhere generalizes to diverse out-of-distribution samples, including non-human subjects. Please visit our project webpage at https://actanywhere.github.io.
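The abstract specifies only the model's input/output contract: per-frame foreground segmentations plus a single condition image go in, and a composited video comes out. The following minimal PyTorch sketch illustrates that contract under stated assumptions; ActAnywherePipeline, its placeholder denoising loop, and the tensor shapes are hypothetical illustrations, not the authors' released code.

import torch

class ActAnywherePipeline(torch.nn.Module):
    # Toy stand-in for a video diffusion model conditioned on per-frame
    # foreground segmentations and one scene image (hypothetical interface).

    def forward(self, fg_frames, fg_masks, scene_image, num_steps=25):
        # fg_frames:   (T, 3, H, W) foreground subject frames
        # fg_masks:    (T, 1, H, W) binary segmentation masks
        # scene_image: (3, H, W)    condition frame describing the scene
        T = fg_frames.shape[0]
        # Initialize the background from the condition frame, then noise it,
        # mimicking a diffusion model's noisy starting point.
        video = scene_image.expand(T, -1, -1, -1).clone()
        video = video + 0.5 * torch.randn_like(video)
        for _ in range(num_steps):
            # A real denoiser would predict and remove noise here,
            # conditioned on the masked foreground and the scene image;
            # this placeholder just composites the subject over the scene.
            video = fg_masks * fg_frames + (1.0 - fg_masks) * video
        return video

# Usage sketch with made-up shapes:
T, H, W = 16, 64, 64
pipe = ActAnywherePipeline()
frames = torch.rand(T, 3, H, W)
masks = (torch.rand(T, 1, H, W) > 0.5).float()
scene = torch.rand(3, H, W)
out = pipe(frames, masks, scene)
print(out.shape)  # torch.Size([16, 3, 64, 64])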