Continuous-Multiple Image Outpainting in One-Step via Positional Query and A Diffusion-based Approach

DOI: 10.48550/arXiv.2401.15652 Publication Date: 2024-01-28
ABSTRACT
Image outpainting aims to generate the content of an input sub-image beyond its original boundaries. It is an important task in content generation yet remains an open problem for generative models. This paper pushes the technical frontier of image outpainting in two directions that have not been resolved in the literature: 1) outpainting with arbitrary and continuous multiples (without restriction), and 2) outpainting in a single step (even for large expansion multiples). Moreover, we develop a method that does not depend on a pre-trained backbone network, in contrast to what is commonly required by previous state-of-the-art (SOTA) methods. Arbitrary-multiple outpainting is achieved by utilizing randomly cropped views from the same image during training to capture relative positional information. Specifically, by feeding one view and positional embeddings as queries, we can reconstruct another view. At inference, we generate images by inputting an anchor image and its corresponding positional embeddings. The one-step outpainting ability here is particularly noteworthy, as previous methods need to be performed $N$ times to obtain a final multiple that is $N$ times their basic fixed multiple. We evaluate the proposed approach (called PQDiff, as we adopt a diffusion-based generator as our embodiment under the proposed \textbf{P}ositional \textbf{Q}uery scheme) on public benchmarks, demonstrating superior performance over state-of-the-art approaches. PQDiff achieves FID scores of \textbf{21.512}, \textbf{25.310}, and \textbf{36.212} on the Scenery, Building Facades, and WikiArts datasets, respectively. Furthermore, under the 2.25x, 5x, and 11.7x outpainting settings, PQDiff takes only \textbf{40.6\%}, \textbf{20.3\%}, and \textbf{10.2\%} of the time of the benchmark SOTA method.
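To make the positional-query training scheme concrete, below is a minimal PyTorch-style sketch of the idea described in the abstract: sample two random crops of one image, encode the second crop's position relative to the first as a continuous positional embedding, and train a diffusion denoiser to reconstruct the second view from the first view plus that positional query. All names here (`random_crop_pair`, `positional_embedding`, `training_step`), the toy noise schedule, and the `denoiser(noisy_target, anchor_view, pos_emb, t)` interface are hypothetical illustrations under stated assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of positional-query diffusion training (not PQDiff's code).
# Assumes a denoiser network with signature denoiser(noisy_target, anchor, pos_emb, t).
import torch
import torch.nn.functional as F

def random_crop_pair(image, crop=64):
    """Sample two random crops from one image; also return the offset of
    crop B relative to crop A, normalized by the image size."""
    _, H, W = image.shape
    ys = torch.randint(0, H - crop + 1, (2,))
    xs = torch.randint(0, W - crop + 1, (2,))
    views = [image[:, int(y):int(y) + crop, int(x):int(x) + crop]
             for y, x in zip(ys, xs)]
    rel = torch.tensor([float(ys[1] - ys[0]) / H, float(xs[1] - xs[0]) / W])
    return views[0], views[1], rel

def positional_embedding(rel, dim=128):
    """Sinusoidal embedding of the continuous relative offset (assumed form)."""
    freqs = torch.exp(torch.linspace(0.0, 8.0, dim // 4))
    angles = rel[:, None] * freqs[None, :]                 # (2, dim/4)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten()  # (dim,)

def training_step(denoiser, image, num_steps=1000):
    """One diffusion step: predict the noise added to view B, conditioned on
    view A and the positional query that locates B relative to A."""
    view_a, view_b, rel = random_crop_pair(image)
    pos_emb = positional_embedding(rel)
    t = torch.randint(0, num_steps, (1,))
    noise = torch.randn_like(view_b)
    alpha = 1.0 - t.float() / num_steps                    # toy linear schedule
    noisy_b = alpha.sqrt() * view_b + (1.0 - alpha).sqrt() * noise
    pred = denoiser(noisy_b[None], view_a[None], pos_emb[None], t)
    return F.mse_loss(pred, noise[None])
```

At inference, the same mechanism would run in a single sampling pass: the anchor image is fed together with positional embeddings that place it inside a larger target view, so any continuous expansion multiple can be generated without iterating the generator $N$ times.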