Generative Image Inpainting with Contextual Attention

Keywords: Inpainting, Filling-in, Copying, Generative model, Code, Texture
DOI: 10.48550/arxiv.1801.07892 Publication Date: 2018-01-01
ABSTRACT
Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is mainly due to ineffectiveness of convolutional neural networks in explicitly borrowing or copying information from distant spatial locations. On the other hand, traditional texture and patch synthesis approaches are particularly suitable when it needs to borrow textures from the surrounding regions. Motivated by these observations, we propose a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions. The model is a feed-forward, fully convolutional neural network which can process images with multiple holes at arbitrary locations and with variable sizes during the test time. Experiments on multiple datasets including faces (CelebA, CelebA-HQ), textures (DTD) and natural images (ImageNet, Places2) demonstrate that our proposed approach generates higher-quality inpainting results than existing ones. Code, demo and models are available at: https://github.com/JiahuiYu/generative_inpainting.
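
A minimal sketch of the contextual attention idea summarized in the abstract, written in PyTorch for illustration (the function name contextual_attention, the 3x3 patch size, and the softmax_scale temperature are assumptions, not the authors' code; the official TensorFlow implementation is in the repository linked above). Features inside the hole are reconstructed by softly copying background feature patches, with matching scores computed as normalized cross-correlation implemented as a convolution.

# Illustrative sketch of contextual attention (not the authors' implementation).
import torch
import torch.nn.functional as F

def contextual_attention(foreground, background, mask, patch_size=3, softmax_scale=10.0):
    """foreground, background: (1, C, H, W) feature maps; mask: (1, 1, H, W),
    with 1 inside the hole. Returns reconstructed foreground features."""
    _, C, H, W = background.shape

    # Extract 3x3 background patches and reshape them into convolution kernels.
    patches = F.unfold(background, kernel_size=patch_size, padding=patch_size // 2)  # (1, C*k*k, H*W)
    patches = patches.transpose(1, 2).reshape(H * W, C, patch_size, patch_size)

    # Normalize each patch so that the convolution below computes cosine similarity.
    kernels = patches / patches.flatten(1).norm(dim=1).clamp(min=1e-4).view(-1, 1, 1, 1)

    # Matching: correlate foreground features with every background patch.
    scores = F.conv2d(foreground, kernels, padding=patch_size // 2)  # (1, H*W, H, W)

    # Exclude patches centered inside the hole, then soft-select with a scaled softmax.
    valid = (1.0 - mask).reshape(1, H * W, 1, 1)
    attention = F.softmax(scores * softmax_scale * valid - 1e4 * (1.0 - valid), dim=1)

    # Reconstruct hole features as an attention-weighted sum of background patches,
    # implemented as a transposed convolution with the raw (unnormalized) patches.
    out = F.conv_transpose2d(attention, patches, padding=patch_size // 2) / (patch_size ** 2)
    return out * mask + foreground * (1.0 - mask)

if __name__ == "__main__":
    f = torch.randn(1, 64, 32, 32)
    b = torch.randn(1, 64, 32, 32)
    m = torch.zeros(1, 1, 32, 32)
    m[:, :, 8:24, 8:24] = 1.0  # a square hole
    print(contextual_attention(f, b, m).shape)  # torch.Size([1, 64, 32, 32])

Expressing the matching step as a convolution and the reconstruction step as a transposed convolution keeps the module fully differentiable, which is what lets it sit inside a feed-forward, fully convolutional inpainting network.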