DiffBIR: Towards Blind Image Restoration with Generative Diffusion Prior
Generative model
DOI:
10.48550/arxiv.2308.15070
Publication Date:
2023-01-01
AUTHORS (9)
ABSTRACT
We present DiffBIR, a general restoration pipeline that can handle different blind image restoration tasks in a unified framework. DiffBIR decouples the blind image restoration problem into two stages: 1) degradation removal: removing image-independent content; 2) information regeneration: generating the lost image content. Each stage is developed independently, but they work seamlessly in a cascaded manner. In the first stage, we use restoration modules to remove degradations and obtain high-fidelity restored results. For the second stage, we propose IRControlNet, which leverages the generative ability of latent diffusion models to generate realistic details. Specifically, IRControlNet is trained on specially produced condition images without distracting noisy content, for stable generation performance. Moreover, we design a region-adaptive restoration guidance that can modify the denoising process during inference without model re-training, allowing users to balance realness and fidelity through a tunable guidance scale. Extensive experiments have demonstrated DiffBIR's superiority over state-of-the-art approaches for blind image super-resolution and blind face restoration on both synthetic and real-world datasets. The code is available at https://github.com/XPixelGroup/DiffBIR.
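The two-stage design and the tunable guidance scale described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: `remove_degradations` stands in for the stage-1 restoration module (here a trivial box blur), and `guided_step` shows the core idea of a fidelity/realness trade-off, where a per-region weight and a tunable scale pull a denoised diffusion sample toward the stage-1 condition image at each inference step. All function names and the blur proxy are assumptions for illustration.

```python
import numpy as np

def remove_degradations(lq: np.ndarray) -> np.ndarray:
    """Stage-1 stand-in (hypothetical): a 3x3 box blur as a crude
    degradation-removal proxy producing the condition image."""
    pad = np.pad(lq, 1, mode="edge")
    out = np.zeros_like(lq, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += pad[1 + dy : 1 + dy + lq.shape[0],
                       1 + dx : 1 + dx + lq.shape[1]]
    return out / 9.0

def guided_step(x_denoised: np.ndarray, condition: np.ndarray,
                scale: float, region_weight: np.ndarray) -> np.ndarray:
    """Region-adaptive guidance sketch: nudge the denoised estimate
    toward the condition image, weighted per pixel. scale=0 keeps the
    generative result (realness); larger scale favors fidelity."""
    return x_denoised + scale * region_weight * (condition - x_denoised)

# Usage: restore a toy "low-quality" image, then apply one guided step.
lq = np.random.default_rng(0).random((8, 8))
cond = remove_degradations(lq)              # stage-1 condition image
x = np.random.default_rng(1).random((8, 8)) # stand-in denoised sample
w = np.ones_like(x)                         # uniform region weights
full_fidelity = guided_step(x, cond, 1.0, w)  # equals cond
full_realness = guided_step(x, cond, 0.0, w)  # equals x
```

Intermediate scale values interpolate between the two extremes, which is the user-tunable realness/fidelity balance the abstract refers to.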