Create Your World: Lifelong Text-to-Image Diffusion

DOI: 10.48550/arxiv.2309.04430 Publication Date: 2023-01-01
ABSTRACT
Text-to-image generative models can produce diverse, high-quality images of concepts from a text prompt, and have demonstrated excellent ability in image generation, image translation, etc. In this work, we study the problem of synthesizing instantiations of a user's own concepts in a never-ending manner, i.e., create your world, where new concepts from the user are quickly learned from a few examples. To achieve this goal, we propose a Lifelong text-to-image Diffusion Model (L2DM), which intends to overcome knowledge "catastrophic forgetting" of past encountered concepts and semantic "catastrophic neglecting" of one or more concepts in the current prompt. With respect to knowledge "catastrophic forgetting", our L2DM framework devises a task-aware memory enhancement module and an elastic-concept distillation module, which respectively safeguard the knowledge of prior concepts and of each past personalized concept. When generating images from a user prompt, the solution to semantic "catastrophic neglecting" is a concept attention artist module that alleviates neglecting at the concept level, and an orthogonal attention module that reduces attribute binding at the attribute level. In the end, our model can generate more faithful images across a range of continual text prompts in terms of both qualitative and quantitative metrics, when compared with related state-of-the-art models. The code will be released at https://wenqiliang.github.io/.
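The abstract does not spell out how the elastic-concept distillation works; as a rough illustration only, one common way to curb catastrophic forgetting in continual diffusion fine-tuning is to add a distillation term that keeps the current U-Net's noise predictions on past-concept prompts close to those of a frozen copy of the previously trained model. The sketch below is a hedged toy example under that assumption; the function name, the replayed-prompt inputs, and the `lambda_distill` weight are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def continual_finetune_step(unet, frozen_prev_unet, noisy_latents, timesteps,
                            new_text_emb, replay_text_emb, noise_target,
                            lambda_distill=1.0):
    """Hypothetical lifelong fine-tuning step (not the authors' L2DM code):
    denoising loss on the new concept plus a distillation loss toward a
    frozen copy of the previous model on replayed past-concept prompts."""
    # Standard diffusion objective: predict the injected noise for the new concept.
    pred_new = unet(noisy_latents, timesteps, new_text_emb)
    loss_diffusion = F.mse_loss(pred_new, noise_target)

    # Distillation objective: on past-concept prompts, match the frozen
    # previous model's predictions to mitigate catastrophic forgetting.
    with torch.no_grad():
        pred_prev = frozen_prev_unet(noisy_latents, timesteps, replay_text_emb)
    pred_replay = unet(noisy_latents, timesteps, replay_text_emb)
    loss_distill = F.mse_loss(pred_replay, pred_prev)

    return loss_diffusion + lambda_distill * loss_distill
```

In practice, `unet` and `frozen_prev_unet` would be the trainable and frozen denoisers, and `replay_text_emb` would come from stored or regenerated prompts of earlier personalized concepts; the balance between plasticity and stability is controlled by the (assumed) `lambda_distill` coefficient.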