Differentiable Augmentation for Data-Efficient GAN Training
DOI:
10.48550/arxiv.2006.10738
Publication Date:
2020-01-01
AUTHORS (5)
ABSTRACT
The performance of generative adversarial networks (GANs) heavily deteriorates given a limited amount of training data. This is mainly because the discriminator is memorizing the exact training set. To combat it, we propose Differentiable Augmentation (DiffAugment), a simple method that improves the data efficiency of GANs by imposing various types of differentiable augmentations on both real and fake samples. Previous attempts to directly augment the training data manipulate the distribution of real images, yielding little benefit; DiffAugment enables us to adopt the differentiable augmentation for the generated samples as well, which effectively stabilizes training and leads to better convergence. Experiments demonstrate consistent gains of our method over a variety of GAN architectures and loss functions for both unconditional and class-conditional generation. With DiffAugment, we achieve a state-of-the-art FID of 6.80 with an IS of 100.8 on ImageNet 128x128, and 2-4x reductions of FID given 1,000 images on FFHQ and LSUN. Furthermore, with only 20% training data, we can match the top performance on CIFAR-10 and CIFAR-100. Finally, our method can generate high-fidelity images using only 100 images without pre-training, while being on par with existing transfer learning algorithms. Code is available at https://github.com/mit-han-lab/data-efficient-gans.
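The core idea in the abstract is that the same augmentation T is applied to both real and fake samples before they reach the discriminator, in both the discriminator's and the generator's losses. The following is a minimal NumPy sketch of that data flow; the `diff_augment`, `discriminator`, and loss definitions here are illustrative stand-ins (the paper implements the augmentations as differentiable tensor operations in an autodiff framework so that gradients flow back to the generator), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def diff_augment(x, rng):
    """Toy augmentations (brightness shift, random translation) applied
    identically to any input batch. In the real method these are
    differentiable ops; this sketch only illustrates the data flow."""
    # Brightness: add a random scalar per sample (differentiable as x + b).
    b = rng.uniform(-0.5, 0.5, size=(x.shape[0], 1, 1, 1))
    x = x + b
    # Translation: roll pixels by a random offset (realized via
    # differentiable grid sampling in an actual implementation).
    shift = int(rng.integers(-2, 3))
    return np.roll(x, shift, axis=2)

def discriminator(x):
    # Placeholder score per sample; a real D would be a neural network.
    return x.mean(axis=(1, 2, 3))

# Hypothetical real batch and generator output (names are illustrative).
real = rng.normal(size=(4, 3, 8, 8))
fake = rng.normal(size=(4, 3, 8, 8))

# Key point of DiffAugment: T is imposed on BOTH real and fake samples.
d_loss = (np.mean(np.maximum(0.0, 1 - discriminator(diff_augment(real, rng))))
          + np.mean(np.maximum(0.0, 1 + discriminator(diff_augment(fake, rng)))))
g_loss = -np.mean(discriminator(diff_augment(fake, rng)))
print(d_loss, g_loss)
```

Because T is differentiable, the generator receives gradients through the augmented fake samples, which is what distinguishes this scheme from augmenting only the real training images.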