Towards a Better Global Loss Landscape of GANs

DOI: 10.48550/arxiv.2011.04926 Publication Date: 2020-01-01
ABSTRACT
Understanding of GAN training is still very limited. One major challenge is its non-convex-non-concave min-max objective, which may lead to sub-optimal local minima. In this work, we perform a global landscape analysis of the empirical loss of GANs. We prove that a class of separable-GANs, including the original JS-GAN, has exponentially many bad basins which are perceived as mode-collapse. We also study the relativistic pairing GAN (RpGAN) loss which couples the generated samples and the true samples, and prove that RpGAN has no bad basins. Experiments on synthetic data show that the predicted bad basin can indeed appear in training. We also perform experiments that support our theory that RpGAN has a better landscape than separable-GAN. For instance, RpGAN empirically performs better than separable-GAN with relatively narrow neural nets. The code is available at https://github.com/AilsaF/RS-GAN.
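The key structural difference the abstract describes can be sketched concretely. In a separable GAN (e.g. JS-GAN), the discriminator loss is a sum of one term over real samples and one term over fake samples, so each sample contributes independently; RpGAN instead scores each real sample relative to a paired fake sample, coupling the two distributions inside a single term. The following is a minimal NumPy sketch of the two discriminator losses; the function names and the use of raw (pre-sigmoid) discriminator scores are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def log_sigmoid(x):
    # Numerically stable log(sigmoid(x)) = -log(1 + exp(-x)).
    return -np.logaddexp(0.0, -x)

def separable_d_loss(d_real, d_fake):
    # Separable (JS-style) discriminator loss: the real-sample term and
    # the fake-sample term are independent sums, so the loss decomposes
    # across the two sample sets.
    return -(np.mean(log_sigmoid(d_real)) + np.mean(log_sigmoid(-d_fake)))

def rpgan_d_loss(d_real, d_fake):
    # Relativistic pairing (RpGAN) discriminator loss: each real score is
    # compared against a paired fake score, coupling generated and true
    # samples inside a single non-separable term.
    return -np.mean(log_sigmoid(d_real - d_fake))
```

For example, when the discriminator is maximally uncertain (all scores zero), the separable loss equals 2 log 2 while the RpGAN loss equals log 2, since the paired difference collapses to a single logistic term per pair.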