Self-Sparse Generative Adversarial Networks
TOPICS
MNIST database, feature vector, normalization, feature map, convolution kernel, convolution
DOI: 10.26599/air.2022.9150005
Publication Date: 2022-08-27
AUTHORS (4)
ABSTRACT
Generative adversarial networks (GANs) are an unsupervised generative model that learns the data distribution through adversarial training. However, recent experiments have indicated that GANs are difficult to train due to the requirement of optimization in a high-dimensional parameter space and the zero gradient problem. In this work, we propose a Self-Sparse Generative Adversarial Network (Self-Sparse GAN) that reduces the parameter space and alleviates the zero gradient problem. In Self-Sparse GAN, we design a self-adaptive sparse transform module (SASTM) comprising a sparsity decomposition and a feature-map recombination, which can be applied to multi-channel feature maps to obtain sparse feature maps. The key idea of Self-Sparse GAN is to add the SASTM after every deconvolution layer of the generator, which adaptively reduces the parameter space by utilizing the sparsity of multi-channel feature maps. We theoretically prove that the SASTM not only reduces the search space of the convolution kernel weights of the generator but also alleviates the zero gradient problem by maintaining meaningful features in the batch normalization layer and driving the weights of deconvolution layers away from being negative. The experimental results show that our method achieves the best Fréchet inception distance (FID) scores for image generation compared with the Wasserstein GAN with gradient penalty (WGAN-GP) on the MNIST, Fashion-MNIST, CIFAR-10, STL-10, mini-ImageNet, CELEBA-HQ, and LSUN bedrooms datasets, with a relative decrease in FID of 4.76%–21.84%. Meanwhile, an architectural sketch dataset (Sketch) is used to validate the superiority of the proposed method.
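To make the abstract's two SASTM stages concrete, here is a minimal, hypothetical sketch of a sparsity decomposition followed by a feature-map recombination on a multi-channel feature map. The adaptive thresholds (per-channel and per-location means) and the blending weight `alpha` are illustrative assumptions, not the paper's actual formulation, which learns its sparsity coefficients during training.

```python
import numpy as np

def sastm_sketch(feature_maps, alpha=0.5):
    """Illustrative sparsity transform on a (C, H, W) feature map.

    Sparsity decomposition (simplified): zero out activations below an
    adaptive threshold, computed two ways -- per channel and per spatial
    location (here, simple means stand in for learned coefficients).
    Feature-map recombination (simplified): blend the two sparse maps,
    so meaningful activations survive while weak ones are suppressed.
    """
    fm = np.asarray(feature_maps, dtype=float)
    # Per-channel threshold: mean over each channel's spatial extent.
    channel_thr = fm.mean(axis=(1, 2), keepdims=True)
    # Per-location threshold: mean across channels at each pixel.
    spatial_thr = fm.mean(axis=0, keepdims=True)
    channel_sparse = np.where(fm > channel_thr, fm, 0.0)
    spatial_sparse = np.where(fm > spatial_thr, fm, 0.0)
    # Recombination: convex blend of the two sparse decompositions.
    return alpha * channel_sparse + (1.0 - alpha) * spatial_sparse

# Tiny deterministic example: 2 channels of 2x2 activations.
out = sastm_sketch(np.arange(8.0).reshape(2, 2, 2))
```

The output keeps the input shape but is sparser than the input, which is the property the abstract relies on: in the generator, such a module after each deconvolution layer shrinks the effective search space of the following convolution kernels.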
REFERENCES (38)
CITATIONS (4)