Improved recurrent generative adversarial networks with regularization techniques and a controllable framework
Regularization
Normalization
Generative adversarial network
DOI:
10.1016/j.ins.2020.05.116
Publication Date:
2020-06-04T15:21:11Z
ABSTRACT
Generative Adversarial Network (GAN), a deep learning framework for generating synthetic but realistic samples, has produced astonishing results in image synthesis. However, because GANs are routinely applied to image datasets, their regularization methods have been developed for convolutional layers. In this study, to extend these methods to time-series data, one of the most common data types in real-world datasets, modified regularization methods are proposed for Long Short-Term Memory (LSTM)-based GANs. Specifically, spectral normalization, the hinge loss, orthogonal regularization, and the truncation trick are modified and assessed for LSTM-based GANs. Furthermore, a conditional GAN architecture called Controllable GAN (ControlGAN) is applied to LSTM-based GANs to produce desired samples. Evaluations are conducted on sine-wave data, air pollution datasets, and a medical time-series dataset obtained from intensive care units. ControlGAN with spectral normalization applied to the gates and cell states consistently outperforms the other models, including the conventional Recurrent Conditional GAN (RCGAN).
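To make the abstract's ingredients concrete, the sketch below shows how spectral normalization can be attached to an LSTM-based discriminator and how the GAN hinge loss is computed. This is a minimal illustration assuming PyTorch, not the authors' implementation: the class and function names (SNLSTMDiscriminator, hinge_d_loss, hinge_g_loss) are hypothetical, and plain spectral normalization is applied only to the weight matrices, whereas the paper's variant normalizes gates and cell states.

```python
# Minimal sketch (assumed PyTorch, not the paper's code): a spectrally normalized
# LSTM discriminator for time-series GANs, plus the standard GAN hinge loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SNLSTMDiscriminator(nn.Module):
    """LSTM critic whose input and recurrent weight matrices are spectrally normalized."""

    def __init__(self, input_dim: int, hidden_dim: int = 64):
        super().__init__()
        cell = nn.LSTMCell(input_dim, hidden_dim)
        # Constrain the spectral norm of both weight matrices via power iteration.
        cell = nn.utils.spectral_norm(cell, name="weight_ih")
        cell = nn.utils.spectral_norm(cell, name="weight_hh")
        self.cell = cell
        self.head = nn.utils.spectral_norm(nn.Linear(hidden_dim, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features) -> one critic score per sequence.
        batch, steps, _ = x.shape
        h = x.new_zeros(batch, self.cell.hidden_size)
        c = x.new_zeros(batch, self.cell.hidden_size)
        for t in range(steps):
            h, c = self.cell(x[:, t, :], (h, c))
        return self.head(h).squeeze(-1)


def hinge_d_loss(real_scores: torch.Tensor, fake_scores: torch.Tensor) -> torch.Tensor:
    # Discriminator side of the hinge objective.
    return F.relu(1.0 - real_scores).mean() + F.relu(1.0 + fake_scores).mean()


def hinge_g_loss(fake_scores: torch.Tensor) -> torch.Tensor:
    # Generator side of the hinge objective.
    return -fake_scores.mean()


if __name__ == "__main__":
    disc = SNLSTMDiscriminator(input_dim=1)
    real = torch.sin(torch.linspace(0, 6.28, 24)).repeat(8, 1).unsqueeze(-1)  # toy sine waves
    fake = torch.randn(8, 24, 1)                                              # placeholder generator output
    print(hinge_d_loss(disc(real), disc(fake)).item())
```

The hinge loss replaces the usual binary cross-entropy GAN objective; swapping the toy LSTMCell loop for a full nn.LSTM, or conditioning the inputs as in ControlGAN/RCGAN, would follow the same pattern.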