Long Text Generation via Adversarial Training with Leaked Information
TOPICS
Discriminative model; Closed captioning; Turing test; Generative model
DOI: 10.1609/aaai.v32i1.11957
Publication Date: 2022-11-03T06:52:39Z
AUTHORS (6)
Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, Jun Wang
ABSTRACT
Automatically generating coherent and semantically meaningful text has many applications in machine translation, dialogue systems, image captioning, etc. Recently, by combining with policy gradient, Generative Adversarial Nets (GAN) that use a discriminative model to guide the training of the generative model as a reinforcement learning policy have shown promising results in text generation. However, the scalar guiding signal is only available after the entire text has been generated and lacks intermediate information about text structure during the generation process. As such, it limits its success when the length of the generated samples is long (more than 20 words). In this paper, we propose a new framework, called LeakGAN, to address the problem for long text generation. We allow the discriminative net to leak its own high-level extracted features to the generative net to further help the guidance. The generator incorporates such informative signals into all generation steps through an additional MANAGER module, which takes the extracted features of the current generated words and outputs a latent vector to guide the WORKER module for next-word generation. Our extensive experiments on synthetic data and various real-world tasks with Turing test demonstrate that LeakGAN is highly effective in long text generation and also improves the performance in short text generation scenarios. More importantly, without any supervision, LeakGAN would be able to implicitly learn sentence structures only through the interaction between MANAGER and WORKER.
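To make the MANAGER-WORKER interaction concrete, below is a minimal PyTorch sketch of one generation step under the leaked-feature guidance described in the abstract. This is not the authors' implementation: the module names Manager and Worker follow the abstract, all dimensions (VOCAB, EMB, HID, GOAL) are hypothetical, the discriminator's leaked feature f_t is stubbed with a random tensor, and conditioning the word logits by concatenating the goal vector is a simplification of the paper's goal-embedding mechanism.

# Minimal sketch (assumptions flagged above), PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID, GOAL = 5000, 32, 64, 16  # hypothetical sizes

class Manager(nn.Module):
    """Consumes the leaked discriminator feature f_t and emits a goal vector."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTMCell(HID, HID)     # input: leaked feature
        self.to_goal = nn.Linear(HID, GOAL)

    def forward(self, f_t, state):
        h, c = self.rnn(f_t, state)
        goal = F.normalize(self.to_goal(h), dim=-1)  # unit-norm goal vector
        return goal, (h, c)

class Worker(nn.Module):
    """Conditions next-word logits on the current word and the Manager's goal."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.LSTMCell(EMB, HID)
        self.out = nn.Linear(HID + GOAL, VOCAB)

    def forward(self, word, goal, state):
        h, c = self.rnn(self.emb(word), state)
        logits = self.out(torch.cat([h, goal], dim=-1))
        return logits, (h, c)

# One generation step: sample the next word under the leaked guidance.
batch = 4
manager, worker = Manager(), Worker()
m_state = (torch.zeros(batch, HID), torch.zeros(batch, HID))
w_state = (torch.zeros(batch, HID), torch.zeros(batch, HID))
f_t = torch.randn(batch, HID)                # stand-in for the leaked feature
word = torch.zeros(batch, dtype=torch.long)  # <start> token, id 0 assumed

goal, m_state = manager(f_t, m_state)
logits, w_state = worker(word, goal, w_state)
next_word = torch.multinomial(F.softmax(logits, dim=-1), 1).squeeze(-1)
print(next_word.shape)  # torch.Size([4])

In the full framework this step would be unrolled over the whole sequence, with f_t refreshed from the discriminator at each step, so the intermediate structural signal the abstract describes reaches the generator before the sentence is complete rather than only as a final scalar reward.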