Multi-type Disentanglement without Adversarial Training
DOI:
10.1609/aaai.v35i11.17146
Publication Date:
2022-09-08T19:26:50Z
AUTHORS (2)
ABSTRACT
Controlling the style of natural language by disentangling the latent space is an important step towards interpretable machine learning. After the latent space is disentangled, the style of a sentence can be transformed by tuning the style representation without affecting other features of the sentence. Previous works usually use adversarial training to guarantee that disentangled vectors do not affect each other. However, adversarial methods are difficult to train. Especially when there are multiple features (e.g., sentiment, or tense, which we call style types in this paper), each feature requires a separate discriminator for extracting a disentangled style vector corresponding to this feature. In this paper, we propose a unified distribution-controlling method, which provides each specific style value (of the style types, e.g., positive sentiment, or past tense) with a unique representation. This method contributes a solid theoretical basis to avoid adversarial training in multi-type disentanglement. We also propose loss functions to achieve a style-content disentanglement as well as a disentanglement among multiple style types. In addition, we observe that if two different style types always have some style values that occur together in the dataset, transferring those style values will fail. We call this phenomenon training bias, and we propose a loss function to alleviate such bias while disentangling multiple types. We conduct experiments on two datasets (Yelp service reviews and Amazon product reviews) to evaluate the style-disentangling effect and the unsupervised style-transfer performance on two style types: sentiment and tense. The experimental results show the effectiveness of our model.
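The core idea in the abstract, assigning each style value a unique representation so that no adversarial discriminator is needed, can be sketched as follows. This is a minimal illustration, not the paper's exact method: it assumes each style value is given a fixed target Gaussian in latent space, and a KL term pulls the encoded style vector toward the Gaussian of its labelled value. All names, dimensions, and the specific KL loss here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 8

# Each style value of each style type gets its own unique target mean.
STYLE_VALUES = {
    "sentiment": ["positive", "negative"],
    "tense": ["past", "present"],
}
target_means = {
    (style_type, value): rng.normal(size=LATENT_DIM)
    for style_type, values in STYLE_VALUES.items()
    for value in values
}

def kl_to_target(mu, log_var, style_type, style_value):
    """KL( N(mu, diag(exp(log_var))) || N(target_mean, I) ).

    Pulls the encoder's style posterior toward the unique Gaussian of
    the labelled style value, playing the role that an adversarial
    discriminator would play in earlier disentanglement methods."""
    m = target_means[(style_type, style_value)]
    var = np.exp(log_var)
    return 0.5 * np.sum(var + (mu - m) ** 2 - 1.0 - log_var)

# Example: an encoder output at the origin with unit variance incurs a
# positive penalty until it moves to the "positive sentiment" target mean.
mu = np.zeros(LATENT_DIM)
log_var = np.zeros(LATENT_DIM)
loss = kl_to_target(mu, log_var, "sentiment", "positive")
```

Under this sketch, transferring a style amounts to decoding with the style vector moved to a different value's target distribution, while the content vector is left untouched.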