Consistent Alignment of Word Embedding Models

Word embedding
DOI: 10.48550/arxiv.1702.07680 Publication Date: 2017
ABSTRACT
Word embedding models offer continuous vector representations that can capture rich contextual semantics based on their word co-occurrence patterns. While these word vectors can provide very effective features used in many NLP tasks such as clustering similar words and inferring learning relationships, many challenges and open research questions remain. In this paper, we propose a solution that aligns variations of the same model (or different models) in a joint low-dimensional latent space, leveraging carefully generated synthetic data points. This generative process is inspired by the observation that a variety of linguistic relationships is captured by simple linear operations in the embedded space. We demonstrate that our approach can lead to substantial improvements in recovering the local neighborhoods of embeddings.
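The sketch below illustrates the general idea described in the abstract, not the paper's exact procedure: two toy embedding "runs" over a shared vocabulary are augmented with synthetic points built from analogy arithmetic (b - a + c, the kind of linear operation the abstract refers to), and an orthogonal Procrustes rotation is used as a simple stand-in for the alignment step. All data and names here are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two "runs" of an embedding model over a shared vocabulary.
vocab = ["king", "queen", "man", "woman", "paris", "france"]
idx = {w: i for i, w in enumerate(vocab)}
dim = 50
X = rng.normal(size=(len(vocab), dim))                 # run 1
Q = np.linalg.qr(rng.normal(size=(dim, dim)))[0]       # random rotation
Y = X @ Q + 0.01 * rng.normal(size=(len(vocab), dim))  # run 2: rotated plus noise

def synthetic_points(emb, triples):
    # For each analogy triple (a, b, c), emit emb[b] - emb[a] + emb[c],
    # exploiting the approximately linear analogy structure of the space.
    return np.stack([emb[idx[b]] - emb[idx[a]] + emb[idx[c]] for a, b, c in triples])

triples = [("man", "woman", "king"), ("king", "queen", "man")]

# Anchor matrices: observed vectors augmented with the synthetic points.
A = np.vstack([X, synthetic_points(X, triples)])
B = np.vstack([Y, synthetic_points(Y, triples)])

# Orthogonal Procrustes: find the rotation R minimizing ||A @ R - B||_F.
U, _, Vt = np.linalg.svd(A.T @ B)
R = U @ Vt

print("residual after alignment:", np.linalg.norm(X @ R - Y))

In practice the anchors would be actual trained word vectors, and the paper aligns the models in a joint low-dimensional latent space rather than by a pure rotation; the synthetic-point augmentation is the part this sketch is meant to make concrete.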