SwitchTab: Switched Autoencoders Are Effective Tabular Learners
DOI: 10.1609/aaai.v38i14.29523
Publication Date: 2024-03-25
ABSTRACT
Self-supervised representation learning methods have achieved significant success in computer vision and natural language processing (NLP), where data samples exhibit explicit spatial or semantic dependencies. However, applying these methods to tabular data is challenging due to the less pronounced dependencies among samples. In this paper, we address this limitation by introducing SwitchTab, a novel self-supervised method specifically designed to capture latent dependencies in tabular data. SwitchTab leverages an asymmetric encoder-decoder framework to decouple mutual and salient features among data pairs, resulting in more representative embeddings. These embeddings, in turn, contribute to better decision boundaries and lead to improved results in downstream tasks. To validate the effectiveness of SwitchTab, we conduct extensive experiments across various domains involving tabular data. The results showcase superior performance in end-to-end prediction tasks with fine-tuning. Moreover, we demonstrate that pre-trained embeddings can be utilized as plug-and-play features to enhance traditional classification methods (e.g., Logistic Regression, XGBoost, etc.). Lastly, we highlight SwitchTab's capability to create explainable representations through visualization of the decoupled latent space.
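To make the decoupling idea concrete, below is a minimal sketch of a switched-autoencoder pre-training step, assuming a PyTorch implementation. The class name SwitchTabSketch, the layer widths, the two linear projectors, and the plain MSE reconstruction objective are illustrative assumptions for exposition, not the authors' released code: an encoder embeds each sample of a pair, projectors split the embedding into a mutual part and a salient part, and the decoder must reconstruct each sample both from its own parts and from the pair's switched mutual parts, so the salient part is pushed to carry sample-specific information.

```python
import torch
import torch.nn as nn

class SwitchTabSketch(nn.Module):
    """Illustrative switched-autoencoder sketch (hypothetical sizes/names)."""

    def __init__(self, n_features: int, d_hidden: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_hidden),
        )
        # Two projectors decouple the encoding into mutual and salient parts.
        self.to_mutual = nn.Linear(d_hidden, d_hidden)
        self.to_salient = nn.Linear(d_hidden, d_hidden)
        # The decoder reconstructs a sample from [mutual ; salient].
        self.decoder = nn.Sequential(
            nn.Linear(2 * d_hidden, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, n_features),
        )

    def forward(self, x1: torch.Tensor, x2: torch.Tensor):
        z1, z2 = self.encoder(x1), self.encoder(x2)
        m1, s1 = self.to_mutual(z1), self.to_salient(z1)
        m2, s2 = self.to_mutual(z2), self.to_salient(z2)
        # Plain reconstructions: each sample from its own parts.
        rec1 = self.decoder(torch.cat([m1, s1], dim=-1))
        rec2 = self.decoder(torch.cat([m2, s2], dim=-1))
        # Switched reconstructions: mutual parts are swapped across the pair,
        # so the salient part must encode the sample-specific information.
        sw1 = self.decoder(torch.cat([m2, s1], dim=-1))
        sw2 = self.decoder(torch.cat([m1, s2], dim=-1))
        loss = sum(nn.functional.mse_loss(r, t)
                   for r, t in [(rec1, x1), (rec2, x2), (sw1, x1), (sw2, x2)])
        return loss, s1, s2  # salient embeddings are reused downstream
```

The plug-and-play use mentioned in the abstract can then be sketched as follows: freeze the pre-trained encoder and projector, compute salient embeddings, and append them to the raw features before fitting a conventional classifier such as XGBoost. The toy data and hyperparameters here are placeholders.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X_train = rng.normal(size=(256, 16)).astype(np.float32)  # toy stand-in data
y_train = rng.integers(0, 2, size=256)

model = SwitchTabSketch(n_features=16)  # in practice, a pre-trained model
with torch.no_grad():
    salient = model.to_salient(model.encoder(torch.from_numpy(X_train))).numpy()

clf = xgb.XGBClassifier(n_estimators=50)
clf.fit(np.hstack([X_train, salient]), y_train)
```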