De-biased Attention Supervision for Text Classification with Causality
DOI:
10.1609/aaai.v38i17.29897
Publication Date:
2024-03-25T11:58:25Z
AUTHORS (8)
ABSTRACT
In text classification models, while the unsupervised attention mechanism can enhance performance, it often produces attention distributions that are puzzling to humans, such as assigning high weights to seemingly insignificant conjunctions. Recently, numerous studies have explored Attention Supervision (AS) to guide the model toward more interpretable attention distributions. However, AS can impact classification performance, especially in specialized domains. In this paper, we address this issue from a causality perspective. Firstly, we leverage a causal graph to reveal two biases in AS: 1) bias caused by the label distribution of the dataset, and 2) bias caused by words' different occurrence ranges, as some words occur across labels while others occur only under a particular label. We then propose a novel De-biased Attention Supervision (DAS) method to eliminate these biases with causal techniques. Specifically, we adopt backdoor adjustment on the label-caused bias and reduce the word-caused bias by subtracting the direct effect of the word. Through extensive experiments on professional datasets (e.g., medicine and law), we demonstrate that our method achieves improved accuracy along with more coherent attention distributions.
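The abstract mentions two de-biasing operations: backdoor adjustment over the label distribution and subtraction of a word's direct effect. The snippet below is a minimal, illustrative sketch of how such de-biased supervision targets could be computed from simple label-conditioned word statistics; the function name, the co-occurrence input, and the exact formulas are assumptions made for illustration, not the authors' DAS implementation.

```python
# Illustrative sketch (not the paper's implementation) of de-biased
# attention supervision targets from label-conditioned word statistics.
import numpy as np

def debiased_word_scores(cooccur, label_prior):
    """Compute per-word supervision scores with two de-biasing steps.

    cooccur     -- array (num_words, num_labels); cooccur[w, y] is a raw
                   association score between word w and label y
                   (e.g., co-occurrence counts). Hypothetical input.
    label_prior -- array (num_labels,); empirical P(y).
    """
    # Normalize associations within each label: s(w | y).
    per_label = cooccur / cooccur.sum(axis=0, keepdims=True)

    # Backdoor adjustment over the label: average the label-conditioned
    # scores with the label prior rather than conditioning on the observed
    # label, mitigating bias from a skewed label distribution.
    adjusted = per_label @ label_prior                  # shape (num_words,)

    # Direct-effect subtraction: remove the word's label-agnostic score,
    # so words that occur across all labels (e.g., conjunctions) are
    # down-weighted relative to label-specific words.
    direct_effect = per_label.mean(axis=1)
    scores = np.clip(adjusted - direct_effect, a_min=0.0, a_max=None)

    # Renormalize to obtain attention supervision targets.
    return scores / (scores.sum() + 1e-12)

# Toy usage: 4 words, 2 labels; the last word co-occurs equally with both
# labels and therefore receives a near-zero supervision weight.
counts = np.array([[8.0, 1.0],
                   [1.0, 9.0],
                   [5.0, 4.0],
                   [6.0, 6.0]])
prior = np.array([0.7, 0.3])
print(debiased_word_scores(counts, prior))
```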