Prompt tuning discriminative language models for hierarchical text classification

DOI: 10.1017/nlp.2024.51
Publication Date: 2024-10-10
ABSTRACT
Hierarchical text classification (HTC) is a natural language processing task which aims to categorise a document into a set of classes from a hierarchical class structure. Recent approaches to HTC focus on leveraging pre-trained language models (PLMs) together with the class hierarchy by allowing these components to interact in various ways. Specifically, the Hierarchy-aware Prompt Tuning (HPT) method has proven effective in applying the prompt tuning paradigm to Bidirectional Encoder Representations from Transformers (BERT) for HTC tasks. Prompt tuning reduces the gap between the pre-training and fine-tuning phases by transforming the downstream task into the pre-training task of the PLM. Discriminative PLMs, which use a replaced token detection (RTD) pre-training task, have also been shown to perform better on flat text classification when using prompt tuning instead of vanilla fine-tuning. In this paper, we propose Hierarchy-aware Prompt Tuning for Discriminative PLMs (HPTD), an approach that injects the HTC task into the RTD task used to pre-train discriminative PLMs. Furthermore, we make several improvements that enable the approach to scale to much larger class structures. Through comprehensive experiments, we show that our method is robust and outperforms the current state-of-the-art on two out of three benchmark datasets.
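
To make the RTD-based prompting idea concrete, the following minimal sketch scores candidate class names with an ELECTRA-style discriminator's replaced token detection head: each candidate label is spliced into a prompt, and the label whose tokens the discriminator judges most "original" (least likely to be replaced) is selected. This is a simplified, flat-classification illustration of the general mechanism, not the authors' HPTD method; the model name, prompt template, and the helper rtd_score are illustrative assumptions.

```python
# Sketch: scoring verbalised labels with a discriminative PLM's RTD head.
# Assumes the HuggingFace `transformers` library and an ELECTRA discriminator.
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

MODEL_NAME = "google/electra-small-discriminator"  # illustrative choice
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = ElectraForPreTraining.from_pretrained(MODEL_NAME)
model.eval()

def rtd_score(document: str, label: str) -> float:
    """Average 'replaced' logit over the label tokens (lower = more plausible)."""
    prompt = f"{document} Topic: {label}."           # illustrative prompt template
    enc = tokenizer(prompt, return_tensors="pt", truncation=True)
    label_ids = tokenizer(label, add_special_tokens=False)["input_ids"]
    with torch.no_grad():
        logits = model(**enc).logits[0]              # one RTD logit per token
    ids = enc["input_ids"][0].tolist()
    n = len(label_ids)
    start = len(ids) - 2 - n                         # label sits before '.' and [SEP]
    return logits[start:start + n].mean().item()

doc = "The striker scored twice in the final minutes of the match."
candidates = ["sports", "politics", "technology"]
print(min(candidates, key=lambda lab: rtd_score(doc, lab)))
```

In prompt tuning proper, the template and label verbalisations would be optimised on training data, and for HTC the candidate set at each step would be constrained by the class hierarchy rather than being a flat list as above.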