Bipartite Graph Pre-training for Unsupervised Extractive Summarization with Graph Convolutional Auto-Encoders

DOI: 10.48550/arxiv.2310.18992 Publication Date: 2023-01-01
ABSTRACT
Pre-trained sentence representations are crucial for identifying significant sentences in unsupervised document extractive summarization. However, the traditional two-step paradigm of pre-training and sentence-ranking creates a gap due to differing optimization objectives. To address this issue, we argue that utilizing pre-trained embeddings derived from a process specifically designed to optimize cohesive and distinctive sentence representations helps rank significant sentences. To do so, we propose a novel graph pre-training auto-encoder to obtain sentence embeddings by explicitly modelling intra-sentential distinctive features and inter-sentential cohesive features through sentence-word bipartite graphs. These pre-trained sentence representations are then utilized in a graph-based ranking algorithm for unsupervised summarization. Our method produces predominant performance for unsupervised summarization frameworks by providing summary-worthy sentence representations. It surpasses heavy BERT- or RoBERTa-based sentence representations in downstream tasks.
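To illustrate the sentence-word bipartite graph and graph-based ranking mentioned in the abstract, the following is a minimal sketch. It is not the paper's method (which uses a pre-trained graph convolutional auto-encoder for the embeddings); instead it builds a bag-of-words sentence-word incidence matrix, projects it onto a sentence-sentence graph, and ranks sentences with a PageRank-style power iteration. All names (`bipartite_rank`, `top_k`) are illustrative assumptions.

```python
import numpy as np

def bipartite_rank(sentences, top_k=2):
    # Build a sentence-word bipartite incidence matrix B (sentences x vocab),
    # standing in for the learned embeddings in the paper.
    vocab = sorted({w for s in sentences for w in s.lower().split()})
    idx = {w: i for i, w in enumerate(vocab)}
    B = np.zeros((len(sentences), len(vocab)))
    for si, s in enumerate(sentences):
        for w in s.lower().split():
            B[si, idx[w]] += 1.0
    # Project the bipartite graph onto sentences: shared words induce edges.
    S = B @ B.T
    np.fill_diagonal(S, 0.0)
    # Row-normalize into a transition matrix; rows with no edges fall back
    # to a uniform distribution (dangling sentences).
    row = S.sum(axis=1, keepdims=True)
    P = np.divide(S, row, out=np.full_like(S, 1.0 / len(S)), where=row > 0)
    # PageRank-style power iteration with damping factor 0.85.
    r = np.full(len(sentences), 1.0 / len(sentences))
    for _ in range(50):
        r = 0.15 / len(sentences) + 0.85 * (P.T @ r)
    # Return the indices of the top-k ranked sentences in document order.
    return sorted(int(i) for i in np.argsort(r)[::-1][:top_k])
```

In the paper's pipeline, the bag-of-words projection above would be replaced by similarities between the auto-encoder's pre-trained sentence embeddings, which is what closes the gap between pre-training and ranking objectives.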