Extractive Summarization via ChatGPT for Faithful Summary Generation
DOI:
10.18653/v1/2023.findings-emnlp.214
Publication Date:
2023-12-10T21:58:19Z
AUTHORS (3)
ABSTRACT
Extractive summarization is a crucial task in natural language processing that aims to condense long documents into shorter versions by directly extracting sentences. The recent introduction of large language models has attracted significant interest in the NLP community due to their remarkable performance on a wide range of downstream tasks. This paper first presents a thorough evaluation of ChatGPT's performance on extractive summarization and compares it with traditional fine-tuning methods on various benchmark datasets. Our experimental analysis reveals that ChatGPT exhibits inferior extractive summarization performance in terms of ROUGE scores compared to existing supervised systems, while achieving higher performance based on LLM-based evaluation metrics. In addition, we explore the effectiveness of in-context learning and chain-of-thought reasoning for enhancing its performance. Furthermore, we find that applying an extract-then-generate pipeline with ChatGPT yields improvements over abstractive baselines in terms of summary faithfulness. These observations highlight potential directions for enhancing ChatGPT's capabilities in faithful summarization using two-stage approaches.
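The abstract describes a two-stage extract-then-generate pipeline: salient sentences are first extracted from the document, and the final summary is then generated conditioned on those sentences. The sketch below illustrates that general idea in Python; it is not the authors' implementation. The `chat` callable and the prompt wording are assumptions standing in for whatever chat LLM interface (e.g., ChatGPT) one has available.

```python
# Minimal sketch of an extract-then-generate summarization pipeline, assuming a
# caller-supplied `chat(prompt) -> str` function that queries a chat LLM.
# Prompts are illustrative only, not the prompts used in the paper.
from typing import Callable


def extract_then_generate(document: str,
                          chat: Callable[[str], str],
                          num_sentences: int = 3) -> str:
    # Stage 1 (extractive): ask the model to copy salient sentences verbatim.
    extract_prompt = (
        f"Extract the {num_sentences} most important sentences from the "
        f"following document, verbatim, one per line:\n\n{document}"
    )
    extracted = chat(extract_prompt)

    # Stage 2 (abstractive): generate the summary conditioned only on the
    # extracted sentences, which is intended to improve faithfulness by
    # grounding the output in text that appears in the source.
    generate_prompt = (
        "Write a concise, faithful summary based only on these sentences:\n\n"
        f"{extracted}"
    )
    return chat(generate_prompt)
```

Constraining the second stage to the extracted sentences is what the abstract credits for the gain in faithfulness over directly prompting for an abstractive summary of the full document.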