Exploring the Potential of the Large Language Models (LLMs) in Identifying Misleading News Headlines
FOS: Computer and information sciences
Computer Science - Computers and Society
Computer Science - Machine Learning
Computer Science - Computation and Language
Computers and Society (cs.CY)
Computation and Language (cs.CL)
Machine Learning (cs.LG)
DOI: 10.48550/arxiv.2405.03153
Publication Date: 2024-05-06
AUTHORS (5)
ABSTRACT
In the digital age, the prevalence of misleading news headlines poses a significant challenge to information integrity, necessitating robust detection mechanisms. This study explores the efficacy of Large Language Models (LLMs) in distinguishing misleading from non-misleading headlines. Utilizing a dataset of 60 articles, sourced from both reputable and questionable outlets across the health, science & tech, and business domains, we employ three LLMs, ChatGPT-3.5, ChatGPT-4, and Gemini, for classification. Our analysis reveals variance in model performance, with ChatGPT-4 demonstrating superior accuracy, especially in cases with unanimous annotator agreement on misleading headlines. The study emphasizes the importance of human-centered evaluation in developing LLMs that can navigate the complexities of misinformation detection, aligning technical proficiency with nuanced human judgment. The findings contribute to the discourse on AI ethics, emphasizing the need for models that are not only technically advanced but also ethically aligned and sensitive to the subtleties of human interpretation.
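The abstract reports accuracy conditioned on annotator agreement (e.g., ChatGPT-4 performing best on items where annotators were unanimous). The paper's own evaluation code is not shown here; the following is a minimal, hypothetical sketch of how such an agreement-stratified accuracy could be computed, with all function and label names being illustrative assumptions rather than the authors' implementation.

```python
def agreement_stratified_accuracy(model_preds, annotator_labels, gold_labels):
    """Compute model accuracy separately for items with unanimous vs. split
    annotator agreement.

    model_preds      : list of model labels, e.g. "misleading" / "not_misleading"
    annotator_labels : list of per-item lists of human annotator labels
    gold_labels      : list of reference labels (e.g., annotator majority vote)
    Returns a dict mapping "unanimous" / "split" to accuracy (None if no items).
    """
    buckets = {"unanimous": [0, 0], "split": [0, 0]}  # [correct, total]
    for pred, anns, gold in zip(model_preds, annotator_labels, gold_labels):
        key = "unanimous" if len(set(anns)) == 1 else "split"
        buckets[key][1] += 1          # count the item in its agreement bucket
        if pred == gold:
            buckets[key][0] += 1      # count a correct model prediction
    return {k: (c / t if t else None) for k, (c, t) in buckets.items()}
```

Splitting accuracy this way makes the human-centered comparison in the abstract concrete: a model can be scored both on clear-cut items (unanimous annotators) and on genuinely ambiguous ones (split annotators).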