Enhancing Large Language Models with Pseudo- and Multisource-Knowledge Graphs for Open-ended Question Answering
FOS: Computer and information sciences
Computation and Language (cs.CL)
Artificial Intelligence (cs.AI)
DOI:
10.48550/arXiv.2402.09911
Publication Date:
2024-01-01
ABSTRACT
Mitigating hallucinations in Large Language Models (LLMs) is a crucial task. Although some existing methods employ self-enhancement techniques, they fall short of effectively addressing unknown factual hallucinations. Meanwhile, Knowledge Graph (KG) enhancement approaches fail to address generalization across different KG sources and the enhancement of open-ended question answering simultaneously. To tackle these limitations, we propose a framework that combines Pseudo-Graph Generation and Atomic Knowledge Verification (PG&AKV). Enhancement of open-ended question answering begins with Pseudo-Graph Generation, which provides a related knowledge framework. Subsequently, Atomic Knowledge Verification utilizes atomic-level knowledge querying and verification to achieve generalizability across different KG sources. Compared to the baseline, this approach yields a minimum improvement of 11.5 points in ROUGE-L score for open-ended questions. For questions with precise answers, we observe a minimum accuracy improvement of 7.5%. Moreover, PG&AKV also exhibits generalizability across different KG sources: even when using a KG that differs from the question source, PG&AKV achieves at least a 3.5% performance improvement. In summary, our results pave the way for enhancing LLMs by incorporating pseudo- and multisource-KGs, particularly in the field of open-ended questions.
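The abstract describes a two-stage pipeline: an LLM first drafts a pseudo-graph of possibly hallucinated triples, and each triple is then verified atomically against an external KG. The Python sketch below illustrates that flow only; the paper's code is not reproduced here, so every name in it (generate_pseudo_graph, verify_atom, the kg.lookup interface, and the llm callable) is a hypothetical stand-in, not the authors' implementation.

```python
# Illustrative sketch of the PG&AKV flow described in the abstract.
# All function names and the llm/kg interfaces are assumptions for
# illustration; they are not the paper's published API.

from typing import List, Optional, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)


def generate_pseudo_graph(question: str, llm) -> List[Triple]:
    """Stage 1 (Pseudo-Graph Generation): ask the LLM to draft triples
    it believes are relevant. These may be hallucinated; they serve only
    as a knowledge framework to be checked later."""
    prompt = ("List knowledge triples as 'subject | relation | object', "
              "one per line, relevant to: " + question)
    triples: List[Triple] = []
    for line in llm(prompt).splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3:
            triples.append((parts[0], parts[1], parts[2]))
    return triples


def verify_atom(triple: Triple, kg) -> Optional[Triple]:
    """Stage 2 (Atomic Knowledge Verification): query a KG for the single
    atomic fact. Because each check is one triple, any KG source with a
    simple lookup interface can back it, which is what would give the
    multi-source generality the abstract claims."""
    subject, relation, _ = triple
    grounded = kg.lookup(subject, relation)  # hypothetical KG API
    return (subject, relation, grounded) if grounded else None


def answer(question: str, llm, kg) -> str:
    """Ground the pseudo-graph, then answer using only verified facts."""
    pseudo_graph = generate_pseudo_graph(question, llm)
    verified = [t for t in (verify_atom(t, kg) for t in pseudo_graph) if t]
    facts = "; ".join(f"{s} {r} {o}" for s, r, o in verified)
    return llm(f"Using only these facts: {facts}\nAnswer: {question}")
```

The design point this sketch tries to capture is that verification happens at the granularity of single triples rather than whole answers, so swapping in a different KG source only requires a different lookup backend.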