Knowledge Graph-augmented Language Models for Complex Question Answering
DOI:
10.18653/v1/2023.nlrse-1.1
Publication Date:
2023-08-05T00:57:42Z
AUTHORS (3)
ABSTRACT
Large language models have shown impressive abilities to reason over input text; however, they are prone to hallucinations. On the other hand, end-to-end knowledge graph question answering (KGQA) models output responses grounded in facts, but still struggle with complex reasoning, such as comparison or ordinal questions. In this paper, we propose a new method in which we combine a retriever based on an end-to-end KGQA model with a language model that reasons over the retrieved facts to return an answer. We observe that augmenting prompts with KG facts improves performance over using a language model alone by an average of 83%. In particular, we see improvements on questions requiring count, intersection, or multi-hop reasoning operations.
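The pipeline described in the abstract — retrieve relevant KG facts, then prepend them to the language model's prompt — can be sketched as follows. This is an illustrative toy example, not the authors' implementation: the triples, the keyword-match retrieval heuristic, and the prompt template are all assumptions.

```python
# Toy knowledge graph as (subject, predicate, object) triples.
# In the paper, retrieval is done by an end-to-end KGQA model;
# here we substitute a simple keyword-match heuristic for illustration.
KG = [
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
    ("France", "part_of", "Europe"),
]

def retrieve_facts(question, kg):
    """Keep triples whose subject or object appears in the question text."""
    words = question.lower()
    return [t for t in kg if t[0].lower() in words or t[2].lower() in words]

def build_prompt(question, facts):
    """Prepend retrieved facts to the question (KG-augmented prompting)."""
    fact_lines = "\n".join(f"{s} {p.replace('_', ' ')} {o}" for s, p, o in facts)
    return f"Facts:\n{fact_lines}\n\nQuestion: {question}\nAnswer:"

question = "What is the capital of France?"
facts = retrieve_facts(question, KG)
prompt = build_prompt(question, facts)
print(prompt)
```

The augmented prompt would then be passed to a language model, which reasons over the supplied facts rather than relying solely on its parametric knowledge.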
CITATIONS (10)