Boosting Language Models Reasoning with Chain-of-Knowledge Prompting
DOI:
10.48550/arxiv.2306.06427
Publication Date:
2023-01-01
AUTHORS (5)
ABSTRACT
Recently, Chain-of-Thought (CoT) prompting has delivered success on complex reasoning tasks. It designs a simple prompt like ``Let's think step by step'' or multiple in-context exemplars with well-designed rationales to elicit Large Language Models (LLMs) to generate intermediate reasoning steps. However, the generated rationales often contain mistakes, producing unfactual and unfaithful reasoning chains. To mitigate this brittleness, we propose a novel Chain-of-Knowledge (CoK) prompting, which aims at eliciting LLMs to generate explicit pieces of knowledge evidence in the form of structured triples. This is inspired by human behavior: we can draw a mind map of evidence in our minds before answering a complex question. Benefiting from CoK, we additionally introduce an F^2-Verification method to estimate the reliability of the reasoning chains in terms of factuality and faithfulness. For an unreliable response, the wrong evidence can be indicated to prompt the LLM to rethink. Extensive experiments demonstrate that our method can further improve performance on commonsense, factual, symbolic, and arithmetic reasoning tasks.
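To make the prompting idea concrete, here is a minimal sketch (not the authors' code) of how a CoK-style prompt might be assembled: each in-context exemplar shows explicit evidence triples (subject, relation, object) before its answer, and the target question ends with an open "Evidence triples:" slot for the LLM to fill. The exemplar content and field names below are illustrative assumptions.

```python
# Sketch of Chain-of-Knowledge (CoK) prompt construction.
# Assumption: each exemplar is a dict with "question", "triples"
# (list of (subject, relation, object) tuples), and "answer".

def build_cok_prompt(question, exemplars):
    """Format exemplars with explicit evidence triples, then append
    the target question so the LLM completes the evidence and answer."""
    parts = []
    for ex in exemplars:
        triples = "; ".join(f"({s}, {r}, {o})" for s, r, o in ex["triples"])
        parts.append(
            f"Q: {ex['question']}\n"
            f"Evidence triples: {triples}\n"
            f"A: {ex['answer']}"
        )
    # The target question ends at the evidence slot for the model to fill.
    parts.append(f"Q: {question}\nEvidence triples:")
    return "\n\n".join(parts)

exemplar = {
    "question": "Do hamsters provide food for any animals?",
    "triples": [("hamster", "is a", "prey animal"),
                ("prey animals", "provide food for", "predators")],
    "answer": "Yes",
}
prompt = build_cok_prompt(
    "Can a camel survive without water for a week?", [exemplar]
)
print(prompt)
```

The F^2-Verification step described in the abstract would then score the generated triples for factuality (e.g., against a knowledge base) and faithfulness to the final answer, re-prompting the model when a triple is flagged as wrong.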