Relevant or Random: Can LLMs Truly Perform Analogical Reasoning?
DOI:
10.48550/arxiv.2404.12728
Publication Date:
2024-04-19
AUTHORS (8)
ABSTRACT
Analogical reasoning is a unique ability of humans to address unfamiliar challenges by transferring strategies from relevant past experiences. One key finding in psychology is that, compared with irrelevant past experiences, recalling relevant ones can help humans better handle new tasks. Coincidentally, the NLP community has also recently found that self-generating relevant examples in the context can help large language models (LLMs) better solve a given problem than hand-crafted prompts. However, it is not yet clear whether relevance is the key factor in eliciting such capability, i.e., can LLMs benefit more from self-generated relevant examples than from irrelevant ones? In this work, we systematically explore whether LLMs can truly perform analogical reasoning on a diverse set of reasoning tasks. With extensive experiments and analysis, we show that self-generated random examples can surprisingly achieve comparable or even better performance, e.g., a 4% performance boost on GSM8K with random biological examples. We find that the accuracy of self-generated examples is the key factor, and subsequently design two improved methods with significantly reduced inference costs. Overall, we aim to advance a deeper understanding of LLM analogical reasoning, and hope this work stimulates further research on self-generated contexts.
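The comparison the abstract describes — prompting an LLM with self-generated relevant exemplars versus random, off-topic exemplars (e.g., biological facts before a GSM8K math problem) — can be sketched as prompt construction. This is a minimal illustrative sketch, not the paper's actual prompts: the function name `build_prompt`, the instruction header, and the exemplar strings are all assumptions for illustration.

```python
# Hypothetical sketch of the relevant-vs-random exemplar setup described in
# the abstract. All names and exemplar texts are illustrative, not taken
# from the paper.

RELEVANT_EXEMPLARS = [
    "Q: A store sells pens at $2 each. How much do 5 pens cost? "
    "A: 5 * 2 = 10, so $10.",
]

# "Random" exemplars are deliberately unrelated to the target task,
# e.g. biological facts before a math word problem.
RANDOM_EXEMPLARS = [
    "Q: Which organelle produces most of a cell's ATP? "
    "A: The mitochondrion.",
]

def build_prompt(problem: str, exemplars: list[str]) -> str:
    """Assemble a few-shot prompt: exemplars first, then the target problem."""
    header = "Recall example problems and solutions, then solve the new problem.\n\n"
    shots = "\n".join(exemplars)
    return f"{header}{shots}\n\nQ: {problem}\nA:"

target = "If 3 apples cost $6, how much do 7 apples cost?"
relevant_prompt = build_prompt(target, RELEVANT_EXEMPLARS)
random_prompt = build_prompt(target, RANDOM_EXEMPLARS)
```

Both prompts would then be sent to the same LLM, and accuracy compared across conditions; the paper's finding is that the random condition can match or beat the relevant one.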