Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis
Keywords: CLARION, Causal chain
DOI: 10.1609/aaaiss.v2i1.27706
Publication Date: 2024-01-23
AUTHORS (4)
ABSTRACT
This paper explores the integration of two AI subdisciplines employed in the development of artificial agents that exhibit intelligent behavior: Large Language Models (LLMs) and Cognitive Architectures (CAs). We present three integration approaches, each grounded in theoretical models and supported by preliminary empirical evidence. The modular approach, which introduces four models with varying degrees of integration, makes use of chain-of-thought prompting and draws inspiration from augmented LLMs, the Common Model of Cognition, and the simulation theory of cognition. The agency approach, motivated by the Society of Mind theory and the LIDA cognitive architecture, proposes the formation of agent collections that interact at micro and macro levels, driven by either LLMs or symbolic components. The neuro-symbolic approach, which takes inspiration from the CLARION cognitive architecture, proposes a model where bottom-up learning extracts symbolic representations from an LLM layer and top-down guidance utilizes symbolic representations to direct prompt engineering in the LLM layer. These approaches aim to harness the strengths of both LLMs and CAs, while mitigating their weaknesses, thereby advancing the development of more robust AI systems. We discuss the tradeoffs and challenges associated with each approach.
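The neuro-symbolic approach described above can be sketched as a simple loop: a bottom-up step extracts symbolic representations from LLM output, and a top-down step uses those symbols to engineer the next prompt. The sketch below is purely illustrative and is not the paper's implementation; the `llm` function is a stub, and the predicate format (`GOAL:`/`FACT:`) is an assumption introduced here for demonstration.

```python
import re

def llm(prompt: str) -> str:
    """Stub standing in for a real LLM call (hypothetical output format)."""
    return "GOAL: open(door); FACT: locked(door); FACT: have(key)"

def bottom_up(llm_output: str) -> dict:
    """Bottom-up learning: extract symbolic predicates from raw LLM text."""
    symbols = {"goals": [], "facts": []}
    for label, term in re.findall(r"(GOAL|FACT):\s*([\w()]+)", llm_output):
        symbols["goals" if label == "GOAL" else "facts"].append(term)
    return symbols

def top_down(symbols: dict) -> str:
    """Top-down guidance: use the symbolic store to direct prompt engineering."""
    return (f"Given facts {symbols['facts']} and goals {symbols['goals']}, "
            "propose the next action.")

symbols = bottom_up(llm(""))      # bottom-up: LLM text -> symbols
next_prompt = top_down(symbols)   # top-down: symbols -> steered prompt
```

In a full system the loop would iterate, with each extracted symbol set refining the next prompt; here a single pass shows the two directions of information flow.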