Comparing LLMs for Prompt-Enhanced ACT-R and Soar Model Development: A Case Study in Cognitive Simulation

DOI: 10.1609/aaaiss.v2i1.27710
Publication Date: 2024-01-23
ABSTRACT
This paper presents experiments on using ChatGPT4 and Google Bard to create ACT-R and Soar models. The study involves two simulated cognitive tasks in which ChatGPT4 and Google Bard, two large language models (LLMs), serve as conversational interfaces within the ACT-R and Soar development environments. The first task involves creating an intelligent driving model in ACT-R that exhibits motor and perceptual behavior and can interact with an unmodified interface. The second task evaluates the development of educational skills using Soar. Prompts were designed to represent cognitive operations and actions, including providing context, asking perception-related questions, posing decision-making scenarios, and evaluating the system's responses; the prompts were iteratively refined based on evaluation of model behavior. The results demonstrate the potential of LLMs to serve as interactive interfaces for developing ACT-R and Soar models within a human-in-the-loop model development process. We documented the mistakes the LLMs made during this integration and provided corresponding resolutions for adopting this modeling approach. Furthermore, we presented a framework of prompt patterns that maximizes LLM interaction for artificial cognitive architectures.
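The human-in-the-loop workflow the abstract describes (an LLM drafts model code, its output is evaluated, and the prompt is refined from the documented mistakes) might be sketched as follows. The stub `llm_generate` function, the prompt text, the specific production rule, and the syntactic checks are all illustrative assumptions for this sketch, not the authors' actual implementation or a real LLM API.

```python
# Hypothetical sketch of the human-in-the-loop, prompt-refinement loop
# described in the abstract: an "LLM" drafts an ACT-R production, a
# checker evaluates the draft, and the prompt is refined until the
# draft passes. All names and prompt text here are assumptions.

def llm_generate(prompt: str) -> str:
    """Stand-in for a ChatGPT4/Bard call that returns ACT-R code.
    It simulates one common class of LLM mistake (omitting the ==>
    separator) until the prompt explicitly asks for it."""
    if "==>" in prompt:
        return ("(p drive-attend\n"
                "   =goal> isa drive state attend\n"
                "==>\n"
                "   +visual-location> isa visual-location kind oval\n"
                "   =goal> state look)")
    return ("(p drive-attend\n"
            "   =goal> isa drive state attend\n"
            "   +visual-location> isa visual-location kind oval)")

def check_production(code: str) -> list[str]:
    """Minimal syntactic checks standing in for model-behavior evaluation."""
    problems = []
    if code.count("(") != code.count(")"):
        problems.append("unbalanced parentheses")
    if "==>" not in code:
        problems.append("missing ==> between condition and action sides")
    return problems

def develop_model(base_prompt: str, max_rounds: int = 3):
    """Iteratively refine the prompt from evaluation feedback."""
    prompt = base_prompt
    draft = ""
    for _ in range(max_rounds):
        draft = llm_generate(prompt)
        problems = check_production(draft)
        if not problems:
            return draft, prompt
        # Feed the documented mistakes back into the next prompt,
        # mirroring the iterative-refinement step.
        prompt += (" Fix: " + "; ".join(problems) +
                   ". Use ==> to separate condition and action sides.")
    return draft, prompt

model, final_prompt = develop_model(
    "Write an ACT-R production that attends to an oval on the screen.")
print(model)
```

In this sketch the checker plays the role of the human evaluator; in the paper's process, a modeler inspects the running model's behavior and refines the prompts accordingly.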