Comparing LLMs for Prompt-Enhanced ACT-R and Soar Model Development: A Case Study in Cognitive Simulation

DOI: 10.1609/aaaiss.v2i1.27710 Publication Date: 2024-01-23T00:50:41Z
ABSTRACT
This paper presents experiments on using ChatGPT-4 and Google Bard to create ACT-R and Soar models. The study involves two simulated cognitive tasks in which Large Language Models (LLMs) serve as conversational interfaces within the architectures' development environments. The first task involves creating an intelligent driving model with motor and perceptual behavior that can further interact with an unmodified interface. The second evaluates the modeling of educational skills in Soar. Prompts were designed to represent operations and actions, including providing context, asking perception-related questions, presenting decision-making scenarios, and evaluating the system's responses; the prompts were iteratively refined based on this evaluation. Results demonstrate the potential of LLMs to interactively develop models in a human-in-the-loop process. We documented the mistakes made during this integration and provided corresponding resolutions for adopting this modeling approach. Furthermore, we present prompt patterns that maximize interaction with cognitive architectures.
SUPPLEMENTAL MATERIAL
Coming soon...