Large language models for generating rules, yay or nay?

Subject: Software Engineering (cs.SE)
DOI: 10.48550/arxiv.2406.06835 Publication Date: 2024-06-10
ABSTRACT
Engineering safety-critical systems such as medical devices and digital health interventions is complex, and long-term engagement with subject-matter experts (SMEs) is needed to capture the systems' expected behaviour. In this paper, we present a novel approach that leverages Large Language Models (LLMs), such as GPT-3.5 and GPT-4, as a potential world model to accelerate the engineering of software systems. This approach involves using LLMs to generate logic rules, which can then be reviewed and informed by SMEs before deployment. We evaluate our approach against a rule set created from a pandemic monitoring system developed in collaboration with medical professionals during COVID-19. Our experiments show that 1) LLMs have a world model that bootstraps implementation, 2) LLMs generated fewer rules compared to the experts, and 3) LLMs do not have the capacity to generate thresholds for each rule. Our work shows how LLMs can augment the requirements' elicitation process by providing access to expert domains.
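The workflow described in the abstract, generating candidate logic rules with an LLM and then routing them to SMEs for review, can be sketched as below. This is a minimal illustration only: the `IF <measure> <op> <value> THEN <action>` rule grammar, the function names, and the example rules are assumptions for the sketch, not the paper's actual rule format or implementation. It also reflects finding 3 by flagging rules the LLM emitted without a threshold, which an SME would need to supply.

```python
import re
from dataclasses import dataclass
from typing import Optional

# Illustrative rule grammar (an assumption, not the paper's format):
#   IF <measure> <op> <value> THEN <action>
RULE_RE = re.compile(
    r"IF\s+(?P<measure>\w+)\s*(?P<op>>=|<=|>|<|==)\s*(?P<value>\d+(?:\.\d+)?)?"
    r"\s*THEN\s+(?P<action>.+)",
    re.IGNORECASE,
)

@dataclass
class Rule:
    measure: str
    op: str
    threshold: Optional[float]  # None when the LLM omitted the threshold
    action: str

def parse_llm_rules(text: str) -> list[Rule]:
    """Parse raw LLM output into structured rules for SME review."""
    rules = []
    for line in text.splitlines():
        m = RULE_RE.search(line)
        if m:
            value = m.group("value")
            rules.append(Rule(
                measure=m.group("measure").lower(),
                op=m.group("op"),
                threshold=float(value) if value else None,
                action=m.group("action").strip(),
            ))
    return rules

def needs_sme_threshold(rules: list[Rule]) -> list[Rule]:
    """Flag rules whose thresholds an SME must still supply."""
    return [r for r in rules if r.threshold is None]
```

For example, parsing the (hypothetical) LLM output `"IF spo2 < 92 THEN alert clinician"` yields a complete rule, while `"IF temperature > THEN flag for review"` is parsed but flagged as missing its threshold, modelling the SME review step before deployment.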