LLM-Agents Driven Automated Simulation Testing and Analysis of small Uncrewed Aerial Systems

DOI: 10.48550/arxiv.2501.11864 Publication Date: 2025-01-20
ABSTRACT
Thorough simulation testing is crucial for validating the correct behavior of small Uncrewed Aerial Systems (sUAS) across multiple scenarios, including adverse weather conditions (such as wind and fog), diverse settings (e.g., hilly terrain or urban areas), and varying mission profiles (e.g., surveillance, tracking). While various sUAS simulation tools exist to support developers, the entire process of creating, executing, and analyzing simulation tests remains a largely manual and cumbersome task. Developers must identify test scenarios, set up the simulation environment, integrate the System under Test (SuT) with simulation tools, formulate mission plans, and collect and analyze results. These labor-intensive tasks limit the ability of developers to conduct exhaustive testing across a wide range of scenarios. To alleviate this problem, in this paper, we propose AutoSimTest, a Large Language Model (LLM)-driven framework in which multiple LLM agents collaborate to support the sUAS simulation testing process. This includes: (1) creating test scenarios that subject the SuT to unique environmental contexts; (2) preparing the simulation environment as per the test scenario; (3) generating missions for the SuT to execute; and (4) analyzing results and providing an interactive analytics interface. Further, the design of the framework is flexible enough to accommodate a variety of use cases, simulation tools, and input requirements. We evaluated our approach by (a) conducting simulation testing of PX4 and ArduPilot flight-controller-based SuTs, (b) analyzing the performance of each agent, and (c) gathering feedback from sUAS developers. Our findings indicate that AutoSimTest significantly improves the efficiency and scope of the sUAS testing process, allowing for more comprehensive and varied scenario evaluations while reducing manual effort.
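The four-stage agent pipeline described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; all types, agent functions, and the fixed execution log below are hypothetical stand-ins for LLM-driven components, shown only to make the stage hand-offs (scenario → environment → mission → analysis) concrete.

```python
from dataclasses import dataclass, field

# Hypothetical data types; the paper's actual interfaces are not reproduced here.
@dataclass
class Scenario:
    weather: str
    terrain: str

@dataclass
class Mission:
    profile: str
    waypoints: list = field(default_factory=list)

def scenario_agent(context: str) -> Scenario:
    """Stage 1: stand-in for an LLM agent proposing an environmental context."""
    if "urban" in context:
        return Scenario(weather="fog", terrain="urban")
    return Scenario(weather="wind", terrain="hilly")

def environment_agent(scenario: Scenario) -> dict:
    """Stage 2: prepares a simulator configuration for the scenario (stubbed)."""
    return {"weather": scenario.weather, "world": scenario.terrain}

def mission_agent(scenario: Scenario) -> Mission:
    """Stage 3: generates a mission for the SuT to execute (stubbed)."""
    profile = "surveillance" if scenario.terrain == "urban" else "tracking"
    return Mission(profile=profile, waypoints=[(0, 0), (10, 10)])

def analysis_agent(log: list) -> dict:
    """Stage 4: summarizes execution results for an analytics interface (stubbed)."""
    return {"steps": len(log), "passed": all(ok for _, ok in log)}

def run_pipeline(context: str) -> dict:
    """Chains the four agents; SuT execution is simulated with a fixed log."""
    scenario = scenario_agent(context)
    env = environment_agent(scenario)          # would configure the simulator
    mission = mission_agent(scenario)
    log = [("takeoff", True), (mission.profile, True), ("land", True)]
    return analysis_agent(log)

print(run_pipeline("urban surveillance test"))
```

In the framework described by the abstract, each stub would instead be an LLM agent, and the execution log would come from running the SuT (e.g., a PX4- or ArduPilot-based flight controller) in a simulator.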