Using Large Language Models to Generate JUnit Tests: An Empirical Study
Empirical Research
DOI: 10.1145/3661167.3661216
Publication Date: 2024-06-14
AUTHORS (6)
ABSTRACT
A code generation model generates code from a prompt consisting of a code comment, existing code, or a combination of both. Although code generation models (e.g., GitHub Copilot) are increasingly being adopted in practice, it is unclear whether they can be used successfully for unit test generation, without fine-tuning, for a strongly typed language like Java. To fill this gap, we investigated how well three models (Codex, GPT-3.5-Turbo, and StarCoder) can generate unit tests. We used two benchmarks (HumanEval and EvoSuite SF110) to investigate the effect of context generation on the unit test generation process. We evaluated the models on compilation rates, test correctness, test coverage, and test smells. We found that the Codex model achieved above 80% coverage on the HumanEval dataset, but no model achieved more than 2% coverage on the EvoSuite SF110 benchmark. The generated tests also suffered from test smells, such as Duplicated Asserts and Empty Tests.

Accepted to the Research Track of the 28th International Conference on Evaluation and Assessment in Software Engineering (EASE 2024).
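To illustrate the two test smells named in the abstract, here is a minimal JUnit 5 sketch. The Calculator class and the test method names are hypothetical examples chosen for illustration; they are not taken from the study's generated tests.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // Hypothetical class under test, defined inline so the example is self-contained.
    class Calculator {
        static int add(int a, int b) {
            return a + b;
        }
    }

    class CalculatorTest {

        // Duplicated Assert smell: the same assertion is repeated,
        // adding no additional checking power to the test.
        @Test
        void addsTwoNumbers() {
            assertEquals(4, Calculator.add(2, 2));
            assertEquals(4, Calculator.add(2, 2)); // duplicate of the line above
        }

        // Empty Test smell: a test method with no executable statements,
        // so it always passes and verifies nothing.
        @Test
        void handlesNegativeNumbers() {
        }
    }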