CoCoFuzzing: Testing Neural Code Models with Coverage-Guided Fuzzing

Fuzz testing · Robustness · Deep Neural Networks · Code
DOI: 10.48550/arxiv.2106.09242 Publication Date: 2021-01-01
ABSTRACT
Deep learning-based code processing models have shown good performance for tasks such as predicting method names, summarizing programs, and comment generation. However, despite this tremendous progress, deep learning models are often prone to adversarial attacks, which can significantly threaten their robustness and generalizability by leading them to misclassification with unexpected inputs. To address this issue, many deep learning testing approaches have been proposed; however, they mainly focus on applications in domains such as image, audio, and text analysis, and cannot be directly applied to neural models for code due to the unique properties of programs. In this paper, we propose a coverage-based fuzzing framework, CoCoFuzzing, for testing neural code models. In particular, we first propose ten mutation operators to automatically generate valid and semantically preserving source code examples as tests; we then propose a neuron coverage-based approach to guide the generation of tests. We investigate CoCoFuzzing on three state-of-the-art neural code models, i.e., NeuralCodeSum, CODE2SEQ, and CODE2VEC. Our experiment results demonstrate that the generated tests improve neuron coverage. Moreover, these tests can be used to improve the performance of the target models through retraining.
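The workflow the abstract describes, applying semantics-preserving mutations to a seed program and keeping only mutants that activate previously uncovered neurons, can be sketched in a simplified, framework-agnostic form. The model stub, the two mutation operators, the token encodings, and the 0.5 activation threshold below are illustrative assumptions, not the paper's actual implementation or its full set of ten operators.

```python
import random

random.seed(0)

THRESHOLD = 0.5  # assumed activation threshold for counting a neuron as "covered"


def model_activations(tokens):
    # Stand-in for a neural code model: deterministic pseudo-activations
    # in [0, 1) for 8 hypothetical neurons, derived from the token sequence.
    return [sum((t * (n + 1)) % 97 for t in tokens) % 100 / 100.0 for n in range(8)]


def covered(tokens):
    # Set of neuron indices whose activation exceeds the threshold.
    return {i for i, a in enumerate(model_activations(tokens)) if a > THRESHOLD}


# Semantics-preserving mutation operators (hypothetical stand-ins for the
# paper's operators, e.g. inserting dead code or an unused variable):
def insert_dead_loop(tokens):
    return tokens + [101, 102, 103]  # e.g. tokens of "while(false){}"


def insert_unused_var(tokens):
    return tokens + [201, 202]       # e.g. tokens of "int unused = 0;"


OPERATORS = [insert_dead_loop, insert_unused_var]


def cocofuzz(seed_tokens, budget=20):
    """Coverage-guided loop: a mutant is kept only if it activates
    at least one neuron not covered by any previously kept test."""
    total = covered(seed_tokens)
    kept = []
    current = seed_tokens
    for _ in range(budget):
        mutant = random.choice(OPERATORS)(current)
        new = covered(mutant) - total
        if new:              # mutant increases neuron coverage: keep it
            total |= new
            kept.append(mutant)
            current = mutant
    return kept, total


tests, coverage = cocofuzz([5, 17, 42])
print(len(tests), sorted(coverage))
```

The key design point mirrored here is the feedback loop: coverage, not random chance alone, decides which mutants survive and seed further mutation, so the test suite grows only when it exercises new model behavior.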