Out of Context: How important is Local Context in Neural Program Repair?

DOI: 10.48550/arxiv.2312.04986 Publication Date: 2023-12
ABSTRACT
Deep learning source code models have been applied very successfully to the problem of automated program repair. One of the standing issues is the small input window of current models, which often cannot fully fit the context required for a bug fix (e.g., the method or class declarations of a project). Instead, the input is restricted to a local context, that is, the lines below and above the bug location. In this work we study the importance of this local context on repair success: how much local context is needed? Is the context before or after the bug location more important? Is the importance of local context tied to the bug type? To answer these questions, we train and evaluate Transformer models in many different local-context configurations on three datasets and two programming languages. Our results indicate that overall repair success increases with the size of the local context (albeit not for all bug types) and confirm the common practice that roughly 50-60% of the input window should be used for context leading the bug. Our results are relevant not only for researchers working on Transformer-based APR tools but also for benchmark and dataset creators who must decide what to include in their datasets.
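
To make the windowing setup concrete, the sketch below builds a line-based local-context window around a bug location, spending a configurable fraction of the line budget on leading (before-the-bug) context, in line with the 50-60% finding above. This is an illustrative assumption of how such a configuration could look, not the authors' implementation; the function name, parameters, and the 0.55 default are hypothetical.

    # Hypothetical sketch of local-context extraction around a bug line.
    # Not the paper's code; names, parameters, and defaults are assumptions.

    def build_local_context(lines, bug_line, window_size, leading_fraction=0.55):
        """Return (before, bug, after) line lists, using at most `window_size`
        context lines, with `leading_fraction` of the budget spent on lines
        before the bug (the paper finds ~50-60% works best)."""
        before_budget = int(window_size * leading_fraction)
        after_budget = window_size - before_budget
        start = max(0, bug_line - before_budget)
        end = min(len(lines), bug_line + 1 + after_budget)
        return lines[start:bug_line], lines[bug_line], lines[bug_line + 1:end]

    # Usage with a stand-in source file of 200 lines:
    source = ["import java.util.*;"] * 200
    before, buggy, after = build_local_context(source, bug_line=120, window_size=50)
    assert len(before) == 27 and len(after) == 23  # ~55% of the budget leads the bug

Truncating at file boundaries (the max/min clamps) means windows near the start or end of a file simply contain fewer context lines rather than padding, which matches the abstract's framing of context as "the lines below and above the bug location".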