Improving Text-to-Code Generation with Features of Code Graph on GPT-2
DOI:
10.3390/electronics10212706
Publication Date:
2021-11-05T14:33:09Z
AUTHORS (2)
ABSTRACT
Code generation, a very active application area of deep learning models for text, consists of two different fields: code-to-code and text-to-code. A recent approach, GraphCodeBERT, uses a code graph called data flow and showed a good performance improvement. Its base model architecture is bidirectional encoder representations from transformers (BERT), i.e., the encoder part of the transformer. On the other hand, the generative pre-trained transformer (GPT), another multilayer transformer architecture, uses the decoder part and shows great performance in generation tasks. In this study, we investigate the improvement brought by code graphs with several variances on GPT-2, referring to the abstract semantic tree used to collect the features of variables in the code. Here, we mainly focus on the additional features that allow the model to learn the effect of the data stream. The experimental phase is divided into two parts: fine-tuning the existing model, and pre-training a new model from scratch using code data. When we pre-train a new model from scratch, the model with the code graph produces a result that outperforms the one without it, given enough data.
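The "code graph" referred to above is a data-flow relation over variables, extracted from the program's syntax tree. As a rough illustration only (a minimal sketch, not the authors' or GraphCodeBERT's implementation; the helper name data_flow_edges and the assignment-only edge rule are assumptions), the snippet below uses Python's standard ast module to collect "value comes from" edges between variables, the kind of additional feature the study attaches to the model input.

# Minimal sketch: variable-level data-flow edges from Python source.
# Assumptions: only simple assignments are handled; the function name
# data_flow_edges is illustrative, not taken from the paper.
import ast

def data_flow_edges(source: str):
    """Collect (assigned_var, source_var) edges for simple assignments."""
    tree = ast.parse(source)
    edges = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Assign):
            targets = [t.id for t in node.targets if isinstance(t, ast.Name)]
            # Every variable read on the right-hand side feeds each assigned name.
            reads = [n.id for n in ast.walk(node.value) if isinstance(n, ast.Name)]
            for tgt in targets:
                for src in reads:
                    edges.append((tgt, src))
    return edges

print(data_flow_edges("a = 1\nb = a + 2\nc = a * b\n"))
# -> [('b', 'a'), ('c', 'a'), ('c', 'b')]

For the three-line example above, the sketch yields the edges (b, a), (c, a), and (c, b), i.e., which variables each assigned value depends on; this is the sort of relation that data-flow features expose to the language model.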