Granite Code Models: A Family of Open Foundation Models for Code Intelligence

DOI: 10.48550/arXiv.2405.04324
Publication Date: 2024-05-07
ABSTRACT
Large Language Models (LLMs) trained on code are revolutionizing the software development process. Increasingly, code LLMs are being integrated into software development environments to improve the productivity of human programmers, and LLM-based agents are beginning to show promise for handling complex tasks autonomously. Realizing the full potential of code LLMs requires a wide range of capabilities, including code generation, fixing bugs, explaining and documenting code, maintaining repositories, and more. In this work, we introduce the Granite series of decoder-only code models for code generative tasks, trained with code written in 116 programming languages. The Granite Code model family consists of models ranging in size from 3 to 34 billion parameters, suitable for applications ranging from complex application modernization tasks to on-device memory-constrained use cases. Evaluation on a comprehensive set of tasks demonstrates that Granite Code models consistently reach state-of-the-art performance among available open-source code LLMs. The Granite Code model family was optimized for enterprise software development workflows and performs well across a range of coding tasks (e.g. code generation, fixing and explanation), making it a versatile all-around code model. We release all our Granite Code models under an Apache 2.0 license for both research and commercial use.
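Since the models are released openly, they can be run with standard open-source tooling. Below is a minimal sketch of loading a checkpoint and generating a code completion with the Hugging Face transformers library; the repository id "ibm-granite/granite-3b-code-base" is an assumption based on the naming in the release, not confirmed by this page, so adjust it to the published model card.

# Minimal inference sketch for a Granite Code checkpoint (assumed repo id).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ibm-granite/granite-3b-code-base"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit the 3B model on one GPU
    device_map="auto",          # requires the accelerate package
)

# Prompt the base model with a function signature and let it complete the body.
prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The smaller 3B variant targets the on-device, memory-constrained use cases mentioned in the abstract; the larger family members (up to 34B) would load the same way with more memory or sharding.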