GOFA: A Generative One-For-All Model for Joint Graph Language Modeling
Generative model
DOI:
10.48550/arXiv.2407.09709
Publication Date:
2024-07-12
AUTHORS (7)
ABSTRACT
Foundation models, such as Large Language Models (LLMs) or Large Vision Models (LVMs), have emerged as some of the most powerful tools in their respective fields. However, unlike text and image data, graph data do not have a definitive structure, posing great challenges to developing a Graph Foundation Model (GFM). For example, current attempts at designing general graph models either transform graph data into a language format for LLM-based prediction or still train a GNN model with an LLM as an assistant. The former can handle unlimited tasks, while the latter captures graph structure much better -- yet, no existing work can achieve both simultaneously. In this paper, we identify three key desirable properties of a GFM: self-supervised pretraining, fluidity in tasks, and graph awareness. To account for these properties, we extend the conventional language modeling to the graph domain and propose a novel generative graph language model, GOFA, to solve the problem. GOFA interleaves randomly initialized GNN layers into a frozen pre-trained LLM so that the semantic and structural modeling abilities are organically combined. GOFA is pre-trained on newly proposed graph-level next-word prediction, question-answering, and structural tasks to obtain the above GFM properties. The pre-trained model is further fine-tuned on downstream tasks to obtain task-solving ability. The model is evaluated on various downstream tasks, demonstrating a strong ability to solve structural and contextual problems in zero-shot scenarios. The code is available at https://github.com/JiaruiFeng/GOFA.
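The abstract's core architectural idea is interleaving trainable GNN layers between the layers of a frozen pre-trained LLM. The following is a minimal sketch of that interleaving pattern, not the authors' implementation (which is at https://github.com/JiaruiFeng/GOFA): the layer sizes, the mean-aggregation GNN, the use of encoder layers as a stand-in for the LLM stack, and the choice of the first token as the per-node summary state are all illustrative assumptions.

```python
# Sketch of the GOFA-style interleaving idea under the assumptions stated above:
# frozen "LLM" layers handle per-node text; trainable GNN layers pass messages
# between node summary states after each LLM layer.
import torch
import torch.nn as nn


class SimpleGNNLayer(nn.Module):
    """Trainable message-passing layer: mean-aggregate neighbor states (assumed design)."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, dim) node states; adj: (num_nodes, num_nodes) 0/1 adjacency.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        neighbor_mean = adj @ x / deg
        return x + torch.relu(self.proj(neighbor_mean))  # residual update


class InterleavedGraphLM(nn.Module):
    """Frozen 'LLM' layers with randomly initialized GNN layers inserted between them."""

    def __init__(self, dim: int = 64, num_layers: int = 4, num_heads: int = 4):
        super().__init__()
        # Stand-in for a pre-trained LLM stack; a real model would load pre-trained weights.
        self.llm_layers = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, num_heads, batch_first=True)
            for _ in range(num_layers)
        )
        for p in self.llm_layers.parameters():  # freeze the "pre-trained" stack
            p.requires_grad = False
        # Randomly initialized, trainable GNN layers (one per LLM layer).
        self.gnn_layers = nn.ModuleList(SimpleGNNLayer(dim) for _ in range(num_layers))

    def forward(self, tokens: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # tokens: (num_nodes, seq_len, dim) -- each graph node carries a text sequence.
        h = tokens
        for llm, gnn in zip(self.llm_layers, self.gnn_layers):
            h = llm(h)                          # frozen semantic modeling
            node_state = gnn(h[:, 0], adj)      # trainable structural modeling
            h = torch.cat([node_state.unsqueeze(1), h[:, 1:]], dim=1)
        return h


model = InterleavedGraphLM()
tokens = torch.randn(5, 8, 64)          # 5 nodes, 8 tokens each
adj = (torch.rand(5, 5) > 0.5).float()  # random adjacency for illustration
print(model(tokens, adj).shape)         # torch.Size([5, 8, 64])
```

Because only the GNN parameters require gradients, training such a model updates the structural pathway while leaving the language model's semantic abilities intact, which matches the abstract's claim that the two abilities are "organically combined."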