arxiv:2402.07197

GraphTranslator: Aligning Graph Model to Large Language Model for Open-ended Tasks

Published on Feb 11, 2024
Abstract

Large language models (LLMs) like ChatGPT exhibit powerful zero-shot and instruction-following capabilities and have catalyzed a revolutionary transformation across diverse fields, especially for open-ended tasks. This idea remains less explored in the graph domain: despite the availability of numerous powerful graph models (GMs), they are restricted to tasks in a pre-defined form. Although several methods applying LLMs to graphs have been proposed, they fail to simultaneously handle pre-defined and open-ended tasks, using the LLM either as a node feature enhancer or as a standalone predictor. To break this dilemma, we propose to bridge the pretrained GM and the LLM with a Translator, named GraphTranslator, aiming to leverage the GM to handle pre-defined tasks effectively and to utilize the extended interface of LLMs to offer various open-ended tasks for the GM. To train such a Translator, we propose a Producer capable of constructing graph-text alignment data along node information, neighbor information, and model information. By translating node representations into tokens, GraphTranslator empowers an LLM to make predictions based on language instructions, providing a unified perspective for both pre-defined and open-ended tasks. Extensive results demonstrate the effectiveness of our proposed GraphTranslator on zero-shot node classification. The graph question answering experiments reveal GraphTranslator's potential across a broad spectrum of open-ended tasks through language instructions. Our code is available at: https://github.com/alibaba/GraphTranslator.
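The abstract describes the Translator as converting frozen GM node representations into tokens the LLM can consume alongside a language instruction. As a rough illustration of that idea only, here is a minimal PyTorch sketch: a small cross-attention module compresses a node's (and its neighbors') GM embeddings into a fixed number of soft tokens in the LLM's embedding space. All class names, dimensions, and layer choices below are hypothetical assumptions for illustration, not taken from the paper or the linked repository.

```python
import torch
import torch.nn as nn

class TranslatorSketch(nn.Module):
    """Hypothetical sketch: map frozen graph-model node embeddings into a
    fixed number of soft tokens in the LLM's embedding space. Dimensions
    and layer choices are illustrative, not the paper's architecture."""

    def __init__(self, gm_dim=256, llm_dim=4096, num_tokens=8):
        super().__init__()
        # One learned query per output token (assumed design choice).
        self.queries = nn.Parameter(torch.randn(num_tokens, gm_dim))
        self.attn = nn.MultiheadAttention(gm_dim, num_heads=4, batch_first=True)
        self.proj = nn.Linear(gm_dim, llm_dim)  # align with LLM embedding width

    def forward(self, node_embs):
        # node_embs: (batch, num_nodes, gm_dim) frozen GM representations
        # of a target node and its neighbors.
        batch = node_embs.size(0)
        q = self.queries.unsqueeze(0).expand(batch, -1, -1)
        tokens, _ = self.attn(q, node_embs, node_embs)  # cross-attend to graph
        return self.proj(tokens)  # (batch, num_tokens, llm_dim) soft tokens

# Usage: prepend the soft tokens to the embedded language instruction
# before feeding the combined sequence into the (frozen) LLM.
translator = TranslatorSketch()
graph_embs = torch.randn(2, 16, 256)   # placeholder GM output for 2 nodes
soft_tokens = translator(graph_embs)   # -> (2, 8, 4096)
```

Per the abstract, a module of this kind would be trained on the Producer's graph-text alignment data while the GM and LLM remain fixed; the exact training objective and module internals are specified in the paper and repository, not here.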
