Instruction-based Hypergraph Pretraining
arXiv (2024)
Abstract
Pretraining has been widely explored to improve the adaptability of graph
learning models by transferring knowledge from large datasets to downstream
tasks such as link prediction or classification. However, the gap between
training objectives and the discrepancy between data distributions in
pretraining and downstream tasks hinder the transfer of the pretrained
knowledge. Inspired by
instruction-based prompts widely used in pretrained language models, we
introduce instructions into graph pretraining. In this paper, we propose a
novel pretraining framework named Instruction-based Hypergraph Pretraining
(IHP). To
overcome the discrepancy between pretraining and downstream tasks, text-based
instructions are applied to provide explicit guidance on specific tasks for
representation learning. Compared to learnable prompts, whose effectiveness
depends on the quality and the diversity of training data, text-based
instructions intrinsically encapsulate task information and enable the model
to generalize beyond the structures seen during pretraining. To capture
high-order relations with task information in a context-aware manner, a novel
prompting hypergraph convolution layer is devised to integrate instructions
into information propagation in hypergraphs. Extensive experiments conducted on
three public datasets verify the superiority of IHP in various scenarios.
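
The abstract's core technical component is the prompting hypergraph convolution layer. The following is a minimal PyTorch sketch of the general idea, assuming standard HGNN-style normalized propagation and a simple additive fusion of a projected instruction embedding into the node features; the class name PromptingHypergraphConv, the fusion scheme, and all shapes are illustrative assumptions rather than the paper's actual implementation.

```python
import torch
import torch.nn as nn


class PromptingHypergraphConv(nn.Module):
    """Hypergraph convolution with a task-instruction embedding fused into
    node features before propagation (illustrative sketch, not the paper's
    exact layer)."""

    def __init__(self, in_dim: int, out_dim: int, instr_dim: int):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)
        # Projects a text-encoder instruction embedding into node space.
        self.instr_proj = nn.Linear(instr_dim, in_dim, bias=False)

    def forward(self, x, incidence, instr_emb):
        # x:         (N, in_dim)  node features
        # incidence: (N, E)       hypergraph incidence matrix H
        # instr_emb: (instr_dim,) embedding of the task instruction text
        x = x + self.instr_proj(instr_emb)  # condition every node on the task

        # HGNN-style normalized propagation:
        # X' = Dv^{-1/2} H De^{-1} H^T Dv^{-1/2} X W
        dv_inv_sqrt = incidence.sum(dim=1).clamp(min=1.0).pow(-0.5)  # (N,)
        de_inv = 1.0 / incidence.sum(dim=0).clamp(min=1.0)           # (E,)

        h = dv_inv_sqrt.unsqueeze(1) * x   # Dv^{-1/2} X
        h = incidence.t() @ h              # H^T: gather nodes into hyperedges
        h = de_inv.unsqueeze(1) * h        # De^{-1}: average within hyperedges
        h = incidence @ h                  # H: scatter back to nodes
        h = dv_inv_sqrt.unsqueeze(1) * h   # Dv^{-1/2}
        return torch.relu(self.weight(h))


# Usage with random tensors (all shapes are hypothetical):
conv = PromptingHypergraphConv(in_dim=64, out_dim=32, instr_dim=768)
x = torch.randn(10, 64)                    # 10 nodes
H = (torch.rand(10, 4) > 0.5).float()      # 4 hyperedges
instr = torch.randn(768)                   # e.g. a sentence embedding
out = conv(x, H, instr)                    # (10, 32)
```

The design choice illustrated here is that the instruction enters before message passing, so the high-order aggregation over hyperedges is itself conditioned on the task rather than the instruction being appended afterwards.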