Graph Inference Acceleration by Learning MLPs on Graphs without Supervision

Zehong Wang, Zheyuan Zhang, Chuxu Zhang, Yanfang Ye

CoRR (2024)

Abstract
Graph Neural Networks (GNNs) have demonstrated effectiveness in various graph learning tasks, yet their reliance on message passing constrains their deployment in latency-sensitive applications such as financial fraud detection. Recent works have explored distilling knowledge from GNNs to Multi-Layer Perceptrons (MLPs) to accelerate inference. However, this task-specific supervised distillation limits generalization to unseen nodes, which are prevalent in latency-sensitive applications. To this end, we present SimMLP, a Simple yet effective framework for learning MLPs on graphs without supervision, to enhance generalization. SimMLP employs self-supervised alignment between GNNs and MLPs to capture the fine-grained and generalizable correlation between node features and graph structures, and proposes two strategies to alleviate the risk of trivial solutions. Theoretically, we comprehensively analyze SimMLP to demonstrate its equivalence to GNNs in the optimal case and its generalization capability. Empirically, SimMLP outperforms state-of-the-art baselines, especially in settings with unseen nodes. In particular, it obtains significant performance gains (7∼26%) over MLPs and inference acceleration over GNNs (90∼126×) on large-scale graph datasets. Our code is available at: .
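The core idea of aligning GNN and MLP embeddings can be illustrated with a minimal sketch. The code below is a hypothetical toy example (all names, shapes, and the squared-error alignment loss are illustrative assumptions, not the paper's actual formulation): a one-layer GNN embeds nodes using the graph structure, a structure-free MLP embeds the same nodes from features alone, and an alignment loss measures their disagreement. Minimizing such a loss with respect to the MLP's weights would distill structural information into the MLP, which can then run inference without message passing.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes with 3-dimensional features (illustrative only)
X = rng.normal(size=(4, 3))            # node feature matrix
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # adjacency matrix

A_hat = A + np.eye(4)                  # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(axis=1))
P = D_inv @ A_hat                      # row-normalized propagation matrix

W_gnn = rng.normal(size=(3, 2))        # one-layer GNN weights (hypothetical)
W_mlp = rng.normal(size=(3, 2))        # MLP weights to be trained (hypothetical)

Z_gnn = P @ X @ W_gnn                  # GNN embedding: uses graph structure
Z_mlp = X @ W_mlp                      # MLP embedding: features only

# Alignment loss: mean squared distance between the two embeddings.
# Training the MLP to minimize this (while avoiding trivial collapse,
# which SimMLP addresses with dedicated strategies) transfers the
# feature-structure correlation into the structure-free MLP.
align_loss = float(np.mean((Z_gnn - Z_mlp) ** 2))
print(align_loss)
```

At inference time only `X @ W_mlp` is evaluated, which is why distilled MLPs avoid the neighbor-fetching latency of message passing.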