Multi-hop Attention Graph Neural Networks.

IJCAI (2021)

Abstract
The self-attention mechanism in graph neural networks (GNNs) has led to state-of-the-art performance on many graph representation learning tasks. Currently, at every layer, a node computes attention independently for each of its graph neighbors. However, such an attention mechanism is limited because it ignores nodes that are not connected by an edge yet can provide important network context information. Here we propose the Multi-hop Attention Graph Neural Network (MAGNA), a principled way to incorporate multi-hop context information into every layer of GNN attention computation. MAGNA diffuses the attention scores across the network, which increases the "receptive field" of every GNN layer. Unlike previous approaches, MAGNA uses a diffusion prior on attention values to efficiently account for all paths between a pair of disconnected nodes. We demonstrate theoretically and experimentally that MAGNA captures large-scale structural information in every layer and has a low-pass effect that eliminates noisy high-frequency information from the graph. Experimental results on node classification as well as knowledge graph completion benchmarks show that MAGNA achieves state-of-the-art results: MAGNA achieves up to 5.7% relative error reduction over the previous state of the art on Cora, Citeseer, and Pubmed. MAGNA also obtains strong performance on a large-scale Open Graph Benchmark dataset. Finally, on knowledge graph completion MAGNA advances the state of the art on WN18RR and FB15k-237 across four different performance metrics.
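To illustrate the attention-diffusion idea described in the abstract, below is a minimal PyTorch sketch (not the authors' released code): one-hop GAT-style attention is computed on existing edges, then spread to multi-hop neighbors with a truncated geometric series. The hop count K, teleport probability alpha, and the dense-adjacency formulation are illustrative assumptions chosen for clarity.

```python
# Hypothetical sketch of attention diffusion over a graph (dense form for clarity).
import torch
import torch.nn.functional as F

def one_hop_attention(h, adj, a_src, a_dst):
    """GAT-style attention scores restricted to existing edges."""
    # h: (N, d) node features; adj: (N, N) {0,1} adjacency with self-loops
    scores = (h @ a_src).unsqueeze(1) + (h @ a_dst).unsqueeze(0)   # (N, N) pairwise scores
    scores = F.leaky_relu(scores, negative_slope=0.2)
    scores = scores.masked_fill(adj == 0, float('-inf'))            # keep only real edges
    return torch.softmax(scores, dim=1)                             # row-stochastic attention A

def diffuse_attention(A, K=3, alpha=0.15):
    """Truncated diffusion: sum_k alpha * (1 - alpha)^k * A^k, k = 0..K."""
    out = alpha * torch.eye(A.size(0))
    Ak = torch.eye(A.size(0))
    for k in range(1, K + 1):
        Ak = Ak @ A                                                 # k-hop attention paths
        out = out + alpha * (1 - alpha) ** k * Ak
    return out

# Usage example with random features and a random graph (N nodes, d features).
N, d = 6, 8
h = torch.randn(N, d)
adj = (torch.rand(N, N) > 0.6).float()
adj.fill_diagonal_(1.0)
a_src, a_dst = torch.randn(d), torch.randn(d)

A = one_hop_attention(h, adj, a_src, a_dst)
A_multi = diffuse_attention(A)     # now also attends to nodes several hops away
h_next = A_multi @ h               # aggregate features with the diffused attention
```

The geometric weights alpha * (1 - alpha)^k down-weight longer paths, which is one way to realize the "diffusion prior" the abstract refers to; the series is truncated at K hops so each layer's receptive field grows without materializing all powers of the attention matrix exactly.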