Context-sensitive graph representation learning

International Journal of Machine Learning and Cybernetics (2023)

Abstract
Graph representation learning, which maps high-dimensional or sparse graphs into a low-dimensional vector space, has shown its superiority in numerous learning tasks. Recently, researchers have identified advantages of context-sensitive graph representation learning methods in tasks such as link prediction and ranking-based recommendation. However, most existing methods depend on convolutional or recursive neural networks to obtain additional information beyond a node itself, require community-detection algorithms to extract a node's multiple contexts, or focus only on local neighboring nodes while ignoring their structural information. In this paper, we propose a novel context-sensitive representation method, Context-Sensitive Graph Representation Learning (CSGRL), which combines attention networks with a variant of the graph auto-encoder to learn weighted information about various aspects of participating neighboring nodes. The core of CSGRL is an asymmetric graph encoder that aggregates information from neighboring nodes and local structures to optimize the learning objective. The main benefit of CSGRL is that it requires neither additional node features nor multiple contexts per node: messages from neighboring nodes and their structures propagate through the encoder. Experiments on three real-world datasets for both link prediction and node clustering demonstrate that CSGRL significantly improves effectiveness on all of these challenging learning tasks compared with 14 state-of-the-art baselines.
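The abstract describes attention-weighted aggregation of neighboring nodes inside a graph encoder. The following is a minimal illustrative sketch of that general idea (a simplified GAT-style attention step over a dense adjacency matrix), not the authors' CSGRL implementation; the function names and parameters `W` and `a` are assumptions for illustration.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_aggregate(h, adj, W, a):
    """Aggregate each node's neighbors with learned attention weights.

    A simplified attention step for illustration only (not CSGRL itself):
    h   -- (n, d_in) node feature matrix
    adj -- (n, n) binary adjacency matrix
    W   -- (d_in, d_out) projection matrix
    a   -- (2 * d_out,) attention parameter vector
    """
    z = h @ W                      # project node features
    n = h.shape[0]
    out = np.zeros_like(z)
    for i in range(n):
        nbrs = np.nonzero(adj[i])[0]
        # attention score for each neighbor j: a^T [z_i ; z_j]
        scores = np.array([a @ np.concatenate([z[i], z[j]]) for j in nbrs])
        alpha = softmax(scores)    # normalized attention coefficients
        out[i] = alpha @ z[nbrs]   # weighted sum of neighbor embeddings
    return out
```

With a single neighbor the attention weight is 1, so the node's output reduces to that neighbor's projected embedding; with several neighbors the softmax distributes the weight according to the learned scores.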
Keywords
Context-sensitive, Graph representation learning, Graph auto-encoder