Attention-based network embedding with higher-order weights and node attributes

Xian Mo, Binyuan Wan, Rui Tang, Junkai Ding, Guangdi Liu

CAAI TRANSACTIONS ON INTELLIGENCE TECHNOLOGY (2024)

Abstract
Network embedding aims to learn a low-dimensional vector for each node in a network, which can then be applied to diverse data mining tasks. In real life, many networks carry rich attributes and temporal information, yet most existing embedding approaches ignore either the temporal information or the network attributes. This article presents a self-attention-based architecture that uses higher-order weights and node attributes for both static and temporal attributed network embedding. A random walk sampling algorithm based on higher-order weights and node attributes is presented to capture network topological features. For static attributed networks, the algorithm incorporates first-order to k-order weights and node attribute similarities into one weighted graph to preserve the topological features of the network. For temporal attributed networks, the algorithm incorporates previous snapshots of the network, containing first-order to k-order weights and node attribute similarities, into one weighted graph. In addition, the algorithm uses a damping factor to ensure that more recent snapshots receive greater weight. Attribute features are then incorporated into the topological features. Next, the authors adopt Self-Attention Networks, a state-of-the-art architecture, to learn node representations. Experimental results on node classification for static attributed networks and link prediction for temporal attributed networks reveal that the proposed approach is competitive against diverse state-of-the-art baseline approaches.
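The fusion step the abstract describes, combining first- to k-order topological weights (damped across temporal snapshots) with node-attribute similarities into one weighted graph, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `combined_weights` and the parameters `k`, `beta` (topology/attribute trade-off), and `theta` (damping factor) are hypothetical names chosen here, and cosine similarity is assumed as the attribute-similarity measure.

```python
import numpy as np

def combined_weights(adjacencies, attributes, k=2, beta=0.5, theta=0.8):
    """Fuse first- to k-order weights over temporal snapshots with
    node-attribute similarity into one weighted graph (illustrative sketch).

    adjacencies: list of (n, n) adjacency matrices, oldest snapshot first
                 (a single-element list corresponds to the static case).
    attributes:  (n, d) node-attribute matrix.
    """
    n = adjacencies[0].shape[0]
    topo = np.zeros((n, n))
    T = len(adjacencies)
    for t, A in enumerate(adjacencies):
        # Damping: snapshot t (0 = oldest) gets weight theta**(T-1-t),
        # so the most recent snapshot carries the largest weight.
        damp = theta ** (T - 1 - t)
        Ak = np.eye(n)
        order_sum = np.zeros((n, n))
        for _ in range(k):
            Ak = Ak @ A            # A, A^2, ..., A^k
            order_sum += Ak        # accumulate first- to k-order weights
        topo += damp * order_sum
    # Cosine similarity between node attribute vectors.
    norms = np.linalg.norm(attributes, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    X = attributes / norms
    sim = X @ X.T
    # Blend normalized topology with attribute similarity; zero the diagonal.
    W = beta * topo / topo.max() + (1 - beta) * sim
    np.fill_diagonal(W, 0.0)
    return W
```

A random walk sampler could then draw next-hop nodes with probability proportional to the rows of `W`, so that walks reflect both higher-order structure and attribute homophily.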
Keywords
data mining,deep neural networks,social network