Multi-Interest Learning for Multi-Modal Paper Recommendation

Xiaoteng Shen, Liangcai Su, Xi Xiao, Yi Li

ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024

Abstract
To help researchers find papers of interest quickly and accurately, paper recommendation systems are widely deployed. However, they face unique challenges, including the exploitation of rich multi-modal features and the modeling of complex relationships. In terms of multi-modal features, previous work has focused on textual information but ignored visual information, since paper images may be missing or highly domain-specific in meaning. On the other hand, although conventional recommendation methods are suitable for modeling the user-paper relationship, the lack of a specific design for the paper-paper relationship leads to sub-optimal results. To overcome these limitations, we propose a multi-interest based multi-modal paper recommendation model named TMRec. TMRec uses screenshots of papers as visual impressions to capture paper style features while avoiding the problem of missing paper images. In addition, we design a multi-interest extraction module and a multi-level interaction module to model both the user's multiple interests and the citation relationships between papers in the user behavior sequence. Compared to various strong baselines, TMRec achieves a relative improvement of up to 20% in recall on real-world datasets, demonstrating the superiority of TMRec and the effectiveness of the visual impression feature.
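The abstract does not specify how the multi-interest extraction module is implemented. As a rough illustration only, the sketch below shows a common self-attentive design for such a module (in the spirit of ComiRec-SA), pooling a user's paper-interaction sequence into K interest vectors. The class name MultiInterestExtractor, the fused text-plus-visual input embeddings, and all hyperparameters are illustrative assumptions, not taken from TMRec.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiInterestExtractor(nn.Module):
    """Self-attentive multi-interest extraction over a user's behavior
    sequence. A hypothetical sketch; TMRec's actual module may differ."""

    def __init__(self, d_model: int, num_interests: int, d_attn: int = 64):
        super().__init__()
        self.w1 = nn.Linear(d_model, d_attn, bias=False)
        self.w2 = nn.Linear(d_attn, num_interests, bias=False)

    def forward(self, seq_emb: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # seq_emb: (batch, seq_len, d_model) paper embeddings, e.g. fused
        #          textual and visual-impression features (assumed input)
        # mask:    (batch, seq_len), 1 for real items, 0 for padding
        scores = self.w2(torch.tanh(self.w1(seq_emb)))            # (B, L, K)
        scores = scores.masked_fill(mask.unsqueeze(-1) == 0, float('-inf'))
        attn = F.softmax(scores, dim=1)                           # attend over the sequence
        # Each of the K attention heads pools the sequence into one interest vector.
        interests = torch.einsum('blk,bld->bkd', attn, seq_emb)   # (B, K, d_model)
        return interests

# A candidate paper can then be scored against its closest interest:
# logits = (interests @ cand_emb.unsqueeze(-1)).squeeze(-1).max(dim=1).values
```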
Keywords
Paper recommendation,multi-modal recommendation,multi-interest,visual modality,prototype feature