A (More) Realistic Evaluation Setup for Generalisation of Community Models on Malicious Content Detection
arXiv (2024)
Abstract
Community models for malicious content detection, which take into account the
context from a social graph alongside the content itself, have shown remarkable
performance on benchmark datasets. Yet, misinformation and hate speech continue
to propagate on social media networks. This mismatch can be partially
attributed to the limitations of current evaluation setups that neglect the
rapid evolution of online content and the underlying social graph. In this
paper, we propose a novel evaluation setup for model generalisation based on
our few-shot subgraph sampling approach. This setup tests for generalisation
through few labelled examples in local explorations of a larger graph,
emulating more realistic application settings. We show this to be a challenging
inductive setup, wherein strong performance on the training graph is not
indicative of performance on unseen tasks, domains, or graph structures.
Lastly, we show that graph meta-learners trained with our proposed few-shot
subgraph sampling outperform standard community models in the inductive setup.
We make our code publicly available.
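The few-shot subgraph sampling the abstract describes can be illustrated with a minimal sketch: pick a handful of labelled seed nodes per class, then expand a local n-hop neighbourhood around them to form an evaluation subgraph. The function name, parameters, and graph representation below are illustrative assumptions, not the paper's actual implementation.

```python
import random
from collections import deque

def sample_few_shot_subgraph(adj, labels, k_shots=2, n_hops=2, seed=0):
    """Illustrative sketch (not the paper's code): sample k_shots labelled
    seeds per class, then take the union of their n-hop neighbourhoods
    as a local evaluation subgraph.

    adj    -- adjacency lists, e.g. {node: [neighbour, ...]}
    labels -- mapping from a *few* labelled nodes to their class
    """
    rng = random.Random(seed)
    # Group the labelled nodes by class.
    by_class = {}
    for node, y in labels.items():
        by_class.setdefault(y, []).append(node)
    # Few-shot seed selection: at most k_shots labelled nodes per class.
    seeds = []
    for y, nodes in by_class.items():
        seeds.extend(rng.sample(sorted(nodes), min(k_shots, len(nodes))))
    # Local exploration: breadth-first search out to n_hops from each seed.
    visited = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == n_hops:
            continue
        for nb in adj.get(node, []):
            if nb not in visited:
                visited.add(nb)
                frontier.append((nb, depth + 1))
    return seeds, visited
```

On a toy graph with two components, `sample_few_shot_subgraph({0: [1], 1: [0, 2], 2: [1, 3], 3: [2], 4: [5], 5: [4]}, {0: "hate", 4: "benign"}, k_shots=1, n_hops=1)` returns seeds `[0, 4]` and the subgraph nodes `{0, 1, 4, 5}` — only the local neighbourhoods of the few labelled examples, mirroring the inductive setup in which a model never sees the full training graph.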