The strength of weak bots.

Online Social Networks and Media (2021)

Abstract
Some fear that social bots, automated accounts on online social networks, propagate falsehoods that can harm public opinion formation and democratic decision-making. Empirical research, however, has produced puzzling findings. On the one hand, the content emitted by bots tends to spread very quickly through the networks. On the other hand, bots' ability to contact human users tends to be very limited. Here we analyze an agent-based model of social influence in networks that explains this inconsistency. We show that bots may be successful in spreading falsehoods not despite their limited direct impact on human users, but because of this limitation. Our model suggests that bots with limited direct impact on humans may be more, not less, effective in spreading their views in the social network, because their direct contacts keep exerting influence on users that the bot does not reach directly. Highly active and well-connected bots, in contrast, may have a strong impact on their direct contacts, but these contacts grow too dissimilar from their network neighbors to further spread the bot's content. To demonstrate this effect, we included bots in Axelrod's seminal model of the dissemination of culture and conducted simulation experiments demonstrating the strength of weak bots. A series of sensitivity analyses shows that the finding is robust, in particular when the model is tailored to the context of online social networks. We discuss implications for future empirical research and for developers of approaches to detect bots and misinformation.
Keywords
Social bots, Misinformation, Fake news, Social influence, Agent-based model
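To make the mechanism concrete, the following is a minimal, illustrative sketch of Axelrod-style culture dissemination with one stubborn bot whose activity can be throttled. The grid topology, parameter values (F, Q, L, BOT_ACTIVITY), the bot's fixed culture vector, and the adoption count at the end are illustrative assumptions, not the paper's exact experimental setup; lowering BOT_ACTIVITY is one way to explore the "weak bot" regime the abstract describes.

```python
# Sketch: Axelrod's culture-dissemination dynamics with a single stubborn "bot".
# All parameter choices below are assumptions for illustration.
import random

F, Q = 5, 10          # number of cultural features and traits per feature (assumed)
L = 20                # agents live on an L x L grid with von Neumann neighbours (assumed)
STEPS = 200_000
BOT_ACTIVITY = 0.1    # probability the bot acts when selected; small value = "weak" bot

def similarity(a, b):
    """Fraction of features on which two cultural vectors agree."""
    return sum(x == y for x, y in zip(a, b)) / F

def neighbours(i, j):
    """Four von Neumann neighbours on a torus."""
    return [((i + di) % L, (j + dj) % L) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]

# Initialise random cultures; place one bot with a fixed culture at the centre.
culture = {(i, j): [random.randrange(Q) for _ in range(F)] for i in range(L) for j in range(L)}
positions = list(culture)
bot_pos = (L // 2, L // 2)
bot_culture = [0] * F                  # the content the bot pushes (assumed encoding)
culture[bot_pos] = list(bot_culture)

for _ in range(STEPS):
    src = random.choice(positions)
    tgt = random.choice(neighbours(*src))
    if src == bot_pos:
        # The bot is only intermittently active and is never influenced itself.
        if random.random() > BOT_ACTIVITY:
            continue
        src_vec = bot_culture
    else:
        src_vec = culture[src]
    if tgt == bot_pos:
        continue                        # the bot never changes its culture
    tgt_vec = culture[tgt]
    # Axelrod rule: interact with probability equal to cultural similarity,
    # then copy one of the differing traits from the source to the target.
    if random.random() < similarity(src_vec, tgt_vec):
        diffs = [k for k in range(F) if src_vec[k] != tgt_vec[k]]
        if diffs:
            k = random.choice(diffs)
            tgt_vec[k] = src_vec[k]

# Rough outcome measure (assumed): how many human agents end up sharing the bot's culture.
adopters = sum(vec == bot_culture for pos, vec in culture.items() if pos != bot_pos)
print(f"Agents holding the bot's culture: {adopters} / {L * L - 1}")
```

In this sketch the homophily rule does the work described in the abstract: a highly active bot quickly makes its direct contacts dissimilar from their own neighbours, which blocks further spread, whereas a weak bot nudges contacts gradually so they remain similar enough to keep influencing others.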