Cross-Modality Consistent Regression For Joint Visual-Textual Sentiment Analysis Of Social Multimedia

WSDM 2016: Ninth ACM International Conference on Web Search and Data Mining, San Francisco, California, USA, February 2016

Citations 196 | Views 93
Abstract
Sentiment analysis of online user-generated content is important for many social media analytics tasks. Researchers have largely relied on textual sentiment analysis to develop systems that predict political elections, measure economic indicators, and so on. Recently, social media users have increasingly been using images and videos, in addition to text, to express their opinions and share their experiences. Sentiment analysis of such large-scale textual and visual content can help better extract user sentiments toward events or topics. Motivated by the need to leverage large-scale social multimedia content for sentiment analysis, we propose a cross-modality consistent regression (CCR) model, which is able to utilize both state-of-the-art visual and textual sentiment analysis techniques. We first fine-tune a convolutional neural network (CNN) for image sentiment analysis and train a paragraph vector model for textual sentiment analysis. On top of them, we train our multi-modality regression model. We use sentimental queries to obtain half a million training samples from Getty Images. We have conducted extensive experiments on both machine weakly labeled and manually labeled image tweets. The results show that the proposed model can achieve better performance than the state-of-the-art textual and visual sentiment analysis algorithms alone.
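The abstract describes training modality-specific sentiment predictors (a CNN for images, a paragraph vector model for text) and then a regression model that enforces consistency across modalities. The toy sketch below illustrates one plausible reading of that idea with plain NumPy: two linear regressors, one per modality, trained jointly with a penalty on the disagreement between their predictions. The feature dimensions, the fusion by averaging, and the exact loss are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

# Toy sketch of a cross-modality consistent regression idea (assumed form):
# loss = 0.5*||p_img - y||^2 + 0.5*||p_txt - y||^2 + 0.5*lam*||p_img - p_txt||^2
# where p_img, p_txt are per-modality linear predictions of the sentiment label.

rng = np.random.default_rng(0)
n, d_img, d_txt = 200, 16, 8

X_img = rng.normal(size=(n, d_img))  # stand-in for CNN image features
X_txt = rng.normal(size=(n, d_txt))  # stand-in for paragraph-vector features
y = rng.uniform(-1.0, 1.0, size=n)   # sentiment labels in [-1, 1]

w_img = np.zeros(d_img)
w_txt = np.zeros(d_txt)
lam, lr = 0.5, 0.01                  # consistency weight, learning rate

for _ in range(500):
    p_img = X_img @ w_img
    p_txt = X_txt @ w_txt
    # gradients of the joint loss w.r.t. each modality's weights
    g_img = X_img.T @ ((p_img - y) + lam * (p_img - p_txt)) / n
    g_txt = X_txt.T @ ((p_txt - y) - lam * (p_img - p_txt)) / n
    w_img -= lr * g_img
    w_txt -= lr * g_txt

# fuse the two modality-specific predictions by simple averaging (assumption)
pred = 0.5 * (X_img @ w_img + X_txt @ w_txt)
print(pred.shape)
```

The consistency term pulls the two modality-specific predictors toward agreeing on each sample, which is the intuition the abstract conveys; the real model operates on learned CNN and paragraph-vector features rather than random stand-ins.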
Keywords
sentiment analysis,cross-modality regression,multimodality analysis