Describing Videos by Exploiting Temporal Structure

2015 IEEE International Conference on Computer Vision (ICCV)

Cited 1298 | Viewed 317
Abstract
Recent progress in using recurrent neural networks (RNNs) for image description has motivated the exploration of their application to video description. However, while images are static, working with videos requires modeling their dynamic temporal structure and properly integrating that information into a natural language description. In this context, we propose an approach that successfully takes into account both the local and global temporal structure of videos to produce descriptions. First, our approach incorporates a spatio-temporal 3-D convolutional neural network (3-D CNN) representation of short temporal dynamics. The 3-D CNN representation is trained on video action recognition tasks, so as to produce a representation that is tuned to human motion and behavior. Second, we propose a temporal attention mechanism that goes beyond local temporal modeling and learns to automatically select the most relevant temporal segments given the state of the text-generating RNN. Our approach exceeds the current state of the art on both the BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on a new, larger, and more challenging dataset of paired videos and natural language descriptions.
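To make the temporal attention mechanism concrete, below is a minimal NumPy sketch of decoder-conditioned soft attention over per-segment video features, in the additive scoring style common to such models. All names (temporal_attention, W_a, U_a, w_e) and dimensions are illustrative assumptions, not the authors' released code.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def temporal_attention(features, h_prev, W_a, U_a, w_e, b):
    """Soft temporal attention over per-segment video features.

    features: (n_segments, feat_dim) array, e.g. 3-D CNN outputs per clip
    h_prev:   (hidden_dim,) previous hidden state of the text-generating RNN
    W_a: (att_dim, hidden_dim), U_a: (att_dim, feat_dim),
    w_e: (att_dim,), b: (att_dim,) -- learned attention parameters
    (all parameter names here are hypothetical)
    """
    # Unnormalized relevance score for each temporal segment,
    # conditioned on the decoder state (additive-style scoring).
    scores = np.array([w_e @ np.tanh(W_a @ h_prev + U_a @ v + b)
                       for v in features])
    alpha = softmax(scores)    # attention weights over segments
    context = alpha @ features # weighted sum of features: (feat_dim,)
    return context, alpha

# Usage with random placeholder parameters and dimensions:
rng = np.random.default_rng(0)
n, d_f, d_h, d_a = 28, 1024, 512, 256
feats = rng.standard_normal((n, d_f))
h = rng.standard_normal(d_h)
ctx, alpha = temporal_attention(
    feats, h,
    rng.standard_normal((d_a, d_h)) * 0.01,
    rng.standard_normal((d_a, d_f)) * 0.01,
    rng.standard_normal(d_a) * 0.01,
    np.zeros(d_a))
print(ctx.shape, alpha.shape)  # (1024,) (28,)
```

The resulting context vector is what the decoder would consume at each word-generation step, so the weights alpha select which temporal segments matter for the next word.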
Keywords
recurrent neural networks,image description,video description,static images,dynamic temporal structure modeling,natural language description model,local temporal structure,global temporal structure,spatio-temporal 3D convolutional neural network,3D CNN representation,temporal dynamics,video action recognition,human motion,human behavior,temporal attention mechanism,local temporal modeling,temporal segments,text-generating RNN,BLEU metrics,METEOR metrics,Youtube2Text dataset