Automatic semantic video annotation in wide domain videos based on similarity and commonsense knowledgebases

Signal and Image Processing Applications (2009)

Cited by 17 | Views 43
Abstract
In this paper, we introduce a novel framework for automatic semantic video annotation. Because this framework detects possible events occurring in video clips, it forms the annotation base of a video search engine. To achieve this purpose, the system has to be able to operate on uncontrolled wide-domain videos, so all layers have to be based on generic features. The aim is to help bridge the "semantic gap", the difference between low-level visual features and human perception, by finding videos with similar visual events and then analyzing their free-text annotations with commonsense knowledgebases to find the best description for the new video. Experiments were performed on wide-domain video clips from the TRECVID 2005 BBC rush standard database. The results show promising integration between the two layers in finding expressive annotations for the input video, and were evaluated based on retrieval performance.
Keywords
wide domain video,video signal processing,trecvid 2005 bbc rush standard database,commonsense knowledgebases,wide domain video clips,semantic video annotation,video information retrieval,trecvid bbc rush,uncontrolled wide domain videos,video search engine,video indexing,low level visual features,content based similarity,search engines,video retrieval,automatic semantic video annotation,ontologies,information retrieval,databases,trajectory,visualization,semantic gap,semantics,feature extraction