Learning Long-form Video Prior via Generative Pre-Training
arXiv (2024)
Abstract
Concepts involved in long-form videos, such as people, objects, and their
interactions, can be viewed as following an implicit prior. They are notably
complex and continue to pose challenges for comprehensive learning. In
recent years, generative pre-training (GPT) has exhibited versatile capacities
in modeling many kinds of content, from text to visual locations. Can this
paradigm work for learning the long-form video prior? Instead of operating on
pixel space, it is efficient to employ visual locations, such as bounding boxes
and keypoints, to represent the key information in videos; these can simply be
discretized and then tokenized for consumption by GPT. Due to the scarcity of
suitable data, we create a new dataset called Storyboard20K from movies to
serve as a representative. It includes synopses, shot-by-shot keyframes, and
fine-grained annotations of film sets and characters with consistent IDs,
bounding boxes, and whole-body keypoints. In this way, long-form videos can be
represented by a set of tokens and learned via generative pre-training.
Experimental results validate that our approach has great potential for
learning the long-form video prior. Code and data will be released.
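To make the tokenization idea concrete, below is a minimal Python sketch of how continuous visual locations could be discretized into tokens for a GPT-style model. The 1000-bin quantization, the `<loc_k>` token names, and the `<char_k>` identity token are illustrative assumptions in the spirit of Pix2Seq-style location vocabularies, not the paper's exact design.

```python
# Minimal sketch: turning bounding-box coordinates into discrete tokens.
# NUM_BINS, the <loc_k> vocabulary, and the <char_k> identity token are
# assumptions for illustration, not the paper's actual vocabulary.

NUM_BINS = 1000  # assumed quantization granularity per axis


def discretize(value: float, extent: float, num_bins: int = NUM_BINS) -> int:
    """Map a continuous coordinate in [0, extent] to an integer bin index."""
    value = min(max(value, 0.0), extent)          # clamp to the frame
    return min(int(value / extent * num_bins), num_bins - 1)


def box_to_tokens(box, width: int, height: int) -> list[str]:
    """Tokenize an (x1, y1, x2, y2) box as four location tokens."""
    x1, y1, x2, y2 = box
    bins = [
        discretize(x1, width), discretize(y1, height),
        discretize(x2, width), discretize(y2, height),
    ]
    return [f"<loc_{b}>" for b in bins]


# Example: a character box in a 1920x1080 keyframe, prefixed with a
# consistent identity token so the same character persists across shots.
tokens = ["<char_3>"] + box_to_tokens((480.0, 270.0, 960.0, 810.0), 1920, 1080)
print(tokens)  # ['<char_3>', '<loc_250>', '<loc_250>', '<loc_500>', '<loc_750>']
```

The same quantize-then-name step would extend to whole-body keypoints (one token pair per joint), after which the resulting sequence can be consumed by a standard autoregressive transformer alongside text tokens.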