TraveLER: A Multi-LMM Agent Framework for Video Question-Answering
CoRR (2024)
Abstract
Recently, Large Multimodal Models (LMMs) have made significant progress in
video question-answering using a frame-wise approach by leveraging large-scale,
image-based pretraining in a zero-shot manner. While image-based methods for
videos have shown impressive performance, a current limitation is that they
often overlook how key timestamps are selected and cannot adjust when incorrect
timestamps are identified. Moreover, they are unable to extract details
relevant to the question, instead providing general descriptions of the frame.
To overcome this, we design a multi-LMM agent framework that travels along the
video, iteratively collecting relevant information from keyframes through
interactive question-asking until there is sufficient information to answer the
question. Specifically, we propose TraveLER, a model that can create a plan to
"Traverse" through the video, ask questions about individual frames to "Locate"
and store key information, and then "Evaluate" if there is enough information
to answer the question. Finally, if there is not enough information, our method
is able to "Replan" based on its collected knowledge. Through extensive
experiments, we find that the proposed TraveLER approach improves performance
on several video question-answering benchmarks, such as NExT-QA, STAR, and
Perception Test, without the need to fine-tune on specific datasets.
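The Traverse / Locate / Evaluate / Replan loop described above can be sketched as a simple control flow. This is a hypothetical illustration only: the helper functions (`plan_traversal`, `ask_frame`, `has_enough_info`, `replan`) stand in for LMM calls and are replaced here with trivial stubs so the loop is runnable; they are not the authors' actual API.

```python
def plan_traversal(question, num_frames):
    # Stub planner: start with a few evenly spaced keyframes.
    return list(range(0, num_frames, max(1, num_frames // 4)))

def ask_frame(frame, question):
    # Stub "Locate" step: an LMM would answer frame-level
    # questions here and return question-relevant details.
    return f"caption of {frame}"

def has_enough_info(memory, question):
    # Stub "Evaluate" step: an LMM would judge whether the collected
    # information suffices; here we stop after inspecting 6 frames.
    return len(memory) >= 6

def replan(memory, question, num_frames):
    # Stub "Replan" step: propose timestamps not yet visited.
    seen = {t for t, _ in memory}
    return [t for t in range(num_frames) if t not in seen][:4]

def traveler(frames, question, max_iters=3):
    """Iteratively traverse the video, collecting keyframe info."""
    memory = []
    plan = plan_traversal(question, len(frames))
    for _ in range(max_iters):
        for t in plan:                                           # Traverse
            memory.append((t, ask_frame(frames[t], question)))   # Locate
        if has_enough_info(memory, question):                    # Evaluate
            break
        plan = replan(memory, question, len(frames))             # Replan
    return memory

memory = traveler([f"frame{i}" for i in range(16)], "What happens?")
```

The key design point the abstract highlights is the feedback loop: unlike one-shot frame sampling, the Evaluate step can trigger a Replan that revisits the video with knowledge accumulated so far.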