SHE-Net: Syntax-Hierarchy-Enhanced Text-Video Retrieval
arXiv (2024)
Abstract
The user base of short video apps has experienced unprecedented growth in
recent years, resulting in a significant demand for video content analysis. In
particular, text-video retrieval, which aims to find the top-matching videos
in a vast video corpus given a text description, is an essential function whose
primary challenge is bridging the modality gap. Nevertheless, most existing
approaches treat texts merely as sequences of discrete tokens and neglect their
syntactic structure. Moreover, the abundant spatial and temporal cues in videos
are often underutilized due to the lack of interaction with text. To address
these issues, we argue that using texts as guidance to focus on relevant
temporal frames and spatial regions within videos is beneficial. In this paper,
we propose a novel Syntax-Hierarchy-Enhanced text-video retrieval method
(SHE-Net) that exploits the inherent semantic and syntax hierarchy of texts to
bridge the modality gap from two perspectives. First, to facilitate a more
fine-grained integration of visual content, we employ the text syntax
hierarchy, which reveals the grammatical structure of text descriptions, to
guide the visual representations. Second, to further enhance the multi-modal
interaction and alignment, we also utilize the syntax hierarchy to guide the
similarity calculation. We evaluate our method on four public text-video
retrieval datasets: MSR-VTT, MSVD, DiDeMo, and ActivityNet. The experimental
results and ablation studies confirm the advantages of the proposed method.
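To make the second guidance role concrete, below is a minimal sketch of how a text syntax hierarchy could steer the similarity calculation: text nodes at the sentence, phrase, and word levels are each matched against per-frame video features, and the per-level scores are fused. This is not the paper's actual formulation; the precomputed embeddings, the level weights, and the soft max-pooling temperature are all illustrative assumptions.

```python
# Illustrative sketch (not the authors' released code) of
# syntax-hierarchy-guided text-video similarity.
import torch
import torch.nn.functional as F

def hierarchy_guided_similarity(sent_emb, phrase_embs, word_embs, frame_embs,
                                level_weights=(0.5, 0.3, 0.2), tau=10.0):
    """Fuse text-video similarity over three syntax levels.

    sent_emb:    (d,)    sentence-level (root) text embedding
    phrase_embs: (P, d)  verb-/noun-phrase embeddings (intermediate nodes)
    word_embs:   (W, d)  word embeddings (leaves)
    frame_embs:  (T, d)  per-frame video embeddings
    """
    frames = F.normalize(frame_embs, dim=-1)

    def level_score(text_nodes):
        nodes = F.normalize(text_nodes, dim=-1)
        sim = nodes @ frames.T                      # (N, T) cosine similarities
        # Each text node softly attends to its best-matching frames,
        # so the text guides which temporal positions count.
        per_node = torch.logsumexp(tau * sim, dim=-1) / tau
        return per_node.mean()

    level_scores = torch.stack([
        level_score(sent_emb.unsqueeze(0)),  # coarse: whole sentence vs. video
        level_score(phrase_embs),            # mid: phrases vs. frames
        level_score(word_embs),              # fine: words vs. frames
    ])
    return (torch.tensor(level_weights) * level_scores).sum()

# Toy usage with random features (d = embedding dim, T = number of frames).
d, P, W, T = 512, 3, 8, 12
score = hierarchy_guided_similarity(
    torch.randn(d), torch.randn(P, d), torch.randn(W, d), torch.randn(T, d))
print(f"hierarchy-guided similarity: {score.item():.4f}")
```

The soft max-pooling (logsumexp) lets each text node focus on its most relevant frames, mirroring the abstract's idea of using text as guidance to attend to relevant temporal positions; the paper's region-level spatial alignment would slot in at the word level in place of whole-frame features.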