T-VSL: Text-Guided Visual Sound Source Localization in Mixtures
CVPR 2024
Abstract
Visual sound source localization poses a significant challenge in identifying
the semantic region of each sounding source within a video. Existing
self-supervised and weakly supervised source localization methods struggle to
accurately distinguish the semantic regions of each sounding object,
particularly in multi-source mixtures. These methods often rely on audio-visual
correspondence as guidance, which can lead to substantial performance drops in
complex multi-source localization scenarios. The lack of access to individual
source sounds in multi-source mixtures during training exacerbates the
difficulty of learning effective audio-visual correspondence for localization.
To address this limitation, in this paper, we propose incorporating the text
modality as an intermediate feature guide using tri-modal joint embedding
models (e.g., AudioCLIP) to disentangle the semantic audio-visual source
correspondence in multi-source mixtures. Our framework, dubbed T-VSL, begins by
predicting the class of sounding entities in mixtures. Subsequently, the
textual representation of each sounding source is employed as guidance to
disentangle fine-grained audio-visual source correspondence from multi-source
mixtures, leveraging the tri-modal AudioCLIP embedding. This approach enables
our framework to handle a flexible number of sources and exhibit promising
zero-shot transferability to unseen classes at test time. Extensive
experiments conducted on the MUSIC, VGGSound, and VGGSound-Instruments datasets
demonstrate significant performance improvements over state-of-the-art methods.
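To make the text-guided localization idea concrete, here is a minimal PyTorch sketch, not the paper's implementation: it assumes access to L2-normalized text, audio, and visual patch embeddings living in a shared tri-modal space (as a model like AudioCLIP would provide; random placeholder tensors stand in for real encoders here), and shows how the text embedding of a predicted sounding class can query visual patches to produce a localization heatmap for one source in a mixture. The averaging fusion and all tensor names are illustrative assumptions.

```python
# Sketch of text-guided source localization in a tri-modal joint
# embedding space (in the spirit of T-VSL). The encoders are stand-ins:
# `text_emb`, `audio_emb`, and `patch_emb` would come from a tri-modal
# model such as AudioCLIP; here they are random placeholders.
import torch
import torch.nn.functional as F

D, H, W = 512, 14, 14  # shared embedding dim; visual feature grid size

# Hypothetical text embedding for one predicted sounding class (e.g.,
# "violin") and the audio embedding of the full mixture, L2-normalized.
text_emb = F.normalize(torch.randn(D), dim=0)
audio_emb = F.normalize(torch.randn(D), dim=0)

# Hypothetical visual patch embeddings projected into the same space.
patch_emb = F.normalize(torch.randn(H, W, D), dim=-1)

# The class text embedding acts as a query: cosine similarity between
# each visual patch and the text embedding gives a per-class map.
text_map = torch.einsum("hwd,d->hw", patch_emb, text_emb)

# The mixture's audio embedding gives a complementary map; a simple
# average sketches how text guidance can disentangle which regions of
# the audio-visual correspondence belong to this particular class.
audio_map = torch.einsum("hwd,d->hw", patch_emb, audio_emb)
heatmap = 0.5 * (text_map + audio_map)

# Min-max normalize and upsample to image resolution for visualization.
heatmap = (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)
heatmap = F.interpolate(heatmap[None, None], size=(224, 224),
                        mode="bilinear", align_corners=False)[0, 0]
print(heatmap.shape)  # torch.Size([224, 224])
```

Repeating this query with a different class text embedding yields a separate heatmap per sounding source, which is how a text-guided approach can handle a flexible number of sources in one mixture.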