Learning to Rematch Mismatched Pairs for Robust Cross-Modal Retrieval
CVPR 2024 (2024)
Abstract
Collecting well-matched multimedia datasets is crucial for training
cross-modal retrieval models. However, in real-world scenarios, massive
multimodal data are harvested from the Internet, which inevitably contain
Partially Mismatched Pairs (PMPs). Undoubtedly, such semantically irrelevant
data will severely harm cross-modal retrieval performance. Previous efforts
tend to mitigate this problem by estimating a soft correspondence to
down-weight the contribution of PMPs. In this paper, we aim to address this
challenge from a new perspective: the potential semantic similarity among
unpaired samples makes it possible to excavate useful knowledge from mismatched
pairs. To achieve this, we propose L2RM, a general framework based on Optimal
Transport (OT) that learns to rematch mismatched pairs. In detail, L2RM aims to
generate refined alignments by seeking a minimal-cost transport plan across
different modalities. To formalize the rematching idea in OT, first, we propose
a self-supervised cost function that automatically learns an explicit
similarity-to-cost mapping. Second, we model a partial OT problem that
restricts transport among false positives to further improve the refined
alignments. Extensive experiments on three benchmarks demonstrate that our
L2RM significantly improves the robustness of existing models against PMPs.
The code is available at https://github.com/hhc1997/L2RM.
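To make the rematching idea concrete, below is a minimal sketch of OT-based rematching in PyTorch. This is an illustration under simplifying assumptions, not the paper's L2RM implementation: the learned self-supervised cost function and the partial-OT restriction are replaced here by a simple 1 - cosine-similarity cost and plain entropic Sinkhorn iterations, and the toy features and the `sinkhorn` helper are invented for the example.

```python
# Minimal sketch: rematching mismatched image-text pairs with entropic OT.
# NOTE: an illustrative stand-in, not the paper's L2RM implementation. The
# paper's learned self-supervised cost and partial-OT formulation are
# replaced by a (1 - cosine similarity) cost and standard Sinkhorn.
import torch
import torch.nn.functional as F

def sinkhorn(cost, eps=0.05, n_iters=100):
    """Entropic OT: returns a transport plan with uniform marginals."""
    n, m = cost.shape
    mu = torch.full((n,), 1.0 / n)       # uniform mass on images
    nu = torch.full((m,), 1.0 / m)       # uniform mass on texts
    K = torch.exp(-cost / eps)           # Gibbs kernel
    u = torch.ones_like(mu)
    for _ in range(n_iters):             # alternating marginal projections
        v = nu / (K.t() @ u)
        u = mu / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan T

# Toy features: 4 images and 4 texts in a shared 8-d embedding space.
torch.manual_seed(0)
img = F.normalize(torch.randn(4, 8), dim=1)
txt = F.normalize(torch.randn(4, 8), dim=1)

cost = 1.0 - img @ txt.t()               # stand-in similarity-to-cost mapping
plan = sinkhorn(cost)                    # soft refined alignments
rematched = plan.argmax(dim=1)           # hard rematch: best text per image
print(plan)
print(rematched)
```

Here the transport plan plays the role of the refined alignments: entries with high transported mass indicate image-text pairs the model should treat as matched, even if they were not paired in the noisy dataset.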