Exploring Homogeneous and Heterogeneous Consistent Label Associations for Unsupervised Visible-Infrared Person ReID
CoRR (2024)
Abstract
Unsupervised visible-infrared person re-identification (USL-VI-ReID) aims to
retrieve pedestrian images of the same identity across different modalities
without annotations. While prior works focus on establishing cross-modality
pseudo-label associations to bridge the modality gap, they neglect
instance-level homogeneous and heterogeneous consistency in the pseudo-label
space, resulting in coarse associations. In response, we introduce a
Modality-Unified Label Transfer (MULT) module that simultaneously accounts for
both homogeneous and heterogeneous fine-grained instance-level structures,
yielding high-quality cross-modality label associations. It models both
homogeneous and heterogeneous affinities, then leverages them to define an
inconsistency measure over the pseudo-labels and minimizes it, producing
pseudo-labels that remain aligned across modalities and consistent with
intra-modality structures. Additionally, we propose a straightforward
plug-and-play Online Cross-memory Label Refinement (OCLR) module that further
mitigates the impact of noisy pseudo-labels while aligning the modalities,
coupled with a Modality-Invariant Representation Learning (MIRL) framework.
Experiments demonstrate that our method outperforms existing USL-VI-ReID
methods, highlighting the superiority of MULT over other cross-modality
association methods. The code will be available.
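
The abstract describes MULT only at a high level. As a rough, hypothetical sketch of the label-transfer idea (not the authors' implementation), the snippet below propagates pseudo-labels through homogeneous (within-modality) and heterogeneous (cross-modality) affinity matrices until neighboring instances agree; the function name, the k-NN sparsification, and the propagation rule are all assumptions made for illustration.

```python
import numpy as np

def mult_label_transfer(feats_v, feats_i, labels_v, labels_i,
                        alpha=0.5, iters=10, k=20):
    """Hypothetical sketch: propagate pseudo-labels through homogeneous
    (within-modality) and heterogeneous (cross-modality) affinities so
    neighboring instances' labels agree, while staying anchored to the
    initial clustering. Not the paper's actual algorithm.

    feats_v, feats_i   : L2-normalized features, shapes (N_v, d) / (N_i, d)
    labels_v, labels_i : initial one-hot pseudo-labels, (N_v, C) / (N_i, C)
    """
    def affinity(a, b):
        # cosine similarity, kept only for the k nearest neighbors per row,
        # then row-normalized into transition probabilities
        s = np.maximum(a @ b.T, 0.0)
        kk = min(k, s.shape[1])
        thresh = np.sort(s, axis=1)[:, -kk][:, None]
        s = np.where(s >= thresh, s, 0.0)
        return s / (s.sum(axis=1, keepdims=True) + 1e-12)

    # homogeneous (V-V, I-I) and heterogeneous (V-I, I-V) affinities
    W_vv, W_ii = affinity(feats_v, feats_v), affinity(feats_i, feats_i)
    W_vi, W_iv = affinity(feats_v, feats_i), affinity(feats_i, feats_v)

    Y_v, Y_i = labels_v.astype(float), labels_i.astype(float)
    for _ in range(iters):
        # each instance blends its initial label with the labels of its
        # intra- and cross-modality neighbors; iterating drives down the
        # disagreement ("inconsistency") between labels and affinities
        Y_v = alpha * labels_v + (1 - alpha) * 0.5 * (W_vv @ Y_v + W_vi @ Y_i)
        Y_i = alpha * labels_i + (1 - alpha) * 0.5 * (W_ii @ Y_i + W_iv @ Y_v)
    return Y_v, Y_i
```

The fixed point of this update balances the initial cluster assignments against neighborhood agreement within and across modalities, which is the flavor of consistency the abstract describes; the paper's actual MULT formulation may differ substantially.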