Cross-Modality Paired-Images Generation for RGB-Infrared Person Re-Identification

Keywords: Modality (human–computer interaction), RGB color model, Identification
DOI: 10.48550/arxiv.2002.04114
Publication Date: 2020-01-01
ABSTRACT
RGB-Infrared (IR) person re-identification is very challenging due to the large cross-modality variations between RGB and IR images. The key solution is to learn aligned features that bridge the two modalities. However, due to the lack of correspondence labels between every pair of RGB and IR images, most methods try to alleviate the variations with set-level alignment by reducing the distance between the entire RGB and IR sets. This set-level alignment may lead to misalignment of some instances, which limits the performance of RGB-IR Re-ID. Different from existing methods, in this paper we propose to generate cross-modality paired-images and perform both global set-level and fine-grained instance-level alignments. Our proposed method enjoys several merits. First, our method can perform set-level alignment by disentangling modality-specific and modality-invariant features. Compared with conventional methods, ours can explicitly remove the modality-specific features, so the modality variation can be better reduced. Second, given unpaired-images of a person, our method can generate cross-modality paired images with exchanged features. With them, we can perform instance-level alignment by directly minimizing the distances of every pair of images. Extensive experimental results on two standard benchmarks demonstrate that the proposed model performs favourably against state-of-the-art methods. Especially, on the SYSU-MM01 dataset, our model can achieve a gain of 9.2% and 7.7% in terms of Rank-1 and mAP. Code is available at https://github.com/wangguanan/JSIA-ReID.
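The two ideas in the abstract, disentangling features and swapping the modality-specific part to form cross-modality pairs that can then be aligned per instance, can be illustrated with a toy sketch. This is not the paper's implementation; the half-and-half feature split, the function names, and the plain-list "features" are all illustrative assumptions.

```python
# Toy sketch of the abstract's two alignment ideas. Features are plain
# Python lists; in the actual model they would be encoder outputs.

def disentangle(feat):
    """Split a feature vector into (modality-invariant, modality-specific)
    halves. Assumption: the two codes are just the two halves of the list."""
    mid = len(feat) // 2
    return feat[:mid], feat[mid:]

def exchange_specific(rgb_feat, ir_feat):
    """Build cross-modality 'paired' codes for one person by swapping the
    modality-specific parts between an unpaired RGB/IR feature pair."""
    rgb_inv, rgb_spec = disentangle(rgb_feat)
    ir_inv, ir_spec = disentangle(ir_feat)
    fake_ir_code = rgb_inv + ir_spec    # RGB identity content, IR style
    fake_rgb_code = ir_inv + rgb_spec   # IR identity content, RGB style
    return fake_rgb_code, fake_ir_code

def instance_alignment_loss(feat_a, feat_b):
    """Instance-level alignment as a mean absolute distance between the
    features of one generated cross-modality pair."""
    return sum(abs(a - b) for a, b in zip(feat_a, feat_b)) / len(feat_a)

if __name__ == "__main__":
    rgb = [1.0, 2.0, 0.5, 0.5]   # invariant half, then specific half
    ir = [1.1, 1.9, -0.3, 0.7]
    fake_rgb, fake_ir = exchange_specific(rgb, ir)
    # Align the invariant codes of the exchanged pair.
    loss = instance_alignment_loss(disentangle(fake_rgb)[0],
                                   disentangle(fake_ir)[0])
    print(fake_rgb, fake_ir, loss)
```

The point of the swap is that each generated code keeps one image's identity content while taking the other modality's style, so every real image gets a same-identity cross-modality counterpart whose distance can be minimized directly.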