Feature‐Level Compensation and Alignment for Visible‐Infrared Person Re‐Identification

DOI: 10.1049/cvi2.70005 Publication Date: 2025-03-12T05:57:16Z
ABSTRACT
Visible-infrared person re-identification (VI-ReID) aims to match pedestrian images captured by non-overlapping visible and infrared cameras. Most existing compensation-based methods try to generate images of the missing modality from the other one. However, the generated images are often of insufficient quality owing to the severe discrepancy between the two modalities. Moreover, it is generally assumed that person images are roughly aligned during the extraction of part-based local features. This does not always hold, particularly when images are cropped by inaccurate pedestrian detectors. To alleviate these problems, the authors propose a novel feature-level compensation and alignment network (FCA-Net) for VI-ReID, which compensates for the missing modality information at the channel level and aligns part-based local features. Specifically, the visible and infrared features from the low-level subnetworks are first processed by a channel feature compensation (CFC) module, which forces the network to learn consistent distribution patterns of channel features and thereby narrows the cross-modality discrepancy. To address spatial misalignment, a pairwise relation module (PRM) is introduced to incorporate human structural information into part-based local features, which significantly enhances their discriminative power. In addition, a cross-modality part alignment loss (CPAL) is designed on the basis of a dynamic part matching algorithm, which promotes more accurate local matching. Extensive experiments on three standard VI-ReID datasets validate the effectiveness of the proposed method, and the results show that state-of-the-art performance is achieved.
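The abstract does not reproduce the CPAL formulation, but the "dynamic part matching" idea it mentions can be illustrated with a shortest-path dynamic program over part-to-part distances, in the spirit of aligned local matching in ReID: horizontal part stripes from the visible and infrared images are matched in top-to-bottom order, and the accumulated distance along the cheapest monotone matching path serves as the alignment cost. The function names and the NumPy implementation below are assumptions for illustration, not the authors' code.

```python
import numpy as np

def part_distance_matrix(vis_parts, ir_parts):
    """Pairwise Euclidean distances between part features.

    vis_parts, ir_parts: (p, d) arrays, one d-dim descriptor per
    horizontal body-part stripe.
    """
    diff = vis_parts[:, None, :] - ir_parts[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def dynamic_part_matching_loss(vis_parts, ir_parts):
    """Alignment cost via a shortest path over the distance matrix.

    Parts are matched in top-to-bottom order; moving down or right
    lets one stripe align with several stripes on the other side,
    which absorbs vertical misalignment from loose detector crops.
    """
    d = part_distance_matrix(vis_parts, ir_parts)
    p, q = d.shape
    cost = np.full((p, q), np.inf)
    cost[0, 0] = d[0, 0]
    for i in range(p):
        for j in range(q):
            if i == 0 and j == 0:
                continue
            best_prev = min(
                cost[i - 1, j] if i > 0 else np.inf,
                cost[i, j - 1] if j > 0 else np.inf,
            )
            cost[i, j] = d[i, j] + best_prev
    # Normalise by the path length so the loss is comparable
    # across different numbers of parts.
    return cost[-1, -1] / (p + q - 1)
```

Used as a loss term, this cost is smaller when the two sets of part features line up body-part by body-part than when they are vertically shifted, so minimising it encourages cross-modality part alignment.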