Synthesizing Efficient Data with Diffusion Models for Person Re-Identification Pre-Training

DOI: 10.48550/arxiv.2406.06045 Publication Date: 2024-06-10
ABSTRACT
Existing person re-identification (Re-ID) methods principally deploy the ImageNet-1K dataset for model initialization, which inevitably leads to sub-optimal results due to the large domain gap. One of the key challenges is that building large-scale person Re-ID datasets is time-consuming. Some previous efforts address this problem by collecting person images from the internet, e.g., LUPerson, but such methods struggle to learn from unlabeled, uncontrollable, and noisy data. In this paper, we present a novel paradigm, Diffusion-ReID, to efficiently augment and generate diverse images based on known identities without requiring any cost of data collection and annotation. Technically, this paradigm unfolds in two stages: generation and filtering. During the generation stage, we propose Language Prompts Enhancement (LPE) to ensure ID consistency between the input image sequence and the generated images. In the diffusion process, we propose a Diversity Injection (DI) module to increase attribute diversity. To make the generated data of higher quality, we apply a Re-ID confidence threshold filter to further remove low-quality images. Benefiting from our proposed paradigm, we first create a new large-scale person Re-ID dataset, Diff-Person, which consists of over 777K images from 5,183 identities. Next, we build a stronger person Re-ID backbone pre-trained on Diff-Person. Extensive experiments are conducted on four person Re-ID benchmarks in six widely used settings. Compared with other pre-training and self-supervised competitors, our approach shows significant superiority.
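To illustrate the filtering stage described above, the following is a minimal sketch of a confidence-threshold filter: generated images are scored by a Re-ID classifier over the known identities, and only samples whose confidence on their source identity exceeds a threshold are kept. The function and parameter names (`filter_generated_images`, `reid_model`, `threshold`) are hypothetical stand-ins, and the exact filtering criterion used in the paper may differ.

```python
import torch

def filter_generated_images(generated_images, identity_labels, reid_model, threshold=0.8):
    """Keep only generated images whose predicted-identity confidence exceeds a threshold.

    generated_images: tensor of shape (N, C, H, W), images produced in the generation stage
    identity_labels:  tensor of shape (N,), the known identity each image was generated from
    reid_model:       a classifier over the known identities (hypothetical stand-in)
    threshold:        minimum softmax confidence on the source identity required to keep an image
    """
    reid_model.eval()
    with torch.no_grad():
        logits = reid_model(generated_images)                  # (N, num_identities)
        probs = torch.softmax(logits, dim=1)                   # per-identity confidence scores
        conf = probs.gather(1, identity_labels.unsqueeze(1)).squeeze(1)
    keep = conf >= threshold                                   # boolean mask of high-quality samples
    return generated_images[keep], identity_labels[keep]
```

In practice, such a filter would be applied after the generation stage (LPE plus DI) so that only identity-consistent, high-confidence samples enter the Diff-Person pre-training set.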