24 June 2020 Deep feature learning with attributes for cross-modality person re-identification

Cross-modality person re-identification (Re-ID) between the RGB and infrared domains is an active and challenging problem, which aims to retrieve pedestrian images across modalities and camera views. Because of the huge gap between the two modalities, the core difficulty is bridging the cross-modality gap at the image level. However, most existing approaches address this issue mainly by increasing the interclass discrepancy between features, and few studies focus on decreasing the intraclass cross-modality discrepancy, which is crucial for cross-modality Re-ID. Moreover, we observe that, despite the huge modality gap, the attribute representations of a pedestrian are generally unchanged. We provide a different view of the cross-modality person Re-ID problem, using additional attribute labels as auxiliary information to increase intraclass cross-modality similarity. First, we manually annotate attribute labels for a large-scale cross-modality Re-ID dataset. Second, we propose an end-to-end network that learns modality-invariant and identity-specific local features under the joint supervision of an attribute classification loss and an identity classification loss. Experimental results on a large-scale cross-modality Re-ID benchmark show that our model achieves competitive Re-ID performance compared with state-of-the-art methods. To demonstrate the versatility of the model, we also report results on the Market-1501 dataset.
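The joint supervision described in the abstract can be sketched as an identity cross-entropy term plus an attribute classification term. The following is a minimal NumPy illustration, not the authors' implementation: the sigmoid-per-attribute formulation, the balancing weight `lam`, and all function names are assumptions, since the abstract does not specify these details.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over identity classes
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def identity_loss(id_logits, id_labels):
    # standard cross-entropy over identity classes
    p = softmax(id_logits)
    return -np.mean(np.log(p[np.arange(len(id_labels)), id_labels] + 1e-12))

def attribute_loss(attr_logits, attr_targets):
    # binary cross-entropy, one sigmoid output per attribute
    # (assumed formulation; the paper's exact attribute loss may differ)
    p = 1.0 / (1.0 + np.exp(-attr_logits))
    return -np.mean(attr_targets * np.log(p + 1e-12)
                    + (1 - attr_targets) * np.log(1 - p + 1e-12))

def joint_loss(id_logits, id_labels, attr_logits, attr_targets, lam=1.0):
    # joint supervision: identity term + weighted attribute term
    return identity_loss(id_logits, id_labels) + lam * attribute_loss(attr_logits, attr_targets)
```

In this view, the identity term enlarges interclass discrepancy, while the attribute term pulls together RGB and infrared samples of the same person, since their attribute labels are modality-invariant.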

© 2020 SPIE and IS&T. 1017-9909/2020/$28.00
Shikun Zhang, Changhong Chen, Wanru Song, and Zongliang Gan "Deep feature learning with attributes for cross-modality person re-identification," Journal of Electronic Imaging 29(3), 033017 (24 June 2020).
Received: 29 December 2019; Accepted: 4 June 2020; Published: 24 June 2020
