8 March 2018 Learning deep features with adaptive triplet loss for person reidentification
Proceedings Volume 10609, MIPPR 2017: Pattern Recognition and Computer Vision; 106090G (2018) https://doi.org/10.1117/12.2283478
Event: Tenth International Symposium on Multispectral Image Processing and Pattern Recognition (MIPPR2017), 2017, Xiangyang, China
Person reidentification (re-id) aims to match a specified person across non-overlapping cameras and remains a very challenging problem. While previous methods mostly focus on either feature extraction or metric learning, this paper jointly learns both the global full-body and local body-part features of input persons with a multichannel convolutional neural network (CNN) model. The model is trained with an adaptive triplet loss function that minimizes the distance between images of the same person while maximizing the distance between images of different persons. Experimental results show that our approach achieves very promising performance on the large-scale Market-1501 and DukeMTMC-reID datasets.
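To make the training objective in the abstract concrete, below is a minimal sketch of the conventional triplet loss that such a model is trained with. The paper's adaptive variant is not specified in the abstract, so the fixed margin value here is purely illustrative, and the function names are our own:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Conventional triplet loss on embedding vectors.

    anchor/positive are embeddings of the same person; negative is a
    different person. The loss pulls the positive pair together and
    pushes the negative pair apart until they are separated by at
    least `margin` (value chosen for illustration only).
    """
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance, same identity
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance, different identity
    return max(d_pos - d_neg + margin, 0.0)
```

For example, when the anchor already sits far closer to the positive than to the negative, the hinge clamps the loss to zero and that triplet stops contributing gradients; an adaptive scheme like the paper's would adjust the margin rather than keep it fixed.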
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Zhiqiang Li, Nong Sang, Kezhou Chen, Changxin Gao, Ruolin Wang, "Learning deep features with adaptive triplet loss for person reidentification", Proc. SPIE 10609, MIPPR 2017: Pattern Recognition and Computer Vision, 106090G (8 March 2018); https://doi.org/10.1117/12.2283478