Paper
8 March 2018 Learning deep features with adaptive triplet loss for person reidentification
Zhiqiang Li, Nong Sang, Kezhou Chen, Changxin Gao, Ruolin Wang
Author Affiliations +
Proceedings Volume 10609, MIPPR 2017: Pattern Recognition and Computer Vision; 106090G (2018) https://doi.org/10.1117/12.2283478
Event: Tenth International Symposium on Multispectral Image Processing and Pattern Recognition (MIPPR2017), 2017, Xiangyang, China
Abstract
Person reidentification (re-id) aims to match a specified person across non-overlapping cameras and remains a very challenging problem. While previous methods mostly focus on feature extraction or metric learning, this paper jointly learns both the global full-body and local body-part features of the input persons with a multichannel convolutional neural network (CNN) model. The model is trained with an adaptive triplet loss function that minimizes the distance between images of the same person while maximizing the distance between images of different persons. Experimental results show that our approach achieves very promising results on the large-scale Market-1501 and DukeMTMC-reID datasets.
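The abstract does not specify how the "adaptive" variant differs from the conventional triplet loss, so the following is a minimal sketch of the standard formulation it builds on: the loss pulls an anchor embedding toward a positive (same identity) and pushes it away from a negative (different identity) by at least a margin. The function name and margin value are illustrative, not taken from the paper.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Standard triplet loss on embedding vectors.

    Penalizes the model unless the anchor is closer to the positive
    (same person) than to the negative (different person) by at
    least `margin` in Euclidean distance.
    """
    d_pos = np.linalg.norm(anchor - positive)  # distance to same identity
    d_neg = np.linalg.norm(anchor - negative)  # distance to different identity
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings: anchor lies near the positive, far from the negative.
a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])
n = np.array([-1.0, 0.0])
print(triplet_loss(a, p, n))  # 0.0 — the margin constraint is already met
```

In practice such a loss is applied to mini-batches of learned CNN embeddings; the paper's adaptive version presumably adjusts this objective during training, but the abstract gives no further detail.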
© (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Zhiqiang Li, Nong Sang, Kezhou Chen, Changxin Gao, and Ruolin Wang "Learning deep features with adaptive triplet loss for person reidentification", Proc. SPIE 10609, MIPPR 2017: Pattern Recognition and Computer Vision, 106090G (8 March 2018); https://doi.org/10.1117/12.2283478
KEYWORDS: Cameras, Convolution, Lithium, Convolutional neural networks, Databases, Image compression, Feature extraction