Person re-identification (Re-ID) is an object recognition task based on visual appearance information. Its performance is mainly limited by changes in person posture, shooting angle, and viewpoint (front versus back), by illumination variation, and by noise caused by camera shake or motion blur. Currently, single-frame person Re-ID remains the mainstream line of research. Given the limited information in a single frame, this paper adopts temporal attention sequence modeling to study person Re-ID on video sequences, considering not only the content of individual images but also the motion information between frames.
In this paper, a temporal attention quality aware network (TA-QAN) is proposed. By extracting temporal information between frames, the complementary information across all frames in a sequence is effectively aggregated, and the influence of low-quality image regions is significantly reduced. TA-QAN extracts the temporal information between frames through temporal convolution. Comparison experiments with other feature extraction methods show that the proposed method achieves the best performance on the PRID 2011 and iLIDS-VID data sets.
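The core aggregation idea described above, combining per-frame features into one sequence-level descriptor while down-weighting low-quality frames, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function name, the toy feature vectors, and the fact that the quality scores are supplied directly (rather than produced by a learned scoring branch and temporal convolution, as in TA-QAN) are all assumptions made for clarity.

```python
import numpy as np

def temporal_attention_aggregate(frame_features, quality_logits):
    """Aggregate per-frame features into one sequence-level feature.

    frame_features: (T, D) array, one feature vector per frame.
    quality_logits: (T,) array, unnormalized per-frame quality scores.
        In TA-QAN these would come from a learned quality/attention branch;
        here they are given directly for illustration.
    """
    # Softmax over the temporal axis turns quality scores into
    # attention weights that sum to 1.
    w = np.exp(quality_logits - quality_logits.max())
    w /= w.sum()
    # The weighted sum down-weights low-quality frames, so blurred or
    # occluded frames contribute little to the final descriptor.
    return w @ frame_features

# Toy example: 4 frames with 3-dim features; frame 2 is low quality.
feats = np.array([[1., 0., 0.],
                  [0., 1., 0.],
                  [5., 5., 5.],   # e.g. a blurred, noisy frame
                  [0., 0., 1.]])
logits = np.array([2.0, 2.0, -3.0, 2.0])
agg = temporal_attention_aggregate(feats, logits)
```

Because the noisy frame receives a much lower attention weight, its outlier feature values barely perturb the aggregated descriptor, which is the intended effect of quality-aware temporal attention.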