This paper proposes a temporal attention quality-aware network (TA-QAN). The network extracts temporal information between frames through temporal convolution, which allows it to effectively aggregate the complementary information across the frame sequence and significantly reduce the influence of low-quality image regions. Comparison experiments with other feature extraction methods show that the proposed method achieves the best performance on the PRID 2011 and iLIDS-VID datasets.
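The aggregation idea described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' implementation): a 1-D temporal convolution over the frame axis produces one attention score per frame, and the softmax-normalized scores weight the per-frame features so that low-quality frames contribute less to the clip-level representation. The function names, kernel shape, and feature dimensions are all illustrative assumptions.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def temporal_attention_aggregate(frame_feats, conv_w, conv_b=0.0):
    """Sketch of temporal-attention aggregation (illustrative only).

    frame_feats: (T, D) array of per-frame feature vectors
    conv_w:      (K, D) temporal-convolution kernel, K odd
    conv_b:      scalar bias for the convolution
    Returns a (D,) clip-level feature: a convex combination of the
    frame features, weighted by convolution-derived attention scores.
    """
    T, D = frame_feats.shape
    K = conv_w.shape[0]
    pad = K // 2
    # Zero-pad along the temporal axis so each frame gets a score.
    padded = np.pad(frame_feats, ((pad, pad), (0, 0)))
    # Temporal convolution: one attention score per frame.
    scores = np.array([np.sum(padded[t:t + K] * conv_w) + conv_b
                       for t in range(T)])
    weights = softmax(scores)          # (T,) attention over frames
    return weights @ frame_feats       # (D,) aggregated clip feature

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))      # 8 frames, 16-dim features
w = rng.normal(size=(3, 16)) * 0.1    # temporal kernel of size 3
clip_feat = temporal_attention_aggregate(feats, w)
print(clip_feat.shape)                # (16,)
```

Because the attention weights sum to one, the clip feature stays on the same scale as the individual frame features; frames whose temporal context yields low scores are smoothly suppressed rather than hard-dropped.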