19 June 2017 Part-based deep representation for product tagging and search
Proceedings Volume 10443, Second International Workshop on Pattern Recognition; 104431D (2017) https://doi.org/10.1117/12.2280300
Event: Second International Workshop on Pattern Recognition, 2017, Singapore, Singapore
Abstract
Despite previous studies, tagging and indexing product images remains challenging due to the large intra-class variation of the products. Traditional methods extract quantized hand-crafted features such as SIFT as the representation of product images, but these features are not discriminative enough to handle the intra-class variation. For a more discriminative image representation, this paper first presents a novel deep convolutional neural network (DCNN) architecture pre-trained on a large-scale general image dataset. Compared to traditional features, our DCNN representation has more discriminative power with fewer dimensions. Moreover, we incorporate a part-based model into the framework to overcome the negative effects of poor alignment and cluttered backgrounds, further enhancing the descriptive ability of the deep representation. Finally, we collect and contribute a well-labeled shoe image database, TBShoes, on which we apply the part-based deep representation to product image tagging and search, respectively. The experimental results highlight the advantages of the proposed part-based deep representation.
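The abstract does not specify the DCNN architecture or the part detector, but the general pipeline it describes — pool convolutional features over detected part regions, concatenate them into one descriptor, and retrieve by similarity — can be sketched as follows. This is a minimal illustration with random arrays standing in for real convolutional activations; the part boxes, dimensions, and function names are assumptions, not the paper's actual method.

```python
# Hypothetical sketch of a part-based deep representation for product
# search. Random feature maps stand in for DCNN activations; the part
# boxes (e.g. toe / heel / whole shoe) are illustrative assumptions.
import numpy as np

def part_descriptor(feature_map, parts):
    """Average-pool the feature map over each part box, concatenate,
    and L2-normalize.

    feature_map: (C, H, W) array of convolutional activations.
    parts: list of (y0, y1, x0, x1) boxes from some part-based model.
    """
    pooled = [feature_map[:, y0:y1, x0:x1].mean(axis=(1, 2))
              for (y0, y1, x0, x1) in parts]
    desc = np.concatenate(pooled)
    return desc / (np.linalg.norm(desc) + 1e-12)

def search(query_desc, gallery_descs, k=3):
    """Return indices of the k most similar gallery images (cosine
    similarity; descriptors are unit-norm, so a dot product suffices)."""
    sims = gallery_descs @ query_desc
    return np.argsort(-sims)[:k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    C, H, W = 256, 14, 14
    # Three illustrative parts: top half, bottom half, full map.
    parts = [(0, 7, 0, 14), (7, 14, 0, 14), (0, 14, 0, 14)]
    gallery = np.stack([part_descriptor(rng.standard_normal((C, H, W)), parts)
                        for _ in range(10)])
    # A query close to gallery item 4 should rank it first.
    query = gallery[4] + 0.01 * rng.standard_normal(gallery.shape[1])
    query /= np.linalg.norm(query)
    print(search(query, gallery))
```

Tagging would reuse the same descriptor as input to a classifier; search is the nearest-neighbor ranking shown above.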
© (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Keqing Chen, "Part-based deep representation for product tagging and search", Proc. SPIE 10443, Second International Workshop on Pattern Recognition, 104431D (19 June 2017); https://doi.org/10.1117/12.2280300
Proceedings, 8 pages