Publisher’s Note: This paper, originally published on 5/12/2016, was replaced with a corrected/revised version on 5/18/2016. If you downloaded the original PDF but are unable to access the revision, please contact SPIE Digital Library Customer Service for assistance.
When labeled data in the target domain are sparse or absent, transfer learning shows appealing performance by borrowing supervised knowledge from external domains. Recently, deep structure learning has been exploited in transfer learning because of its power to extract effective knowledge through a multilayer strategy, making deep transfer learning a promising way to address cross-domain mismatch. In general, cross-domain disparity can result from differences between the source and target distributions or from different modalities, e.g., Midwave IR (MWIR) and Longwave IR (LWIR). In this paper, we propose a Weighted Deep Transfer Learning framework for automatic target classification in a task-driven fashion. Specifically, deep features and classifier parameters are obtained simultaneously for optimal classification performance. In this way, the proposed deep structures can extract more effective features under the guidance of classifier performance; in turn, classifier performance is further improved because it is optimized on more discriminative features. Furthermore, we build a weighted scheme that couples source and target outputs by assigning pseudo labels to target data, so that knowledge can be transferred from the source (i.e., MWIR) to the target (i.e., LWIR). Experimental results on real databases demonstrate the superiority of the proposed algorithm in comparison with alternative methods.
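The weighted pseudo-label idea can be sketched in a minimal form: train on the labeled source domain, assign pseudo labels to target samples, and weight each target sample by the classifier's confidence when retraining jointly. This is an illustrative toy (a linear classifier on random stand-in features), not the paper's deep task-driven model; all names and data here are hypothetical.

```python
# Hypothetical sketch of a confidence-weighted pseudo-label transfer
# scheme; the paper's actual framework learns deep features jointly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-ins: labeled source data and unlabeled, shifted target data.
Xs = rng.normal(0.0, 1.0, (200, 8))
ys = (Xs[:, 0] > 0).astype(int)
Xt = rng.normal(0.3, 1.0, (100, 8))        # distribution-shifted target

# 1) Train a classifier on the labeled source domain only.
clf = LogisticRegression().fit(Xs, ys)

# 2) Assign pseudo labels to target samples; use prediction confidence
#    as per-sample weights, so uncertain target samples count less.
probs = clf.predict_proba(Xt)
pseudo = probs.argmax(axis=1)
weights_t = probs.max(axis=1)              # in [0.5, 1] for binary

# 3) Retrain on source plus weighted pseudo-labeled target samples.
X = np.vstack([Xs, Xt])
y = np.concatenate([ys, pseudo])
w = np.concatenate([np.ones(len(ys)), weights_t])
clf2 = LogisticRegression().fit(X, y, sample_weight=w)
```

Weighting by confidence is what keeps noisy pseudo labels from dominating the coupled source/target objective.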
For classifying images with varied appearance, graph-embedding-based subspace learning has difficulty jointly accounting for local geometrical structure and between-class discriminative information. In addition, when sufficient training samples are unavailable, a simple weight graph built only from labeled samples may not model the embedding subspace accurately. We present a semisupervised graph embedding algorithm that combines graph embedding with sparse representation. The algorithm learns a compact and semantic subspace by using a locally connected graph, which models the geometrical structure and essential correlation of subclusters within a class and fully exploits both labeled and unlabeled samples. Moreover, by using the L2,1-norm, the proposed algorithm preserves the sparse representation property of images from the original space in the lower-dimensional projected space. Our experiments demonstrate that the proposed algorithm outperforms alternatives reported in the recent literature.
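The locally connected graph underlying such embeddings can be sketched in Laplacian-eigenmap style: build a k-nearest-neighbor affinity over all samples (labeled and unlabeled), form the graph Laplacian, and take its smallest nontrivial eigenvectors as the embedding. This is a minimal illustration of the graph-embedding component only; the paper's coupling with sparse representation and the L2,1-norm is omitted, and all data here are toy stand-ins.

```python
# Hypothetical sketch of a locally connected graph embedding
# (Laplacian-eigenmap style) over labeled + unlabeled samples.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5))               # toy samples

# Build a k-nearest-neighbor affinity graph over ALL samples,
# so unlabeled data also shape the local geometry.
k = 5
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.zeros_like(d2)
for i in range(len(X)):
    nn = np.argsort(d2[i])[1:k + 1]        # skip self-distance
    W[i, nn] = np.exp(-d2[i, nn])
W = np.maximum(W, W.T)                     # symmetrize the graph

# Graph Laplacian L = D - W; its smallest nontrivial eigenvectors
# give a low-dimensional embedding that preserves local structure.
D = np.diag(W.sum(axis=1))
L = D - W
evals, evecs = np.linalg.eigh(L)
embedding = evecs[:, 1:3]                  # 2-D embedding, drop trivial vector
```

Using the full sample set in the graph is what lets the unlabeled data constrain the subspace when labeled samples alone are too few.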