Deep convolutional neural networks (DCNNs) are widely used in image classification, but applying them directly to synthetic aperture radar (SAR) image target recognition rarely yields ideal results: during data acquisition it is difficult to obtain a large number of labeled training samples or to guarantee that targets are centered in the image. To extract the width and height features of SAR images while reducing computational complexity, we construct an asymmetric parallel convolution module. The module mitigates severe over-fitting caused by limited training samples and copes effectively with displacement changes in the test samples. In addition, residual learning is used in the algorithm to avoid deep-network degradation and to improve recognition accuracy (RA). Experimental results show that the RA of the proposed residual-learning-based network (APCRLNet) reaches 99.75% under standard operating conditions, outperforming existing recognition methods. The algorithm also performs well with incomplete training samples and with test samples exhibiting displacement changes.
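The abstract does not give the module's exact layer configuration, but the general idea of an asymmetric parallel convolution with a residual shortcut can be sketched as follows. This is a minimal single-channel NumPy illustration, not the paper's implementation: it assumes k=3 kernels, with a 1×k branch extracting width features and a k×1 branch extracting height features, their outputs summed and added to the identity shortcut.

```python
import numpy as np

def conv1xk(x, w):
    # 'same'-padded 1D convolution of each row of x with kernel w
    k = len(w)
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    out = np.zeros_like(x, dtype=float)
    for i in range(k):
        out += w[i] * xp[:, i:i + x.shape[1]]
    return out

def apc_residual_block(x, w_h, w_v):
    # Asymmetric parallel convolution with residual learning (sketch):
    # a 1xk branch captures width features, a kx1 branch captures
    # height features, and the identity shortcut carries x through.
    horiz = conv1xk(x, w_h)        # 1xk branch (along width)
    vert = conv1xk(x.T, w_v).T     # kx1 branch (along height)
    return x + horiz + vert        # residual sum
```

Splitting a k×k convolution into parallel 1×k and k×1 branches reduces the parameter count per branch from k² to k, which is one way a module of this shape can lower computational cost and limit over-fitting on small SAR training sets.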