Aircraft are valuable military equipment and transportation assets, so using target detection technology to detect ground aircraft in optical remote sensing images has significant research and application value. Although progress has been made in related research, fast and effective ground aircraft detection remains challenging because of the complex backgrounds of remote sensing images, large scale variation, small imaging size, and so on. Targeting multi-frame imaging scenarios such as embedded detection and tracking systems, this thesis proposes an aircraft target detection scheme based on hierarchical screening, which improves detection speed and reduces false alarms. First, based on an analysis of background characteristics, a candidate region extraction method based on gray variance is adopted, accelerated with integral images and shared computation. Then, Haar-like features are extracted from the candidate regions and classified by a cascade AdaBoost classifier. Next, a union-find algorithm is used to merge redundant detections and evaluate their confidence. Finally, inter-frame correlation information is used to remove false alarms. Experimental verification demonstrates the effectiveness of the algorithm.
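The gray-variance screening step above can be sketched with two integral images (of pixel values and of squared values), which make each window's variance an O(1) lookup. This is a minimal illustration only; the window size and variance threshold are assumptions, not the thesis's actual parameters.

```python
def integral_image(img):
    """Summed-area table: I[y][x] = sum of img[0..y-1][0..x-1]."""
    h, w = len(img), len(img[0])
    I = [[0.0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0.0
        for x in range(w):
            row_sum += img[y][x]
            I[y + 1][x + 1] = I[y][x + 1] + row_sum
    return I

def window_sum(I, x, y, size):
    """Sum over the size x size window with top-left corner (x, y)."""
    return (I[y + size][x + size] - I[y][x + size]
            - I[y + size][x] + I[y][x])

def candidate_windows(img, size, var_threshold):
    """Keep windows whose gray variance exceeds the threshold.

    Using integral images of values and squared values,
    var = E[v^2] - E[v]^2 per window in constant time.
    """
    sq = [[v * v for v in row] for row in img]
    I, I2 = integral_image(img), integral_image(sq)
    n = float(size * size)
    out = []
    for y in range(len(img) - size + 1):
        for x in range(len(img[0]) - size + 1):
            mean = window_sum(I, x, y, size) / n
            var = window_sum(I2, x, y, size) / n - mean * mean
            if var > var_threshold:
                out.append((x, y))
    return out
```

Low-variance (flat background) windows are discarded before any feature extraction, which is where the speedup over exhaustive Haar-feature scanning comes from.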
Deep convolutional neural networks are increasingly deployed on parallel embedded platforms such as mobile GPUs, AMD APUs, and FPGAs, and many new models, such as MobileNet, have been designed specifically for embedded platforms. To balance accuracy, speed, and resource requirements while achieving cross-platform portability, we developed a deep learning software framework that generates OpenCL code which takes full advantage of the available parallel resources and improves the parallel efficiency of the generated kernels. A further advantage is that the framework optimizes and fuses the network and compiles it offline, making the whole application more efficient. MobileNet replaces standard convolution with depthwise separable convolution. Experiments with MobileNet show that the OpenCL code generation framework significantly improves efficiency.
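The cost advantage of MobileNet's depthwise separable convolution can be made concrete by counting multiplications. The sketch below uses the standard MobileNet symbols (kernel size Dk, input channels M, output channels N, feature map size Df); the function names are illustrative.

```python
def standard_conv_mults(dk, m, n, df):
    """Multiplications for a Dk x Dk standard convolution:
    Dk * Dk * M * N * Df * Df."""
    return dk * dk * m * n * df * df

def separable_conv_mults(dk, m, n, df):
    """Depthwise (Dk * Dk * M * Df * Df) plus
    pointwise 1x1 (M * N * Df * Df) multiplications."""
    return dk * dk * m * df * df + m * n * df * df
```

The ratio of the two counts is 1/N + 1/Dk², so for a typical 3x3 convolution with N = 64 output channels, the separable form needs roughly an eighth to a ninth of the multiplications.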
As an excellent method for extracting distinctive invariant features from images, SIFT (scale-invariant feature transform) can effectively resist affine transformations such as translation and rotation, and in theory is also robust to illumination changes [1]. In practice, however, the performance of SIFT is degraded by the contrast reduction that illumination changes cause. In this paper, the performance of SIFT under different contrast levels is systematically analyzed and evaluated, a reasonable explanation is given for the change in SIFT performance under different illumination conditions, and a fast SIFT matching method based on contrast compression is proposed.
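A contrast reduction of the kind studied here can be modeled as a linear compression of the gray levels, and a linear stretch can restore the full dynamic range before SIFT matching. The following is a minimal sketch of that idea on plain gray-level arrays; the compression factor and offset are illustrative assumptions, not the paper's evaluation settings.

```python
def compress_contrast(img, factor, offset=0.0):
    """Simulate reduced contrast: v' = factor * v + offset,
    clipped to the [0, 255] gray range."""
    return [[min(255.0, max(0.0, factor * v + offset)) for v in row]
            for row in img]

def stretch_contrast(img):
    """Linearly stretch gray levels back to the full [0, 255] range."""
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    if hi == lo:  # flat image: nothing to stretch
        return [[0.0 for _ in row] for row in img]
    scale = 255.0 / (hi - lo)
    return [[(v - lo) * scale for v in row] for row in img]
```

When the compression is linear and no clipping occurs, the stretch exactly inverts it, which is why gradient-based descriptors such as SIFT can recover much of their matching performance after this normalization.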