Images captured in underwater environments often suffer from color distortion and blurring owing to light absorption and scattering. An integrative framework is proposed to effectively restore underwater images. First, a modified color constancy-based algorithm is designed to correct the color of underwater images. Second, an underwater image degradation model is constructed to capture the statistics of the scattering process. Then, underwater image deblurring is achieved using a group-based sparse representation method. To evaluate the performance of the proposed method, we compare our results with several existing approaches using both subjective and objective evaluation techniques. The results show that the proposed method achieves better color fidelity and visibility than the other state-of-the-art algorithms.
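The color-correction step builds on classical color constancy. As a minimal sketch of that family of methods, the gray-world assumption scales each channel so its mean matches the global mean intensity; the abstract's "modified" algorithm adapts such ideas to underwater attenuation, and the specifics below (flat pixel lists, per-channel gains) are illustrative assumptions, not the paper's method.

```python
def gray_world_correct(pixels):
    """Gray-world color constancy sketch: rescale each RGB channel so
    its mean equals the global mean intensity. `pixels` is a flat list
    of (R, G, B) tuples; values are clipped to [0, 255]."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    gains = [gray / m if m > 0 else 1.0 for m in means]
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3))
            for p in pixels]
```

For underwater scenes the red channel is often so attenuated that a plain gray-world gain overcompensates, which is presumably why a modified variant is needed.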
Scene classification is one of the most important issues in remote sensing (RS) image processing. We find that features from different channels (shape, spectral, texture, etc.), levels (low-level and middle-level), or perspectives (local and global) provide complementary properties for RS images, and we therefore propose a heterogeneous feature framework to extract and integrate features of different types for RS scene classification. The proposed method is composed of three modules: (1) heterogeneous feature extraction, where three feature types, called DS-SURF-LLC, mean-Std-LLC, and MS-CLBP, are calculated; (2) heterogeneous feature fusion, where multiple kernel learning (MKL) is utilized to integrate the heterogeneous features; and (3) an MKL support vector machine classifier for RS scene classification. The proposed method is extensively evaluated on three challenging benchmark datasets (a 6-class dataset, a 12-class dataset, and a 21-class dataset), and the experimental results show that it achieves strong classification performance and produces informative features for describing RS image scenes. Moreover, the integration of heterogeneous features outperforms several state-of-the-art features on RS scene classification tasks.
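The core of MKL-based fusion is a weighted combination of base kernel matrices, one per feature type, fed to a standard SVM. The sketch below shows only that combination step; the kernel weights are assumed to come from an MKL solver (e.g., SimpleMKL) and are given by hand here for illustration.

```python
def combine_kernels(kernels, weights):
    """MKL fusion sketch: form the weighted sum of base kernel
    matrices (one matrix per heterogeneous feature type). `kernels`
    is a list of n-by-n matrices as nested lists; `weights` are the
    (assumed pre-learned) nonnegative kernel weights."""
    n = len(kernels[0])
    return [[sum(w * K[i][j] for w, K in zip(weights, kernels))
             for j in range(n)] for i in range(n)]
```

A weighted sum of valid kernels is itself a valid kernel, which is what lets the fused matrix be used directly in an SVM.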
Visual saliency detection has received great attention in recent years since it can facilitate a wide range of applications in computer vision. A variety of saliency models have been proposed based on different assumptions, among which saliency detection via sparse representation is one of the newly arisen approaches. However, most existing sparse representation-based saliency detection methods utilize only partial characteristics of sparse representation and lack in-depth analysis, so they may have limited detection performance. Motivated by this observation, this paper proposes an algorithm for detecting visual saliency based on an in-depth analysis of sparse representation. A number of discriminative dictionaries are first learned from randomly sampled image patches by means of inner product-based dictionary atom classification. Then, the input image is partitioned into many image patches, and these patches are classified into salient and nonsalient ones based on an in-depth analysis of their sparse coding coefficients. Afterward, sparse reconstruction errors are calculated for the salient and nonsalient patch sets. By investigating the sparse reconstruction errors, the most salient atoms, which tend to come from the most salient region, are screened out and removed from the discriminative dictionaries. Finally, an effective method is exploited for saliency map generation with the reduced dictionaries. Comprehensive evaluations on publicly available datasets and comparisons with several state-of-the-art approaches demonstrate the effectiveness of the proposed algorithm.
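The central quantity here is the sparse reconstruction error: how poorly a dictionary reconstructs a patch. As a toy stand-in for the paper's sparse coding (which uses full discriminative dictionaries and multi-atom codes), the sketch below takes a single matching-pursuit step, selecting the atom with the largest inner product and measuring the residual energy; all names and the one-atom restriction are illustrative assumptions.

```python
def reconstruction_error(patch, dictionary):
    """One matching-pursuit step as a sketch of sparse reconstruction
    error: project `patch` onto its best-matching atom (largest
    absolute inner product) and return the residual energy. Patches
    the dictionary reconstructs poorly yield large errors."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    best = max(dictionary, key=lambda atom: abs(dot(patch, atom)))
    coef = dot(patch, best) / dot(best, best)
    residual = [p - coef * a for p, a in zip(patch, best)]
    return dot(residual, residual)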
High-resolution remote sensing images of varied ecological environments and artificial objects pose considerable difficulties for change detection. To overcome the limitations of traditional pixel-oriented change detection methods and improve detection precision, an object-oriented change detection approach based on multiscale fusion is proposed. The approach introduces the classical color-texture segmentation algorithm J-segmentation (JSEG) to change detection and achieves multiscale feature extraction and comparison of objects based on the sequence of J-images produced by JSEG. By comprehensively using the geometry, spectrum, and texture features of objects, and by proposing two multiscale fusion strategies based on Dempster/Shafer evidence theory and weighted data fusion, respectively, the algorithm further improves the separability between changed and unchanged areas, thereby establishing an integrated framework for object-oriented change detection based on multiscale fusion. Experiments were performed on high-resolution airborne and SPOT 5 remote sensing images, and comparisons with several object-oriented and pixel-oriented detection methods verified the validity and reliability of the proposed approach.
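Of the two fusion strategies mentioned, the weighted data fusion variant can be sketched compactly: per-scale change scores are combined by fixed weights and thresholded. The Dempster/Shafer variant is omitted here, and the flat-list representation, normalized weights, and threshold value are illustrative assumptions rather than the paper's parameters.

```python
def fuse_change_maps(maps, weights, threshold=0.5):
    """Weighted multiscale fusion sketch: each map holds per-object
    change scores in [0, 1] at one scale. The fused score is the
    weighted sum across scales; an object is flagged as changed when
    the fused score exceeds the threshold. Weights are assumed to be
    normalized to sum to 1."""
    fused = [sum(w * m[i] for w, m in zip(weights, maps))
             for i in range(len(maps[0]))]
    return [score > threshold for score in fused]
```

Fusing across scales this way lets evidence that is ambiguous at one scale be confirmed or suppressed by the others, which is the motivation for the multiscale framework.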
Proc. SPIE. 8541, Electro-Optical and Infrared Systems: Technology and Applications IX
KEYWORDS: Infrared search and track, Infrared imaging, Detection and tracking algorithms, Video, Software development, Infrared radiation, Video processing, Microsoft Foundation Class Library, Infrared technology, Filtering (signal processing)
In this paper, the basic principles and implementation flow charts of a series of target tracking algorithms are described. On this foundation, moving target tracking software based on OpenCV is developed with the MFC software development platform. Two tracking algorithms are integrated in this software: the Kalman filter tracking method and the CamShift tracking method. To explain the software clearly, its framework and functions are described in this paper. Finally, the implementation processes and results are analyzed, and the tracking algorithms are evaluated both subjectively and objectively. This work is significant for applications of infrared target tracking technology.
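The Kalman filter tracking method named above is, at its core, a predict-update loop over a linear state model. A minimal one-dimensional constant-velocity sketch is shown below; the state layout, noise parameters, and class name are illustrative assumptions, not the software's implementation (which would use OpenCV's `cv::KalmanFilter` in 2-D).

```python
class Kalman1D:
    """Minimal 1-D constant-velocity Kalman filter sketch.
    State is (position, velocity); the measurement is position only.
    Process noise q and measurement noise r are illustrative."""

    def __init__(self, pos=0.0, vel=0.0, q=1e-3, r=1.0):
        self.x = [pos, vel]
        self.P = [[1.0, 0.0], [0.0, 1.0]]
        self.q, self.r = q, r

    def step(self, z, dt=1.0):
        # Predict: x = F x, P = F P F' + Q, with F = [[1, dt], [0, 1]]
        px = self.x[0] + dt * self.x[1]
        pv = self.x[1]
        P = self.P
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        # Update with position measurement z, H = [1, 0]
        s = p00 + self.r                  # innovation covariance
        k0, k1 = p00 / s, p10 / s         # Kalman gain
        y = z - px                        # innovation
        self.x = [px + k0 * y, pv + k1 * y]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x[0]
```

Fed noise-free measurements from a constant-velocity target, the estimate converges to the true trajectory; in the tracking software, the prediction step is what bridges frames where the detector loses the target.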
In this paper, the basic principles and implementation flow charts of a series of target detection algorithms are described. Then, according to practical needs and the comparison of these algorithms, some of them are optimized in combination with image pre-processing. On this foundation, moving target detection and tracking software based on OpenCV is developed with the MFC software development platform. Three detection algorithms are integrated in this software: the Frame Difference method, the Background Estimation method, and the Mixture Gaussian Modeling method. To explain the software clearly, its framework and functions are described in this paper. Finally, the implementation processes and results are analyzed, and the detection algorithms are evaluated both subjectively and objectively. This work is significant for applications of infrared target detection technology.
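The simplest of the three detection algorithms, the Frame Difference method, thresholds the absolute intensity change between consecutive frames. The sketch below uses flat grayscale lists and an illustrative threshold; a real implementation would operate on image arrays and typically follow with morphological cleanup.

```python
def frame_difference(prev, curr, threshold=25):
    """Frame Difference detection sketch: a pixel is flagged as moving
    if the absolute grayscale change between consecutive frames
    exceeds the threshold. Frames are flat lists of intensities."""
    return [abs(a - b) > threshold for a, b in zip(prev, curr)]
```

The method is fast and needs no background model, but it misses slow-moving or momentarily stationary targets, which is why the software also integrates background estimation and mixture-of-Gaussians modeling.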