Fisheye cameras have been widely used in many applications, including close-range visual navigation, observation, and cyber-city reconstruction, because their field of view is much larger than that of a common pinhole camera. This means that a fisheye camera can capture more information than a pinhole camera in the same scene. However, fisheye images contain serious distortion, which may hinder human observers in recognizing the objects within them. Therefore, in most practical applications, a fisheye image should be rectified to a pinhole perspective projection image to conform to human cognitive habits. Traditional mathematical model-based methods cannot effectively remove the distortion, while the digital distortion model reduces the image resolution to some extent. Considering these defects, this paper proposes a new method that combines the physical spherical model and the digital distortion model. The proposed approach effectively removes the distortion of fisheye images, and extensive experiments validate its feasibility and effectiveness.
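The spherical-model rectification described above can be illustrated with a minimal Python sketch. It assumes the common equidistant fisheye projection (r = f·θ) and resamples into a virtual pinhole image; the parameters `f_fish` and `f_pin` are hypothetical focal lengths in pixels, not values from the paper, and the nearest-neighbor sampling stands in for whatever interpolation the authors use.

```python
import numpy as np

def rectify_fisheye(img, f_fish, f_pin):
    """Map an equidistant fisheye image onto a virtual pinhole image.

    Hypothetical parameters: f_fish is the fisheye focal length in the
    equidistant model r = f_fish * theta; f_pin is the focal length of
    the virtual pinhole camera. Both are in pixels.
    """
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = xs - cx, ys - cy
    rp = np.hypot(dx, dy)                     # radius in the pinhole image
    theta = np.arctan2(rp, f_pin)             # angle of the ray from the axis
    rf = f_fish * theta                       # equidistant model: r = f * theta
    scale = np.where(rp > 0, rf / np.maximum(rp, 1e-9), 0.0)
    src_x = np.round(cx + dx * scale).astype(int)
    src_y = np.round(cy + dy * scale).astype(int)
    valid = (src_x >= 0) & (src_x < w) & (src_y >= 0) & (src_y < h)
    out[valid] = img[src_y[valid], src_x[valid]]
    return out
```

The optical axis is a useful sanity check: the center pixel maps to itself for any focal lengths, since θ = 0 there.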
Optical coherence tomography (OCT) images are usually degraded by significant speckle noise, which strongly hampers their quantitative analysis. However, speckle reduction in OCT images is particularly challenging because of the difficulty of differentiating between noise and the information components of the speckle pattern. To address this problem, a spiking cortical model (SCM)-based nonlocal means method is presented. The proposed method explores self-similarities of OCT images based on rotation-invariant features of image patches extracted by the SCM and then restores the speckled images by averaging the similar patches. This method provides sufficient speckle reduction while preserving image details very well, owing to its effectiveness in finding reliable similar patches under heavy speckle contamination. When applied to retinal OCT images, the method provides signal-to-noise ratio improvements of >16 dB with only a small (5.4%) loss of similarity.
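The patch-averaging step at the core of the abstract can be sketched as plain nonlocal means. Note the substitution: the paper matches patches via rotation-invariant SCM features, whereas this illustrative sketch uses a simple Euclidean patch distance, so it shows only the averaging machinery, not the SCM similarity measure. Parameter values are arbitrary.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=10.0):
    """Nonlocal-means sketch: restore each pixel by a weighted average of
    pixels whose surrounding patches resemble the reference patch.

    h controls how quickly weights decay with patch dissimilarity.
    """
    p, s = patch // 2, search // 2
    pad = np.pad(img.astype(float), p + s, mode='reflect')
    h2 = h * h
    H, W = img.shape
    out = np.zeros((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            ci, cj = i + p + s, j + p + s
            ref = pad[ci - p:ci + p + 1, cj - p:cj + p + 1]
            wsum, acc = 0.0, 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = pad[ni - p:ni + p + 1, nj - p:nj + p + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch dissimilarity
                    w = np.exp(-d2 / h2)
                    wsum += w
                    acc += w * pad[ni, nj]
            out[i, j] = acc / wsum
    return out
```

On a constant image every candidate patch is identical, so the weighted average reproduces the input exactly; that invariance is a quick correctness check for any NLM variant.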
A novel noise detector based on the spiking cortical model (SCM) is proposed for switching-based filters. In the proposed noise detector, corrupted pixels are first identified as noise candidates based on the firing time of the SCM, and then misclassified noise-free pixels are dismissed from the candidates based on the absolute difference in firing time between the considered neurons and their neighboring neurons. Extensive simulations show that although the proposed noise detector generally has lower computational efficiency than several state-of-the-art noise detectors, it outperforms all the compared detectors in noise detection accuracy, classifying the pixels of corrupted images with very few or no mistakes at various noise ratios.
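The two-stage structure of the detector can be illustrated with a deliberately simplified stand-in. In this sketch, stage 1 flags extreme-valued pixels as candidates (in the paper, candidates come from SCM firing times) and stage 2 dismisses candidates whose absolute difference from the local median is small (the paper uses firing-time differences between neighboring neurons). The threshold value is hypothetical.

```python
import numpy as np

def detect_impulse_noise(img, diff_thresh=40):
    """Two-stage impulse-noise detector sketch for switching filters.

    Stage 1: salt-and-pepper candidates (extreme gray values).
    Stage 2: dismiss candidates that agree with their 3x3 neighborhood,
    i.e. whose difference from the local median is below diff_thresh.
    Returns a boolean mask of confirmed noisy pixels.
    """
    img = img.astype(float)
    candidates = (img == 0) | (img == 255)          # stage 1
    pad = np.pad(img, 1, mode='reflect')
    H, W = img.shape
    noise = np.zeros((H, W), dtype=bool)
    for i in range(H):
        for j in range(W):
            if not candidates[i, j]:
                continue
            win = pad[i:i + 3, j:j + 3]
            if abs(img[i, j] - np.median(win)) > diff_thresh:  # stage 2
                noise[i, j] = True
    return noise
```

A switching filter would then apply a median (or similar) filter only at the pixels this mask marks, leaving the remaining pixels untouched.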
Factor analysis is an efficient technique for analyzing dynamic structures in medical image sequences and has recently been used in contrast-enhanced ultrasound (CEUS) of hepatic perfusion. Time-intensity curves (TICs) extracted by factor analysis can provide much more diagnostic information for radiologists and improve the diagnostic rate of focal liver lesions (FLLs). However, one of the major drawbacks of factor analysis of dynamic structures (FADS) is the nonuniqueness of the result when only the non-negativity criterion is used. In this paper, we propose a new replace-approximation method based on apex-seeking for ambiguous FADS solutions. Because different structures partially overlap, the factor curves are assumed to be well approximated by curves existing in the medical image sequences; finding the optimal curves is therefore the key point of the technique. No matter how many structures are assumed, our method always starts seeking apexes in a one-dimensional space onto which the original high-dimensional data are mapped. After finding two stable apexes in this one-dimensional space, the method can ascertain the third, and the process continues until all structures are found. The technique was tested on two blood-perfusion phantoms and compared with two variants of the apex-seeking method. The results showed that the technique outperformed both variants in region-of-interest measurements on the phantom data. It can be applied to the estimation of TICs derived from CEUS images and to the separation of different physiological regions in hepatic perfusion.
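The first apex-seeking step, mapping the high-dimensional data to one dimension and taking its extremes as the first two apexes, can be sketched as follows. This is a hypothetical minimal reading of that step: here the one-dimensional mapping is a projection onto the leading principal direction, which may differ from the mapping the paper actually uses; later apexes would be sought in residual directions.

```python
import numpy as np

def seek_two_apexes(curves):
    """Find two extreme time-intensity curves via a 1-D projection.

    Rows of `curves` are per-pixel TICs. Projecting the centered data onto
    the leading principal direction maps every curve to one dimension; the
    projection's minimum and maximum are taken as the first two apex
    (purest) curves of the mixture.
    """
    X = curves - curves.mean(axis=0)
    # Leading right-singular vector = first principal direction
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    proj = X @ vt[0]                 # 1-D coordinate of each curve
    return int(np.argmin(proj)), int(np.argmax(proj))
```

For mixtures of two underlying factor curves, the pure curves sit at the ends of the projection, so the two returned indices recover them regardless of the sign ambiguity of the singular vector.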
With ever-growing archives of multi-source raster images and maps, many spatial applications such as multi-scale database updating, progressive web mapping, and 3D terrain visualization call for rapid, automatic integration of GIS and imagery data. The object-oriented methodology displays novel characteristics for multi-scale representation. However, the management of multi-scale datasets still lags behind, especially for multi-source data from different spatial reference systems (DSRS). In this paper, we review problems with state-of-the-art integration of multi-source data and introduce a hierarchical grid framework, spatial information multi-grids (SIMG). SIMG requires three fundamental components for analyzing multi-scale and multi-source datasets. First, data from different spatial reference systems (DSRS) must be unified rapidly. Second, efficient spatial grid and scale encoding must be applied to support flexible management of multi-scale datasets. Third, a strategy for simplifying image data from detailed to broad scales must be developed. We present approaches for optimal scale identification, object-oriented upscaling, and spatial grid and scale encoding of image objects. An experiment applied the framework to integrate a vector map in the Beijing54 reference system with a Landsat TM image in WGS84, to detect urban sprawl in Zhengzhou, located by the Yellow River, China. The results suggest that real-time DSRS integration costs less time than the traditional method. The image classification accuracy at the optimal scale reached a kappa of 90.4%, and the upscaling results for multi-scale datasets were better than those of the multilevel wavelet method. The proposed framework is thus easy to operate and highly effective.
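The "spatial grid and scale encoding" component can be illustrated with a standard hierarchical cell key. The sketch below is not the SIMG encoding itself but a common Morton (Z-order) scheme with the same property the abstract relies on: a cell's key at any coarser level is a prefix of its key at finer levels, which supports multi-scale indexing. The lon/lat partition of the globe is an assumed convention.

```python
def grid_code(lon, lat, level):
    """Encode a point into a hierarchical grid-cell key (Morton / Z-order).

    The globe is split into 2**level x 2**level cells; interleaving the
    column and row bits yields an integer key. Shifting the key right by
    two bits gives the key of the parent cell at level - 1, so keys nest
    across scales.
    """
    n = 1 << level
    col = min(int((lon + 180.0) / 360.0 * n), n - 1)
    row = min(int((lat + 90.0) / 180.0 * n), n - 1)
    code = 0
    for b in range(level):
        code |= ((col >> b) & 1) << (2 * b)       # column bit at even position
        code |= ((row >> b) & 1) << (2 * b + 1)   # row bit at odd position
    return code
```

Because parent keys are prefixes of child keys, a range query over one coarse cell covers all of its descendants with a single key-range scan, which is what makes such encodings attractive for multi-scale dataset management.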
Self-calibration, as put forward by researchers in computer vision, commonly deals with cameras described by a linear model. Since distortion exists in practice, especially for ordinary cameras, calibration that disregards it cannot meet the demands of high-accuracy vision measurement. Being mainly systematic, the distortion is a function of the distortion coefficients, principal point, principal-distance ratio, skew factor, and so on. Hence there exists a group of parameters, comprising the distortion coefficients, principal point, principal-distance ratio, skew factor, and fundamental matrix, for which homologous points theoretically satisfy the epipolar constraint. Accordingly, this paper advances a self-calibration method for cameras with a non-linear imaging model, based on the Kruppa equations. In calculating the fundamental matrix, the interior elements except the principal distance are obtained by applying distortion correction to the image coordinates; the principal distance is then obtained using the Kruppa equations. The method needs only some homologous points between two images and no known information about the objects. Extensive experiments have proven its correctness and reliability.