Obtaining accurate and noise-free three-dimensional (3D) reconstructions of real-world scenes has grown in importance in recent decades. In this paper, we propose a novel strategy for reconstructing a 3D point cloud of an object from a single 4D light field (LF) image, based on the transformation of point-plane correspondences. Given a 4D LF image as input, we first estimate the depth map using point correspondences between sub-aperture images. We then apply histogram equalization and histogram stretching to enhance the separation between depth planes; the aim of this step is to increase the distance between adjacent depth layers and thereby enhance the depth map. Next, we detect edge contours of the original image using fast Canny edge detection and linearly combine the result with that of the previous steps. Finally, by transforming the point-plane correspondences, we obtain the 3D structure of the point cloud. The proposed method avoids the feature extraction, segmentation, and occlusion-mask extraction required by other methods, and can therefore reliably mitigate noise. We tested our method on synthetic and real-world image databases. To verify its accuracy, we compared our results with two state-of-the-art algorithms, using the level of detail (LOD) to compare the number of points needed to describe an object. The results showed that our method achieved the highest level of detail among the compared methods.
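As a rough illustration of the depth-map enhancement step described above, the sketch below applies histogram equalization followed by histogram stretching to a toy depth map to widen the gaps between depth planes. The function name, parameters, and toy data are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def equalize_and_stretch(depth, levels=256):
    """Illustrative depth-map enhancement: histogram equalization
    followed by histogram stretching, to widen the separation
    between depth planes (names/parameters are assumptions)."""
    # Normalize and quantize the depth map to integer levels.
    d = depth.astype(np.float64)
    d = (d - d.min()) / (d.max() - d.min() + 1e-12)
    q = np.round(d * (levels - 1)).astype(int)

    # Histogram equalization via the cumulative distribution function.
    hist = np.bincount(q.ravel(), minlength=levels)
    cdf = hist.cumsum() / q.size
    eq = cdf[q]                      # equalized map in (0, 1]

    # Histogram stretching: remap the occupied range to full scale.
    lo, hi = eq.min(), eq.max()
    return (eq - lo) / (hi - lo + 1e-12)

# Toy depth map with three closely spaced depth planes (0.40/0.45/0.50).
depth = np.array([[0.40, 0.40, 0.45],
                  [0.45, 0.50, 0.50],
                  [0.40, 0.50, 0.45]])
enhanced = equalize_and_stretch(depth)
```

On this toy input the three planes, originally 0.05 apart, are spread to 0.0, 0.5, and 1.0, which is the kind of inter-layer separation the enhancement step aims for.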
This paper presents an image restoration technique that incorporates local statistical knowledge into the cost function. Instead of using a conventional grayscale-based error measurement such as the mean squared error, we compare local statistical information about regions in two images using a new error measure. Transient features such as edges and textures are emphasized more strongly than relatively homogeneous regions. With the addition of this local information, we aim to provide a measure closer to human visual appraisal. We then extend the popular constrained squared-error cost function by incorporating this image error measure. Because of its nonlinear nature, this cost function cannot be optimized efficiently by conventional restoration algorithms, so we adopt an iterative approach: in particular, an extended neural network algorithm is proposed to perform the restoration. This technique is shown to be efficient, effective, and robust, and it compares favorably with other techniques when applied to both grayscale and color images. The results of a subjective survey comparing the proposed algorithm with a more conventional neural network algorithm are presented; the subjects overwhelmingly favored the results produced by the proposed method.
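The abstract does not give the exact form of the error measure, but the idea of comparing local statistics and emphasizing edge/texture regions over homogeneous ones can be sketched as follows. The block partitioning, the variance-based weighting rule, and all names are assumptions for illustration only:

```python
import numpy as np

def local_stats(img, win=3):
    """Local mean and variance over non-overlapping win x win blocks."""
    h, w = img.shape
    h, w = h - h % win, w - w % win          # trim to a multiple of win
    blocks = img[:h, :w].reshape(h // win, win, w // win, win).swapaxes(1, 2)
    return blocks.mean(axis=(2, 3)), blocks.var(axis=(2, 3))

def local_statistical_error(ref, img, win=3, alpha=1.0):
    """Illustrative error measure comparing local statistics of two
    images. Blocks with high variance in the reference (edges, textures)
    are weighted more heavily than homogeneous blocks -- an assumed
    stand-in for the paper's measure, which is not given in the abstract."""
    m_ref, v_ref = local_stats(ref, win)
    m_img, v_img = local_stats(img, win)
    weight = 1.0 + alpha * v_ref             # emphasize transient features
    return float(np.mean(weight * ((m_ref - m_img) ** 2
                                   + (v_ref - v_img) ** 2)))
```

A restoration cost function could then add this term to the conventional constrained squared error; since the combined cost is nonlinear in the image, it matches the abstract's point that an iterative (e.g. neural-network-based) optimizer is needed.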
The application of computational intelligence techniques--often called soft computing--to the problem of adaptive image processing is the focus of this text. Imaging professionals and others with a background in mathematical science, computer software, and related fields will find this book a readable, useful resource. It is also suitable as a textbook for a graduate-level or professional course in image processing.
Copublished with CRC Press.
An adaptive image regularization algorithm, based on the NoN neural computing theory, is applied to enhance mine signatures. The algorithm, developed by Guan and Sutton (GS), uses vector connections among model neurons to delineate dynamic boundaries corresponding to critical features of images. The boundaries subdivide large networks into many smaller networks, where each smaller network has, in many instances, attractor properties. In this report, the GS algorithm is applied to deblur and segment three sets of underwater mine data. The results suggest that the GS algorithm requires minimal training, performs well under inhomogeneous conditions, and generates contours that can be fed into other NoN architectures for further processing, including classification.
An adaptive scaled mean square error (SMSE) filter using a Hopfield neural-network-based algorithm is presented. We show the development of the original SMSE filter from the minimum mean square error (MMSE) filter and the parametric mean square error (PMSE) filter, both of which suffer from the over-smoothing phenomenon. The SMSE filter is more efficient than the PMSE filter in terms of noise removal, as it does not take into account all the correlation factors used for image enhancement. To further improve the performance of the SMSE filter, an adaptive approach is introduced. The adaptive SMSE filter uses a mask operation technique: a user-defined mask is moved across the image, and the filtering parameters are computed from the local image statistics of the region under the mask. The original and adaptive SMSE filters are implemented using a Hopfield neural-network-based algorithm. A number of experiments were performed to test the filter characteristics.
This paper presents an implementation and enhancement of the SMSE (scaled mean square error) filter, using a Hopfield neural-network-based algorithm. We show the development of the original SMSE filter from the MMSE (minimum mean square error) filter and the PMSE (parametric mean square error) filter, both of which suffer from the over-smoothing phenomenon. The SMSE filter is more efficient than the PMSE filter in terms of noise removal, as it does not take into account all the correlation factors used for image restoration. An adaptive SMSE filter is also presented. The adaptive SMSE filter uses a mask operation technique: a user-defined mask is moved across the image, and the filtering parameters are computed from the local image statistics of the region under the mask. The original and adaptive SMSE filters are implemented using a Hopfield neural-network-based algorithm. A number of experiments were performed to test the filter characteristics.
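The mask-operation technique described in the two abstracts above can be sketched as follows. The exact SMSE parameter rule is not given in either abstract, so this illustration substitutes a Lee-style adaptive minimum-MSE gain computed from the local statistics under the mask; the function name, the gain rule, and the noise-variance parameter are assumptions:

```python
import numpy as np

def adaptive_local_filter(img, mask=3, noise_var=0.01):
    """Illustrative mask-operation filter: a window is moved across the
    image, and the filtering strength at each pixel is set from the
    local mean and variance under the mask. The gain rule below is a
    Lee-style adaptive MMSE stand-in, not the paper's SMSE formula."""
    pad = mask // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.empty_like(img, dtype=np.float64)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            region = padded[i:i + mask, j:j + mask]  # pixels under the mask
            mu = region.mean()
            var = region.var()
            # Gain -> 0 in flat regions (strong smoothing),
            # gain -> 1 in high-variance regions (edges preserved),
            # which is how local statistics steer the filter adaptively.
            gain = max(var - noise_var, 0.0) / (var + 1e-12)
            out[i, j] = mu + gain * (img[i, j] - mu)
    return out
```

On a flat noisy patch the local gain collapses to zero and the output is the local mean, while across a step edge the large local variance keeps the gain near one and the edge survives, which is the adaptivity both abstracts describe.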