Advances in printing technology have made multilevel halftoning important, as printers can now print inks of different intensities. This work presents a multilevel halftoning algorithm based on multiscale error diffusion. The algorithm handles constrained pixels before unconstrained pixels and diffuses errors with a noncausal filter, so that halftones of better image quality can be achieved.
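The abstract does not spell out the multiscale, noncausal scheme, so as a point of reference, a minimal sketch of conventional multilevel error diffusion (raster-order Floyd-Steinberg weights generalized to several printable ink levels, an assumption for illustration) might look like:

```python
import numpy as np

def multilevel_error_diffusion(img, levels=4):
    """Quantize img (floats in [0,1]) to `levels` printable intensities,
    diffusing each pixel's quantization error with Floyd-Steinberg weights."""
    out = img.astype(float).copy()
    q = np.linspace(0.0, 1.0, levels)            # printable ink intensities
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = q[np.argmin(np.abs(q - old))]  # nearest printable level
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out

halftone = multilevel_error_diffusion(np.full((16, 16), 0.4), levels=4)
```

The quantize-to-nearest-level and diffuse-the-residual steps are the shared core; the paper's contribution replaces the fixed raster order with noncausal, multiscale diffusion and gives constrained pixels priority.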
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print format on SPIE.org.
Special Section on Quality Control by Artificial Vision
We are particularly interested in the estimation of passenger flows entering or exiting buses. To achieve this measurement, we propose a counting system based on stereo vision. To extract three-dimensional information reliably, we use a dense stereo-matching procedure in which a winner-takes-all technique minimizes a correlation score. This score is an improved version of the sum of absolute differences, including several similarity criteria determined on the pixels or regions to be matched. After calculating disparity maps for each image, morphological operations and a binarization with multiple thresholds are used to localize the heads of people passing under the sensor. The markers describing the heads of the passengers getting on or off the bus are then tracked through the image sequence to reconstitute their trajectories. Finally, people are counted from these reconstituted trajectories. The proposed technique was validated in several experiments, which showed that counting accuracies of 99% and 97% can be obtained on two large data sets of image sequences depicting realistic scenarios.
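A hedged sketch of window-based SAD matching with winner-takes-all selection (the paper's improved score adds further pixel- and region-level similarity criteria not reproduced here):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sad_disparity(left, right, max_disp=8, win=5):
    """Winner-takes-all stereo matching: for each pixel keep the candidate
    shift d whose window-averaged absolute difference (a SAD score) is lowest."""
    h, w = left.shape
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        diff = np.abs(left[:, d:].astype(float) - right[:, :w - d].astype(float))
        cost[d][:, d:] = uniform_filter(diff, size=win)  # box-averaged SAD
    return np.argmin(cost, axis=0)                       # winner takes all

# synthetic pair: the right view sees the pattern shifted by 3 pixels
left = np.tile(np.sin(np.arange(64) * 0.5), (32, 1))
right = np.roll(left, -3, axis=1)        # circular shift keeps the example exact
disp = sad_disparity(left, right, max_disp=6)
```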
A novel unsupervised texture image segmentation method using a multilayer data condensation spectral clustering algorithm is presented. First, the texture features of each image pixel are extracted by the stationary wavelet transform, and a multilayer data condensation method is applied to this texture feature data set to obtain a condensation subset. Second, a spectral clustering algorithm based on a manifold similarity measure is used to cluster the condensation subset. Finally, according to the clustering result of the condensation subset, the nearest-neighbor method is adopted to obtain the segmentation of the original image. In the experiments, we apply our method to texture and synthetic aperture radar image segmentation, taking self-tuning k-nearest-neighbor spectral clustering and Nyström methods as baselines for comparison. The experimental results show that the proposed method is more robust and effective for texture image segmentation.
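The paper's manifold similarity measure and condensation step are specific to it; a plain two-way spectral clustering baseline on feature vectors, using an RBF affinity and the sign of the Fiedler vector, can be sketched as:

```python
import numpy as np

def spectral_bipartition(X, sigma=1.0):
    """Two-way spectral clustering (generic baseline, not the paper's method):
    RBF affinity, symmetric normalized Laplacian, sign of the Fiedler vector."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    W = np.exp(-d2 / (2 * sigma ** 2))                    # RBF affinity
    D = W.sum(axis=1)
    L = np.eye(len(X)) - W / np.sqrt(np.outer(D, D))      # normalized Laplacian
    _, vecs = np.linalg.eigh(L)                           # eigenvalues ascending
    return (vecs[:, 1] > 0).astype(int)                   # split on Fiedler sign

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.2, (30, 2)), rng.normal(2.0, 0.2, (30, 2))])
labels = spectral_bipartition(X)
```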
In thermographic inspection, the workpiece is heated in a particular manner and the resulting temperature increase at the material surface is observed with an infrared camera. Inhomogeneities such as surface cracks cause a nonuniform temperature distribution; consequently, they can be localized in the infrared images. For metallic pieces, the most efficient approach is inductive heating, whereby the induced eddy current generates heat directly in the surface skin of the sample. Experiments have been carried out on how steel workpieces, especially castings, can be thermographically inspected to detect cracks. The testing is a nondestructive and contact-free method. The goal is to develop fully automated testing equipment with high throughput, in which flawed pieces are identified by evaluating and classifying the infrared images. The classification task is to distinguish between the temperature increase around a crack and additional heating at the edges of the workpieces. A neural network has been trained to classify about 750 images, and good results have been achieved.
We extend the theory of polynomial moments by proving their spectral behavior with respect to Gaussian noise. This enables computations on the signal-to-noise ratios of polynomial filters and, with this, comparability to classical filters. The compactness of the information in the polynomial and Fourier spectra can be compared to determine which solution will give the best performance and numerical efficiency. A general formalism for filtering with orthogonal basis functions is proposed. The frequency response of the polynomials is determined by analyzing the projection onto the basis functions. This reveals the tendency of polynomials to oscillate at the boundaries of the support; the resonant frequency of this oscillation can be determined. The new theory is applied to the extraction of 3-D embossed digits from cluttered surfaces. A three-component surface model is used, consisting of a global component corresponding to the surface, a Gaussian noise component, and local anomalies corresponding to the digits. The extraction of the geometric information associated with the digits is a preprocessing step for digit recognition. It is shown that discrete polynomial basis functions are better suited than Fourier basis functions for this task.
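Filtering by projection onto an orthogonal polynomial basis, as the formalism describes, can be illustrated with a least-squares Legendre fit (a generic sketch; the paper's discrete polynomial moments differ in detail):

```python
import numpy as np

def poly_filter(y, degree):
    """Filter a 1-D signal by projecting it onto Legendre polynomials up to
    `degree` via least squares and evaluating the fit back on the support."""
    x = np.linspace(-1.0, 1.0, len(y))
    V = np.polynomial.legendre.legvander(x, degree)   # basis evaluated on grid
    coef, *_ = np.linalg.lstsq(V, y, rcond=None)      # projection coefficients
    return V @ coef

rng = np.random.default_rng(0)
xs = np.linspace(-1.0, 1.0, 200)
clean = xs ** 3 - xs                    # smooth "global" component
noisy = clean + rng.normal(0.0, 0.05, xs.size)
smoothed = poly_filter(noisy, degree=5)
```

Truncating the polynomial degree plays the role of a low-pass filter: noise energy spread across high-degree basis functions is discarded.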
Flatness is a major geometrical feature of rolled products, specified by both production and quality needs. Real-time inspection of flatness is the basis of automatic flatness control. Industrial facilities where rolled products are manufactured have adverse environments that affect artificial vision systems. We present a low-cost flatness inspection system based on optical triangulation by means of a laser stripe emitter and a CMOS matrix camera, designed to be part of an online flatness control system. An accurate and robust method to extract a laser stripe in adverse conditions over rough surfaces is proposed, designed to be applied in real time. Laser extraction relies on a local and a global search. The global search adjusts curve segments using a split-and-merge technique. A real-time recording method for the input data of the flatness inspection system is also proposed; it stores information about manufacturing conditions for offline tuning of the laser stripe extraction method using real data. Flatness measurements carried out over steel strips are evaluated quantitatively and qualitatively. Moreover, the real-time performance of the proposed system is analyzed.
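The split step of a split-and-merge adjustment of curve segments can be sketched as follows (a generic, Douglas-Peucker-style split on maximum chord deviation; the paper's merge step and stripe-specific scoring are omitted):

```python
import numpy as np

def split(points, tol):
    """Split step of split-and-merge: recursively break a polyline where the
    farthest point from the end-to-end chord exceeds `tol`; returns the
    indices of the resulting segment breakpoints."""
    p0, p1 = points[0], points[-1]
    d = p1 - p0
    n = np.array([-d[1], d[0]]) / (np.hypot(*d) + 1e-12)  # unit normal to chord
    dist = np.abs((points - p0) @ n)                      # distance to chord
    i = int(np.argmax(dist))
    if dist[i] <= tol or len(points) < 3:
        return [0, len(points) - 1]                       # segment is flat enough
    left = split(points[: i + 1], tol)
    right = [i + j for j in split(points[i:], tol)]
    return left[:-1] + right                              # merge breakpoint lists

# an L-shaped polyline: one corner should be detected at index 10
pts = np.array([(t, 0.0) for t in range(11)] + [(10.0, t) for t in range(1, 11)])
breakpoints = split(pts, tol=0.5)
```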
This article presents a new in situ method to monitor the particle size distribution (PSD) during batch solution crystallization processes. Using a new in situ imaging probe, the "EZProbe sensor," real-time acquisition of 2-D images of particles during the batch process is now possible. To analyze these images, a novel image analysis method is applied: segmentation and restoration algorithms first identify the particles, and geometrical particle measurements are then performed to obtain the PSD of the batch crystallization process over time. Satisfactory measurements are obtained provided that the overall solid concentration does not exceed a threshold above which too many overlapping crystals make discrimination between particles impossible.
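The counting half of such a measurement step (binarize, label connected components, measure areas) can be sketched with standard tools; the paper's restoration algorithms and richer geometrical descriptors are not reproduced:

```python
import numpy as np
from scipy.ndimage import label

def particle_areas(img, thresh):
    """Binarize an image, label connected components, and return the
    pixel area of each detected particle (background label 0 is skipped)."""
    binary = img > thresh
    labels, n = label(binary)                  # 4-connected component labeling
    return np.bincount(labels.ravel())[1:]     # area per particle

img = np.zeros((20, 20))
img[2:5, 2:5] = 1.0        # a 3x3 particle
img[10:14, 10:16] = 1.0    # a 4x6 particle
areas = particle_areas(img, 0.5)
```

A PSD over time would then be a histogram of such areas (or derived diameters) per acquired frame.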
Registration of radiographic and computed tomography (CT) data has the potential to allow automated metrology and defect detection. While registration of the three-dimensional reconstructed data is a common task in the medical industry for registration of data sets from multiple detection systems, registration of projection sets has only seen development in the area of tomotherapy. Efforts in projection registration have employed a method named Fourier phase matching (FPM). This work discusses implementation and results for the application of the FPM method to industrial applications for the nondestructive testing (NDT) community. The FPM method has been implemented and modified for industrial application. Testing with simulated and experimental x-ray CT data shows excellent performance with respect to the resolution of the imaging system.
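The FPM method itself is not detailed in the abstract; the closely related phase-correlation idea, locating a translation as the peak of the normalized cross-power spectrum, can be sketched as:

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer translation taking b to a by locating the peak
    of the normalized cross-power spectrum (phase correlation)."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    R = Fa * np.conj(Fb)
    R /= np.abs(R) + 1e-12                     # keep phase, discard magnitude
    peak = np.abs(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(peak), peak.shape)
    h, w = a.shape
    if dy > h // 2:                            # wrap large shifts to negatives
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))
dy, dx = phase_correlate(shifted, img)
```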
We define a new matching metric, the corner Cauchy-Schwarz divergence (CCSD), and present a new approach to image registration based on the proposed CCSD and subpixel localization. First, we detect the corners in an image with a multiscale Harris operator and take them as initial interest points. Then, a subpixel localization technique is applied to determine the locations of the corners and to eliminate false and unstable corners. Next, the CCSD is used to obtain the initial corner matches. Finally, we use random sample consensus to robustly estimate the transformation parameters from the initial matches. The experimental results demonstrate that the proposed algorithm performs well in terms of both accuracy and efficiency.
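The final step, robust parameter estimation with random sample consensus, can be sketched for a 2-D affine model (the CCSD metric itself is the paper's contribution and is not reproduced here):

```python
import numpy as np

def ransac_affine(src, dst, n_iter=200, thresh=1.0, seed=0):
    """Estimate a 2-D affine transform from putative matches with RANSAC:
    fit on random minimal samples, keep the model with the most inliers."""
    rng = np.random.default_rng(seed)
    A = np.hstack([src, np.ones((len(src), 1))])        # homogeneous sources
    best_M, best_inl = None, -1
    for _ in range(n_iter):
        idx = rng.choice(len(src), 3, replace=False)    # minimal sample
        M, *_ = np.linalg.lstsq(A[idx], dst[idx], rcond=None)  # 3x2 affine
        err = np.linalg.norm(A @ M - dst, axis=1)
        inl = int((err < thresh).sum())
        if inl > best_inl:
            best_inl, best_M = inl, M
    return best_M, best_inl

rng = np.random.default_rng(1)
src = rng.random((40, 2)) * 100
M_true = np.array([[1.0, 0.1], [-0.1, 1.0], [5.0, -3.0]])  # rows: x, y, offset
dst = np.hstack([src, np.ones((40, 1))]) @ M_true
dst[:5] += 30.0                                            # five gross outliers
M, n_inliers = ransac_affine(src, dst)
```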
We address the issue of producing automatic video abstracts in the context of the video indexing of animated movies. For a quick browse of a movie's visual content, we propose a storyboard-like summary, which follows the movie's events by retaining one key frame for each specific scene. To capture the shot's visual activity, we use histograms of cumulative interframe distances, and the key frames are selected according to the distribution of the histogram's modes. For a preview of the movie's exciting action parts, we propose a trailer-like video highlight, whose aim is to show only the most interesting parts of the movie. Our method is based on a relatively standard approach, i.e., highlighting action through the analysis of the movie's rhythm and visual activity information. To suit every type of movie content, including predominantly static movies or movies without exciting parts, the concept of action depends on the movie's average rhythm. The efficiency of our approach is confirmed through several end-user studies.
We present a comparative study of several state-of-the-art background subtraction methods. Approaches ranging from simple background subtraction with global thresholding to more sophisticated statistical methods have been implemented and tested on different videos with ground truth. The goal is to provide a solid analytic ground to underscore the strengths and weaknesses of the most widely implemented motion detection methods. The methods are compared based on their robustness to different types of video, their memory requirements, and the computational effort they require. The impact of a Markovian prior and of some postprocessing operators is also evaluated. Most of the videos used come from state-of-the-art benchmark databases and represent different challenges, such as poor SNR, multimodal background motion, and camera jitter. Overall, we not only help to clarify which types of video each method is best suited to but also estimate how much better sophisticated methods perform compared to basic background subtraction.
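The simplest baseline in such a comparison, a running-average background model with a global threshold, can be sketched as:

```python
import numpy as np

def subtract_background(frames, alpha=0.05, thresh=0.2):
    """Baseline background subtraction: a running-average background model
    with a global threshold on |frame - background|."""
    bg = frames[0].astype(float)
    masks = []
    for f in frames[1:]:
        f = f.astype(float)
        masks.append(np.abs(f - bg) > thresh)   # foreground mask
        bg = (1 - alpha) * bg + alpha * f       # slowly adapt the model
    return masks

# static background; a bright square appears from the sixth frame on
frames = [np.zeros((20, 20)) for _ in range(10)]
for f in frames[5:]:
    f[5:10, 5:10] = 1.0
masks = subtract_background(frames)
```

More sophisticated statistical methods replace the single running mean with, e.g., per-pixel Gaussian mixtures, at higher memory and computational cost.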
We propose an efficient regularized restoration model that associates a spatial and a frequential regularizer in order to better model the intrinsic properties of the original image to be recovered and to obtain a better restoration result. An adaptive rescaling scheme is also proposed to balance the influence of these two different regularization constraints, preventing either one from prevailing over the other and enabling them to be efficiently fused during the iterative deconvolution process. This hybrid regularization approach favors a solution image that is both efficiently denoised [due to the denoising ability of a thresholding procedure in the discrete cosine transform (DCT) domain] and edge-preserved [due to the generalized Gaussian Markov random field (GGMRF) constraint]. It yields significant improvements in image quality and signal-to-noise ratio compared to a single GGMRF or DCT prior model and leads to competitive restoration results in benchmark tests for various levels of blur, blurred signal-to-noise ratio (BSNR), and noise degradation.
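The DCT half of the hybrid prior relies on the idea that thresholding small transform coefficients removes noise spread thinly across the spectrum; a minimal hard-thresholding sketch (threshold value is an illustrative assumption):

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_threshold_denoise(img, thresh):
    """Hard-threshold small DCT coefficients: smooth image content is
    concentrated in a few large coefficients, while white noise spreads
    its energy thinly across the whole spectrum."""
    coef = dctn(img, norm='ortho')
    keep = np.abs(coef) > thresh
    keep[0, 0] = True                      # always keep the DC term
    return idctn(coef * keep, norm='ortho')

rng = np.random.default_rng(0)
clean = np.outer(np.sin(np.linspace(0, np.pi, 32)), np.ones(32))
noisy = clean + rng.normal(0, 0.1, clean.shape)
denoised = dct_threshold_denoise(noisy, thresh=0.5)
```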
We provide an efficient image-denoising prior, the spatial-gradient-local-inhomogeneity (SGLI) prior, which can be successfully applied to image reconstruction. The SGLI prior employs two complementary discontinuity measures: spatial gradient and local inhomogeneity. The spatial gradient measure effectively preserves strong edge components of images, while the local inhomogeneity measure detects the locations of significant discontinuities by considering the uniformity of small regions. The two complementary measures are combined into the SGLI prior for image denoising. Thus, the SGLI prior effectively preserves feature components such as edges and textures while reducing noise. Comparative results indicate that the proposed SGLI prior is very effective for denoising corrupted images.
Three-dimensional human faces have been applied in many fields, such as face animation, identity recognition, and facial plastic surgery. Segmenting and aligning 3-D faces from raw scanned data is the first vital step toward making these applications successful. However, the existence of artifacts, facial expressions, and noise poses many challenges to this problem. We propose an automatic and robust method to segment and align 3-D face surfaces by locating the nose tip and nose ridge. Taking a raw scanned surface as input, a novel feature-based moment analysis on scale spaces is presented to locate the nose tip accurately and robustly; the nose tip is then used to crop the face region. A technique called the geodesic Euclidean ratio is then developed to find the nose ridge. Each face is aligned based on the locations of the nose tip and nose ridge. The proposed method is not only invariant to translations and rotations, but also robust in the presence of facial expressions and artifacts such as hair, clothing, and other body parts. Experimental results on two large 3-D face databases demonstrate the accuracy and robustness of the proposed method.
Image denoising algorithms often require their parameters to be adjusted according to the noise level. We propose a fast and reliable method for estimating image noise. The input image is assumed to be contaminated by an additive white Gaussian noise process. To exclude structures or details from contributing to the estimation of noise variance, a Sobel edge detection operator with a self-determined threshold is first applied to each image block. Then a filter operation, followed by an averaging of the convolutions over the selected blocks, provides a very accurate estimation of noise variance. We successfully combine the effectiveness of filter-based approaches with the efficiency of block-based approaches, and the simulated results demonstrate that the proposed method performs well for a variety of images over a large range of noise variances. Performance comparisons against other approaches are also provided.
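A sketch in the spirit of this pipeline (Sobel masking of structure, then averaging a Laplacian-difference convolution over the remaining pixels; the exact kernel, correction factor, and threshold rule below follow Immerkaer's classic estimator and are assumptions, not the paper's exact method):

```python
import numpy as np
from scipy.ndimage import convolve, sobel

def estimate_noise_sigma(img):
    """Estimate additive white Gaussian noise sigma: exclude strong Sobel
    edges with a self-set threshold, then average a Laplacian-difference
    kernel response over the remaining (homogeneous) pixels."""
    img = img.astype(float)
    grad = np.hypot(sobel(img, axis=1), sobel(img, axis=0))
    mask = grad < grad.mean() + grad.std()       # drop structure/edge pixels
    kernel = np.array([[1, -2, 1], [-2, 4, -2], [1, -2, 1]], float)
    lap = convolve(img, kernel)
    # For white Gaussian noise the kernel response has std 6*sigma, and a
    # zero-mean Gaussian X satisfies E|X| = std * sqrt(2/pi); hence:
    return np.sqrt(np.pi / 2) * np.abs(lap[mask]).mean() / 6.0

rng = np.random.default_rng(0)
ramp = np.outer(np.linspace(0.0, 1.0, 128), np.ones(128))  # smooth content
est = estimate_noise_sigma(ramp + rng.normal(0.0, 0.1, (128, 128)))
```

The linear ramp has zero second difference, so only the noise contributes to the kernel response, and the estimate recovers sigma = 0.1 closely.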
In image/video coding standards, the zigzag scan provides an effective encoding order for the quantized transform coefficients, arranging them statistically from large to small magnitudes. Generally, the optimal scan should transfer the 2-D transform coefficients into 1-D data in descending order of their average power levels. With the optimal scan order, more efficient variable-length coding can be achieved. In H.264 advanced video coding (AVC), the residuals resulting from the various intramode predictions have different statistical characteristics. After analyzing the transformed residuals, we propose an adaptive scan order scheme, optimally matched to the intraprediction mode, to further improve the efficiency of intracoding. Simulation results show that the proposed adaptive scan scheme improves context-adaptive variable-length coding, achieving better rate-distortion performance for the H.264/AVC video coder without increasing computation.
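The conventional zigzag order that the adaptive scheme adapts away from can be generated by sorting block positions along anti-diagonals, alternating direction:

```python
def zigzag_order(n):
    """Conventional zigzag scan for an n x n coefficient block: walk the
    anti-diagonals in order of r + c, alternating direction on each one so
    that low-frequency (large-magnitude) coefficients come first."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[1] if (rc[0] + rc[1]) % 2 == 0 else -rc[1]))

order = zigzag_order(4)   # 4x4 blocks as used by H.264/AVC transforms
```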
We propose a novel image-fusion framework for compressive imaging (CI), a new technology for the simultaneous sampling and compression of images based on the principle of compressive sensing (CS). Unlike previous fusion work, which operates on conventional images, we perform fusion directly on the measurement vectors from multiple CI sensors according to a similarity classification. First, we define a metric to evaluate the similarity of two given CI measurement vectors and present its potential advantage for classification. Second, fusion rules for CI measurement vectors of different similarity types are investigated to generate a comprehensive measurement vector. Finally, the fused image is reconstructed from the combined measurements via an optimization algorithm. Simulation results demonstrate that the reconstructed images in our fusion framework are visually more appealing than the fused images obtained with other fusion rules, and that our fusion method for CI significantly reduces computational complexity compared with the fusion-after-reconstruction scheme.
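Fusion directly on measurement vectors is possible because compressive sensing is linear: a weighted average of measurements equals the measurement of the correspondingly averaged image. A tiny demonstration (a shared random Gaussian sensing matrix is assumed; the paper's similarity-dependent fusion rules are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 32
phi = rng.normal(size=(m, n)) / np.sqrt(m)   # shared random sensing matrix
x1, x2 = rng.random(n), rng.random(n)        # two source signals ("images")
y1, y2 = phi @ x1, phi @ x2                  # CI measurement vectors

# Because sensing is linear, averaging in the measurement domain is exactly
# the measurement of the averaged sources, so fusion can precede (and share
# the cost of) a single reconstruction instead of reconstructing each image.
y_fused = 0.5 * (y1 + y2)
```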
In the emerging international standard for scalable video coding (SVC), an extension of H.264/AVC, a computationally expensive exhaustive mode decision is employed to select the best prediction mode for each macroblock (MB). Although this technique achieves the highest possible coding efficiency, it incurs extremely large computational complexity, which hinders the practical application of SVC. We propose a fast mode decision algorithm for SVC comprising two techniques: early SKIP mode decision and adaptive early termination of the mode decision. Both make use of the coding information of spatially neighboring MBs in the same frame and of neighboring MBs in the base layer to terminate the mode decision procedure early. Experimental results show that the proposed fast mode decision algorithm achieves average computational savings of about 70% with almost no loss of rate-distortion performance in the enhancement layer.
Iris recognition systems are among the most accurate personal biometric identification systems. However, the acquisition of a workable iris image requires strict cooperation of the user; otherwise, the image will be rejected by the verification module because of its poor quality, inducing a high false reject rate (FRR). The FRR may also increase when iris localization fails or when the pupil is too dilated. To improve on existing methods, we propose to use video sequences acquired in real time by a camera. In order to keep the computational load of iris identification unchanged, we propose a new method to estimate the iris characteristics. First, we propose a new iris texture characterization based on the Fourier-Mellin transform, which is less sensitive to pupil dilation than previous methods. Then, we develop a new iris localization algorithm that is robust to variations in quality (partial occlusions due to eyelids and eyelashes, light reflections, etc.). Finally, we introduce a new, fast criterion for selecting suitable images from an iris video sequence for accurate recognition. The accuracy of each step of the algorithm in the whole proposed recognition process is tested and evaluated using our own iris video database and several public image databases, such as CASIA, UBIRIS, and BATH.
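A generic building block of Fourier-Mellin-type descriptors, log-polar resampling about a center point, can be sketched as follows (the paper's iris-specific characterization is not reproduced; grid sizes are illustrative):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar(img, n_r=64, n_theta=64):
    """Resample an image on a log-polar grid about its center. An FFT of
    this grid underlies Fourier-Mellin-type descriptors: rotations and
    scalings about the center become shifts of the resampled grid."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    rho = np.exp(np.linspace(0.0, np.log(min(cy, cx)), n_r))  # log-spaced radii
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    ys = cy + rho[:, None] * np.sin(theta)[None, :]
    xs = cx + rho[:, None] * np.cos(theta)[None, :]
    return map_coordinates(img, [ys, xs], order=1)            # bilinear sampling

yy, xx = np.mgrid[:65, :65]
bump = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 128.0)  # radially symmetric
lp = log_polar(bump)
```

For the radially symmetric test image, every row of the log-polar grid (fixed radius, varying angle) is nearly constant, as expected.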
We discuss the concept of the direction image multiresolution, which is derived as a property of the 2-D discrete Fourier transform when it is split by 1-D transforms. The N×N image, where N is a power of 2, is considered as a unique set of splitting-signals in the paired representation, which is a unitary 2-D frequency and 1-D time representation. The number of splitting-signals is 3N-2; they have different durations, carry the spectral information of the image in disjoint subsets of frequency points, and can be calculated from the projection data along one of 3N/2 angles. The paired representation leads to the composition of the image from a set of 3N-2 direction images, which defines the directed multiresolution and contains the periodic components of the image. We also introduce the concept of the resolution map, obtained by uniting all direction images into log2 N series. In the resolution map, all the different periodic components (or structures) of the image are packed into an N×N matrix, which can be used in image processing for enhancement, filtering, and compression.
In a recent work, an interleaved sensor that parallels the human retina was proposed to enhance the ability to acquire images under very different lighting conditions. We present an implementation of that interleaved sensor based on the new transverse field detector, a CMOS photosensitive device for imaging applications that performs color detection without color filters. This implementation adds some advantages of the filterless detector in terms of color acquisition and spatial resolution under very high or very low lighting conditions. It also adds the flexibility to electronically reconfigure the interleaved sensor pattern, adapting it to the current lighting conditions.
We present a field-programmable gate array (FPGA)-based hardware architecture for image processing as well as novel algorithms for fast autoexposure control and color filter array (CFA) demosaicing utilizing a CMOS image sensor (CIS). The proposed hardware architecture includes the basic color processing functions of black-level correction, noise reduction, autoexposure control, auto-white-balance adjustment, CFA demosaicing, and gamma correction, and applies the advanced peripheral bus architecture in its implementation. Traditional autoexposure control algorithms reach a proper exposure level so slowly that a fast autoexposure control method is necessary. Based on the optical-electrical characteristics of the CIS, we present a fast autoexposure control algorithm that guarantees both speed and accuracy. To ensure the peak SNR performance of the demosaiced images of the CIS while reducing the computational cost, the proposed demosaicing algorithm improves on the adaptive edge-sensitive algorithm and the fuzzy assignment algorithm. The experimental results show that the proposed hardware architecture works well on the FPGA development board and produces better-quality images.
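The idea behind fast autoexposure, assuming an approximately linear optical-electrical response so the exposure can jump to the target in one multiplicative step rather than by fixed increments, can be sketched as (all names and targets below are illustrative, not the paper's algorithm):

```python
def fast_autoexposure(measure_luma, exposure, target=0.5, tol=0.05, max_iter=8):
    """Fast autoexposure sketch: assuming mean luminance is roughly linear
    in exposure, apply one multiplicative correction per frame toward the
    target instead of stepping by fixed increments."""
    for _ in range(max_iter):
        luma = measure_luma(exposure)
        if abs(luma - target) <= tol:
            break                                # exposure is good enough
        exposure *= target / max(luma, 1e-6)     # one-step linear correction
    return exposure

# toy sensor model: luminance proportional to exposure, clipping at saturation
sensor = lambda e: min(0.02 * e, 1.0)
exposure = fast_autoexposure(sensor, exposure=1.0)
```

Under the linear model the loop converges in a single correction; saturation or nonlinearity simply costs a few extra iterations.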
TOPICS: Video, 3D image processing, Receivers, Computer programming, Imaging systems, Multimedia, 3D displays, 3D video compression, Cameras, Image quality
We present a 3-D mobile broadcasting system based on a depth-image-based rendering (DIBR) technique in terrestrial digital multimedia broadcasting (T-DMB). It is well known that a 3-D mobile broadcasting service based on the DIBR technique can be one of the solutions to meet service requirements, because the required bit rates of depth images in DIBR schemes are lower than the additional video bit rates of other 3-D formats, while keeping good 3-D quality and guaranteeing backward compatibility with conventional broadcasting systems. We propose an implementation of a DIBR-based 3-D T-DMB system that supports real-time rendering with good image quality and a realistic depth effect at the receiver, verifying that it is applicable to mobile broadcasting. Specifically, the proposed 3-D T-DMB receiver adopts a look-up table (LUT)-based simultaneous method to accomplish the real-time implementation of the DIBR algorithms, including warping, hole filling, and interleaving. Moreover, we establish the parameter values needed to generate the LUT based on theoretical analysis. Verification is accomplished through objective and subjective evaluations, based on simulation and on a real-time implementation of the system under actual service conditions.
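A minimal DIBR sketch (per-pixel horizontal shift from depth, forward warping, and naive hole filling) illustrates what the receiver computes; the baseline scaling and hole-filling rule are illustrative assumptions, and the paper's receiver replaces the per-pixel arithmetic with a precomputed LUT over the depth levels:

```python
import numpy as np

def dibr_warp(color, depth, baseline=5.0):
    """Toy DIBR: disparity proportional to 8-bit depth (255 = nearest,
    largest shift), forward warp into the virtual view, fill holes from
    the left neighbor."""
    h, w = color.shape
    # disparity takes one of 256 values -> a look-up table can replace this
    disp = np.round(baseline * depth / 255.0).astype(int)
    virt = np.full((h, w), -1.0)                 # -1 marks holes
    for y in range(h):
        for x in range(w):
            nx = x - disp[y, x]                  # horizontal shift only
            if 0 <= nx < w:
                virt[y, nx] = color[y, x]
    for y in range(h):                           # naive hole filling
        for x in range(1, w):
            if virt[y, x] < 0:
                virt[y, x] = virt[y, x - 1]
    virt[virt < 0] = 0.0
    return virt

color = np.tile(np.arange(16.0), (4, 1))
depth = np.full((4, 16), 255.0)   # constant depth -> uniform 5-px shift
virtual = dibr_warp(color, depth)
```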
We propose a chromatic aberration (CA) reduction technique that removes artifacts caused by lateral and longitudinal CA simultaneously. In general, most visible CA-related artifacts appear locally in the neighborhoods of strong edges. Because these artifacts usually have local characteristics, they cannot be removed well by regular global warping methods. Therefore, we designed a nonlinear partial differential equation (PDE) in which the local characteristics of the CA are taken into account. The proposed algorithm estimates the regions with apparent CA artifacts and the ratios of the magnitudes between the color channels. Using this information, the proposed PDE matches the gradients of the edges in the red and blue channels to the gradient in the green channel, which aligns the positions of the edges while simultaneously deblurring them. Experimental results show that the proposed method can effectively remove even significant CA artifacts, such as the purple fringing produced by the image sensor, and achieves better performance than existing algorithms.
TOPICS: Machine vision, Computer vision technology, Human vision and color perception, Object recognition, 3D modeling, Visualization, Visual system, Analytical research, Visual process modeling, Multimedia