This PDF file contains the front matter associated with SPIE
Proceedings Volume 7537, including the Title Page, Copyright
information, Table of Contents, the Conference Committee listing and introduction.
We are just a few years away from celebrating the 200th anniversary of photography. The first permanent photographic record was made by Niépce in 1826: the view from his window at Le Gras. After many development cycles, including some periods of stagnation, photography is now experiencing an amazing period of growth. The changes under way since the mid-1990s, and continuing over the next several years, will completely transform photography and its industry. We propose that the digital photography revolution can be divided into two phases. The first, from about 1994 to 2009, was primarily the transformation of film-based equipment into its digital counterparts. Now, in the second phase, photography is starting to change into something completely different, with forces like social networks, cell phone cameras, and computational photography changing the business, the methods, and the use of photographs.
In this study, we evaluated the effect that pixel size has upon people's preferences for images. We used multispectral images of faces as the scene data and simulated the responses of sensors with different pixel sizes while the other sensor parameters were kept constant. Subjects were asked to choose between pairs of images; we found that preference judgments were primarily influenced by the visibility of uncorrelated noise in the images. We used the S-CIELAB metric (ΔE) to predict the visibility of the uncorrelated image noise. The S-CIELAB difference between a test image and an ideal reference image was monotonically related to the preference score.
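As a rough illustration of how such a spatially filtered color difference can be computed, the sketch below applies a single Gaussian pre-filter to each CIELAB channel before taking a per-pixel ΔE. Real S-CIELAB filters opponent-color channels with viewing-distance-dependent kernels, so the function name and the σ below are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scielab_like_delta_e(test_lab, ref_lab, sigma=2.0):
    """Rough S-CIELAB-style difference map (illustrative only).

    Real S-CIELAB filters opponent channels with separate,
    viewing-distance-dependent kernels; here a single Gaussian
    stands in for that spatial pre-filtering.
    """
    t = np.stack([gaussian_filter(test_lab[..., c], sigma) for c in range(3)], axis=-1)
    r = np.stack([gaussian_filter(ref_lab[..., c], sigma) for c in range(3)], axis=-1)
    return np.sqrt(((t - r) ** 2).sum(axis=-1))   # per-pixel ΔE map

# A scalar preference predictor could then be the mean of this map
# against the ideal reference (monotone with the preference score).
```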
Reducing hot pixels is a challenge commonly faced in the image sensor industry, and various techniques are used to address this problem, including image processing and process optimization. This paper discusses an approach to reducing hot pixels by using Technology Computer Aided Design (TCAD) simulations to optimize the pixel at the process level. A correlation between empirical hot pixel data and the simulated electric field is discussed. For this given process, there is good correlation between hot pixel count and the electric field along the top p-n junction of the photodiode. By optimizing the top p-n junction, we were able to reduce the hot pixel count to less than 100 ppm at 45 °C for a threshold value of 15% of full scale. However, careful consideration must be given during the process optimization. When photodiode implant doses and energies are changed, image lag performance can deteriorate. Changing photodiode implant doses and energies can also result in n-type penetration through the polysilicon gate, which can lead to increased dark current. A careful design will avoid such problems. During our process optimization, we successfully reduced the hot pixel count while still achieving a low dark current of less than 3 e-/sec per pixel at 45 °C.
Accurate noise level estimation is essential to assure good performance of noise reduction filters. Noise contaminating raw images is typically modeled as additive, white, and Gaussian-distributed (AWGN); however, raw images are actually affected by a mixture of noise sources that combine according to a signal-dependent noise model. The assumption of a constant noise level across the whole dynamic range is therefore a simplification that prevents precise sensor noise characterization and filtering; in reality, the local noise standard deviation depends on the signal level measured at each location of the CFA (Color Filter Array) image.
This work proposes a method for determining the noise curves that map each CFA signal intensity to its corresponding noise level, without the need for a controlled test environment and specific test patterns. The process consists of analyzing sets of heterogeneous raw CFA images, allowing noise characterization of any image sensor. In addition, we show how the estimated noise level curves can be exploited to filter a CFA image, using an adaptive, signal-dependent Gaussian filter.
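A minimal sketch of this kind of noise-curve estimation, assuming flat-patch statistics are a usable proxy: it bins (mean, standard deviation) pairs from low-gradient patches of a single CFA plane into an intensity-to-noise curve. The patch size, gradient threshold, and bin count are arbitrary illustrative choices, not the paper's.

```python
import numpy as np

def estimate_noise_curve(plane, patch=16, grad_thresh=2.0, bins=32):
    """Estimate a signal-dependent noise curve from one CFA plane
    (e.g., cfa[0::2, 0::2] for the R sites of an RGGB mosaic).

    Illustrative sketch: collect near-flat patches, record
    (mean, std) pairs, and bin them into a curve mapping intensity
    to noise level.
    """
    means, stds = [], []
    for y in range(0, plane.shape[0] - patch, patch):
        for x in range(0, plane.shape[1] - patch, patch):
            p = plane[y:y+patch, x:x+patch].astype(float)
            gy, gx = np.gradient(p)
            if np.hypot(gy, gx).mean() < grad_thresh:   # keep flat patches only
                means.append(p.mean()); stds.append(p.std(ddof=1))
    means, stds = np.asarray(means), np.asarray(stds)
    edges = np.linspace(means.min(), means.max(), bins + 1)
    idx = np.digitize(means, edges) - 1
    curve = [(edges[i:i+2].mean(), np.median(stds[idx == i]))
             for i in range(bins) if np.any(idx == i)]
    return np.asarray(curve)   # rows: (intensity, noise sigma)
```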
Threshold selection in image noise suppression and edge sharpening algorithms often retains an element of subjective judgment, with groups of subjects preferring more or less than the nominally optimal amounts of noise reduction and edge enhancement, respectively. We propose in this paper a novel image noise reduction and edge enhancement technique that includes a profiling step for optimal noise threshold selection and for the resulting spatial frequency response (SFR). Our filter is implemented in the wavelet domain because of the flexibility it offers in spatially examining frequencies of interest. The method allows the algorithm to be selectively steered to filter the low-light 'chrominance noise' of certain hues in color filter array (CFA) cameras more than others, and to build a camera noise 'pseudo profile'.
Four-channel sensors were evaluated with an image sensor model, and their performance was compared with three-channel sensors considering both color reproduction accuracy and photon shot noise. When noise was not considered, a sensor with the usual RGB plus an additional channel between G and B performed best. But when the emphasis was on noise, a sensor with B, R, and two Gs was best, because reducing G-channel noise effectively reduces the noise of all L*a*b* components. The Bayer color filter array (CFA) samples G at twice the density of R or B. This CFA is considered efficient in resolution, but the result suggests it is also efficient in SNR. Compared with three-channel sensors, the four-channel sensors were better in color reproduction but worse in noise. An image preference model proposed by Kuniba and Berns was used to evaluate them, and it showed that neither was clearly superior to the other.
The current approach used for demosaic algorithm evaluation is mostly empirical and does not offer a meaningful quantitative metric, which disconnects theoretical results from the results seen in practice. In camera phones, the difference is even bigger because of the low signal-to-noise ratios and the overlapping of the color filters. This implies that a demosaic algorithm has to be designed to degrade gracefully in the presence of noise. The demosaic algorithm also has to be tolerant of high color correlations. In this paper we propose a special class of images and a methodology that can be used to produce a metric indicative of real-world demosaic algorithm performance. The test image that we propose is formed using a dual chirp signal that is a function of the distance from the center.
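A sketch of a radial chirp target of this general kind, assuming a linear frequency sweep with radius; the paper's exact dual-chirp construction is not specified here, so f0, f1, and the phase law are illustrative.

```python
import numpy as np

def radial_chirp_target(size=512, f0=0.01, f1=0.45):
    """Synthesize a radial chirp test image (illustrative).

    Instantaneous frequency grows linearly with distance from the
    center, sweeping from f0 to f1 cycles/pixel at the largest radius.
    """
    y, x = np.mgrid[:size, :size] - size / 2.0
    r = np.hypot(x, y)
    k = (f1 - f0) / r.max()
    phase = 2 * np.pi * (f0 * r + 0.5 * k * r ** 2)  # integral of f(r) = f0 + k*r
    return 0.5 + 0.5 * np.cos(phase)
```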
As white balance algorithms employed in mobile phone cameras become increasingly sophisticated by using, e.g.,
elaborate white-point estimation methods, a proper color calibration is necessary. Without such a calibration,
the estimation of the light source for a given situation may go wrong, giving rise to large color errors. At the
same time, the demands for efficiency in the production environment require the calibration to be as simple
as possible. Thus it is important to find the correct balance between image quality and production efficiency
requirements.
The purpose of this work is to investigate camera color variations using a simple model in which the sensor and IR filter are specified in detail. As input to the model, spectral data of the 24-patch Macbeth ColorChecker were used. These data were combined with the spectral irradiance of three different light sources: CIE A, D65, and F11. The sensor variations were determined from a very large population, from which six corner samples were picked for further analysis. Furthermore, a set of 100 IR filters was picked and measured. The resulting images generated by the model were then analyzed in CIELAB space, and color errors were calculated using the ΔE94 metric. The results of the analysis show that the maximum deviations from the typical values are small enough to suggest that a white balance calibration is sufficient. Furthermore, it is also demonstrated that the color temperature dependence is small enough to justify the use of only one light source in a production environment.
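The ΔE94 color difference used above is a standard formula; the helper below computes it with the usual graphic-arts weighting constants (kL = 1, K1 = 0.045, K2 = 0.015).

```python
import numpy as np

def delta_e94(lab1, lab2, kL=1.0, K1=0.045, K2=0.015):
    """CIE ΔE*94 color difference (graphic-arts weights)."""
    L1, a1, b1 = np.moveaxis(np.asarray(lab1, float), -1, 0)
    L2, a2, b2 = np.moveaxis(np.asarray(lab2, float), -1, 0)
    C1, C2 = np.hypot(a1, b1), np.hypot(a2, b2)
    dL, dC = L1 - L2, C1 - C2
    # Hue difference squared, clipped against tiny negative round-off
    dH2 = np.maximum((a1 - a2)**2 + (b1 - b2)**2 - dC**2, 0.0)
    SL, SC, SH = 1.0, 1.0 + K1 * C1, 1.0 + K2 * C1
    return np.sqrt((dL / (kL * SL))**2 + (dC / SC)**2 + dH2 / SH**2)
```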
Recently, R. H. Chan, T. F. Chan, L. Shen, and Z. Shen proposed an image-restoration method, referred to as the C2-S2 method, in the shift-invariant Haar wavelet transform domain. The C2-S2 method is suitable for image restoration in a digital color camera and restores a sharp color image in lightly noisy cases, but in heavily noisy cases it produces colored artifacts originating from noise. In such cases, the image-restoration process should be split into a color-interpolation stage, a denoising stage, and a deblurring stage; along this line, we present an approach to restoring a high-ISO-sensitivity color image. Our approach first demosaics the observed color data with the bilinear interpolation method, which is robust against observation noise but causes image blur. Next, our approach applies our previously proposed spatially adaptive soft color-shrinkage to each shift-invariant Haar wavelet coefficient of the demosaicked color image, to produce a denoised color image. Finally, our approach applies to the denoised color image a color-image restoration method that extends the C2-S2 method and employs our previously proposed soft color-shrinkage. Experimental simulations conducted on raw color data captured with an SLR digital color camera at ISO 6400 demonstrate that our approach restores a high-quality color image even in the high-ISO-sensitivity case, without producing noticeable artifacts.
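The first stage of this pipeline, bilinear demosaicing, can be written compactly with the standard textbook interpolation kernels; the sketch below assumes an RGGB Bayer layout and is not taken from the paper's code.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(cfa, pattern="RGGB"):
    """Bilinear demosaicing of a Bayer CFA image using the standard
    textbook kernels (applied to the channel-masked mosaic)."""
    H, W = cfa.shape
    masks = {c: np.zeros((H, W), bool) for c in "RGB"}
    offsets = {"RGGB": {"R": (0, 0), "G": [(0, 1), (1, 0)], "B": (1, 1)}}[pattern]
    masks["R"][offsets["R"][0]::2, offsets["R"][1]::2] = True
    masks["B"][offsets["B"][0]::2, offsets["B"][1]::2] = True
    for oy, ox in offsets["G"]:
        masks["G"][oy::2, ox::2] = True
    kG  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]], float) / 4.0
    kRB = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 4.0
    out = np.zeros((H, W, 3))
    for i, (c, k) in enumerate([("R", kRB), ("G", kG), ("B", kRB)]):
        out[..., i] = convolve(cfa * masks[c], k, mode="mirror")
    return out
```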
Conventional point spread function (PSF) measurement methods often use parametric models for the estimation
of the PSF. This limits the shape of the PSF to a specific form provided by the model. However, there are
unconventional imaging systems like multispectral cameras with optical bandpass filters, which produce an, e.g.,
unsymmetric PSF. To estimate such PSFs we have developed a new measurement method utilizing a random noise
test target with markers: After acquisition of this target, a synthetic prototype of the test target is geometrically
transformed to match the acquired image with respect to its geometric alignment. This allows us to estimate the
PSF by direct comparison between prototype and image. The noise target allows us to evaluate all frequencies
due to the approximately "white" spectrum of the test target - we are not limited to a specifically shaped PSF.
The registration of the prototype pattern gives us the opportunity to take the specific spectrum into account
and not just a "white" spectrum, which might be a weak assumption in small image regions. Based on the PSF
measurement, we perform a deconvolution. We present comprehensive results for the PSF estimation using our
multispectral camera and provide deconvolution results.
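A minimal Fourier-domain sketch of this estimation idea: with the registered prototype x and the captured image y modeled as y ≈ x * h, a regularized spectral quotient recovers h, and the near-white target spectrum keeps the denominator well conditioned at all frequencies. The regularization weight and support size below are illustrative.

```python
import numpy as np

def estimate_psf(prototype, captured, eps=1e-3, support=21):
    """Estimate a PSF from a registered noise target (illustrative).

    Tikhonov-regularized Fourier quotient of captured over prototype;
    the result is cropped to a small finite support and normalized.
    """
    X = np.fft.fft2(prototype)
    Y = np.fft.fft2(captured)
    H = (np.conj(X) * Y) / (np.abs(X) ** 2 + eps)
    psf = np.fft.fftshift(np.real(np.fft.ifft2(H)))
    c = np.array(psf.shape) // 2
    s = support // 2
    psf = psf[c[0]-s:c[0]+s+1, c[1]-s:c[1]+s+1]  # crop to finite support
    return psf / psf.sum()
```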
We introduce a new metric, the visible signal-to-noise ratio (vSNR), to analyze how pixel-binning and resizing methods influence noise visibility in uniform areas of an image. The vSNR is the inverse of the standard deviation of the S-CIELAB representation of a uniform field; its units are 1/ΔE. The vSNR metric can be used in simulations to predict how imaging system components affect noise visibility. We use simulations to evaluate two image rendering methods: pixel binning and digital resizing. We show that vSNR increases with scene luminance, pixel size, and viewing distance, and decreases with read noise. Under low illumination conditions, and for pixels with relatively high read noise, images generated with the binning method have less noise (higher vSNR) than resized images, though noticeably lower spatial resolution. The binning method also reduces demands on the ADC rate and channel throughput. When comparing binning and resizing, there is an image quality trade-off between noise and blur; depending on the application, users may prefer one error over the other.
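Given an S-CIELAB representation of a uniform field, the metric itself is a one-liner, shown below exactly as defined in the abstract.

```python
import numpy as np

def vsnr(scielab_map):
    """vSNR of a uniform field: the inverse of the standard deviation
    of its S-CIELAB representation, in units of 1/ΔE. Higher vSNR
    means less visible noise.
    """
    return 1.0 / np.std(scielab_map)
```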
The dead leaves model was recently introduced as a method for measuring the spatial frequency response (SFR) of camera systems. The target consists of a series of overlapping opaque circles with a uniform gray-level distribution and radii distributed as r^-3. Unlike the traditional knife-edge target, the SFR derived from the dead leaves target penalizes systems that employ aggressive noise reduction. Initial studies have shown that the dead leaves SFR correlates well with sharpness/texture-blur preference, and thus the target can potentially be used as a surrogate for more expensive subjective image quality evaluations. In this paper, the dead leaves target is analyzed for measurement of camera system spatial frequency response. We determined that the power spectral density (PSD) of the ideal dead leaves target does not exhibit a simple power-law dependence, and scale invariance is only loosely obeyed. An extension to the ideal dead leaves PSD model is proposed, including a correction term to account for system noise. With this extended model, the SFR of several camera systems with a variety of formats, ranging from 3 to 10 megapixels, was measured; the effects of handshake motion blur are also analyzed via the dead leaves target.
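A sketch of the noise-corrected SFR computation this extension implies, assuming the three PSD terms have already been estimated; the estimation procedures themselves are the substance of the paper and are not reproduced here.

```python
import numpy as np

def dead_leaves_sfr(psd_measured, psd_noise, psd_target):
    """Dead-leaves SFR with a noise correction term (illustrative).

    Subtracts the system noise PSD before dividing by the ideal
    target PSD, then takes the square root to go from power to
    amplitude response.
    """
    num = np.clip(psd_measured - psd_noise, 0.0, None)
    return np.sqrt(num / psd_target)
```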
We describe a procedure to evaluate the image quality of a camera in terms of texture preservation. We use a stochastic model from stochastic geometry known as the dead leaves model. It intrinsically reproduces occlusion phenomena, producing edges at any scale and any orientation with possibly low contrast. An advantage of this synthetic model is that it provides a ground truth in terms of image statistics. In particular, its power spectrum is a power law, as is the case for many natural textures. Therefore, we can define a texture MTF as the ratio of the Fourier transform of the camera picture to the Fourier transform of the original target, and we fully describe the procedure to compute it. We compare the results with the traditional MTF (computed on a slanted edge as defined in the ISO 12233 standard) and show that the texture MTF is indeed more appropriate for describing fine-detail rendering.
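A minimal sketch of such a texture MTF, assuming a registered camera picture and original target: radially averaged Fourier magnitudes are divided and normalized at the DC bin. The binning choices are illustrative.

```python
import numpy as np

def texture_mtf(camera_img, target_img, nbins=64):
    """Texture MTF: radially averaged ratio of the Fourier magnitude
    of the camera picture to that of the target (sketch; assumes the
    two images are registered and large enough that no bin is empty)."""
    F_cam = np.abs(np.fft.fftshift(np.fft.fft2(camera_img)))
    F_tgt = np.abs(np.fft.fftshift(np.fft.fft2(target_img)))
    h, w = camera_img.shape
    fy, fx = np.mgrid[:h, :w]
    r = np.hypot(fy - h / 2, fx - w / 2)
    edges = np.linspace(0, min(h, w) / 2, nbins + 1)
    idx = np.digitize(r.ravel(), edges) - 1
    mtf = np.array([F_cam.ravel()[idx == i].mean() /
                    F_tgt.ravel()[idx == i].mean()
                    for i in range(nbins)])
    freqs = 0.5 * (edges[:-1] + edges[1:]) / min(h, w)  # cycles/pixel
    return freqs, mtf / mtf[0]                          # normalize at DC bin
```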
The aim of this paper is to define an objective measurement for evaluating the performance of a digital camera. The challenge is to combine different flaws involving geometry (such as distortion or lateral chromatic aberration), light (such as luminance and color shading), and statistical phenomena (such as noise). We introduce the concept of information capacity, which accounts for all the main defects that can be observed in digital images, whether due to the optics or to the sensor. The information capacity describes the potential of the camera to produce good images. In particular, digital processing can correct some flaws (like distortion). Our definition of information takes possible correction into account, as well as the fact that processing can neither retrieve lost information nor create any. This paper extends our previous work, in which the information capacity was defined only for RAW sensors. The concept is extended to cameras with optical defects such as distortion, lateral and longitudinal chromatic aberration, and lens shading.
Recently, we proposed a new imaging device, called the gigavision camera, whose most important characteristic is that its pixels have a binary response. The response function of a gigavision sensor is non-linear and similar to a logarithmic function, which makes the camera suitable for high dynamic range imaging. One important parameter in the gigavision camera is the threshold for generating binary pixels. The threshold T is the number of photo-electrons necessary for the pixel output to switch from "0" to "1". In this paper, a theoretical analysis of the threshold's influence in the gigavision camera is given. If the threshold in the gigavision sensor is large, there will be a "dead zone" in the response function of the sensor. A method of adding artificial light is proposed to solve the "dead zone" problem. Through theoretical analysis and experimental results based on synthesized images, we show that for high light intensity, a gigavision camera with a large threshold and added light works better than one with unity threshold. Experimental results with a prototype camera based on single-photon avalanche diodes (SPADs) are also presented.
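A small Monte-Carlo sketch of the thresholded binary pixel, assuming Poisson photo-electron arrivals: for T > 1 the mean response stays near zero until the intensity approaches T, which is the "dead zone", and an intensity offset (artificial light) moves the operating point past it.

```python
import numpy as np

def binary_pixel_response(intensity, T, n_frames=1000, rng=None):
    """Mean response of a gigavision-style binary pixel (Monte Carlo).

    The pixel outputs "1" once it accumulates at least T
    photo-electrons per exposure; `intensity` is the mean arrival
    rate in electrons per exposure.
    """
    rng = rng or np.random.default_rng(0)
    electrons = rng.poisson(intensity, size=n_frames)
    return (electrons >= T).mean()   # fraction of "1" outputs

# Example: with T = 10 the response is ~0 for intensity << 10; adding
# an artificial-light offset to `intensity` escapes this dead zone.
```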
Defocus imaging techniques, involving the capture and reconstruction of purposely out-of-focus images, have
recently become feasible due to advances in deconvolution methods. This paper evaluates the feasibility of
defocus imaging as a means of increasing the effective dynamic range of conventional image sensors. Blurring
operations spread the energy of each pixel over the surrounding neighborhood; bright regions transfer energy to
nearby dark regions, reducing dynamic range. However, there is a trade-off between image quality and dynamic
range inherent in all conventional sensors.
The approach involves optically blurring the captured image by turning the lens out of focus, modifying that
blurred image with a filter inserted into the optical path, then recovering the desired image by deconvolution.
We analyze the properties of the setup to determine whether any combination can produce a dynamic range
reduction with acceptable image quality. Our analysis considers both properties of the filter, as a measure of local contrast reduction, and the distribution of image intensity at different scales, as a measure of global contrast reduction. Our results show that while combining state-of-the-art aperture filters and deconvolution methods
can reduce the dynamic range of the defocused image, providing higher image quality than previous methods,
rarely does the loss in image fidelity justify the improvements in dynamic range.
Recent research demonstrates the advantage of designing electro-optical imaging systems by jointly optimizing the optical
and digital subsystems. The optical systems designed using this joint approach intentionally introduce large and often
space-varying optical aberrations that produce blurry optical images. Digital sharpening restores reduced contrast due to
these intentional optical aberrations. Computational imaging systems designed in this fashion have several advantages
including extended depth-of-field, lower system costs, and improved low-light performance. Currently, most consumer
imaging systems lack the necessary computational resources to compensate for these optical systems with large aberrations
in the digital processor. Hence, the exploitation of the advantages of the jointly designed computational imaging system
requires low-complexity algorithms enabling space-varying sharpening.
In this paper, we describe a low-cost algorithmic framework and associated hardware enabling the space-varying finite impulse response (FIR) sharpening required to restore heavily aberrated optical images. Our framework leverages the
space-varying properties of optical images formed using rotationally-symmetric optical lens elements. First, we describe
an approach to leverage the rotational symmetry of the point spread function (PSF) about the optical axis allowing computational
savings. Second, we employ a specially designed bank of sharpening filters tuned to the specific radial variation
common to optical aberrations. We evaluate the computational efficiency and image quality achieved by using this low-cost
space-varying FIR filter architecture.
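A toy version of a radial filter bank, assuming nearest-ring filter selection; a hardware design would interpolate between ring filters and would avoid the per-ring full-frame convolutions used here for brevity.

```python
import numpy as np
from scipy.ndimage import convolve

def radial_sharpen(img, filter_bank):
    """Space-varying FIR sharpening via a radial filter bank (sketch).

    Exploits rotational symmetry of the PSF about the optical axis:
    one small FIR filter per radial ring, selected by nearest ring.
    """
    n_rings = len(filter_bank)
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    r = np.hypot(y - h / 2, x - w / 2)
    ring = np.minimum((r / r.max() * n_rings).astype(int), n_rings - 1)
    out = np.zeros_like(img, dtype=float)
    for i, k in enumerate(filter_bank):   # one pass per ring (for clarity)
        out[ring == i] = convolve(img.astype(float), k, mode="mirror")[ring == i]
    return out
```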
The under-constrained nature of illuminant estimation means that certain assumptions, such as the gray-world theory, are needed to resolve the problem. Including more constraints in this process may help explore the useful information in an image and improve the accuracy of the estimated illuminant, provided that the constraints hold. Based on the observation that most personal images contain one or more of the following: neutral objects, human beings, sky, and plants, we propose a method for illuminant estimation through the clustering of pixels of gray and of three dominant memory colors: skin tone, sky blue, and foliage green. Analysis shows that samples of these colors cluster around small areas under different illuminants, and their characteristics can be used to effectively detect pixels falling into each of the categories. The algorithm requires knowledge of the spectral sensitivity response of the camera, and a spectral database consisting of the CIE standard illuminants and reflectance or radiance databases of samples of these colors.
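An illustrative pixel classifier of this kind; the cluster centers below are invented placeholders in (r, g) chromaticity space, whereas the paper derives the actual cluster characteristics from the camera's spectral sensitivities and the illuminant/reflectance databases.

```python
import numpy as np

# Hypothetical cluster centers in (r, g) chromaticity space; real
# centers would come from the camera's spectral response and the
# spectral databases described above.
MEMORY_COLORS = {"gray": (0.33, 0.33), "skin": (0.45, 0.35),
                 "sky": (0.25, 0.32), "foliage": (0.30, 0.42)}

def classify_memory_colors(rgb, tol=0.04):
    """Label each pixel by its nearest memory-color cluster (sketch)."""
    s = rgb.sum(axis=-1, keepdims=True) + 1e-9
    chroma = rgb[..., :2] / s                 # (r, g) chromaticities
    labels = np.full(rgb.shape[:2], -1, int)  # -1 = unclassified
    for i, (name, c) in enumerate(MEMORY_COLORS.items()):
        d = np.hypot(chroma[..., 0] - c[0], chroma[..., 1] - c[1])
        labels[(d < tol) & (labels == -1)] = i
    return labels
```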
Face detection has been implemented in many digital still cameras and camera phones with the promise of enhancing
existing camera functions (e.g. auto exposure) and adding new features to cameras (e.g. blink detection). In this study we
examined the use of face detection algorithms in assisting auto exposure (AE). The set of 706 images, used in this study,
was captured using Canon Digital Single Lens Reflex cameras and subsequently processed with an image processing
pipeline. A psychophysical study was performed to obtain optimal exposure along with the upper and lower bounds of
exposure for all 706 images. Three methods of marking faces were utilized: manual marking, face detection algorithm A
(FD-A), and face detection algorithm B (FD-B). The manual marking method found 751 faces in 426 images, which served as the ground truth for face regions of interest. The remaining images contained no faces, or faces too small to be considered detectable. The two face detection algorithms differ in resource requirements and in performance: FD-A uses less memory and a lower gate count than FD-B, but FD-B detects more faces and produces fewer false positives. A face detection assisted auto exposure algorithm was developed and tested against the evaluation results from
the psychophysical study. The AE test results showed noticeable improvement when faces were detected and used in
auto exposure. However, the presence of false positives would negatively impact the added benefit.
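A hypothetical sketch of how detected faces can steer AE: face-region luminance is blended with the global mean and converted into an EV correction. The blend weight and mid-tone target below are illustrative, not the paper's values.

```python
import numpy as np

def face_assisted_ev_shift(luma, face_boxes, target=0.18, face_weight=0.7):
    """EV adjustment from detected faces (illustrative).

    luma: linear luminance image in [0, 1]; face_boxes: list of
    (x, y, w, h) rectangles. Returns the EV shift that moves the
    blended face/global luminance to a mid-tone target.
    """
    global_mean = luma.mean()
    if face_boxes:
        face_mean = np.mean([luma[y:y+h, x:x+w].mean()
                             for (x, y, w, h) in face_boxes])
        metered = face_weight * face_mean + (1 - face_weight) * global_mean
    else:
        metered = global_mean                  # fall back to plain AE
    return np.log2(target / metered)           # EV correction to apply
```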
High-level image content analysis spans many fields, such as face recognition, smile detection, automatic red-eye removal, iris recognition, and fingerprint verification. Techniques in these fields need to be supported by increasingly powerful and accurate routines. The aim of the proposed algorithm is to detect elliptical shapes in digital input images. It can be successfully applied in applications such as signal detection or red-eye removal, where assessing the degree of elliptical shape can improve performance. The method has been designed to handle low resolution and partial occlusions. The algorithm is based on contour signature analysis and exploits some geometrical properties of elliptical points. The proposed method is structured in two parts: first, the best ellipse approximating the object shape is estimated; then, through the analysis and comparison of the reference ellipse signature and the object signature, the algorithm establishes whether the object is elliptical or not. The first part is based on symmetry properties of the points belonging to the ellipse, while the second part is based on the signature operator, which is a functional representation of a contour. A set of real images has been tested, and the results demonstrate the effectiveness of the algorithm in terms of both accuracy and execution time.
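A minimal version of a signature operator of this kind, assuming the signature is centroid distance as a function of angle, resampled on a regular angular grid; for an ideal ellipse the resulting curve is smooth and symmetric, which is the sort of property such an assessment can exploit.

```python
import numpy as np

def contour_signature(contour, n_samples=360):
    """Signature of a closed contour: centroid distance vs. angle.

    contour: (N, 2) array of (x, y) boundary points. Returns a
    regular angular grid and the radius resampled onto it.
    """
    pts = np.asarray(contour, float)
    c = pts.mean(axis=0)                       # centroid
    d = pts - c
    theta = np.arctan2(d[:, 1], d[:, 0])
    radius = np.hypot(d[:, 0], d[:, 1])
    order = np.argsort(theta)                  # interp needs sorted angles
    grid = np.linspace(-np.pi, np.pi, n_samples, endpoint=False)
    return grid, np.interp(grid, theta[order], radius[order],
                           period=2 * np.pi)
```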
The aim of this research is to derive illuminant-independent HDR imaging modules that can optimally, multispectrally reconstruct every color of concern in the high dynamic range of original images for preferable cross-media color reproduction applications. Each module, based on either a broadband or a multispectral approach, incorporates models of perceptual HDR tone mapping and device characterization. In this study, an HDR digital camera with the xvYCC format was used to capture HDR scene images for testing. A tone-mapping module was derived based on a multiscale representation of the human visual system, using equations similar to the Michaelis-Menten photoreceptor adaptation equation. Additionally, an adaptive bilateral gamut-mapping algorithm using a multiple-converging-point approach (previously derived) was incorporated, with or without adaptive unsharp masking (USM), to optimize HDR image rendering. An LCD with the Adobe RGB (D65) standard color space was used as a soft-proofing platform to display HDR original RGB images and to evaluate both the rendition quality and the prediction performance of the derived modules. Another LCD with the sRGB standard color space was used to test the gamut-mapping algorithms integrated with the derived tone-mapping module.
Motion blur due to camera movement during image capture is a major factor degrading image quality, and many methods for its removal have been developed. Central to all techniques is the correct recovery of what is known as the point spread function (PSF). A popular technique for estimating the PSF relies on a pair of gyroscopic sensors to measure the hand motion. However, errors caused either by the loss of the translational component of the movement or by the lack of precision of gyro-sensor measurements impede the recovery of a good-quality restored image. To compensate for this, we propose a method that begins with an estimate of the PSF obtained from two gyro sensors and uses an under-exposed image together with the blurred image to adaptively improve it.
The luminance of the under-exposed image is equalized with that of the blurred image. An initial estimate of the PSF is generated from the output signals of the two gyro sensors. The PSF coefficients are then updated using 2D least-mean-square (LMS) algorithms with a coarse-to-fine approach on a grid of points selected from both images.
The refined PSF is used to process the blurred image with known deblurring methods. Our results show that the proposed method leads to superior PSF support and coefficient estimation. The quality of the restored image is also improved compared with the gyro-only approach or with blind image deconvolution.
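A sketch of an LMS-style PSF refinement under the stated setup, assuming the equalized under-exposed image serves as the sharp reference. The step size, iteration count, and full-frame updates are simplifications; the paper works coarse-to-fine on a grid of selected points.

```python
import numpy as np
from scipy.signal import fftconvolve

def lms_refine_psf(psf, sharp, blurred, mu=1e-7, n_iter=50):
    """Refine a gyro-derived PSF with an LMS-style update (sketch).

    error = blurred - sharp * psf; the PSF moves along the
    correlation of the error with the sharp image. Assumes the PSF
    has odd dimensions so the centered crop aligns with its support.
    """
    psf = psf.copy()
    sh, sw = np.array(psf.shape) // 2
    for _ in range(n_iter):
        err = blurred - fftconvolve(sharp, psf, mode="same")
        grad = fftconvolve(err, sharp[::-1, ::-1], mode="same")
        ch, cw = np.array(grad.shape) // 2
        psf += mu * grad[ch-sh:ch+sh+1, cw-sw:cw+sw+1]
        psf = np.clip(psf, 0, None)
        psf /= psf.sum()                       # keep energy normalized
    return psf
```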
This paper presents a method for the effective reduction of artifacts caused by lossy compression algorithms based on block-based discrete cosine transform (DCT) coding, known as JPEG coding. The most common artifacts produced by this type of coding are blocking and ringing. To reduce the effect of coding artifacts caused by significant information loss, a variety of algorithms and methods have been suggested. However, the majority of solutions process all blocks in the image, which increases processing time and resource requirements and over-blurs blocks that are not affected by blocking artifacts. Techniques for ringing artifact detection usually rely on an edge-detection step, a complicated procedure with unknown optimal parameters. In this paper we describe effective procedures for detecting these artifacts and subsequently correcting them. This approach saves a notable amount of computational resources, since not all blocks are involved in the correction procedures. The detection steps are performed in the frequency domain, using only the DCT coefficients of an image. Numerous examples have been analyzed and compared with existing solutions, and the results demonstrate the effectiveness of the proposed technique.
The increasing pixel densities of current CMOS sensors bring new challenges for image sensor designers. Today's sensor modules with miniature lenses often exhibit a considerable amount of color lens shading. This shading is spatially variant and can easily be identified by capturing a flat, textureless Lambertian surface and inspecting the light fall-off and hue change from the image center towards the borders. In this paper we discuss lens shading compensation using spatially dependent gains for each of the four color channels in the Bayer color filter array. We determine reference compensation functions in an off-line calibration and efficiently parameterize each function with a bilinear spline, which we fit to the reference function using constrained least squares with Lagrangian conditions ensuring continuity between the piecewise bilinear functions. For each spline function we optimize a rectilinear grid on which the spline knots are aligned by minimizing the squared errors between the reference and approximated compensation functions. Our evaluations provide quantitative results with real image data using three recent CMOS sensor modules.
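A sketch of evaluating such a gain surface, assuming gains stored at the knots of a rectilinear grid; bilinear interpolation between knots is continuous by construction, which is the property the Lagrangian conditions enforce for the fitted spline.

```python
import numpy as np

def shading_gain(gain_knots, grid_y, grid_x, h, w):
    """Evaluate a bilinear-spline gain surface for one Bayer channel.

    gain_knots: (M, N) gains at the knots of a rectilinear grid given
    by sorted coordinate vectors grid_y (len M) and grid_x (len N),
    assumed to cover the image. Returns an (h, w) gain map.
    """
    ys, xs = np.arange(h), np.arange(w)
    iy = np.clip(np.searchsorted(grid_y, ys, side="right") - 1, 0, len(grid_y) - 2)
    ix = np.clip(np.searchsorted(grid_x, xs, side="right") - 1, 0, len(grid_x) - 2)
    ty = ((ys - grid_y[iy]) / (grid_y[iy + 1] - grid_y[iy]))[:, None]
    tx = ((xs - grid_x[ix]) / (grid_x[ix + 1] - grid_x[ix]))[None, :]
    g00 = gain_knots[np.ix_(iy, ix)]
    g01 = gain_knots[np.ix_(iy, ix + 1)]
    g10 = gain_knots[np.ix_(iy + 1, ix)]
    g11 = gain_knots[np.ix_(iy + 1, ix + 1)]
    return (1-ty)*(1-tx)*g00 + (1-ty)*tx*g01 + ty*(1-tx)*g10 + ty*tx*g11

# Usage: corrected_channel = raw_channel * shading_gain(...)
```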
Classic shrinkage works well for monochrome-image denoising. To utilize inter-channel color correlations, a noisy image undergoes a color transformation from RGB to a luminance-and-chrominance color space, and the luminance and chrominance components are denoised separately. However, this approach cannot cope with the signal-dependent noise of a digital color camera. To utilize the noise's signal dependence, we previously proposed the soft color-shrinkage, in which the inter-channel color correlations are utilized directly in the RGB color space. The soft color-shrinkage works well but involves a large amount of computation. To alleviate this drawback, starting from the l0-l2 optimization problem whose solution yields the hard shrinkage, we introduce the l0 norms of color differences and the l0 norms of color sums into the model, and derive the hard color-shrinkage as its solution. For each triplet of the three primary colors, the hard color-shrinkage has 24 feasible solutions, from which it selects the optimal feasible solution giving the minimal energy. We propose a method to control its shrinkage parameters spatially adaptively, according to both the local image statistics and the noise's signal dependence, and apply the spatially adaptive hard color-shrinkage to the removal of signal-dependent noise in a shift-invariant wavelet transform domain. The hard color-shrinkage performs mostly better than the soft color-shrinkage, from both objective and subjective viewpoints.
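For intuition, the scalar forms of the two shrinkage rules are shown below; the paper's hard color-shrinkage couples each RGB triplet through l0 norms of color differences and sums and selects the best of 24 feasible solutions, which this scalar sketch does not reproduce.

```python
import numpy as np

def soft_shrink(w, t):
    """Soft shrinkage: shrink coefficient magnitudes toward zero by t
    (the solution of the scalar l1-l2 problem)."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def hard_shrink(w, t):
    """Hard shrinkage: keep coefficients whose magnitude exceeds t,
    zero the rest (the solution of the scalar l0-l2 problem)."""
    return np.where(np.abs(w) > t, w, 0.0)
```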
Correct image orientation is often assumed by common imaging applications such as enhancement, browsing, and
retrieval. However, the information provided by camera metadata is often missing or incorrect. In these cases
manual correction is required; otherwise, the images cannot be correctly processed and displayed. In this work
we propose a system which automatically detects the correct orientation of digital photographs. The system
exploits the information provided by a face detector and a set of low-level features related to distributions in the
image of color and edges. To prove the effectiveness of the proposed approach we evaluated it on two datasets
of consumer photographs.
We have designed a new self-adaptive image cropping algorithm that is able to detect several relevant regions in an image. These regions can then be sequentially proposed to the user as thumbnails, in order of relevance, allowing the viewer to visualize the relevant image content and, eventually, to display or print only those regions in which they are most interested. The algorithm exploits both visual and semantic information. Visual information is obtained from a visual attention model, while semantic information relates to the detection and recognition of particularly significant objects. In this work we concentrate on two objects commonly found in personal photos: face and skin regions. Examples are shown to illustrate the effectiveness of the proposed method.
Time is often short when digital cameras are tested, whether the test is performed on a production line, where productivity is one of the most important considerations, or in a lab, where a schedule needs to be met. One important way to reduce testing time is to reduce the number of images that need to be taken and analyzed. This can be achieved by combining different features into a single test target.
Another reason to use combined targets is to eliminate shot-to-shot variations in exposure, focus setting, or image processing. Last but not least, a single target may be cheaper than buying several targets for the different image quality aspects.
But when different features are incorporated into a single chart, a number of questions arise. Can the different aspects be determined independently? Is it better to use a transparent or a reflective target? Can, for example, a reflective chart with its limited contrast be used to measure the dynamic range of a camera? This paper discusses the pros and cons of combined test charts. It describes the related possibilities and problems for the main aspects of image quality, such as OECF, resolution, color reproduction, shading, and distortion. Application-specific possibilities are studied and summarized.
Market demand for digital cameras with higher sensitivity under low-light conditions is increasing remarkably, and the digital camera market has become a tough race to provide higher ISO capability. In this paper, we explore an approach for increasing the maximum ISO capability of digital cameras without changing any structure of the image sensor or CFA. Our method is applied directly to the raw Bayer-pattern CFA image, avoiding the non-linear characteristics and noise amplification usually introduced by the ISP (Image Signal Processor) of digital cameras. The proposed method fuses multiple short-exposure images, which are noisy but less blurred. Our approach is designed to avoid the ghost artifacts caused by hand shake and object motion. To achieve the desired ISO image quality, both the low-frequency chromatic noise and the fine-grain noise that usually appear in high-ISO images are removed, and the layers created by a two-scale non-linear decomposition of the image are then modified. Once our approach has been performed on an input Bayer-pattern CFA image, the resulting Bayer image is further processed by the ISP to obtain a fully processed RGB image. The performance of our proposed approach is evaluated by comparing SNR (Signal-to-Noise Ratio), MTF50 (Modulation Transfer Function), color error ΔE*ab, and visual quality with reference images whose exposure times are properly extended to a variety of target sensitivities.
We propose a method for measuring the multidirectional modulation transfer function (MTF) of digital image
acquisition devices using a Siemens star. There are two leading methods for measuring the MTF: the slanted-edge
method and the modulated Siemens star method. The former measures the MTF in the horizontal or vertical spatial
frequency based on the line spread function (LSF) derived from the edge profile of a slanted knife-edge image. The
latter measures the multidirectional MTF using a pattern circumferentially modulated with continuous gray levels. Our
method measures the multidirectional MTF using the multidirectional knife-edges of a Siemens star, which is a simple
binary image consisting of radial spokes. The vertical edge of the Siemens star is slightly slanted so that the
multidirectional edge profiles are obtained in super-resolution. A portion of the image containing the knife-edge is selected in each direction and rotated so that the knife-edge stands upright with a slight tilt. Along the edge slope, detected by
fitting a cumulative distribution function to the pixel levels, the pixels are projected onto the horizontal axis, forming the
edge profile. The resulting multidirectional MTF computed from the edge profiles is in excellent agreement with that
measured by the modulated Siemens star method.
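A sketch of the projection-based MTF computation, assuming each pixel's sub-pixel distance to the fitted edge is already known: pixels are binned into an oversampled edge profile, differentiated to a line spread function, and Fourier-transformed. The oversampling factor and window are illustrative, and every bin is assumed to receive samples.

```python
import numpy as np

def edge_profile_mtf(esf_positions, esf_values, bins_per_pixel=4):
    """MTF from a projected, super-resolved edge profile (sketch).

    esf_positions: sub-pixel distance of each pixel from the fitted
    edge; esf_values: the corresponding pixel levels.
    """
    order = np.argsort(esf_positions)
    pos, val = esf_positions[order], esf_values[order]
    edges = np.arange(pos.min(), pos.max(), 1.0 / bins_per_pixel)
    idx = np.digitize(pos, edges) - 1
    esf = np.array([val[idx == i].mean() for i in range(len(edges) - 1)])
    lsf = np.diff(esf) * np.hanning(len(esf) - 1)   # window limits noise leakage
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=1.0 / bins_per_pixel)  # cycles/pixel
    return freqs, mtf / mtf[0]
```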
Most contemporary still and video cameras employ various optically birefringent materials as optical low-pass filters
(OLPF) in order to minimize alias artifacts in the image. Due to the slope characteristics of these filters, camera
designers are faced with the choice of either under-correcting to maintain image resolution, allowing some aliasing, or
eliminating aliases with a more aggressive design that will also compromise the image modulation transfer function
(MTF). Furthermore, most OLPFs are designed as optical elements with a frequency response that does not change even
if the frequency responses of other elements of the capturing systems are altered.
In this paper, we demonstrate the use of a parallel optical window or, alternatively, a rigid mirror positioned between a
lens and an imager as an OLPF. Controlled x- and y-axis rotations of the window and the mirror result in a manipulation
of the point spread function (PSF) of the system and of the frequency content of imaged continuous scenes. We evaluate the system MTF when various window functions, such as rectangular, triangular, and Blackman-Harris, are used to shape the PSF. We also present results of experiments in which the dynamic OLPF's support and shape are altered to accommodate the optical performance of lenses and imager characteristics.
A careful mathematical analysis of the meaning of the variables and equations used in the standards for exposure meters and for the determination of sensitivities S demonstrates that standards and the authors of many photographic texts have erred in their interpretations and applications of the common exposure equation. This article concludes that it is inappropriate to use the exposure meter constant Ks as an exposure meter calibration constant, because it is essentially a label for a product of characteristics of the photosensitive array employed (the reference exposure Ho = Hsp/S and the midtone shift M = Hmid/Hsp). It also concludes that the sensitivity and the common exposure equation ultimately depend on the midtone photosensitive exposure target Hmid. This midtone exposure equation can be generalized by including a shift to an arbitrary (non-midtone) photosensitive exposure target in its derivation. This more general exposure equation includes the exposure compensation and eliminates the need for exposure indices. Analysis of the exposure equation for incident-light exposure meters shows that these meters avoid the vagaries of the current scene by calculating exposure for a potentially very different standard scene, and can often be, in effect, less accurate in exposure calculations than reflected-light exposure meters.
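For readers who want the symbols in one place, the block below restates the abstract's definitions alongside the common reflected-light exposure equation (the ISO 2720 form, used here as an assumption); this is a summary, not the paper's full derivation.

```latex
% Quantities named in the abstract, arranged around the common
% reflected-light exposure equation (ISO 2720 form, an assumption
% here); the paper's own derivation is more general.
\[
\frac{N^2}{t} \;=\; \frac{L\,S}{K_s},
\qquad
H_o \;=\; \frac{H_{sp}}{S},
\qquad
M \;=\; \frac{H_{mid}}{H_{sp}}
\quad\Longrightarrow\quad
H_{mid} \;=\; M\,S\,H_o .
\]
```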
The evaluation of CMOS sensor performance in terms of color accuracy and noise is a big challenge for camera phone manufacturers. In this paper, we present a tool developed in Matlab at STMicroelectronics which allows quality parameters to be evaluated on simulated images. These images are computed based on measured or predicted quantum efficiency (QE) curves and a noise model. By setting the parameters of integration time and illumination, the tool optimizes the color correction matrix (CCM) and calculates the color error, color saturation, and signal-to-noise ratio (SNR). After this color correction optimization step, a graphical user interface (GUI) displays a simulated image at a chosen illumination level, with all the characteristics of a real image taken by the sensor with the previous color correction. Simulated images can be a synthetic Macbeth ColorChecker, for which the reflectance of each patch is known; a multispectral image, described by the reflectance spectrum of each pixel; or an image taken at a high light level. A validation of the results has been performed with ST sensors under development. Finally, we present two applications: one based on the trade-off between color saturation and noise when optimizing the CCM, and the other based on demosaicking SNR trade-offs.
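A minimal sketch of the CCM optimization step, assuming a plain least-squares fit over calibration patches with a simple row renormalization to preserve the white point; the tool's actual optimization additionally trades off color accuracy against SNR, which is not reproduced here.

```python
import numpy as np

def fit_ccm(measured_rgb, target_rgb):
    """Least-squares 3x3 color correction matrix (illustrative).

    measured_rgb, target_rgb: (n_patches, 3) arrays, e.g. over the
    24 ColorChecker patches. Solves target ≈ measured @ ccm.T, then
    renormalizes rows to sum to 1 so a white input stays white
    (a common approximation, not the exact constrained fit).
    """
    ccm, *_ = np.linalg.lstsq(measured_rgb, target_rgb, rcond=None)
    ccm = ccm.T
    ccm /= ccm.sum(axis=1, keepdims=True)   # white-point preservation
    return ccm

# Usage: corrected = raw_rgb @ fit_ccm(measured, target).T
```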