Wafer Plane Inspection (WPI) is a novel approach to inspection, developed to enable high inspectability of fragmented
mask features at optimal defect sensitivity. It builds on well-established high-resolution inspection capabilities to
complement existing manufacturing methods. The production of defect-free photomasks is practical today only because
of informed decisions about the impact of identified defects. The defect size, location, and measured printing impact can
dictate that a mask is perfectly good for lithographic purposes. This inspection-verification-repair loop is time-consuming
and is predicated on the fact that detectable photomask defects do not always resolve or matter on the wafer.
This paper will introduce and evaluate an alternative approach that moves the mask inspection to the wafer plane. WPI
uses a high NA inspection of the mask to construct a physical mask model. This mask model is used to create the mask
image in the wafer plane. Finally, a threshold model is applied to enhance sensitivity to printing defects. WPI essentially
eliminates the non-printing inspection stops and relaxes some of the pattern restrictions currently placed on incoming
photomask designs. This paper outlines the WPI technology and explores its application to patterns and substrates
representative of 32nm designs. The implications of deploying Wafer Plane Inspection will be discussed.
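The threshold step at the end of this flow can be illustrated with a minimal sketch (the function names and the resist threshold value are hypothetical illustrations, not the actual WPI model): the simulated wafer-plane intensity is binarized at a resist threshold, and only differences that survive binarization are reported, so sub-printing mask defects drop out of the inspection.

```python
import numpy as np

def printed_pattern(aerial_image, resist_threshold=0.3):
    """Binary print prediction: regions whose simulated wafer-plane
    intensity exceeds the resist threshold are assumed to print."""
    return aerial_image > resist_threshold

def wafer_plane_defects(test_image, reference_image, resist_threshold=0.3):
    """Flag pixels where the test mask prints differently from the
    defect-free reference; non-printing mask defects vanish here."""
    return (printed_pattern(test_image, resist_threshold)
            ^ printed_pattern(reference_image, resist_threshold))

# Toy example: an intensity bump below the threshold is ignored,
# while one that crosses the threshold is flagged as a printing defect.
reference = np.zeros((8, 8))
subtle = reference.copy()
subtle[4, 4] = 0.2      # below threshold: does not print
printing = reference.copy()
printing[4, 4] = 0.5    # above threshold: prints

assert not wafer_plane_defects(subtle, reference).any()
assert wafer_plane_defects(printing, reference).any()
```

This is the sense in which WPI "eliminates the non-printing inspection stops": the comparison happens after, not before, the print simulation.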
Comprehensive evaluation of retinal image quality requires that light scatter as well as
optical aberrations be considered. In investigating how retinal image degradation
affects eye growth in the chick model of myopia, we developed a simple method
based on Shack-Hartmann images for evaluating the effects of both monochromatic
aberrations and light scatter on retinal image quality. We further evaluated our
method in the current study by applying it to data collected from both normal chick
eyes and albino eyes that were expected to show increased intraocular light scatter. To
analyze light scatter in our method, each Shack-Hartmann dot is treated as a local
point spread function (PSF) that is the convolution of a local scatter PSF and a lenslet
diffraction PSF. The local scatter PSF is obtained by deconvolution and is fitted with
a circularly symmetric Gaussian function using nonlinear regression. A whole-eye
scatter PSF also can be derived from the local scatter PSFs for the analyzed pupil.
Aberrations are analyzed using OSA standard Zernike polynomials, and the
aberration-related PSF is calculated from the reconstructed wavefront using a fast
Fourier transform. Modulation transfer functions (MTFs) are computed separately for
aberration and scatter PSFs, and a whole-eye MTF is derived as the product of the
two. This method was applied to 4 normal and 4 albino eyes. Compared to normal
eyes, albino eyes were more aberrated and showed greater light scatter. As a result,
overall retinal image degradation was much greater in albino eyes than in normal
eyes, with the relative contribution to retinal image degradation of light scatter
compared to aberrations also being greater for albino eyes.
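The pipeline described above can be sketched as follows (numpy only; the second-moment width estimate stands in for the paper's nonlinear Gaussian regression, and the grid sizes are illustrative):

```python
import numpy as np

def deconvolve(dot_psf, lenslet_psf, eps=1e-6):
    """Recover the local scatter PSF: each Shack-Hartmann dot is modeled
    as the scatter PSF convolved with the lenslet diffraction PSF, so we
    divide in the Fourier domain (with a small regularizer)."""
    D = np.fft.fft2(dot_psf)
    L = np.fft.fft2(lenslet_psf)
    return np.abs(np.fft.ifft2(D * np.conj(L) / (np.abs(L) ** 2 + eps)))

def gaussian_sigma(psf):
    """Width of a circularly symmetric Gaussian describing the PSF
    (second-moment estimate; the paper uses nonlinear regression)."""
    p = psf / psf.sum()
    y, x = np.indices(p.shape)
    cx, cy = (p * x).sum(), (p * y).sum()
    return np.sqrt((p * ((x - cx) ** 2 + (y - cy) ** 2)).sum() / 2.0)

def mtf(psf):
    """Modulation transfer function: magnitude of the PSF's FFT,
    normalized to unity at zero spatial frequency."""
    m = np.abs(np.fft.fft2(psf))
    return m / m.flat[0]

# The whole-eye MTF is the product of the two component MTFs:
# whole_eye_mtf = mtf(aberration_psf) * mtf(scatter_psf)
```

The separate aberration and scatter MTFs multiply into a whole-eye MTF because the two PSFs act as a cascade of convolutions.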
KEYWORDS: Point spread functions, Digital signal processing, Wavelet transforms, Imaging systems, Spatial frequencies, Wavelets, Fourier transforms, Monte Carlo methods, Image sensors, Distributed computing
A high-performance focus measure is one of the key components in any autofocus
system based on digital image processing. More than a dozen focus measures have
been proposed and evaluated in the literature, yet there has been no comprehensive
evaluation that includes most of them. The purpose of the current study is to evaluate
and compare the performance of ten focus measures using Monte Carlo simulations,
run on a self-built scalable inhomogeneous computer cluster with distributed
computing capacity. From the perspective of a general framework for focus measure
evaluations, we calculate the true point spread functions (PSFs) from aberrations
represented by OSA standard Zernike polynomials using fast Fourier transform. For
each run, a range of defocus levels is generated, the PSF for each defocus level is
convolved with an original image, and a certain amount of noise is added to the
resulting defocused image. Each focus measure is applied to all the blurred images to
obtain a focus measure curve. The procedure is repeated on a few representative
images for different types and levels of noise (Gaussian, salt & pepper, and speckle).
The performance of the ten focus measures is compared in terms of monotonicity,
unimodality, defocus sensitivity, noise sensitivity, effective range, computational
efficiency and variability.
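One simulation run can be compressed into a short sketch (the pupil geometry, grid size, noise level, and the gradient-energy measure are illustrative choices, not the study's exact parameters):

```python
import numpy as np

def defocus_psf(n=64, pupil_radius=16, defocus_waves=1.0):
    """PSF from a circular pupil carrying a Zernike defocus term
    Z(2,0) ~ 2*rho^2 - 1; PSF = |FFT(pupil function)|^2."""
    y, x = np.indices((n, n)) - n // 2
    rho = np.hypot(x, y) / pupil_radius
    phase = 2 * np.pi * defocus_waves * (2 * rho ** 2 - 1)
    pupil = (rho <= 1) * np.exp(1j * phase)
    psf = np.abs(np.fft.fft2(np.fft.ifftshift(pupil))) ** 2
    return psf / psf.sum()

def blur(image, psf):
    """Circular convolution of the original image with the PSF via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf)))

def gradient_energy(image):
    """One spatial-domain focus measure: sum of squared first
    differences; larger values indicate a sharper image."""
    return (np.diff(image, axis=0) ** 2).sum() + (np.diff(image, axis=1) ** 2).sum()

# One Monte Carlo step: defocus the image, add Gaussian noise,
# then apply the focus measure to the degraded result.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
defocused = blur(img, defocus_psf(defocus_waves=2.0))
noisy = defocused + rng.normal(0.0, 0.01, defocused.shape)
score = gradient_energy(noisy)
```

Repeating this over a range of defocus levels yields the focus measure curve whose monotonicity, unimodality, and noise sensitivity are then scored.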
The diverse needs for digital auto-focusing systems have driven the development of a
variety of focus measures. The purpose of the current study was to investigate whether
any of these focus measures are biologically plausible; specifically whether they are
applicable to retinal images from which defocus information is extracted in the operation
of accommodation and emmetropization, two ocular auto-focusing mechanisms. Ten
representative focus measures were chosen for analysis, 6 in the spatial domain and 4
transform-based. Their performance was examined for combinations of non-defocus
aberrations and positive and negative defocus. For each combination, a wavefront was
reconstructed, the corresponding point spread function (PSF) computed using Fast
Fourier Transform (FFT), and then the blurred image obtained as the convolution of the
PSF and a perfect image. For each blurred image, a focus measure curve was derived for
each focus measure. Aberration data were either collected from 22 real eyes or
randomly generated based on Gaussian parameters describing data from a published
large-scale human study (n>100). For the latter data set, analyses made use of distributed computing
on a small inhomogeneous computer cluster. In the presence of small amounts of non-defocus
aberrations, all focus measures showed monotonic changes with positive or
negative defocus, and their curves generally remained unimodal, although there were
large differences in their variability, sensitivity to defocus and effective ranges. However,
the performance of a number of these focus measures became unacceptable when
non-defocus aberrations exceeded a certain level.
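The monotonicity and unimodality checks can be sketched as follows. For brevity, a Gaussian blur whose width scales with |defocus| stands in for the aberration-derived PSF used in the study, and the transform-based measure shown is a simple high-frequency power ratio:

```python
import numpy as np

def gaussian_blur(image, sigma):
    """Frequency-domain Gaussian blur; a stand-in here for the
    defocus PSF computed from Zernike aberrations in the study."""
    if sigma == 0:
        return image.copy()
    f = np.fft.fftfreq(image.shape[0])
    fx, fy = np.meshgrid(f, f, indexing="ij")
    H = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))

def highfreq_power_ratio(image, cutoff=0.15):
    """A transform-based focus measure: fraction of spectral power
    above a radial frequency cutoff (in cycles/pixel)."""
    P = np.abs(np.fft.fft2(image)) ** 2
    f = np.fft.fftfreq(image.shape[0])
    fx, fy = np.meshgrid(f, f, indexing="ij")
    fr = np.hypot(fx, fy)
    return P[fr > cutoff].sum() / P.sum()

# Focus-measure curve across positive and negative defocus:
rng = np.random.default_rng(1)
img = rng.random((64, 64))
levels = np.linspace(-2.0, 2.0, 9)
curve = [highfreq_power_ratio(gaussian_blur(img, abs(d))) for d in levels]
# A usable measure should be unimodal with its peak at zero defocus
# and monotonic on each side of the peak.
```

Adding non-defocus aberrations to the blur model is what flattens or splits such curves, which is the failure mode the abstract reports for several measures.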
We previously showed the necessity of utilizing dynamic methods to select the focus window for passive autofocus in digital imaging systems. One possibility is to track the photographer's pupil through a modified viewfinder so that the region of interest in a target image can be determined, as previously described. Yet this assumes that a user is on site and looks through the viewfinder, a practice that is increasingly rare given the availability of liquid crystal displays (LCDs) on most consumer digital imaging systems. An alternative is to use pattern recognition to select focus windows when the imaging targets are known in advance and can be extracted from their background. In this paper, one such case, where the imaging targets are humans, is discussed in detail. The theoretical basis for dynamic focus window selection is briefly reviewed, and an example is given to demonstrate the effects of different focus windows on the imaging results. The focus window selection technique, which uses a statistical model of human skin colors, is then described in detail. The incoming target image in RGB color space is transformed into two-dimensional (r, g) space. Each pixel is binarized according to the relationship between its (r, g) value and the skin color distribution, and skin regions in the image are thus extracted. Morphological operations are then applied to the resulting binary image to reduce the number and irregularity of the skin regions. A rectangle can be fitted to the extracted skin region and used as the focus window. Experimental results are given to demonstrate the advantages of the proposed method.
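The (r, g) binarization and rectangle-fitting steps can be sketched as follows (the skin-color mean and spread below are placeholder numbers, not the paper's trained model, and the morphological cleanup step is omitted):

```python
import numpy as np

# Hypothetical skin-chromaticity model: mean and spread of (r, g),
# where r = R/(R+G+B) and g = G/(R+G+B). Real values would be
# estimated from a labeled training set of skin pixels.
SKIN_MEAN = np.array([0.44, 0.31])
SKIN_STD = np.array([0.05, 0.03])

def skin_mask(rgb, k=2.5):
    """Binarize each pixel by its distance from the skin-color
    distribution in normalized (r, g) chromaticity space."""
    s = rgb.sum(axis=2, keepdims=True) + 1e-9
    rg = (rgb / s)[..., :2]
    z = np.abs(rg - SKIN_MEAN) / SKIN_STD
    return (z <= k).all(axis=2)

def focus_window(mask):
    """Axis-aligned rectangle fitted to the extracted skin region,
    returned as (x_min, y_min, x_max, y_max); None if no skin found."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return xs.min(), ys.min(), xs.max(), ys.max()
```

In practice a morphological opening/closing pass between `skin_mask` and `focus_window` suppresses isolated false-positive pixels, as the abstract describes.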
The purpose of selecting a focus window is not only to reduce computation, but also to improve the image sharpness of the object(s) of interest. A simple geometrical model is built using the Gaussian thin-lens equation, and the necessity of utilizing different focus window selection strategies is demonstrated with the model. A dynamic focus window selection method is then described. It is reasonable to assume that a photographer's gaze direction points toward the object(s) of interest when he or she is taking a picture. The gaze direction is trackable by various pupil-tracking methods. A simple modification of the viewfinder allows us to take images of the photographer's eye with the built-in image sensor. The images are then processed to determine the photographer's gaze direction. One small area in the target image is matched to the gaze direction and selected as the focus window. Within the focus window, an uneven sampling method can be used to further reduce the computational load; the uneven sampling works in the same way as the human retina. This dynamic focus window selection method can greatly increase the probability of getting sharply focused object(s) of interest, while less than 1% of the target image is needed to apply the focus measure.
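The geometrical model can be sketched with the Gaussian thin-lens equation; the blur-circle formula below is the standard geometric-optics derivation, offered as an illustration rather than the paper's exact model:

```python
def image_distance(f, u):
    """Gaussian thin-lens equation 1/f = 1/u + 1/v, solved for the
    image distance v (all distances positive, in meters)."""
    return 1.0 / (1.0 / f - 1.0 / u)

def blur_circle_diameter(f, aperture, u_focused, u_object):
    """Diameter of the defocus blur circle on the sensor for an object
    at u_object when the lens is focused on u_focused. Objects at the
    focused distance stay sharp; others blur in proportion to how far
    their focal plane falls from the sensor."""
    v_sensor = image_distance(f, u_focused)   # the sensor sits here
    v_object = image_distance(f, u_object)    # object's focal plane
    return aperture * abs(v_object - v_sensor) / v_object
```

For a 50 mm f/2 lens focused at 2 m, the blur circle grows as an object moves away from the focused distance, which is why placing the focus window on the wrong object degrades the object of interest.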