Iris recognition systems are among the most accurate personal biometric identification systems. However, acquiring a workable iris image requires strict cooperation from the user; otherwise, the image is rejected by the verification module because of its poor quality, inducing a high false reject rate (FRR). The FRR may also increase when iris localization fails or when the pupil is too dilated. To improve on existing methods, we propose to use video sequences acquired in real time by a camera. To keep the computational load of iris identification unchanged, we propose a new method to estimate the iris characteristics. First, we propose a new iris texture characterization based on the Fourier-Mellin transform, which is less sensitive to pupil dilation than previous methods. Then, we develop a new iris localization algorithm that is robust to variations in quality (partial occlusions due to eyelids and eyelashes, specular reflections, etc.). Finally, we introduce a new, fast criterion for selecting suitable images from an iris video sequence for accurate recognition. The accuracy of each step of the algorithm in the whole proposed recognition process is tested and evaluated using our own iris video database and several public image databases, such as CASIA, UBIRIS, and BATH.
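One common construction of a Fourier-Mellin texture signature can be sketched as follows; the abstract does not give the authors' exact implementation, so the sampling grid and feature sizes below are illustrative assumptions. The chain FFT magnitude, log-polar resampling, second FFT magnitude turns translations, rotations, and scalings into shifts that the final magnitude discards:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def fourier_mellin_signature(patch, n_r=32, n_theta=32):
    """Rotation/scale-tolerant texture signature (illustrative sketch).

    1. FFT magnitude: invariant to translation of the patch.
    2. Log-polar resampling: rotation and scale become shifts.
    3. Second FFT magnitude: invariant to those shifts.
    """
    F = np.abs(np.fft.fftshift(np.fft.fft2(patch)))
    cy, cx = np.array(F.shape) / 2.0
    r_max = min(cy, cx)
    # log-spaced radii (1 .. r_max) and uniformly spaced angles
    log_r = np.exp(np.linspace(0.0, np.log(r_max), n_r))
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(log_r, theta, indexing="ij")
    coords = np.array([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
    logpolar = map_coordinates(F, coords, order=1, mode="nearest")
    return np.abs(np.fft.fft2(logpolar))
```

Because pupil dilation acts approximately as a radial scaling of the iris texture, a signature that is stable under scaling is consistent with the reduced sensitivity to dilation claimed above.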
This paper deals with the estimation of straight and distorted lines in images by fast array-processing methods. The line-fitting problem is transposed to a signal-processing framework: array-processing methods are applied to virtual signals generated from the image in order to estimate straight-line orientations. Hough transform and snake methods retrieve straight lines and distorted contours but have limitations. We adapt a fast high-resolution method, the propagator method, to the estimation of multiple distorted contours. For the first time, a method is proposed to cope with the intrinsically limited size of images, which reduces the accuracy of high-resolution methods because of the low number of signal realizations. Moreover, an extension to images impaired by correlated noise is proposed: the subspace-based methods are extended to a method based on higher-order statistics. Distorted contours are treated as distorted wavefronts and retrieved with a novel optimization method. The performance of the proposed method is validated on several images.
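The virtual-signal generation step can be illustrated with a short sketch: each row of a (near-)binary edge image is mapped to one complex sample, so a straight line becomes a complex exponential whose frequency encodes its slope. The constant `mu`, the single-line demo image, and the simple phase-based slope estimate below are illustrative stand-ins for the array-processing (propagator) stage described in the abstract:

```python
import numpy as np

def virtual_signal(image, mu=0.1):
    """Map each row l of an edge image to one complex sample:
    z[l] = sum_k image[l, k] * exp(-1j * mu * k).
    For a single straight line x_l = x0 + l * tan(theta), z[l] is a
    complex exponential whose frequency encodes the orientation."""
    k = np.arange(image.shape[1])
    return image @ np.exp(-1j * mu * k)

# hypothetical example: one line of slope tan(theta) = 0.5 in a 64x64 image
n = 64
img = np.zeros((n, n))
rows = np.arange(n)
cols = np.round(10 + 0.5 * rows).astype(int)
img[rows, cols] = 1.0

mu = 0.1
z = virtual_signal(img, mu)
# phase increment between consecutive rows is -mu * tan(theta)
slope = -np.angle(z[1:] * np.conj(z[:-1])).mean() / mu
```

A subspace method would instead estimate this frequency from the covariance of `z`, which is where the propagator method enters; the averaged phase increment above is only the simplest possible estimator.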
In this paper, we consider the problem of hyperspectral image denoising. Current denoising methods are based on multichannel restoration filters that assume the separability of the signal covariance, which describes the between-channel and within-channel relationships. We propose a new algorithm for spectral band restoration, the adaptive multidimensional Wiener filter, based on a local signal model and without assuming spectral and spatial separability. The proposed filter can be applied as a preprocessing step for detection in hyperspectral imagery. We highlight the improvement in target detection when the developed method is applied before the well-known hyperspectral imagery detectors AMF (Adaptive Matched Filter), ACE (Adaptive Coherence/Cosine Estimator), and RX (Reed-Xiaoli algorithm). We demonstrate that integrating a multidimensional restoration leads to a significant improvement in the detection probability. The performance of our method is exemplified using real-world HYDICE images.
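For reference, the AMF detector named above can be sketched in a few lines; its test statistic compares each pixel spectrum against a target signature, whitened by the estimated background covariance. The synthetic background, the flat target signature, and the signal amplitude below are illustrative assumptions, and the paper's contribution (the multidimensional Wiener prefilter) is applied before such a detector rather than replacing it:

```python
import numpy as np

def amf_statistic(x, s, sigma_inv):
    """Adaptive Matched Filter statistic for one pixel spectrum x,
    target signature s, and inverse background covariance sigma_inv."""
    num = (s @ sigma_inv @ x) ** 2
    den = s @ sigma_inv @ s
    return num / den

# toy example with a synthetic background (illustrative only)
rng = np.random.default_rng(1)
bands = 20
background = rng.normal(size=(500, bands))   # background training pixels
sigma_inv = np.linalg.inv(np.cov(background.T))
s = np.ones(bands)                           # hypothetical flat signature
target_pixel = 1.5 * s + 0.1 * rng.normal(size=bands)
clutter_pixel = rng.normal(size=bands)
t_target = amf_statistic(target_pixel, s, sigma_inv)
t_clutter = amf_statistic(clutter_pixel, s, sigma_inv)
# the target pixel scores far higher than pure clutter
```

A better-conditioned covariance estimate, which is what denoising before detection effectively provides, directly sharpens this statistic.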
With shrinking technology nodes and increasingly critical geometries, it has become more and more difficult to conceive fast and accurate in-line checks to ensure process quality at each lithography level, since time and cost constraints limit what can be measured. A commonly accepted solution consists in CD measurements on high-contrast structures for each critical level. However, the RET complexity of current layouts makes this solution no longer fully reliable and allows non-conforming material to pass through the check. The idea behind this article (patent pending) is to add a second verification by creating a set of small structures laid out to cover specific coordinates in the model parameter space. Extrapolated model parameters allow laying out geometries encircling the OPC space region occupied by the production device. These structures bridge or pinch under litho or process deviations before any detectable impact on the most sensitive shapes present in the product. A total size of a few square microns is required to stay within a single SEM picture. Image processing based on pattern recognition on SEM pictures, used to assess their sensitivity to process variations, permits a fast analysis. This approach thus provides reliability, by watching the whole model space, together with economic compatibility, as the procedure is fast and inexpensive.
For terrestrial free space optical (FSO) systems, we investigate the use of multipulse pulse position modulation
(MPPM), which has the advantage of bandwidth efficiency compared to the classical PPM. We first discuss
the upper bound on the information transmission rate for the case of a Gaussian (turbulence-free) channel. We
next consider the channel coding issue for MPPM. We propose to use a simple binary convolutional code and
to perform iterative soft demodulation and channel decoding at the receiver. We study the performance of this
scheme by presenting some numerical results for the cases of Gaussian and weak-turbulence channels. We also
show that, in contrast to PPM, the bit-symbol mapping is an important point for MPPM, especially regarding
the proposed iterative receiver. With this in mind, we propose design rules for optimal bit-symbol mapping.
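The bandwidth-efficiency argument for MPPM can be made concrete with a small sketch (the slot counts are chosen only for illustration): with Q slots per symbol, classical PPM places one pulse and carries log2(Q) bits, while MPPM places w pulses and carries floor(log2(C(Q, w))) bits in the same bandwidth:

```python
from itertools import combinations
from math import comb, floor, log2

def mppm_alphabet(Q, w):
    """All MPPM symbols: the w pulsed slots among Q slots per symbol."""
    return list(combinations(range(Q), w))

def bits_per_symbol(Q, w):
    """Usable bits per MPPM symbol (power-of-two subset of C(Q, w))."""
    return floor(log2(comb(Q, w)))

# example with Q = 8 slots:
#   classical PPM (w = 1): log2(8) = 3 bits/symbol
#   MPPM with w = 2: C(8, 2) = 28 symbols -> 4 bits/symbol, same bandwidth
```

The gap between the C(Q, w) available symbols and the 2^b symbols actually used is one reason the bit-symbol mapping matters: different assignments of bit patterns to pulse-position pairs change how symbol errors translate into bit errors, which is the degree of freedom the design rules above address.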
In photolithography, aerial image simulation of the mask has become mandatory. To compute aerial images, transmission cross coefficients (TCCs), drawn from the Hopkins optical system transmission function, are arranged as a four-way array (a four-entry table) called a fourth-order tensor. To estimate the kernels using linear algebra-based methods, existing algorithms unfold this tensor into a matrix. To reduce the computational load, this matrix is approximated by a lower-rank matrix via the singular value decomposition (SVD). We propose to apply multilinear algebra tools to the tensor of TCC values in order to keep this data tensor as a whole entity. For runtime improvement, we use a fixed-point algorithm to estimate only the needed eigenvectors. To estimate the optimal number of needed eigenvectors, two well-known criteria from signal processing and information theory are adopted. This tensorial approach leads to a fast and accurate algorithm for computing aerial images.
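The matricization-plus-SVD baseline that the tensorial approach improves on can be sketched as follows; the fourth-order array here is a synthetic low-rank stand-in for real TCC data, not actual lithography values:

```python
import numpy as np

# synthetic rank-5 stand-in for the unfolded TCC (illustrative only)
n = 8
rng = np.random.default_rng(0)
A = rng.normal(size=(n * n, 5))
M = A @ A.T                        # symmetric, rank-5 matrix
tcc = M.reshape(n, n, n, n)        # the four-way array ("fourth-order tensor")

# matricization: merge the first two and the last two modes
M2 = tcc.reshape(n * n, n * n)

# truncated SVD: keep the r leading kernels
U, svals, Vt = np.linalg.svd(M2, full_matrices=False)
r = 5
M_r = (U[:, :r] * svals[:r]) @ Vt[:r]

rel_err = np.linalg.norm(M2 - M_r) / np.linalg.norm(M2)
# rank-5 truncation reconstructs the rank-5 matrix almost exactly
```

Unfolding discards the four-way structure of the data, which is precisely what the multilinear treatment described above avoids.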
Model-Based Optical Proximity Correction (MBOPC) has for a decade been a widely used technique that makes it possible to achieve resolutions on silicon smaller than the wavelength used in commercially available photolithography tools. This is an important point, because mask dimensions are continuously shrinking. For current masks, several billion segments have to be moved, and several iterations are needed to reach convergence. Therefore, fast and accurate algorithms are mandatory to perform OPC on a mask in a reasonably short time for industrial purposes.
As imaging with an optical lithography system is similar to microscopy, the theory used in MBOPC is drawn from work originally conducted for microscopy. Fourier optics was first developed by Abbe to describe the image formed by a microscope and is often referred to as the Abbe formulation. It is one of the best methods for optimizing illumination and is used in most commercially available lithography simulation tools. The Hopkins method, developed later, in 1951, is best suited to mask optimization. Consequently, the Hopkins formulation, widely used for partially coherent illumination and thus for lithography, is present in most commercially available OPC tools. This formulation has the advantage of a four-way transmission function that is independent of the mask layout. The values of this function, called Transfer Cross Coefficients (TCC), describe the illumination and projection pupils.
Commonly used algorithms that involve the TCC of the Hopkins formulation to compute aerial images during MBOPC treatment are based on decomposing the TCC into its eigenvectors using matricization and the well-known Singular Value Decomposition (SVD). These techniques, which rely on numerical approximation and on an empirical determination of the number of eigenvectors taken into account, may not match reality and lead to information loss. They also remain highly runtime-consuming.
We propose an original technique inspired by tensor signal processing tools. Our aim is to improve the simulation results and to obtain a faster algorithm. We consider the TCC data as a multiway array, i.e., a tensor. Then, in order to compute an aerial image, we develop a lower-rank tensor approximation algorithm based on signal subspaces. For this purpose, we propose to replace the SVD by the Higher-Order SVD (HOSVD) to compute the eigenvectors associated with the different modes of the TCC. Finally, we propose a new criterion to estimate the optimal number of leading eigenvectors required to obtain a good approximation while ensuring low information loss. The numerical results we present show that our approach is fast and accurate for computing aerial images.
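A generic truncated HOSVD, with one mode-wise SVD per mode of the fourth-order array, can be sketched as follows. This is the textbook construction, not the authors' fixed-point variant or their rank-selection criterion:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: mode-n fibers of the tensor become columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def truncated_hosvd(T, ranks):
    """Truncated HOSVD: one orthonormal factor per mode, keeping the
    ranks[m] leading left singular vectors of each mode-m unfolding."""
    factors = []
    core = T
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        U = U[:, :r]
        factors.append(U)
        # contract the current leading axis with U^H; the new (reduced)
        # axis is appended at the back, cycling the modes
        core = np.tensordot(core, U.conj(), axes=([0], [0]))
    return core, factors

def reconstruct(core, factors):
    """Multiply the core by every factor to rebuild the full tensor."""
    T = core
    for U in factors:
        T = np.tensordot(T, U, axes=([0], [1]))
    return T
```

With full ranks the decomposition is exact; truncating each mode to a few leading eigenvectors gives the lower-rank tensor approximation used to speed up aerial image computation.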
An alternative solution for surface inspection is presented. It is based on a concerted combination of an adapted stripe-illumination principle with an image processing approach specialized in the analysis of the resulting stripe images. This approach is capable of detecting, segmenting, and classifying nondefective surfaces, as well as three- and two-dimensional surface defects, from perturbations in the stripe illumination. In contrast to alternative procedures, no calibration of the illumination or the camera is necessary. The principle of the proposed method is explained using a concrete industrial application: the inspection of cylindrical metallic surfaces under structured lighting. Furthermore, based on several examples involving different surface types, we demonstrate the broad range of applications of the proposed algorithm.
In optical nondestructive testing, a novel solution is presented for fault detection based on the interpretation of fringe images. These images can be acquired using different optical methods, such as structured lighting or interferometry. We propose a set of eight special features adapted to the problem of surface inspection using structured illumination. These characteristics are combined with six further features specially developed for the classification of faults in interferometric images. We apply two kinds of decision rules: the Bayesian and the nearest-neighbor classifiers. The proposed features are evaluated using a noisy and a noise-free image data set; all patterns were obtained by means of structured lighting. On the noisy data set, we obtain better classification rates when all 14 features are used in combination with a one-nearest-neighbor classifier. On the noise-free data set, we show that similar classification rates are obtained whether the 14 features or only the 8 specific features are involved. The methods described are designed to address a broad range of optical nondestructive applications involving the interpretation and classification of fringe patterns.
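The one-nearest-neighbor rule used above is simple enough to state in full; the two-dimensional feature vectors and class labels below are hypothetical (the paper works with up to 14 fringe-image features per pattern):

```python
import numpy as np

def one_nn_classify(train_x, train_y, x):
    """One-nearest-neighbor rule: return the label of the training
    feature vector closest (in Euclidean distance) to x."""
    d = np.linalg.norm(train_x - x, axis=1)
    return train_y[np.argmin(d)]

# hypothetical 2-feature example
train_x = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
train_y = np.array([0, 0, 1, 1])   # 0 = non-defective, 1 = defective
label = one_nn_classify(train_x, train_y, np.array([0.95, 0.9]))  # → 1
```

Unlike the Bayesian classifier, this rule needs no density model of the feature distributions, which can be an advantage when only a modest number of labeled fringe patterns is available.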
Quality control in mammographic facilities has to be performed periodically to ensure that the mammographic chain works properly. In particular, global image quality is evaluated from a mammographic phantom film. A phantom is an object with the same anatomic shape and radiological response as an average dense-fleshed breast, in which are embedded structures that mimic clinically relevant features such as microcalcifications, masses, and fibers. This evaluation is done by visual observation of a phantom film, and a global score is given depending on the number of objects seen by several observers. This paper presents the main results of a feasibility study of breast phantom scoring using digital image processing. First, breast phantom films were digitized. For each category of structures, subimages were extracted from the digitized phantom. A noise reduction method was used as a preprocessing step, a local contrast enhancement was then applied, and finally image segmentation was performed. The noise reduction and contrast enhancement steps were both based on a direct contrast modification technique; the segmentation step was adapted to each embedded object. Nine digitized phantom films were studied, and the results show that an evaluation of mammographic facilities could be performed using digital image processing.
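A simple form of direct contrast modification separates each subimage into a local mean and a detail term and rescales the detail: a gain below one attenuates noise, a gain above one enhances contrast. The window size and gain values below are illustrative assumptions, not the paper's exact transform:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def modify_local_contrast(img, k, size=7):
    """Direct local-contrast modification (illustrative sketch):
    split the image into a local mean m and a detail term (img - m),
    then rescale the detail by the gain k."""
    m = uniform_filter(img.astype(float), size=size)
    return m + k * (img - m)

# usage sketch on a synthetic sub-image (illustrative values)
rng = np.random.default_rng(0)
sub = rng.random((32, 32))
denoised = modify_local_contrast(sub, k=0.5)   # noise-reduction step
enhanced = modify_local_contrast(sub, k=2.0)   # contrast-enhancement step
```

Using one mechanism with two gain settings mirrors the abstract's statement that the noise reduction and contrast enhancement steps share a single contrast-modification technique.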