When evaluating the performance of image processing algorithms, the starting point is often the acquired image. In practice, however, several factors extrinsic to the algorithm itself affect its performance. These factors depend largely on the features of the acquisition system. This paper focuses on some of the key factors that affect algorithm performance, and attempts to provide some insight into defining “optimal” system features for best performance.
The system features studied in depth in the paper are camera type, camera SNR, pixel size, bit depth, and system illumination. We were primarily interested in determining the effect of each of these factors on system performance. Towards this end, we designed an experiment to measure performance on a precision measurement system using several different cameras under varying illumination settings. From the results of the experiment, we observed that the variation in performance was greater for the same algorithm under different test system configurations than for different algorithms under the same system configuration. Using these results as the basis, we discuss at length the combination of features that contributes to an optimal system configuration for a given purpose. We expect this work to be relevant to researchers in all areas of image processing who want to optimize the performance of their algorithms when ported to actual systems.
Proc. SPIE 5011, Machine Vision Applications in Industrial Inspection XI
KEYWORDS: Signal to noise ratio, Point spread functions, Edge detection, Visual process modeling, Detection and tracking algorithms, Data modeling, Image processing, Image resolution, Computer simulations, Reconstruction algorithms
A common problem in optical metrology is the determination of the exact location of an edge. In practice, however, exact edge information is generally impossible to obtain. The best we can do is to locate the edge with very high precision through the use of sub-pixeling techniques. In this paper, we review several sub-pixel edge detection schemes and compare them with respect to two figures of merit - resolution accuracy and repeatability. Towards this end, we design experiments to determine the relative performance of different algorithms using a simulated system model. Finally, we verify the model accuracy by performing similar experiments on an off-line test vision system. Our experiments achieve a two-fold purpose. First, they provide a reliable indication of the relative performance of the different algorithms under similar test conditions. Next, by validating the results obtained from the simulated model with the results from the test system, they illustrate a methodology to simulate a real measurement system. This is significant because the simulated model can be used to generate data to quickly evaluate algorithms without the need to conduct expensive and time-consuming data collection experiments. We expect that this will be of considerable value to researchers in the field.
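To make the sub-pixeling idea concrete, the sketch below locates a gradient peak and refines it by fitting a parabola through the three samples around the maximum. This is a generic sub-pixel interpolation technique chosen for illustration, not necessarily one of the schemes compared in the paper; the function name and test profile are hypothetical.

```python
import numpy as np

def subpixel_edge(profile):
    """Locate an edge in a 1-D intensity profile with sub-pixel precision.

    Illustrative sketch: take the discrete gradient, find its peak, and
    refine the peak location by fitting a parabola through the three
    samples surrounding it.
    """
    g = np.abs(np.diff(profile.astype(float)))   # gradient magnitude
    k = int(np.argmax(g))
    if k == 0 or k == len(g) - 1:
        return k + 0.5                           # no neighbors to refine with
    a, b, c = g[k - 1], g[k], g[k + 1]
    denom = a - 2.0 * b + c
    offset = 0.0 if denom == 0 else 0.5 * (a - c) / denom
    # diff() places sample k between pixels k and k+1, hence the +0.5
    return k + 0.5 + offset

# Usage: a blurred (sigmoidal) step edge centered near pixel 5.3
x = np.arange(12)
profile = 1.0 / (1.0 + np.exp(-(x - 5.3) / 0.8))
estimate = subpixel_edge(profile)
```

With a well-sampled edge, the parabolic refinement recovers the edge position to a small fraction of a pixel, which is the resolution regime the compared algorithms operate in.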
Segmentation and classification are important problems with applications in areas like textural analysis and pattern recognition. This paper describes a single-stage approach to solve the image segmentation/classification problem down to the pixel level, using energy density functions based on the wavelet transform. The energy density functions obtained, called Pseudo Power Signatures, are essentially functions of scale and orientation, and are obtained using separable approximations to the 2D wavelet transform. A significant advantage of these representations is that they are invariant to signal magnitude and to spatial location within the object of interest. Further, they lend themselves to fast and simple classification routines. We provide a complete formulation of the signature determination problem for 2D, and propose an effective, albeit simple, technique based on a tensor singular value analysis to solve the problem. We present an efficient computational algorithm, and a simulation result reflecting the strengths and limitations of this approach. We next present a detailed analysis of a more sophisticated method based on orthogonal projections to obtain signatures which are better representations of the underlying data.
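The notion of a magnitude-invariant scale/orientation energy signature can be illustrated with a discrete analogue: a separable Haar decomposition whose per-scale, per-orientation detail energies are normalized by the total detail energy. The actual Pseudo Power Signatures are continuous energy density functions obtained quite differently; all names below are hypothetical.

```python
import numpy as np

def haar_energy_signature(img, levels=3):
    """Per-scale, per-orientation energy signature from a separable
    Haar decomposition (rows first, then columns).

    Normalizing by total detail energy makes the signature invariant
    to overall signal magnitude, mirroring the invariance property
    described in the abstract.
    """
    a = img.astype(float)
    energies = []
    for _ in range(levels):
        # one separable Haar step: average/difference rows, then columns
        lo_r = (a[0::2, :] + a[1::2, :]) / 2.0
        hi_r = (a[0::2, :] - a[1::2, :]) / 2.0
        ll = (lo_r[:, 0::2] + lo_r[:, 1::2]) / 2.0
        lh = (lo_r[:, 0::2] - lo_r[:, 1::2]) / 2.0   # detail orientation 1
        hl = (hi_r[:, 0::2] + hi_r[:, 1::2]) / 2.0   # detail orientation 2
        hh = (hi_r[:, 0::2] - hi_r[:, 1::2]) / 2.0   # diagonal detail
        energies.append([np.sum(lh**2), np.sum(hl**2), np.sum(hh**2)])
        a = ll                                       # recurse on the coarse band
    sig = np.array(energies)
    return sig / sig.sum()    # magnitude invariance via normalization

# Usage: an oriented stripe texture (image side must be divisible by 2**levels)
texture = np.fromfunction(lambda i, j: np.sin(0.7 * j), (16, 16))
sig = haar_energy_signature(texture)
```

Because every detail energy scales by the same factor when the signal is scaled, the normalized signature of `5 * texture` is identical to that of `texture`.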
We present a new approach to SAR image segmentation based on a Poisson approximation to the SAR amplitude image. It has been established that SAR amplitude images are well approximated using Rayleigh distributions. We show that, with suitable modifications, we can model piecewise homogeneous regions (such as tanks, roads, scrub, etc.) within the SAR amplitude image using a Poisson model that bears a known relation to the underlying Rayleigh distribution. We use the Poisson model to generate an efficient tree-based segmentation algorithm guided by the minimum description length (MDL) criterion. We present a simple fixed tree approach, and a more flexible adaptive recursive partitioning scheme. The segmentation is unsupervised, requiring no prior training, and very simple, efficient, and effective for identifying possible regions of interest (targets). We present simulation results on MSTAR clutter data to demonstrate the performance obtained with this parsing technique.
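The flavor of MDL-guided adaptive recursive partitioning can be conveyed with a toy one-dimensional analogue on Poisson-distributed counts: a segment is split wherever a two-rate description is cheaper than a one-rate description, penalty included. The paper's method operates on 2D SAR imagery with a tree structure, so this is only a hedged sketch with hypothetical names.

```python
import numpy as np

def poisson_codelength(counts):
    """Codelength (nats) of counts under a single fitted Poisson rate.

    Drops the sum(log k!) term, which is identical for every partition
    of the data and cancels in comparisons; adds a 0.5*log(n) MDL
    penalty for the one rate parameter.
    """
    n = counts.size
    lam = max(counts.mean(), 1e-12)
    return n * lam - counts.sum() * np.log(lam) + 0.5 * np.log(n)

def mdl_segment(counts, lo=0, hi=None, min_len=4):
    """Toy 1-D adaptive recursive partitioning guided by MDL.

    Splits [lo, hi) at the cut that minimizes total codelength, but only
    if splitting beats keeping the segment whole; returns (lo, hi) pairs.
    """
    if hi is None:
        hi = counts.size
    best = poisson_codelength(counts[lo:hi])
    best_cut = None
    for cut in range(lo + min_len, hi - min_len + 1):
        cost = (poisson_codelength(counts[lo:cut]) +
                poisson_codelength(counts[cut:hi]))
        if cost < best:
            best, best_cut = cost, cut
    if best_cut is None:
        return [(lo, hi)]
    return (mdl_segment(counts, lo, best_cut, min_len) +
            mdl_segment(counts, best_cut, hi, min_len))

# Usage: two homogeneous regions with very different Poisson rates
rng = np.random.default_rng(0)
counts = np.concatenate([rng.poisson(2.0, 50), rng.poisson(20.0, 50)])
segments = mdl_segment(counts)
```

The recovered boundary falls close to the true changepoint at index 50; because the split is only accepted when it lowers the total codelength, the scheme needs no prior training, echoing the unsupervised character of the method in the abstract.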
We study the segmentation of SAR imagery using wavelet-domain Hidden Markov Tree (HMT) models. The HMT model is a tree-structured probabilistic graph that captures the statistical properties of the wavelet transforms of images. This technique has been successfully applied to the segmentation of natural texture images, documents, etc. However, SAR image segmentation poses a difficult challenge owing to the high levels of speckle noise present at fine scales. We solve this problem using a 'truncated' wavelet HMT model specially adapted to SAR images. This variation is built using only the coarse scale wavelet coefficients. When applied to SAR images, this technique provides a reliable initial segmentation. We then refine the classification using a multiscale fusion technique, which combines the classification information across scales from the initial segmentation to correct for misclassifications. We provide a fast algorithm, and demonstrate its performance on MSTAR clutter data.
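A much-simplified stand-in for the multiscale fusion step is a per-pixel majority vote over label maps upsampled from coarse to fine scales: an isolated fine-scale misclassification is outvoted by the agreeing coarser scales. The paper's fusion is model-based rather than a plain vote, so the following is only an illustrative sketch with hypothetical names.

```python
import numpy as np

def fuse_scales(label_maps):
    """Fuse per-scale label maps (ordered coarse to fine) by majority vote.

    Each coarser map is upsampled to the finest resolution by pixel
    replication, and every fine pixel takes the most frequent label
    across scales, correcting isolated misclassifications.
    """
    finest = label_maps[-1]
    h, w = finest.shape
    stack = []
    for lm in label_maps:
        factor = h // lm.shape[0]
        stack.append(np.kron(lm, np.ones((factor, factor), dtype=lm.dtype)))
    stack = np.stack(stack)                       # shape: (n_scales, h, w)
    labels = np.unique(stack)
    votes = np.stack([(stack == c).sum(axis=0) for c in labels])
    return labels[np.argmax(votes, axis=0)]       # majority label per pixel

# Usage: a clean two-region pattern at two scales, plus a fine map with
# one misclassified pixel that the vote corrects
coarse = np.array([[0, 1],
                   [0, 1]])
fine = np.kron(coarse, np.ones((2, 2), dtype=int))
noisy = fine.copy()
noisy[0, 0] = 1                                   # simulated speckle error
fused = fuse_scales([coarse, fine, noisy])
```

Here two of the three scales agree at every pixel, so the fused map matches the clean fine-scale segmentation.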