While evaluating the performance of image processing algorithms, the starting point is often the acquired image. However, in practice, several factors, extrinsic to the actual algorithm, affect its performance. These factors depend largely on the features of the acquisition system. This paper focuses on some of the key factors that affect algorithm performance, and attempts to provide some insight into defining “optimal” system features for best performance.
The system features studied in depth in the paper are camera type, camera SNR, pixel size, bit depth, and system illumination. We were primarily interested in determining the effect of each of these factors on system performance. Towards this end, we designed an experiment to measure performance on a precision measurement system using several different cameras under varying illumination settings. From the results of the experiment, we observed that the variation in performance was greater for the same algorithm under different test system configurations than for different algorithms under the same system configuration. Using these results as a basis, we discuss at length the combination of features that contributes to an optimal system configuration for a given purpose. We expect this work to be relevant to researchers in all areas of image processing who want to optimize the performance of their algorithms when ported to actual systems.
Proc. SPIE. 5011, Machine Vision Applications in Industrial Inspection XI
KEYWORDS: Signal to noise ratio, Point spread functions, Edge detection, Visual process modeling, Detection and tracking algorithms, Data modeling, Image processing, Image resolution, Computer simulations, Reconstruction algorithms
A common problem in optical metrology is the determination of the exact location of an edge. In practice, however, exact edge information is generally impossible to obtain. The best we can do is to locate the edge with very high precision through the use of sub-pixeling techniques. In this paper, we review several sub-pixel edge detection schemes and compare them with respect to two figures of merit: resolution accuracy and repeatability. Towards this end, we design experiments to determine the relative performance of different algorithms using a simulated system model. Finally, we verify the model's accuracy by performing similar experiments on an off-line test vision system. Our experiments achieve a two-fold purpose. First, they provide a reliable indication of the relative performance of the different algorithms under similar test conditions. Second, by validating the results obtained from the simulated model against the results from the test system, they illustrate a methodology for simulating a real measurement system. This is significant because the simulated model can be used to generate data to quickly evaluate algorithms without the need to conduct expensive and time-consuming data collection experiments. We expect that this will be of considerable value to researchers in the field.
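To illustrate the class of schemes being compared, one widely used sub-pixel edge localizer fits a parabola to the peak of the gradient of an intensity profile. This is a generic sketch, not necessarily one of the specific algorithms evaluated in the paper:

```python
import numpy as np

def subpixel_edge(profile):
    """Locate an edge in a 1-D intensity profile with sub-pixel precision
    by fitting a parabola through the three gradient samples around the
    gradient peak. Hypothetical helper for illustration only."""
    g = np.abs(np.gradient(profile.astype(float)))
    k = int(np.argmax(g))
    if k == 0 or k == len(g) - 1:
        return float(k)  # peak at the border: no parabola possible
    denom = g[k - 1] - 2.0 * g[k] + g[k + 1]
    if denom == 0:
        return float(k)  # flat neighborhood: fall back to integer peak
    # Vertex of the parabola through (k-1, k, k+1) gives the sub-pixel shift.
    return k + 0.5 * (g[k - 1] - g[k + 1]) / denom
```

Applied to a smooth synthetic step centered between two pixels, the estimate lands well within a pixel of the true edge, which is exactly the kind of accuracy/repeatability behavior the paper's figures of merit quantify.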
Well-adjusted lighting is one of the most important contributors to good-quality, robust measurements in video-based inspection systems. The optimal selection of lighting varies from part to part and from system to system. System designers must often select from a significant number of competing illumination sources at varied positions in order to determine how best to illuminate a target edge. In cases where an end user must perform the lighting setup, this complexity often results in either a poor measurement or a determination that the inspection cannot be performed. This paper describes a general, simulation-based methodology to automatically adjust a set of physical illumination sources (in both magnitude and position) so as to achieve a high-quality edge measurement. Specific metrics of edge quality and an approach to searching the resulting solution space are also presented.
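The search over competing source settings can be sketched as a simple hill climb over discrete intensity levels. Everything here is a placeholder: the `simulate_edge_quality` callback stands in for the paper's simulator and edge-quality metrics, and the actual search strategy over magnitude and position is more elaborate:

```python
import random

def optimize_lights(simulate_edge_quality, n_lights, levels, iters=300, seed=0):
    """Greedy hill climb over discrete light intensities (generic sketch).
    `simulate_edge_quality` is a hypothetical callback that renders the
    scene under a candidate configuration and scores the target edge."""
    rng = random.Random(seed)
    best = [rng.choice(levels) for _ in range(n_lights)]
    best_q = simulate_edge_quality(best)
    for _ in range(iters):
        # Perturb one light at a time; keep the change only if it helps.
        cand = list(best)
        i = rng.randrange(n_lights)
        cand[i] = rng.choice(levels)
        q = simulate_edge_quality(cand)
        if q > best_q:
            best, best_q = cand, q
    return best, best_q
```

A local search like this is cheap per step but can stall at local optima; any serious treatment of the solution space (as in the paper) needs a more deliberate search strategy.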
Investigations in the area of digital mammography have been limited by the resolution of the sensor devices employed. We have proposed a multiple-camera, or mosaic, architecture in which adjacent sensors observe an overlapping field of view. Such a technique can deliver extremely high resolution while simultaneously maintaining a moderate cost for the resultant instrument. However, this technique's clinical efficacy will be limited by the ability to accurately and precisely reconstruct a single continuous image from multiple CCD sensors. We present an integrated algorithm which corrects distortions introduced by the camera while addressing the problem of image reconstruction, or 're-stitching.' Such a technique minimizes pixel loss by limiting image re-sampling to a single instance. Custom-designed calibration screens were employed for the calculation of camera distortion and intra-camera disparity. A parallel digital signal processor architecture has been developed to accelerate system performance when employing a large number of camera inputs. We present a quantitative evaluation of our reconstruction technique and an analysis with respect to similar methods of image reconstruction. We have previously constructed and presented a prototype imager for digital radiography based upon a similar sensor architecture. The algorithm presented will significantly enhance the feasibility of our multiple-camera architecture for both digital radiography and mammography. We believe that such a methodology will enhance diagnostic accuracy at a moderate cost when compared with systems of similar imaging resolution.
A new methodology is proposed for the delineation of anatomical or physiological target regions utilizing a rich set of multimodality imagery. The proposed technique is unique in that it allows the integration of anatomical cues from transmission tomography (MRI/CT) with more fuzzy, physiological information derived from emission tomography (PET/SPECT). This approach allows for the delineation of regions of homogeneous tracer uptake constrained by the boundaries of anatomical structures, and vice versa. An extension of deformable model segmentation techniques is presented which integrates both region and edge information from registered scans in a competitive manner. The proposed technique has been implemented for two-dimensional deformable models. Results are presented for the cases of unimodal MRI in which region and edge information is integrated, and for a multimodal case employing a registered MRI and PET scan.
A pre-production prototype of a low-cost, portable, compact digital radiographic imaging device which replaces current film-based systems has been constructed and tested. It is currently being readied for full utilization in field hospitals, where immediate verification of the results is essential. In this pre-production unit, image acquisition is performed by a 3 by 4 matrix of charge-coupled-device (CCD) imaging sensors which view the output of a standard x-ray scintillation screen via an off-the-shelf optical system. The use of multiple, moderately priced CCD sensors results in a high-resolution system with a low cost of production relative to other digital imaging systems of comparable resolution. The fields of view of the CCDs are purposefully overlapped so as to facilitate image reconstruction. The acquisition of each radiographic image formed on a scintillation screen results in the production of twelve sub-images. A software algorithm is employed to detect the regions of overlap and create a single continuous digital radiograph from the raw CCD data. Software methods are utilized to correct for barrel distortion effects that are caused by the use of low-cost lens components.
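The overlap-detection step can be illustrated with a 1-D normalized cross-correlation search over candidate overlap widths between two horizontally adjacent sub-images. This is a simplified sketch; the actual algorithm must also cope with vertical offsets and lens distortion:

```python
import numpy as np

def find_overlap_shift(left, right, max_overlap):
    """Estimate the horizontal overlap (in columns) between two adjacent
    sub-images by maximizing normalized cross-correlation between the
    right strip of `left` and the left strip of `right` (generic sketch)."""
    best_w, best_score = 1, -np.inf
    for w in range(1, max_overlap + 1):
        a = left[:, -w:].astype(float).ravel()
        b = right[:, :w].astype(float).ravel()
        a = a - a.mean()
        b = b - b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom == 0:
            continue  # flat strips carry no alignment information
        score = float(a @ b) / denom
        if score > best_score:
            best_w, best_score = w, score
    return best_w
```

Once the overlap width is known, the redundant columns can be dropped or blended to form the single continuous radiograph.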
This paper proposes an approach to advance the utility of physical modeling techniques for medical applications by correlating finite-element-based models with the mechanical anatomy characteristic of a clinical patient. A methodology is presented to model the patient-specific mechanical response of brain tissue in vivo. The resultant model is parameterized in terms of clinical CT and MRI imaging sequences acquired for each patient. Applications of the proposed technique to the areas of brain tumor growth modeling and predicting tissue shifts during stereotactic neurosurgery are described. Results are presented for an implementation of our approach to the problem of predictive brain tumor modeling.
Radiation therapy is a treatment modality which seeks to deliver radiation energy to a localized site within a patient in order to destroy a malignant tumor. The nature of radiotherapy results in dual, conflicting treatment goals: (1) the ability to deliver sufficient energy to a site so as to destroy the growth and, (2) sufficient localization of the energy to minimize damage to surrounding healthy tissue. One of the most important aspects of radiation dose treatment planning is the accurate localization of tumor masses. In order for a course of radiation therapy to be successful, the treatment volume must encompass the entire malignant process. Accordingly, the treatment volume must include the primary tumor of interest, as well as the direct and indirect course of the cancer's metastasis. Clinical results have demonstrated that a patient's tolerance to a given dose of radiation decreases as the volume exposed is increased. Therefore, improvements in tumor localization will provide the maximum amount of tissue sparing to the patient while encompassing the necessary target volume. An improved methodology is presented for the localization of tumors. This approach focuses on the integration of MRI and CT imaging data towards the generation of a mathematically optimal tumor boundary. The solution to this problem is formulated within a framework integrating concepts from the fields of deformable modeling, fuzzy logic, and data fusion. Fuzzy edges derived from CT and MR are combined to form an integrated edge map, which subsequently guides the 'growth' of a deformable tumor model. The fusion algorithm yields tumor contours which may be employed directly in the radiation therapy treatment planning process. Results are presented for the case of a phantom data set with a simulated implanted tumor, as well as for an actual patient.
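The combination of fuzzy edges into an integrated edge map can be illustrated with a standard fuzzy-OR (algebraic sum) of the two membership maps. The paper's actual fusion rule, and its coupling to the deformable tumor model, are richer than this sketch:

```python
import numpy as np

def fuse_fuzzy_edges(edge_ct, edge_mr):
    """Fuse two fuzzy edge-membership maps (values in [0, 1]) with the
    algebraic-sum fuzzy OR: a pixel is an edge if either modality says so.
    Simplified illustration, not the paper's exact fusion operator."""
    e_ct = np.clip(np.asarray(edge_ct, dtype=float), 0.0, 1.0)
    e_mr = np.clip(np.asarray(edge_mr, dtype=float), 0.0, 1.0)
    # Algebraic sum: stays in [0, 1] and reinforces agreeing evidence.
    return e_ct + e_mr - e_ct * e_mr
```

The fused map preserves strong evidence from either modality while boosting pixels where CT and MR agree, which is the behavior a deformable model needs from its external edge term.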
A high-resolution, portable, digital x-ray imaging device which replaces current film-based systems has been developed. The system is intended to be used in field hospitals where on-line verification is required during treatment. Image acquisition is performed by a 3 by 4 matrix of charge-coupled-device (CCD) imaging sensors which view the output of a standard x-ray scintillation screen via an off-the-shelf optical system. The use of multiple, moderately priced CCD units results in a high-resolution system with a low cost of production relative to other digital imaging systems of comparable resolution. The fields of view of each CCD are purposefully overlapped so as to facilitate image reconstruction. The acquisition of each radiographic image formed on a scintillation screen results in the production of twelve sub-images. A software algorithm is employed to detect the regions of overlap and create a single, continuous digital radiograph from the raw CCD data. Software methods are utilized to correct for barrel distortion effects that are caused by the use of low-cost lens components.
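Barrel distortion is commonly corrected with a low-order radial model. The single-coefficient form below is a generic sketch; the model order and coefficients appropriate for the actual lens components are assumptions here:

```python
import numpy as np

def undistort_points(pts, center, k1):
    """Correct simple barrel distortion with the one-parameter radial
    model x_u = c + (x_d - c) * (1 + k1 * r^2), where r is the distance
    of the distorted point x_d from the distortion center c.
    Generic sketch; not the device's calibrated model."""
    pts = np.asarray(pts, dtype=float)
    c = np.asarray(center, dtype=float)
    d = pts - c
    r2 = np.sum(d * d, axis=-1, keepdims=True)
    # Points far from the center are pushed outward to undo the barrel shape.
    return c + d * (1.0 + k1 * r2)
```

In a full pipeline the coefficient `k1` (and the distortion center) would be recovered from calibration targets such as the screens described above, and the correction applied before the sub-images are stitched.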
A common medical diagnostic problem is the determination of physiological function. MRI and CT are somewhat dissimilar but complementary imaging technologies. While CT provides excellent information regarding internal bony structures, MRI has proven to be superior in the production of high-contrast images of soft tissue. The integration of these two modalities will improve solutions to problems which require highly accurate mappings of anatomical features. PET imagery presents an accurate view of physiological function but little anatomical information. The ability to integrate quantitative information from these complementary modalities will result in improved medical analysis and subsequent improvements in patient care. We consider the problem of multimodality data fusion as an extension of regularization theory. Directionally controlled continuity stabilizers are utilized in the reconstruction process. The fusion method presented in this paper can deal with surface discontinuities of an arbitrary order, and can handle data sets belonging to different visual cues.
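Schematically, such a regularization-based fusion can be written as the minimization of an energy balancing fidelity to each modality's data against a continuity penalty. The isotropic form below is a simplification; the paper's directionally controlled stabilizers generalize the scalar weight $w(x)$:

```latex
E(f) \;=\; \sum_{m} \lambda_m \int \bigl( f(x) - d_m(x) \bigr)^2 \, dx
\;+\; \int w(x) \, \lVert \nabla f(x) \rVert^2 \, dx
```

where $f$ is the fused surface, $d_m$ the data from modality $m$, $\lambda_m$ a per-modality confidence weight, and $w(x)$ a spatially varying continuity weight that is reduced across detected discontinuities so that genuine surface breaks are not smoothed away.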