We present a method for incorporating nonlinear shape prior constraints into the segmentation of different anatomical structures in medical images. Kernel space density estimation (KSDE) is used to derive the nonlinear shape statistics, enabling a single model to be built for a class of objects with nonlinearly varying shapes. The object contour is coerced by image-based energy into the correct shape sub-distribution (e.g., left or right lung), <i>without</i> the need for model selection. In contrast to an earlier algorithm that uses a local gradient-descent search (susceptible to local minima), we propose an algorithm that iterates between dynamic programming (DP) and shape regularization. DP is capable of finding an optimal contour in the search space that maximizes a cost function related to the difference between the interior and exterior of the object. To enforce the nonlinear shape prior, we propose two shape regularization methods: global and local regularization. Global regularization is applied after each DP search; it moves the entire shape vector in the shape space, in a gradient-descent fashion, toward the probable shapes learned from training. The regularized shape is used as the starting shape for the next iteration. Local regularization is accomplished by modifying the search space of the DP so that it allows only a limited amount of local deformation from the starting shape. Both regularization methods ensure consistency between the resulting shape and the training shapes, while still preserving DP's ability to search over a large range and avoid local minima. Our algorithm was applied to two different segmentation tasks for radiographic images: lung field and clavicle segmentation.
Both applications have shown that our method is effective and versatile in segmenting various anatomical structures under prior shape constraints; and it is robust to noise and local minima caused by clutter (e.g., blood vessels) and other similar structures (e.g., ribs). We believe that the proposed algorithm represents a major step in the paradigm shift to object segmentation under nonlinear shape constraints.
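The global regularization step described above can be illustrated as gradient ascent on a Gaussian kernel density estimate over training shape vectors, which reduces to a mean-shift-style update. This is a minimal sketch under assumed choices (an isotropic kernel, a fixed bandwidth, and a fixed step count), not the paper's implementation:

```python
import numpy as np

def regularize_shape(shape, train_shapes, sigma=1.0, n_steps=5):
    """Move a shape vector toward probable shapes under a Gaussian KDE.

    Each mean-shift step replaces the shape with a kernel-weighted
    average of the training shapes, which ascends the estimated density.
    """
    x = np.asarray(shape, dtype=float)
    train = np.asarray(train_shapes, dtype=float)
    for _ in range(n_steps):
        d2 = np.sum((train - x) ** 2, axis=1)           # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))            # Gaussian kernel weights
        x = (w[:, None] * train).sum(axis=0) / w.sum()  # mean-shift update
    return x
```

After a few updates, a shape far from the training distribution is pulled into the cluster of plausible shapes, which then serves as the starting shape for the next DP search.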
In this paper we present a robust and automated algorithm to segment lung nodules in a three-dimensional (3D) computed tomography (CT) volume dataset. The nodule is segmented on a slice-by-slice basis: we first process each CT slice separately to extract a two-dimensional (2D) contour of the nodule, and the 2D contours are then stacked together to form the whole 3D surface. The extracted 2D contours are optimal, as we use a dynamic-programming-based optimization algorithm. To extract each 2D contour, we apply a shape-based constraint. Given a physician-specified point on the nodule, we grow a circle that provides a rough initialization of the nodule, from which our dynamic-programming-based algorithm estimates the optimal contour. Because a nodule can be calcified, we pre-process a small region of interest (ROI) around the physician-selected point on the nodule boundary using an Expectation-Maximization (EM) based algorithm to classify and remove calcification. Our proposed approach can be used consistently and robustly to segment not only solitary nodules but also nodules attached to lung walls and vessels.
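The calcification pre-processing step can be illustrated by fitting a two-component Gaussian mixture to ROI intensities with EM, then treating voxels assigned to the brighter component as calcification. This is a sketch only; the 1D intensity model, the deterministic initialization, and the iteration count are simplifying assumptions:

```python
import numpy as np

def em_two_gaussians(x, n_iter=50):
    """Fit a two-component 1D Gaussian mixture to intensities via EM.

    Returns (means, variances, mixing weights); in this sketch, the
    component with the larger mean would correspond to calcified voxels.
    """
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])      # crude but deterministic init
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: per-sample responsibilities for each component
        pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = pi * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        var = np.maximum(var, 1e-6)        # guard against collapse
        pi = nk / len(x)
    return mu, var, pi
```

Thresholding on the posterior of the brighter component then masks out calcified voxels before contour extraction.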
This paper is concerned with estimating the probability density function of human skin color using a finite Gaussian mixture model whose parameters are estimated through the EM algorithm. Hawkins' statistical test for the normality and homoscedasticity (common covariance matrix) of the estimated Gaussian mixture models is performed, and McLachlan's bootstrap method is used to test the number of components in the mixture. Experimental results show that the estimated Gaussian mixture model fits skin images from a large database well. Applications of the estimated density function to image and video databases are presented.
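Once such a mixture has been fitted, skin detection reduces to evaluating the mixture density at a pixel's color and thresholding it. A minimal evaluation routine follows; the parameter values in the usage are illustrative, not the paper's fitted model:

```python
import numpy as np

def mixture_density(x, weights, means, covs):
    """Evaluate a d-dimensional Gaussian mixture density at point x."""
    x = np.asarray(x, dtype=float)
    d = x.size
    p = 0.0
    for w, mu, cov in zip(weights, means, covs):
        cov = np.asarray(cov, dtype=float)
        diff = x - mu
        norm = 1.0 / np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
        p += w * norm * np.exp(-0.5 * diff @ np.linalg.inv(cov) @ diff)
    return p

def is_skin(color, weights, means, covs, thresh):
    """Classify a color as skin if its mixture density exceeds a threshold."""
    return mixture_density(color, weights, means, covs) > thresh
```

For example, with a single component centered at a typical skin chromaticity, colors near the mean score a high density and distant colors score near zero, so a single threshold separates the two.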
This paper describes a method for obtaining a composite focused image from a monocular image sequence. The image sequence is obtained using a novel non-frontal camera that has sensor elements at different distances from the lens. The paper first describes the motivation behind the non-frontal camera, followed by a description of an algorithm to obtain a focused image of a large scene, where a large scene is one that is both deep and wide (panoramic). Consequently, the camera has to be panned in order to image all objects and surfaces of interest. The described algorithm integrates panning with the generation of focused images. Results of experiments to generate extended-depth-of-field images of wide scenes are also shown.
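The core of compositing differently focused frames can be sketched with a per-pixel focus measure that selects, at every pixel, the frame that is sharpest there. The Laplacian-magnitude measure and the function names here are assumptions for illustration, not the paper's method:

```python
import numpy as np

def laplacian_energy(img):
    """Simple focus measure: magnitude of a discrete Laplacian."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return np.abs(lap)

def fuse_focus(stack):
    """Fuse a stack of frames (n, H, W) by per-pixel focus selection."""
    measures = np.stack([laplacian_energy(f) for f in stack])
    idx = measures.argmax(axis=0)          # index of sharpest frame per pixel
    h, w = stack.shape[1:]
    return stack[idx, np.arange(h)[:, None], np.arange(w)[None, :]]
```

In practice the selection map would be smoothed before compositing to avoid speckle at focus boundaries; this sketch omits that step.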
Proc. SPIE 1964, Applications of Artificial Intelligence 1993: Machine Vision and Robotics
KEYWORDS: Signal to noise ratio, Detection and tracking algorithms, Sensors, Image processing, Feature extraction, Signal processing, Distance measurement, Artificial intelligence, Signal detection, Evolutionary algorithms
General aspects of feature extraction and matching are addressed, including optimality principles, similarity measures, constraints, and heuristics. The common characteristics of feature extraction and matching are summarized, showing that both can be considered special cases of signal detection. Existing signal detection theories, however, do not solve these problems readily; a general formulation of feature extraction and matching as a signal detection problem is therefore needed, and one is presented here. This formulation treats feature extraction and matching as similar, successive processes, integrating the two into an automatic system for image matching or object recognition. Guidelines are derived for designing detection or matching algorithms for arbitrary image features or patterns that can be easily reconfigured for many practical applications. Typical methods, together with experiments on real image data, demonstrate the strong performance of the methods.
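To make the signal-detection view of matching concrete, here is a minimal sketch in which normalized cross-correlation serves as the detection statistic and a match is declared where the statistic exceeds a threshold. The exhaustive search and the function names are assumptions for illustration, not the paper's algorithms:

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between two equal-sized arrays."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def detect(image, template, thresh=0.9):
    """Score every placement of the template; report the best as a detection."""
    th, tw = template.shape
    best_score, best_loc = -1.0, None
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            s = ncc(image[i:i + th, j:j + tw], template)
            if s > best_score:
                best_score, best_loc = s, (i, j)
    return best_loc, best_score, best_score > thresh
```

The same statistic-plus-threshold structure applies whether the "signal" is a template to be matched or a feature pattern to be extracted, which is what lets the two processes share one formulation.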
This paper describes an active vision system which employs two high-resolution cameras for image acquisition. The system is capable of automatically directing movements of the cameras so that camera positioning and image acquisition are tightly coupled with visual processing. The system was developed as a research tool and is largely based on off-the-shelf components. A central workstation controls imaging parameters, which include five degrees of freedom for camera positioning (tilt, pan, translation, and independent vergence) and six degrees of freedom for the control of two motorized lenses (focus, aperture, and zoom). This paper is primarily concerned with describing the hardware of the system, the imaging model, and the calibration method employed. A brief description of system software is also given.
Three-dimensional (3D) position estimation using a single passive sensor, particularly vision, has frequently suffered from unreliability and has involved complex processing methods. Past research has combined vision with other, active sensors, with the emphasis on data fusion. This paper attempts to integrate multiple passive 3D cues - camera focus, camera vergence, and stereo disparity - using a single sensor. We argue that, in the active vision paradigm, an estimate of position is obtained in the process of fixation, in which the imaging parameters are dynamically controlled to direct the attention of the imaging system at the point of interest. Fixation involves integrating the passive cues in a mutually consistent way in order to overcome the deficiencies of any individual cue and to reduce the complexity of processing. Taking their reliabilities into account, the individual position estimates from the different cues are combined to form a final overall estimate.
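The reliability-weighted combination of cues can be illustrated with inverse-variance fusion of independent estimates, a standard scheme shown here as an assumption about the form of the combination rather than the paper's exact method:

```python
def fuse_estimates(estimates, variances):
    """Combine independent estimates, weighting each by 1/variance.

    Returns the fused estimate and its (reduced) variance; less reliable
    cues, i.e. those with larger variance, contribute less.
    """
    weights = [1.0 / v for v in variances]
    fused = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var
```

For example, fusing depth estimates from focus, vergence, and stereo with variances of 0.09, 0.04, and 0.01 lets the most reliable cue (stereo) dominate, while the fused variance drops below that of any single cue.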