This paper presents a knowledge-based method for automatic reconstruction and recognition of pulmonary blood vessels from chest x-ray CT images with 10-mm slice thickness. The system has four main stages: (1) automatic extraction and segmentation of blood vessel components from each 2-D image, (2) analysis of these components, (3) a search for points connecting blood vessel segments in different CT slices, using a knowledge base for 3-D reconstruction, and (4) object manipulation and display. The authors also describe a method of representing 3-D anatomical knowledge of the pulmonary blood vessel structure. The edges of blood vessels in chest x-ray images are unclear, in contrast to those in angiograms. Each CT slice has thickness, and blood vessels are slender, so a simple graphical display, which can be used for bone tissues from CT images, is not sufficient for pulmonary blood vessels. It is therefore necessary to use anatomical knowledge to track the blood vessel lines in 3-D space. Experimental results using actual images of a normal adult male have shown that utilizing anatomical information improves the efficiency and precision of processing steps such as blood vessel extraction and the search for connecting points.
This paper reviews recent results in cone-beam tomography to the extent possible without resorting to mathematical equations, and discusses their implications. A review of theory is given, and the 'completeness condition' for the data collection geometries is discussed. Consequences of the completeness condition are discussed, and three novel reconstruction methods are described.
The resolution of Computerized Tomographic (CT) images is limited due to the bandlimited reconstruction process and the use of smoothing windows which eliminate high spatial frequencies from the image. A new method is presented which restores these missing spectral components and thus increases the resulting spatial resolution of the image. The new method, based on the row action projection (RAP) algorithm, is computationally efficient and facilitates local adaptation of the projection operators. The local mean value as well as minimum and maximum bounds are used as constraints. The method is proposed to provide a zoom-in capability which yields a high resolution estimate of a specified region of the image. The zoom-in feature could be of great utility in medical and commercial applications of tomographic image reconstruction. Computer simulations demonstrate the new method to be very effective in recovering high order spectral components of designated regions of the reconstructed image.
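The constraint-projection idea behind this kind of restoration can be sketched as follows. This is a minimal illustration, not the authors' RAP implementation: the estimate is alternately projected onto a min/max bound constraint and a block-wise local-mean constraint, with the block size and iteration count chosen arbitrarily.

```python
import numpy as np

def constrained_restore(estimate, lo, hi, block_mean, block=4, iters=50):
    """Alternately project the estimate onto two constraint sets:
    (1) pixel values bounded in [lo, hi], and (2) each block x block
    region keeping its measured local mean. A hypothetical stand-in
    for the paper's locally adapted projection operators."""
    x = np.asarray(estimate, dtype=float).copy()
    h, w = x.shape
    for _ in range(iters):
        x = np.clip(x, lo, hi)                      # min/max bound constraint
        for i in range(0, h, block):                # local mean constraint
            for j in range(0, w, block):
                patch = x[i:i + block, j:j + block]
                patch += block_mean[i // block, j // block] - patch.mean()
    return x
```

For a zoom-in capability, the same iteration would simply be restricted to the designated region of interest rather than the full image.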
Images obtained from the scanning electrochemical microscope (SECM) have been restored by digital computer techniques. SECM images are inherently blurred by the diffusion process that occurs in the oxidation-reduction reaction at the probe tip. Restoration of an image of the bottom surface of a Ligustrum sinensis leaf as well as the image of a conductive inverse indium tin oxide grid structure is described here. The authors present two techniques for restoring SECM images. The first is an inverse filtering technique and the second is a smoothed Taylor series approximation of the unblurred image via a modification of a procedure given in Rosenfeld and Kak.
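A frequency-domain inverse filter of the kind described can be sketched with NumPy. The small constant `eps` is an added assumption (a regularizer that keeps near-zero PSF frequencies from amplifying noise); the abstract does not state how the authors conditioned their inverse filter.

```python
import numpy as np

def inverse_filter(blurred, psf, eps=1e-2):
    """Regularized inverse filtering: divide the blurred spectrum by the
    PSF spectrum, damped by eps where |H| is small. Assumes circular
    convolution (periodic boundary) blurring."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(F))
```

With eps near zero and a well-conditioned PSF this inverts the blur almost exactly; larger eps trades resolution for noise suppression.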
In this study, the authors evaluate quantitatively the performance of the Expectation Maximization (EM) algorithm as a restoration technique for radiographic images. The 'perceived' signal-to-noise ratios (SNRs) of simple radiographic patterns processed by the EM algorithm are calculated on the basis of a statistical decision theory model that includes both the observer's visual response function and a noise component internal to the eye-brain system. The relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to quantitatively compare the effects of the EM algorithm to two popular image enhancement techniques: contrast enhancement (windowing) and unsharp mask filtering.
In order to extract the protein spots in a gel image from the uneven background, which has sharp edges, e.g., lines and streaks in some areas, the authors have developed a detection algorithm in which the median, FMH, or morphological filter is taken as the smoother to estimate the varying background. The performance comparison shows that FMH detectors provide a simple implementation as well as a good compromise between streak removal and spot distortion.
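The background-subtraction step can be sketched with a plain median smoother (the FMH and morphological variants would slot into the same role); the window size and threshold below are illustrative choices, not the authors' parameters, and spots are assumed darker than the gel background.

```python
import numpy as np

def median_background(img, size=5):
    """Estimate a slowly varying background by sliding a size x size
    median window over the image (edge-padded)."""
    h, w = img.shape
    r = size // 2
    pad = np.pad(np.asarray(img, dtype=float), r, mode='edge')
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(pad[i:i + size, j:j + size])
    return out

def detect_spots(img, size=5, thresh=10.0):
    """Flag pixels falling well below the median-estimated background."""
    return (median_background(img, size) - img) > thresh
```

Because the median is insensitive to narrow bright or dark structures, streaks thinner than half the window largely vanish from the background estimate.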
Two recent Ph.D. theses by Lawrence Firestone and Norman Link, Carnegie Mellon University, treated subtyping follicular lymphomas (cancers of the human lymph node) from digital images of tissue sections mounted on microscope slides. To make the subtype differentiation by automated image analysis they examined many different features relating to spatial spectra, texture and texture energy, and three-dimensional morphology. The latter technique considers any gray-level image as a three-dimensional surface and uses three-dimensional erosions and dilations of this surface to extract features. Of all the measures tested, the two best are both related to three-dimensional mathematical morphology. One of these was derived from residue-producing (Xi)-filters and is discussed in this paper. The selected (Xi)-filter measures surface voxels whose values are changed by the filter but whose neighboring voxel values are not. Such filters have been found to be half-octave filters with steep cutoffs (60 dB per octave) exhibiting no phase shifts. These unusual filters are exactly matched to the subtype analysis of lymph-node cancers. (Xi)-filters are also valuable in other medical imaging applications; one of these, three-dimensional interpolation from serial sections, is also illustrated in this paper.
The authors propose a nonstationary Markovian model with deterministic relaxation for segmenting the hyper-attenuated areas in pulmonary computerized tomography. Their contribution lies in the definition of a local energy as the weighted combination of four components: density function, the Geman-Graffigne gradient function, the local maxima function concerning cliques of order one and the attraction-repulsion function as an Ising model dealing with cliques of order two. This potential is deduced from pre-processing and a priori knowledge. Spatial interactions are modeled on a hexagonal lattice. The 6-connectivity neighborhood system is defined by morphological dilations. An important aspect of this model is that it considers, in addition to the two classes normally used (hyper-attenuated and non-hyper-attenuated), a third class for non-identifiable pixels. Results of this automatic segmentation perfectly match the areas interactively selected by the radiologists.
Interior (luminal) diameter of blood vessels directly controls the amount of blood supplied to a given tissue and is an important parameter for the study of microcirculation. Microvessels are visualized using videomicroscopy, and their diameter can be measured either on-line or off-line. Previous measurements of these vessels have been performed with calipers applied directly to a calibrated video monitor, or with a video caliper manipulated by the investigator. This paper describes the initial work in the development of a new technique for automatically detecting the vessel walls using the gray-level information from these video images. This is the first step toward measuring the vessel diameters. Texture measures are utilized in segmenting the blood vessels via digital image processing (DIP) without user intervention.
Bone erosion presenting as subperiosteal resorption on the phalanges of the hand is an early manifestation of hyperparathyroidism associated with chronic renal failure. At present, the diagnosis is made by trained radiologists through visual inspection of hand radiographs. In this study, a neural network is being developed to assess the feasibility of computer-aided detection of these changes. A two-pass approach is adopted. The digitized image is first compressed by a Laplacian pyramid compact code. The first neural network locates the region of interest using vertical projections along the phalanges and then the horizontal projections across the phalanges. A second neural network is used to classify texture variations of trabecular patterns in the region, using a co-occurrence matrix as the input to a two-dimensional sensor layer, to detect the degree of associated osteopenia. Preliminary results demonstrate the feasibility of this approach.
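The Laplacian pyramid compact code can be illustrated with nearest-neighbour down/upsampling standing in for the usual Gaussian filtering — a simplifying assumption, but one that still yields exact reconstruction, which is the property the compact code relies on.

```python
import numpy as np

def laplacian_pyramid(img, levels=3):
    """Each level stores the difference between the image and an
    upsampled copy of its 2x-downsampled version; the final entry
    is the coarsest (lowpass) image."""
    pyr = []
    cur = np.asarray(img, dtype=float)
    for _ in range(levels):
        small = cur[::2, ::2]
        up = np.repeat(np.repeat(small, 2, axis=0),
                       2, axis=1)[:cur.shape[0], :cur.shape[1]]
        pyr.append(cur - up)
        cur = small
    pyr.append(cur)
    return pyr

def reconstruct(pyr):
    """Invert the pyramid by upsampling and adding back each residual."""
    cur = pyr[-1]
    for lap in reversed(pyr[:-1]):
        up = np.repeat(np.repeat(cur, 2, axis=0),
                       2, axis=1)[:lap.shape[0], :lap.shape[1]]
        cur = up + lap
    return cur
```

Compression comes from the residual levels being mostly near zero and hence cheap to encode; the network here would consume the compact representation rather than the raw radiograph.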
This paper describes a system for automatic detection of lung nodules by means of digital image-processing techniques. The objective of the system is to help chest physicians to improve their accuracy of detection. For detecting lung nodules in chest x-ray images, the authors developed the directional contrast filter for nodules (DCF-N), which consists of three concentric circles. The DCF-N is effective for detecting patterns with obscure peripheries, such as lung cancer. The filter was evaluated using 192 lung cancer cases, and a detection ratio of 88.5% was obtained, along with a number of false-positive foci. The authors also developed a rule-based system for eliminating these false-positive foci. The rule-base contains six rules that were heuristically developed according to a common method of diagnosis used by chest physicians. By using the rule-base, the authors succeeded in eliminating 63.3% of false-positive foci without increasing the number of false-negatives significantly (5.0%). In addition to the rule-base, a logic was developed for discriminating between lung nodules and false-positive foci by using nine measured values for each shadow. The discrimination was tested by using 192 lung cancer cases and 74 normal control cases. As a result, figures of 92.2% and 71.6% were obtained for the sensitivity and specificity of the system, respectively. To evaluate the logic by using external data, 30 cases of lung cancer and 78 control cases were collected. As a result of the evaluation, the authors obtained figures of 71.3%, 76.7%, and 69.2% for the accuracy, sensitivity, and specificity of the system, respectively.
A novel graph theoretic approach to image segmentation is presented, and its application to tissue segmentation in MR images of the human brain is demonstrated. An undirected adjacency graph G is used to represent the image with each vertex of G corresponding to a homogeneous component of the image. Each component may be a single pixel or a connected region which, under a suitable criterion, is homogeneous. All pairs of nodes corresponding to spatially connected pixels or regions in the image are linked by arcs in G. A flow capacity, assigned to each arc, is chosen to reflect the probability that the pair of linked vertices belong to the same region or tissue type. The segmentation is achieved through clustering vertices in G by removing arcs of G to form mutually exclusive subgraphs. The subgraphs formed by the clustering algorithm are optimal in the sense that the largest inter-subgraph maximum flow is minimized. Each of the resulting subgraphs then represents a homogeneous region of the image. Using a suitable choice of the arc capacity function, this approach can be used to segment the image either by searching for statistically homogeneous regions (texture segmentation) or by searching for closed region boundaries (edge detection). A direct implementation of the new segmentation algorithm requires the construction of a flow equivalent spanning tree for G. As the size of the graph G increases, constructing an equivalent tree becomes very inefficient. In order to overcome this problem, an algorithm for hierarchically constructing and partitioning a partially equivalent tree of much reduced size has been developed. This hierarchical algorithm results in an optimal solution equivalent to that obtained by partitioning the complete equivalent tree of G.
This paper presents an objective somatotyping method based upon a three-dimensional Fourier descriptor (FD3) as an invariant body shape descriptor. Human body shape was assumed to be a stack of cross-sectional contours, and shape features were extracted based upon the FD3. The FD3 represents the shape features in the spatial frequency domain. Because global shape features are concentrated in the lower frequency terms, it is possible to classify the body shape efficiently. Trunks of forty-eight male subjects were measured using laser range finding and image processing techniques, and FD3s were calculated from their trunk contours and classified with a hierarchical clustering algorithm using a Euclidean distance metric. Clustering results were compared with classical somatotyping and showed good correlation with visual classification.
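The core Fourier descriptor idea — treat contour points as a complex sequence, take the DFT, and keep the low-frequency terms — can be sketched as below. Zeroing the DC term and normalizing by |c1| are the standard translation and scale invariances; this is a generic 2-D FD sketch, not necessarily the exact FD3 normalization the authors use.

```python
import numpy as np

def fourier_descriptors(contour, n_keep=8):
    """Invariant shape descriptor from a closed contour of (x, y) points.
    Global shape is concentrated in the low-frequency coefficients, so
    keeping the first few magnitudes suffices for coarse classification."""
    z = contour[:, 0] + 1j * contour[:, 1]  # points as complex numbers
    c = np.fft.fft(z)
    c[0] = 0.0                    # drop DC term: translation invariance
    c = c / np.abs(c[1])          # normalize: scale invariance (c1 != 0 assumed)
    return np.abs(c[1:n_keep + 1])  # magnitudes: rotation/start-point invariant
```

Two contours differing only by translation and uniform scaling then map to the same descriptor vector, which is what makes Euclidean-distance clustering on these features meaningful.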
In automatically analyzing brain structures from an MR image, the choice of low-level region extraction methods depends on the characteristics of both the target object and the surrounding anatomical structures in the image. The authors have experimented with local thresholding, global thresholding, and other techniques, using various types of MR images for extracting the major brain landmarks and different types of lesions. This paper describes specifically a local-binary thresholding method and a new global-multiple thresholding technique developed for MR image segmentation and analysis. The initial testing results on their segmentation performance are presented, followed by a comparative analysis of the two methods and their ability to extract different types of normal and abnormal brain structures -- the brain matter itself, tumors, regions of edema surrounding lesions, multiple sclerosis lesions, and the ventricles of the brain. The analysis and experimental results show that the global multiple thresholding techniques are more than adequate for extracting regions that correspond to the major brain structures, while local binary thresholding is helpful for more accurate delineation of small lesions such as those produced by MS, and for the precise refinement of lesion boundaries. The detection of other landmarks, such as the interhemispheric fissure, may require other techniques, such as line-fitting. These experiments have led to the formulation of a set of generic computer-based rules for selecting the appropriate segmentation packages for particular types of problems; these rules form the basis of an innovative knowledge-based, goal-directed biomedical image analysis framework now under development. The system will carry out the selection automatically for a given specific analysis task.
The ability of four methods to perform automatic texture discrimination of three cellular organelles (nucleus, mitochondria and lipid droplets) from autoradiographic images is investigated. The four methods studied are the first-order statistics of the gray-level histogram, the gray-level difference method, the gray-level run length method, and the spatial gray-level dependence method. The influence of parameters such as the number of features, the number of gray-level classes, and the orientation and step size of the analysis, as well as the effect of preprocessing the images by histogram equalization and image reduction, was also analyzed to optimize the performance of the methods. The nearest neighbor pattern recognition algorithm using the Mahalanobis distance was used to evaluate the performance of the methods. First, a training set of 30 samples per organelle was chosen to train the classifier and to select the best discriminant features. The probability of error was estimated with the leave-one-out method and the results are expressed as percentages of correct classifications. The study shows that features extracted using the spatial gray-level dependence method were the most discriminant. The best feature set was then applied to a test population of 734 cellular organelles to differentiate the three classes. Correct classifications occurred in 95% of cases, which indicates that it is possible to achieve a semi-automatic analysis of autoradiographic images.
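A minimal sketch of the spatial gray-level dependence (co-occurrence) method for one horizontal displacement, with two representative Haralick-style features. The quantization scheme and feature choice here are illustrative, not the paper's exact configuration.

```python
import numpy as np

def glcm_features(img, step=1, levels=8):
    """Build a gray-level co-occurrence matrix for pixel pairs separated
    horizontally by `step`, then derive contrast and energy features."""
    img = np.asarray(img, dtype=float)
    q = np.clip((img * levels / (img.max() + 1e-9)).astype(int), 0, levels - 1)
    glcm = np.zeros((levels, levels))
    a, b = q[:, :-step].ravel(), q[:, step:].ravel()
    np.add.at(glcm, (a, b), 1)       # count co-occurring gray-level pairs
    glcm /= glcm.sum()               # normalize to a joint probability
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()  # high for coarse/busy texture
    energy = (glcm ** 2).sum()              # high for uniform texture
    return contrast, energy
```

In practice the analysis would be repeated over several orientations and step sizes, which is exactly the parameter sweep the abstract describes.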
X-ray diagnosis of destructive periodontal disease requires assessing serial radiographs by an expert to determine the change in the distance between the cemento-enamel junction (CEJ) and the bone crest. To achieve this without the subjectivity of a human expert, a knowledge based system is proposed to automatically locate the two landmarks, which are the CEJ and the level of the alveolar crest at its junction with the periodontal ligament space. This work is part of an ongoing project to automatically measure the distance between the CEJ and the bone crest along a line parallel to the axis of the tooth. The approach presented in this paper is based on identifying a prominent feature such as the tooth boundary using local edge detection and edge thresholding to establish a reference, and then using model knowledge to process sub-regions in locating the landmarks. Segmentation techniques invoked around these regions consist of a neural-network-like hierarchical refinement scheme together with local gradient extraction, multilevel thresholding and ridge tracking. Recognition accuracy is further improved by first locating the easily identifiable parts of the bone surface and the interface between the enamel and the dentine and then extending these boundaries towards the periodontal ligament space and the tooth boundary respectively. The system is realized as a collection of tools (or knowledge sources) for pre-processing, segmentation, primary and secondary feature detection and a control structure based on the blackboard model to coordinate the activities of these tools.
The aim of the paper is to describe a decision support system operating in the area of capillaroscopic images. The system automatically assigns the analyzed capillaroscopic image to one of the following classes: normal, diabetic, and sclerodermic. The automatic morphometric analysis attempts to imitate the physician's behavior and requires the introduction of some particular features connected with the specific domain. These features allow a symbolic representation of the capillary by partitioning it into three components: apex, arteriolar, and venular. Each component is qualified by specific attributes which allow the shape evaluations necessary to discriminate among the classes of capillaries. The system is hierarchically organized in two levels. The first level is concerned with segmentation, after noise reduction and enhancement of the digitized image. This level uses a shell, developed and successfully tested on many heterogeneous classes of images. The second level is concerned with the effective classification of the previously processed image. It matches the visual data with a model constituted by a semantic network which embeds the geometric and structural a-priori knowledge of all kinds of capillaries. The system has been successfully used in experiments on images of nailfold capillaries of the human finger.
A novel method has been developed to quantitatively measure cell size from photomicrographs taken at regular intervals during freeze-thaw experiments on a cryomicroscope. The images of cells are easily outlined from pictures with poor signal-to-noise ratio and the sizes of cells are computed. The results have demonstrated that this method is particularly suited to the analysis of cell images taken under adverse conditions, such as poor focus or a very dusty environment, situations that occur frequently in cryomicroscopic measurement.
A method is described for the spatio-temporal filtering of digital angiographic image sequences corrupted by simulated quantum mottle. An x-ray dosage reduction in coronary imaging studies inevitably leads to the introduction of quantum mottle, a Poisson-distributed, signal-dependent noise that occurs as a result of statistical fluctuations in the arrival of photons at the image intensifier tube. Although spatial filtering of individual frames in the sequence is often performed to improve image quality, this technique does not utilize valuable information from temporal correlations between images. The spatio-temporal filter here estimates motion trajectories for individual pixels and then filters along the direction of motion. This method is different from temporal filtering techniques that do not use motion compensation, as the latter always blur the edges of the coronary arteries. Although the method is derived for the estimation of a single frame from two degraded frames of a sequence, it is easily generalized to multi-frame estimates. The performance of the filter is examined using real image sequences corrupted by quantum mottle.
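The core of motion-compensated temporal filtering — averaging each pixel with the pixel its motion trajectory points to in the next frame — can be sketched as below. Integer-valued flow vectors are assumed for simplicity; the paper's estimator would supply sub-pixel trajectories.

```python
import numpy as np

def motion_compensated_average(f1, f2, flow):
    """Average frame f1 with frame f2 sampled along per-pixel motion
    vectors (dy, dx in flow[..., 0] and flow[..., 1]). Filtering along
    the trajectory avoids blurring the edges of moving vessels, unlike
    a plain frame average."""
    h, w = f1.shape
    ys, xs = np.indices((h, w))
    y2 = np.clip(ys + flow[..., 0], 0, h - 1)
    x2 = np.clip(xs + flow[..., 1], 0, w - 1)
    return 0.5 * (f1 + f2[y2, x2])
```

For Poisson noise, averaging two independent samples along a correct trajectory roughly halves the noise variance while leaving the (motion-aligned) signal intact.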
This paper presents a new method for extracting local surface stretching from left ventricle (LV) cineangiography data. The algorithm is based on Gaussian curvature for surface stretching recovery under the more realistic conformal motion assumption. During conformal motion, surface stretching can vary over the surface patch. In particular, surface stretching can be approximated using linear or quadratic (or higher order) functions. Then, coefficients of the approximating function can be calculated and surface stretching computed from changes in surface curvature at corresponding points. For example, linear approximation requires three point correspondences (between consecutive time frames) within a small surface patch. The authors demonstrate the higher precision of the new approach (as compared to the homothetic assumption in the authors' earlier work) on simulated and real data of the left ventricle of the human heart. The data set was provided by Dr. Alistair Young of the University of Auckland, New Zealand, and consists of the tracked locations of eleven bifurcation points of the left coronary artery and the tracked locations of 292 vessel points for one cardiac cycle (60 frames/cycle).
Inferring dynamic behavior of the heart from its image sequences is a very important research area in biomedical engineering. It provides an invaluable tool for noninvasive evaluation of myocardial functions. This paper presents estimation algorithms for the analysis of heart motion and deformation over a cardiac cycle, as well as visualization techniques for the animation of the moving heart. The first part of the paper is devoted to the analysis of heart motion and deformation. The research is based on the general belief that the human heart undergoes both global motion and local deformation, and is conducted on angiographic data of the heart. The authors identify the global motion as the relative position and orientation change of the heart as a whole and estimate the motion parameters from the 3-D data of the bifurcation points. They also develop a recursive algorithm for estimating global motion and object shape in order to combat the biased distribution of the bifurcation points. Upon compensation for the global motion, a tensor analysis based approach is introduced to parameterize the deformation of each localized region. The estimated stretch tensors give the directions and magnitudes of extreme deformation for each localized region. In the second part of the paper, several visualization techniques are presented to vividly examine the spatial and time-varying nature of the heart. Animations of global motion compensation and local deformation evolution are generated using the original data and estimation results. A display showing the heart in slow motion is created by interpolating original and estimated data between image frames. Visualization operations such as camera and lighting manipulation, polygon outlining, color coding, and so on are applied to the data to reveal the complex nature of the beating heart.
An algorithm has been designed for automatic detection of the endocardiac wall in the left ventricle based on ultrasonic images. The algorithm uses a closed polygon as an initial estimate of the cardiac wall. A set of search lines, normal to the initial estimate, is spaced uniformly around the initial polygon. An ellipse in the center of the image with search lines covering the entire image is used if no a-priori information is available. The wall is detected by computing the global optimum over all closed curves that can be drawn by selecting one point on each consecutive search line. The wall is optimal in terms of a functional that favors curves with a high radial gradient in the image intensity function, but disfavors curves that require a substantial geometrical deformation of the initial estimate. The fact that the wall represents a global optimum in terms of a functional makes it possible to prove theoretical properties of the computed wall. The part of the functional that measures geometry assures that the wall will preserve the form of the initial estimate in regions where the intensity function gives no indication of the location of the wall. This property is very useful because poor regional signal quality is a typical degradation in ultrasonic images.
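Finding the global optimum over all curves that pick one candidate per search line is a dynamic-programming problem. A sketch follows, with a quadratic deformation penalty as one plausible choice of the geometric term, on an open chain of search lines (the paper's curves are closed, which needs an extra wrap-around constraint not shown here).

```python
import numpy as np

def optimal_contour(cost, smooth=1.0):
    """cost[t, k] is the data term (e.g. negated radial gradient) of
    candidate k on search line t; `smooth` weights the penalty for
    jumps between choices on consecutive lines. Returns the globally
    optimal choice per line via dynamic programming."""
    T, K = cost.shape
    D = cost[0].copy()                    # best cumulative cost per choice
    back = np.zeros((T, K), dtype=int)    # backpointers for path recovery
    idx = np.arange(K)
    for t in range(1, T):
        # trans[k_new, k_prev]: cost so far plus deformation penalty
        trans = D[None, :] + smooth * (idx[:, None] - idx[None, :]) ** 2
        back[t] = trans.argmin(axis=1)
        D = cost[t] + trans.min(axis=1)
    path = np.empty(T, dtype=int)
    path[-1] = D.argmin()
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path
```

Where the data term is flat (poor signal), the smoothness term dominates and the path stays close to the initial estimate, which mirrors the robustness property the abstract proves.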
This paper presents a computerized technique for quantitative analysis of the movement characteristics of spermatozoa. Stored video images of spermatozoa are digitized at a fixed time interval. The digital images are stored as a sequence of frames in a microcomputer. The analysis of the sequence comprises two main tasks: finding the location of the centroid for each sperm and tracking the centroids over the entire sequence. Information from the motion of each moving cell is used for tracking. Experimental results are presented to show the merits of the proposed algorithm for tracking.
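The frame-to-frame tracking step can be illustrated by greedy nearest-neighbour association of centroids — a simplified stand-in for the paper's motion-informed tracker, with an arbitrary gating distance.

```python
import numpy as np

def track_nearest(tracks, centroids, max_dist=20.0):
    """Extend each track (a list of (x, y) centroids) with the closest
    unassigned centroid detected in the new frame, provided it lies
    within max_dist; otherwise the track is left unextended."""
    used = set()
    for tr in tracks:
        last = np.asarray(tr[-1], dtype=float)
        best, best_d = None, max_dist
        for i, c in enumerate(centroids):
            d = np.linalg.norm(last - np.asarray(c, dtype=float))
            if i not in used and d < best_d:
                best, best_d = i, d
        if best is not None:
            tr.append(tuple(centroids[best]))
            used.add(best)
    return tracks
```

A motion-aware version would predict each cell's next position from its recent velocity and gate around the prediction instead of the last observed centroid.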
This paper presents a new method for tracking points on the left ventricle (LV) surface from volumetric cardiac images. If an object undergoes nonrigid motion, the standard motion parameters of translation and rotation are not sufficient to describe the object transformation. The authors define the local surface stretching as an additional motion parameter of nonrigid transformation. In homothetic motion, this parameter is constant at all points on the surface. In this work a new algorithm for tracking the LV surface through the heart cycle is presented. The authors utilize a small-motion assumption, hypothesize all possible correspondences, and compute curvature changes for each hypothesis. The error between the computed curvature changes and those predicted by the homothetic motion assumption is then calculated. The hypothesis with the smallest error gives point correspondences between consecutive time frames. The algorithm is demonstrated on simulated data, then applied to real LV data. The data set was provided by Dr. Eric Hoffman at the University of Pennsylvania Medical School and consists of 16 volumetric (128 by 128 by 118) images taken through the heart cycle.
The 3-D reconstruction of light microscopic objects has been extensively investigated in the past, but fundamental problems remain. The real object under observation does not normally have high transparency, and light transmission will be attenuated, introducing nonlinearities in the imaging process. A nonlinear approach to this problem has been developed, in which the authors regard the three-dimensional spatially resolved light absorption coefficient as the object information of interest. A nonlinear equation system modeling the imaging process is proposed.
The three-dimensional reconstruction of the optic zone of the cornea and the ocular crystalline lens has been accomplished using confocal microscopy and volume rendering computer techniques. A laser scanning confocal microscope was used in the reflected light mode to obtain the two-dimensional images from the cornea and the ocular lens of a freshly enucleated rabbit eye. The light source was an argon ion laser with a 488 nm wavelength. The microscope objective was a Leitz X25, NA 0.6 water immersion lens. The 400 micron thick cornea was optically sectioned into 133 three-micron sections. The semi-transparent cornea and the in-situ ocular lens were visualized as high resolution, high contrast two-dimensional images. The structures observed in the cornea include: superficial epithelial cells and their nuclei, basal epithelial cells and their 'beaded' cell borders, basal lamina, nerve plexus, nerve fibers, nuclei of stromal keratocytes, and endothelial cells. The structures observed in the in-situ ocular lens include: lens capsule, lens epithelial cells, and individual lens fibers. The three-dimensional data sets of the cornea and the ocular lens were reconstructed in the computer using volume rendering techniques. Stereo pairs were also created from the two-dimensional ocular images for visualization. This demonstration of the three-dimensional visualization of the intact, enucleated eye provides an important step toward quantitative three-dimensional morphometry of the eye. The important aspects of three-dimensional reconstruction are discussed.
A confocal image understanding system was developed which uses the blackboard model of problem solving to achieve computerized identification and characterization of confocal fluorescent images (serial optical sections). The system is capable of identifying a large percentage of structures (e.g., cell nucleus) in the presence of background noise and nonspecific staining of cellular structures. The blackboard architecture provides a convenient framework within which a combination of image processing techniques can be applied to successively refine the input image. The system is organized to find the surfaces of highly visible structures first, using simple image processing techniques, and then to adjust and fill in the missing areas of these object surfaces using external knowledge, and a number of more complex image processing techniques when necessary. As a result, the image analysis system is capable of obtaining morphometrical parameters such as surface area, volume and position of structures of interest automatically. In addition, the system is also used in the characterization of inertial fusion targets where the actual target geometry was checked against ideal parameters. The system provides a powerful tool in the fields of material science and biological research such as micro-structural characterization, morphogenesis, cell differentiation, tissue organization and embryo development.
A reflection differential interference contrast (DIC) system has been investigated as a means for imaging phase information in confocal microscopy. Interestingly, this method has an advantage over the split-detector method for differential phase contrast (and its corresponding confocal derivative) in that the background signal decays as the specimen surface is displaced away from the focal plane. Therefore, a series of DIC images from successive planes of focus can be used to produce easily visualized three-dimensional reconstructions.
Proc. SPIE 1450, "Three-dimensional image processing method to compensate for depth-dependent light attenuation in images from a confocal microscope" (1 July 1991); https://doi.org/10.1117/12.44307
When looking into the depth of a semitransparent specimen, using a confocal laser microscope working in the epifluorescence mode, it is often observed that the recorded images are darker the deeper the optical sections are located in the specimen. One reason for this is that light is absorbed in the specimen on its way to and from the section. A manual method to compensate for this darkening is to vary the electronic amplification at the recording. The appropriate amplification depends not only on the depth but also on the specimen, its shape and density, etc. Methods to replace the manual adjustments with computer methods, applied to stacks of uncompensated images recorded at different equidistant depths, have been suggested. A basic assumption is then that there are regions of the specimen that are homogeneous enough to serve as reference regions for the compensation. A key problem is to detect these regions. An interactive method to trace homogeneous regions in a stack of recorded images is described. It is also shown how image segmentation can be performed to extract such regions.
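Once a homogeneous reference region has been traced, the compensation itself reduces to rescaling each optical section so the reference mean stays constant with depth. A sketch, assuming the region is supplied as a boolean mask over each section:

```python
import numpy as np

def compensate_attenuation(stack, ref_region):
    """Rescale each section of a (z, y, x) image stack so the mean
    intensity inside the homogeneous reference region matches that of
    the top section, compensating depth-dependent darkening."""
    stack = np.asarray(stack, dtype=float).copy()
    target = stack[0][ref_region].mean()
    for z in range(1, stack.shape[0]):
        m = stack[z][ref_region].mean()
        if m > 0:
            stack[z] *= target / m
    return stack
```

This mirrors the manual gain adjustment described above, but derives the per-depth gain automatically from the traced reference regions.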
In this paper we consider various issues concerned with the production of 3D presentations from multislice biomedical images. Data storage schemes are briefly discussed with respect to compression and access efficiency. We then go on to describe the principles of surface rendering and illustrate with a combined wireframe and surface-shaded presentation of an example data set. Volume projection is then considered in some detail, giving a number of different but simple means of producing projection images, including binary, transparent and brightest point techniques. It is concluded that both surface and volume rendering offer advantages for visualisation and that future work based on combined techniques will be most useful, with an efficient means of extracting surface orientation from volumetric data being an immediate goal.
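The brightest point and transparent projections mentioned above reduce to simple reductions along the viewing axis; a minimal sketch, using a mean along the ray as one simple form of transparent projection:

```python
import numpy as np

def brightest_point_projection(volume, axis=0):
    """Brightest point (maximum intensity) projection: keep the largest
    voxel value along each ray through the volume."""
    return volume.max(axis=axis)

def transparent_projection(volume, axis=0):
    """Simple 'transparent' projection: average voxel values along each
    ray, so interior structure contributes to the image."""
    return volume.mean(axis=axis)
```

A binary projection would threshold the volume first and then take the maximum, flagging any ray that intersects the object.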
A computerized system was developed to carry out the clinical diagnosis of cutaneous melanoma. The main objective of the system is to produce a real-time first level diagnosis of skin lesion images acquired by a color TV camera. The algorithms are based on research activities started in 1985 at the National Cancer Institute of Milan. Color slides of skin lesions were used as a training field for studying and tuning the image analysis procedures. A prototype system was then developed to capture and analyze skin lesion images on a real-time basis. More than 200 images were acquired and the first level diagnosis output by the system was compared with the diagnosis of expert clinicians. The obtained results were judged very encouraging by the clinicians, and research is in progress to improve and refine the system. The diagnostic procedure is based on image processing and understanding techniques, automatic lesion contour recognition, feature extraction (feature components derived from lesion shape, color and texture) and computation of a malignancy index. The malignancy index depends on the lesion feature values and a thesaurus collecting the system knowledge; the histologic results of clinically diagnosed malignant lesions are used to upgrade the system knowledge.
There is a great deal of interest in automating the process of DNA (deoxyribonucleic acid) sequencing to support the analysis of genomic DNA, as in the Human and Mouse Genome projects. In one class of gel-based sequencing protocols, autoradiograph images are generated in the final step and usually require manual interpretation to reconstruct the DNA sequence represented by the image. The need to handle a large volume of sequence information necessitates automation of the manual autoradiograph reading step through image analysis, in order to reduce the length of time required to obtain sequence data and reduce transcription errors. Various adaptive image enhancement, segmentation and alignment methods were applied to autoradiograph images. The methods are adaptive to the local characteristics of the image, such as noise, background signal, or presence of edges. Once the two-dimensional data is converted to a set of aligned one-dimensional profiles, waveform analysis is used to determine the location of each band, which represents one nucleotide in the sequence. Different classification strategies, including a rule-based approach, are investigated to map the profile signals, augmented with the original two-dimensional image data as necessary, to textual DNA sequence information.
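The band-location step on an aligned one-dimensional lane profile can be sketched as simple local-maximum detection; real profiles would first need the adaptive background removal the authors describe, and the height threshold here is an illustrative parameter.

```python
import numpy as np

def find_bands(profile, min_height):
    """Locate bands as local maxima of a background-corrected lane
    profile: points higher than the left neighbour, at least as high
    as the right neighbour, and above min_height."""
    p = np.asarray(profile, dtype=float)
    return [i for i in range(1, len(p) - 1)
            if p[i] > p[i - 1] and p[i] >= p[i + 1] and p[i] >= min_height]
```

Each detected position in one of the four lanes (A, C, G, T) would then be merged across lanes by vertical position to read off the nucleotide sequence.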