Accurate lumbar spine measurement in CT images is essential for quantitative analysis of spinal diseases such as spondylolisthesis and scoliosis. In today's clinical workflow, these measurements are performed manually by radiologists and surgeons, which is time-consuming and irreproducible. An automatic and accurate lumbar spine measurement algorithm is therefore highly desirable. In this study, we propose a method to automatically calculate five different lumbar spine measurements in CT images. The proposed method has three main stages. First, a learning-based spine labeling method, which integrates both image appearance and spine geometry information, detects the lumbar and sacral vertebrae in CT images. Then, a multi-atlas image segmentation method segments each lumbar vertebra and the sacrum based on the detection result. Finally, the measurements are derived from the segmentation of each vertebra. Our method has been evaluated on 138 spinal CT scans, automatically calculating five widely used clinical spine measurements. Experimental results show that our method achieves success rates above 90% across all measurements and significantly improves measurement efficiency compared to manual measurement. Besides benefiting routine clinical diagnosis of spinal diseases, our method also enables large-scale data analytics for scientific and clinical research.
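The multi-atlas stage assigns each voxel the label that most registered atlases agree on. A minimal sketch of majority-vote label fusion, a standard fusion rule that the abstract does not spell out (the registration producing `warped_atlas_labels` is assumed done elsewhere; all names here are hypothetical):

```python
import numpy as np

def majority_vote_fusion(warped_atlas_labels):
    """Fuse per-voxel labels from atlases already warped to the target image.

    warped_atlas_labels: list of integer label volumes with identical shape.
    Returns the per-voxel majority label.
    """
    stack = np.stack(warped_atlas_labels, axis=0)  # (n_atlases, ...)
    n_labels = int(stack.max()) + 1
    # Count votes for each label at every voxel, then take the argmax.
    votes = np.zeros((n_labels,) + stack.shape[1:], dtype=np.int32)
    for lbl in range(n_labels):
        votes[lbl] = (stack == lbl).sum(axis=0)
    return votes.argmax(axis=0)
```

Weighted variants (e.g., weighting each atlas by its local similarity to the target) drop in by replacing the vote counts with summed weights.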
Automatic and precise segmentation of hand bones is important for many medical imaging applications. Although several previous studies address bone segmentation, automatically segmenting articulated hand bones remains a challenging task. The highly articulated nature of hand bones limits the effectiveness of atlas-based segmentation methods, and low-level information derived from the image of interest alone is insufficient for detecting bones and distinguishing the boundaries of different bones that lie in close proximity to each other. In this study, we propose a method that combines an articulated statistical shape model with a local exemplar-based appearance model to automatically segment hand bones in CT. Our approach performs a hierarchical articulated shape deformation driven by a set of local exemplar-based appearance models. Specifically, for each point in the shape model, the local appearance model is described by a set of profiles of low-level image features along the normal of the shape. During segmentation, each point in the shape model is deformed to a new point whose image features are closest to the appearance model. The shape model is also constrained by an articulation model described by a set of predetermined landmarks on the finger joints. In this way, the deformation is robust to sporadic false bony edges and is able to fit fingers with large articulations. We validated our method on 23 CT scans and achieved a segmentation success rate of approximately 89.7%. This result indicates that our method is viable for automatic segmentation of articulated hand bones in conventional CT.
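The appearance-driven deformation moves each shape point to the position along its normal whose feature profile best matches the learned one. A minimal sketch of that per-point profile search, assuming 1-D feature profiles have already been sampled along the normal (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def best_profile_shift(image_profile, model_profile, max_shift):
    """Slide the learned appearance profile along the sampled image profile
    and return the shift (in samples) with the smallest L2 distance.

    image_profile: 1-D features sampled along the point's normal.
    model_profile: shorter 1-D reference profile from the appearance model.
    max_shift:     search range in samples on either side of the center.
    """
    m = len(model_profile)
    best, best_d = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        start = (len(image_profile) - m) // 2 + s
        if start < 0 or start + m > len(image_profile):
            continue  # candidate window falls outside the sampled profile
        d = np.linalg.norm(image_profile[start:start + m] - model_profile)
        if d < best_d:
            best, best_d = s, d
    return best
```

In the full method this displacement would then be regularized by the statistical shape model and the articulation constraints rather than applied directly.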
In X-ray examinations, it is essential that radiographers carefully collimate to the appropriate anatomy of interest to minimize the overall integral dose to the patient. The shadow regions are not diagnostically meaningful and can impair overall image quality. It is therefore desirable to detect the collimation and exclude the shadow regions to optimize image display. However, due to the large variability of collimated images, collimation detection remains a challenging task. In this paper, we observe that a region of interest (ROI) in an image, such as the collimation, can be described by two distinct views: a cluster of pixels within the ROI and the corners of the ROI. Based on this observation, we propose a robust multi-view learning based strategy for collimation detection in digital radiography. Specifically, one view comes from a random-forests-based <i>region detector</i>, which provides pixel-wise image classification in which each pixel is labeled as either in-collimation or out-of-collimation. The other view comes from a discriminative, learning-based <i>landmark detector</i>, which detects the corners and localizes the collimation within the image. Given the large variability of collimated images, however, the detection from either view alone may not be perfect. Therefore, we adopt an adaptive view-fusion step to obtain the final detection by combining the region and corner detections. We evaluate our algorithm on a database of 665 X-ray images covering a wide variety of types and dosages and obtain a high detection accuracy (95%), compared with the region detector alone (87%) and the landmark detector alone (83%).
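One simple way to fuse the two views is a confidence-weighted vote between their pixel masks. This is an illustrative simplification, not the paper's actual fusion rule; the confidence weights `w_region` and `w_corner` are hypothetical per-view scores (e.g., classifier margins):

```python
import numpy as np

def fuse_views(region_mask, corner_mask, w_region, w_corner):
    """Confidence-weighted fusion of two binary collimation hypotheses.

    region_mask: pixel-wise in-collimation labels from the region detector.
    corner_mask: mask rasterized from the landmark detector's corners.
    w_region, w_corner: per-view confidences > 0; a higher weight means
    the corresponding view is trusted more in the vote.
    """
    w = w_region + w_corner
    fused = (w_region * region_mask + w_corner * corner_mask) / w
    # A pixel is in-collimation if the weighted vote reaches 0.5.
    return (fused >= 0.5).astype(np.uint8)
```

With equal weights this reduces to a per-pixel OR on disagreements; as one view's confidence grows, the fused mask converges to that view's mask.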
One of the primary challenges in medical image data analysis is handling abnormal, irregular, and/or
partial cases. In this paper, we present two robust algorithms for automatic planar
primitive detection in 3D volumes. The overall algorithm is a bottom-up approach that starts with the
detection of anatomic point primitives (landmarks). Robustness in computing the planar primitives is achieved through
both a novel consensus-based voting approach and a random-sampling-based weighted least squares regression
method. Both approaches remove inconsistent landmarks and outliers produced by the landmark detection
step. Unlike earlier approaches that target a particular plane, the presented approach is generic and can be
easily adapted to compute more complex primitives such as ROIs or surfaces. To demonstrate the robustness
and accuracy of our approach, we present extensive results for automatic plane detection (Mid-Sagittal and
Optical Triangle planes) in brain MR images. In comparison to ground truth, our approach has marginal errors
on about 90 patients. The algorithm also performs well under adverse conditions such as arbitrary rotation and
cropping of the 3D volume. To demonstrate the generality of the approach, we also present preliminary results
on intervertebral-plane detection for a 3D spine MR application.
We present an automatic method to quickly and accurately detect multiple anatomical regions of interest (ROIs) in CT
topogram images. Our method first detects a redundant and potentially erroneous set of local features, whose spatial
configurations are captured by a set of local voting functions. Unlike existing methods, which try to
"hit" the correct or best constellation of local features, we take the opposite approach: we peel away bad
features until a safe (i.e., conservatively small) number of features remain. The method is deterministic in nature and guarantees
success even in extremely noisy cases. Its advantages are robustness and computational efficiency.
Our method also addresses the potential scenario in which outliers (i.e., false landmark detections) form plausible
configurations; as long as such outliers are a minority, the method can successfully remove them. The final ROI
of the anatomy is computed from the best subset of the remaining local features. Experimental validation was carried out
for multiple-organ detection on a large collection of CT topogram images, with fast and highly robust performance.
On the testing data sets, the detection rate ranges from 98.2% to 100% and the false detection
rate from 0.0% to 0.5% across the different ROIs. The method is fast and accurate enough to be seamlessly integrated into a
real-time workflow on the CT machine, improving efficiency, consistency, and repeatability.
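The peeling strategy can be sketched as iteratively discarding the landmark with the least pairwise support, where support comes from agreement with model pairwise distances. This is a simplified stand-in for the paper's local voting functions; the distance-consistency score and all names are hypothetical:

```python
import numpy as np

def peel_outliers(points, expected_dists, tol=5.0, keep_min=3):
    """Iteratively remove the landmark least consistent with the others.

    points:         (n, 2) detected landmark positions.
    expected_dists: (n, n) model pairwise distances (e.g., from training).
    A landmark's consensus score is the number of other active landmarks
    whose observed distance to it matches the model within `tol`. The
    worst-supported landmark is peeled off until every survivor has
    majority support, or only `keep_min` landmarks remain.
    Returns the indices of the surviving landmarks.
    """
    active = list(range(len(points)))
    while len(active) > keep_min:
        scores = []
        for i in active:
            votes = sum(
                1 for j in active if j != i and
                abs(np.linalg.norm(points[i] - points[j])
                    - expected_dists[i, j]) < tol
            )
            scores.append(votes)
        worst = int(np.argmin(scores))
        # Stop once even the least-supported landmark has majority support.
        if scores[worst] >= (len(active) - 1) / 2:
            break
        active.pop(worst)
    return active
```

Because the score is recomputed after every removal, a minority of mutually consistent outliers loses support as soon as its partners are peeled away, matching the abstract's claim about plausible outlier configurations.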