Volumetric display of medical images is an increasingly relevant method for examining an imaging acquisition as the prevalence of thin-slice imaging increases in clinical studies. Current mouse-and-keyboard implementations of volumetric control provide neither the sensitivity nor the specificity required to manipulate a volumetric display efficiently in a clinical reading setting. Existing solutions gain sensitivity by removing the binary nature of keyboard-driven actions, but they lose specificity because a single action may change the display along several directions at once. When specificity is then restored by re-implementing binary hardware functions through mode control, the result is a cumbersome interface that falls short of the revolutionary benefit required for adoption of a new technology. We address this sensitivity-versus-specificity problem by giving the volumetric control device adaptive positional awareness, mediated through the communication between the hardware driver and existing software methods for volumetric display of medical images. This creates a tethered effect for the volumetric display, providing a smooth interface that improves on existing hardware approaches to volumetric scene manipulation.
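One way to picture the tethering idea is as a filter between the raw device input and the display: small off-axis motion is suppressed until the input clearly changes direction, which restores specificity without discrete mode switches. The sketch below is a minimal, hypothetical illustration of such a filter (the function name, the 6-DOF delta representation, and the dominance ratio are assumptions, not the paper's actual method):

```python
# Hypothetical sketch: "tethering" a 6-DOF input delta by suppressing all but
# the dominant axis of motion, unless no axis clearly dominates. This trades
# raw sensitivity for specificity without introducing explicit mode control.

def tether(delta, ratio=2.0):
    """Zero every component of a 6-DOF delta except the dominant one,
    unless the dominant component does not exceed the others by `ratio`."""
    mags = [abs(d) for d in delta]
    k = mags.index(max(mags))
    others = max(m for i, m in enumerate(mags) if i != k)
    if mags[k] >= ratio * others:  # one axis clearly dominates
        return [d if i == k else 0.0 for i, d in enumerate(delta)]
    return list(delta)             # ambiguous motion: pass through unchanged

# A strong x-translation with small jitter on the other axes collapses to pure x.
print(tether([0.9, 0.1, 0.05, 0.0, 0.02, 0.01]))
```

Because the pass-through branch keeps ambiguous motion intact, the user never hits a hard mode boundary; the filter only intervenes when intent is unambiguous.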
Within the complex branching system of the breast, terminal duct lobular units (TDLUs) are the anatomical
location where most cancer originates. With aging, TDLUs undergo physiological involution, reflected in a
loss of structural components (acini) and a reduction in total number. Data suggest that women undergoing
benign breast biopsies that do not show age-appropriate involution are at increased risk of developing breast
cancer. To date, TDLU assessments have generally been made by qualitative visual assessment, rather than by
objective quantitative analysis. This paper introduces a technique to automatically estimate a set of quantitative
measurements and use those variables to more objectively describe and classify TDLUs. To validate the accuracy
of our system, we computed the morphological properties of 51 TDLUs in breast tissues donated
for research by volunteers to the Susan G. Komen Tissue Bank and compared the results to those of a pathologist,
demonstrating 70% agreement. Secondly, to show that our method is applicable to a wider range
of datasets, we analyzed 52 TDLUs from biopsies performed for clinical indications in the National Cancer
Institute Breast Radiology and Study of Tissues (BREAST) STAMP project and obtained 82% correlation with
visual assessment. Lastly, we demonstrate the ability to uncover novel measures when researching the structural
properties of the acini by applying machine learning and clustering techniques. Through our study we found that
while the number of acini per TDLU increases exponentially with TDLU diameter, the average elongation
and roundness remain constant.
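Elongation-style measures of the kind described above can be derived from a region's second-order moments. The sketch below is an illustrative computation on a binary pixel mask, assuming elongation is defined as the ratio of the principal axes of the second-moment ellipse (the paper's exact definitions may differ):

```python
import math

# Illustrative sketch (not the paper's exact definition): elongation as the
# square root of the ratio of the eigenvalues of a region's 2x2 covariance
# matrix, computed from a list of (x, y) pixel coordinates.

def elongation(pixels):
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    sxx = sum((x - cx) ** 2 for x, _ in pixels) / n
    syy = sum((y - cy) ** 2 for _, y in pixels) / n
    sxy = sum((x - cx) * (y - cy) for x, y in pixels) / n
    # Eigenvalues of the covariance matrix give the squared axis lengths.
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + disc, tr / 2 - disc
    return math.sqrt(l1 / l2) if l2 > 0 else float("inf")

# A 1x5 horizontal strip is strongly elongated; a 3x3 square is not.
strip = [(x, 0) for x in range(5)]
square = [(x, y) for x in range(3) for y in range(3)]
print(elongation(strip) > elongation(square))  # True
```

A degenerate (one-pixel-thick) region has a zero minor eigenvalue, which the sketch reports as infinite elongation; a symmetric region comes out at exactly 1.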
Mild traumatic brain injury (TBI) is often an invisible injury that is poorly understood and its sequelae can be difficult to diagnose. Recent neuroimaging studies on patients diagnosed with mild TBI (mTBI) have demonstrated an increase in hyperintense brain lesions on T2-weighted MR images. This paper presents an in-depth analysis of the multi-modal and morphological properties of T2 hyperintensity lesions among service members diagnosed with mTBI. A total of 790
punctate T2 hyperintensity lesions from 89 mTBI subjects were analyzed and used to characterize the lesions based on different quantitative measurements. Morphological analysis shows that, on average, T2 hyperintensity lesions have a volume of 23 mm³ (±24.75), a roundness measure of 0.83 (±0.08), and an elongation of 7.90 (±2.49). Lesions in the frontal lobe were significantly more elongated than those in other areas of the brain.
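The per-measurement summaries above (mean with standard deviation over a set of lesions) follow a simple aggregation pattern, sketched below. The lesion values are made-up illustrative data, not the study's measurements:

```python
import statistics

# Sketch of the per-measurement summaries reported above: mean and standard
# deviation of each morphological measurement over a lesion set.
# The numbers below are fabricated for illustration only.
lesions = [
    {"volume_mm3": 12.0, "roundness": 0.85, "elongation": 6.1},
    {"volume_mm3": 40.0, "roundness": 0.78, "elongation": 9.4},
    {"volume_mm3": 17.0, "roundness": 0.86, "elongation": 8.2},
]

def summarize(lesions, key):
    vals = [l[key] for l in lesions]
    return statistics.mean(vals), statistics.stdev(vals)

for key in ("volume_mm3", "roundness", "elongation"):
    m, s = summarize(lesions, key)
    print(f"{key}: {m:.2f} (±{s:.2f})")
```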
Currently, most computer-aided diagnosis (CAD) systems rely on image analysis and statistical models to diagnose,
quantify, and monitor the progression of a particular disease. In general, CAD systems have proven to be effective at
providing quantitative measurements and assisting physicians during the decision-making process. As the need for
more flexible and effective CAD systems continues to grow, so does interest in how to enhance their accuracy.
In this paper, we show how statistical image models can be augmented with multi-modal physiological values to create
more robust, stable, and accurate CAD systems. In particular, this paper demonstrates how highly correlated blood and
EKG features can be treated as biomarkers and used to enhance image classification models designed to automatically
score subjects with pulmonary fibrosis. In our results, a 3-5% improvement in accuracy was observed for CAD
systems that use multi-modal biomarkers over those that use image features alone. Our results show that lab
values such as Erythrocyte Sedimentation Rate and Fibrinogen, as well as EKG measurements such as QRS and I:40,
are statistically significant and can provide valuable insights about the severity of the pulmonary fibrosis disease.
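The core augmentation step described above amounts to extending each subject's image-derived feature vector with the physiological biomarkers before classification. The sketch below illustrates this with assumed field names (ESR, Fibrinogen, QRS duration match the measurements mentioned in the abstract; the dictionary structure and values are illustrative):

```python
# Sketch of the feature-augmentation idea: extend an image-derived feature
# vector with lab values and EKG measurements so a single classifier sees
# both modalities. Field names and values are illustrative assumptions.

def build_feature_vector(image_features, labs, ekg):
    """Concatenate image features with multi-modal biomarkers."""
    biomarkers = [labs["esr"], labs["fibrinogen"], ekg["qrs_ms"]]
    return list(image_features) + biomarkers

vec = build_feature_vector(
    image_features=[0.12, 0.83, 0.40],        # e.g. texture statistics
    labs={"esr": 35.0, "fibrinogen": 410.0},  # blood lab values
    ekg={"qrs_ms": 96.0},                     # EKG-derived measurement
)
print(len(vec))  # 6
```

Keeping the biomarkers as extra dimensions of the same vector lets the existing image-based classifier be reused unchanged, which is one plausible reading of how the reported 3-5% comparison was set up.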
The low-cost and minimum health risks associated with ultrasound (US) have made ultrasonic imaging a widely
accepted method to perform diagnostic and image-guided procedures. Despite the existence of 3D ultrasound probes,
most analysis and diagnostic procedures are done by studying the B-mode images. Currently, many ultrasound
probes include 6-DOF sensors that can provide positioning information. Such tracking information can be used to
reconstruct a 3D volume from a set of 2D US images. Recent advances in ultrasound imaging have also shown that,
directly from the streaming radio frequency (RF) data, it is possible to obtain additional information about the
anatomical region under consideration, including its elasticity properties.
This paper presents a generic framework that takes advantage of current graphics hardware to create a low-latency
system to visualize streaming US data while combining multiple tissue attributes into a single illustration. In particular,
we introduce a framework that enables real-time reconstruction and interactive visualization of streaming data while
enhancing the illustration with elasticity information. The visualization module uses two-dimensional transfer functions
(2D TFs) to more effectively fuse and map B-mode and strain values into specific opacity and color values. On
commodity hardware, our framework can simultaneously reconstruct, render, and provide user interaction at over 15
fps. Results with phantom and real-world medical datasets show the advantages and effectiveness of our technique with
ultrasound data. In particular, our results show how two-dimensional transfer functions can be used to more effectively
identify, analyze, and visualize lesions in ultrasound images.
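A 2D transfer function of the kind described above can be pictured as a lookup indexed by a (B-mode intensity, strain) pair that yields an opacity and color. The sketch below is an assumed, simplified policy (stiff tissue drawn more opaque, bright B-mode drawn whiter), not the paper's actual transfer function design:

```python
# Minimal sketch of a two-dimensional transfer function: quantize normalized
# (B-mode, strain) values and map them to an RGBA tuple. The mapping policy
# here is an arbitrary illustrative choice.

BINS = 4  # quantization levels per axis

def classify(bmode, strain):
    """Map normalized (bmode, strain) in [0, 1] to an (r, g, b, a) tuple."""
    i = min(int(bmode * BINS), BINS - 1)
    j = min(int(strain * BINS), BINS - 1)
    # Example policy: low strain (stiff tissue) -> high opacity and a red tint;
    # high B-mode intensity -> brighter (whiter) color.
    r = 1.0
    g = b = i / (BINS - 1)
    alpha = 1.0 - j / (BINS - 1)
    return (r, g, b, alpha)

print(classify(0.9, 0.1))  # bright, stiff tissue -> opaque, near-white
```

In a GPU implementation this table would typically live in a small 2D texture sampled per fragment, which is what makes the fusion cheap enough for the streaming rates quoted above.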
Labeled training data in the medical domain is rare and expensive to obtain. The lack of labeled multimodal medical
image data is a major obstacle for devising learning-based interactive segmentation tools. Transductive learning (TL) or
semi-supervised learning (SSL) offers a workaround by leveraging unlabeled and labeled data to infer labels for the test
set given a small portion of label information. In this paper we propose a novel algorithm for interactive segmentation
using transductive learning and inference in conditional mixture naïve Bayes models (T-CMNB) with spatial
regularization constraints. T-CMNB is an extension of the transductive naïve Bayes algorithm [1, 20]. The multimodal
Gaussian mixture assumption on the class-conditional likelihood and spatial regularization constraints allow us to
explain more complex distributions required for spatial classification in multimodal imagery. To simplify the estimation
we reduce the parameter space by assuming naïve conditional independence between the feature space and the class
label. The naïve conditional independence assumption allows efficient inference of marginal and conditional
distributions for large-scale learning and inference. We evaluate the proposed algorithm on multimodal MRI brain
imagery using ROC statistics and provide preliminary results. The algorithm shows promising segmentation
performance, with a sensitivity and specificity of 90.37% and 99.74%, respectively, and compares competitively to
alternative interactive segmentation schemes.
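The naive-Bayes ingredient of the model above, per-class Gaussian likelihoods multiplied across conditionally independent feature dimensions and normalized into a posterior, can be sketched as follows. This is a deliberately simplified illustration: the transductive EM estimation, the mixture components, and the spatial regularization of T-CMNB are all omitted, and the class parameters are invented:

```python
import math

# Simplified sketch of the naive Bayes posterior: the class-conditional
# likelihood factorizes over features (naive conditional independence),
# each factor being a univariate Gaussian. Parameters below are made up.

def gauss(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def posterior(x, classes):
    """classes: {label: (prior, [(mu, var) per feature dimension])}"""
    scores = {}
    for label, (prior, params) in classes.items():
        p = prior
        for xi, (mu, var) in zip(x, params):
            p *= gauss(xi, mu, var)  # independence across feature dimensions
        scores[label] = p
    z = sum(scores.values())
    return {label: s / z for label, s in scores.items()}

# Two classes in a 2-feature space (e.g. two MRI modalities per voxel).
classes = {
    "lesion": (0.3, [(1.0, 0.2), (2.0, 0.2)]),
    "background": (0.7, [(0.0, 0.2), (0.0, 0.2)]),
}
post = posterior([1.1, 1.9], classes)
print(max(post, key=post.get))  # "lesion"
```

The factorized likelihood is what keeps inference tractable at voxel scale; the transductive extension would iterate this posterior computation while re-estimating the Gaussian parameters from both labeled and unlabeled voxels.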
Computer-aided detection of lung fibrosis remains a difficult task due to the small vascular structures, scars, and fibrotic
tissues that need to be identified and differentiated. In this paper, we present a texture-based computer-aided diagnosis
(CAD) system that automatically detects lung fibrosis. Our system uses high-resolution computed tomography (HRCT),
advanced texture analysis, and support vector machine (SVM) committees to automatically and accurately detect lung
fibrosis. Our CAD system follows a five-stage pipeline comprising segmentation, texture analysis, training,
classification, and display. Since the accuracy of the proposed texture-based CAD system depends on how precisely
we can distinguish texture dissimilarities between normal and abnormal lungs, in this paper we have given special attention
to the texture block selection process. We present the effects that texture block size, data reduction techniques, and
image smoothing filters have within the overall classification results. Furthermore, a histogram-based technique to
refine the classification results inside texture blocks is presented.
The proposed texture-based CAD system to detect lung fibrosis has been trained with several normal and abnormal
HRCT studies and has been tested with the original training dataset as well as new HRCT studies. On average, when
using the suggested/default texture size and an optimized SVM committee system, a 90% accuracy has been observed
with the proposed texture-based CAD system to detect lung fibrosis.
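The texture-block step emphasized above amounts to tiling each slice into fixed-size blocks and computing features per block. The sketch below uses only simple first-order features (mean and variance) and an arbitrary block size for illustration; the paper's tuned block sizes and richer texture descriptors are not reproduced here:

```python
# Sketch of texture-block selection: tile a 2D slice into size x size blocks
# and compute simple first-order texture features per block. Block size and
# feature set here are illustrative, not the tuned values from the paper.

def texture_blocks(image, size):
    """Yield (row, col, mean, variance) for each non-overlapping block."""
    h, w = len(image), len(image[0])
    for r in range(0, h - size + 1, size):
        for c in range(0, w - size + 1, size):
            vals = [image[r + i][c + j] for i in range(size) for j in range(size)]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            yield r, c, mean, var

# A 4x4 slice split into 2x2 blocks yields four feature tuples.
slice_2d = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [5, 5, 5, 5],
    [5, 5, 5, 5],
]
feats = list(texture_blocks(slice_2d, 2))
print(len(feats))  # 4
```

Each block's feature tuple would then be fed to the SVM committee; the histogram-based refinement mentioned above operates afterwards, within blocks whose classification is uncertain.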
A flexible, scalable, high-resolution display system is presented to support the next generation of radiology reading rooms and interventional radiology suites. The project aims to create an environment for radiologists that simultaneously facilitates image interpretation, analysis, and understanding while lowering visual and cognitive stress. Displays currently in use present radiologists with several technical challenges when exploring complex datasets, which we seek to address: limited resolution and brightness, mismatches between display and ambient lighting, and varying degrees of complexity, in addition to side-by-side comparison of time-variant and 2D/3D images.
We address these issues through a scalable projector-based system that uses our custom-designed geometrical and photometrical calibration process to create a seamless, bright, high-resolution display environment that can reduce the visual fatigue commonly experienced by radiologists. The system we have designed uses an array of casually aligned projectors to cooperatively increase overall resolution and brightness. Images from a set of projectors in their narrowest zoom are combined at a shared projection surface, thus increasing the global "pixels per inch" (PPI) of the display environment.
Two primary challenges - geometric calibration and photometric calibration - remained to be resolved before our high-resolution display system could be used in a radiology reading room or procedure suite. In this paper we present a method that accomplishes those calibrations and creates a flexible high-resolution display environment that appears seamless, sharp, and uniform across different devices.
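A standard building block of geometric calibration for casually aligned projectors is mapping each projector's pixel coordinates onto the shared surface with a planar homography. The sketch below shows the point-mapping step only, with an identity matrix as a placeholder (a real system would estimate one 3x3 matrix per projector from observed correspondences):

```python
# Sketch of the geometric-calibration building block: mapping a projector
# pixel (x, y) onto the shared projection surface with a 3x3 homography H.
# The identity matrix below is a placeholder, not a calibrated matrix.

def apply_homography(H, x, y):
    """Apply homography H to point (x, y); returns surface coordinates."""
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w  # perspective divide

# Identity homography leaves points unchanged.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(apply_homography(I, 10, 20))  # (10.0, 20.0)
```

Once every projector's pixels are expressed in the shared surface frame, overlapping regions can be identified and the photometric calibration can blend intensities across them to hide the seams.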