We propose a method for retrieving similar fMRI statistical images given a query fMRI statistical image. Our
method thresholds the voxels within those images and extracts spatially distinct regions from the voxels that
remain. Each region is defined by a feature vector that contains the region centroid, the region area, the average
activation value over all the voxels within that region, the variance of those activation values, the average distance
from the region's voxels to its centroid, and the variance of those distances. The similarity between two images is
obtained as the summed minimum distance between their constituent
feature vectors. Results on a dataset of fMRI statistical images from experiments involving distinct cognitive
tasks are shown.
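The region features and the summed-minimum-distance similarity described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes each region arrives as an array of voxel coordinates plus an array of activation values, and the function names are ours.

```python
import numpy as np

def region_features(coords, values):
    """Feature vector for one region: centroid, area (voxel count),
    mean/variance of activation, and mean/variance of each voxel's
    distance to the centroid. `coords` is (n, d), `values` is (n,)."""
    centroid = coords.mean(axis=0)
    dists = np.linalg.norm(coords - centroid, axis=1)
    return np.concatenate([
        centroid,
        [len(coords), values.mean(), values.var(), dists.mean(), dists.var()],
    ])

def summed_min_distance(A, B):
    """Summed minimum distance between two sets of region feature vectors
    (rows of A and B), symmetrized so the measure is order-independent:
    each vector is matched to its nearest neighbor in the other set."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # pairwise distances
    return d.min(axis=1).sum() + d.min(axis=0).sum()
```

Identical images score zero, and unmatched regions in either image add to the distance, which is what makes the measure usable for ranking retrievals.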
Recent studies have shown an increase in the occurrence of deformational plagiocephaly and brachycephaly in
children. This increase has coincided with the "Back to Sleep" campaign that was introduced to reduce the risk
of Sudden Infant Death Syndrome (SIDS). However, there has yet to be an objective quantification of the degree
of severity for these two conditions. Most diagnoses are based on subjective factors such as patient history and physician examination. An objective quantification would aid research on diagnosis and intervention measures, and would provide a tool for finding correlations between shape severity and cognitive outcome. This paper describes a new shape severity quantification and localization method for deformational plagiocephaly and brachycephaly. Our results show a positive correlation between the new shape severity measure and the scores assigned by a human expert.
Content-based image retrieval has been applied to many different biomedical applications [1]. In almost all cases, retrievals
involve a single query image of a particular modality and retrieved images are from this same modality. For example,
one system may retrieve color images from eye exams, while another retrieves fMRI images of the brain. Yet real
patients often undergo tests in multiple modalities, and retrievals that draw on more than one modality can surface
information that single-modality searches miss. In this paper, we demonstrate medical image retrieval for two
different single modalities and propose a model for multimodal fusion that will lead to improved capabilities for
physicians and biomedical researchers. We describe a graphical user interface for multimodal retrieval that is being
tested by biomedical researchers in several different fields.
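The abstract does not detail the fusion model itself, so purely as an illustrative assumption, one common approach is late (score-level) fusion: each modality produces its own similarity score for a candidate, and the scores are combined with per-modality weights, skipping modalities the patient lacks. The function name and dictionary layout below are ours.

```python
def fuse_scores(scores, weights=None):
    """Late fusion of per-modality similarity scores.
    `scores` maps modality name -> similarity (None if the candidate has no
    data for that modality). Returns a weighted average over the modalities
    that are actually present; uniform weights are used if none are given."""
    present = {m: s for m, s in scores.items() if s is not None}
    if not present:
        return 0.0  # no overlapping modalities: nothing to rank on
    if weights is None:
        weights = {m: 1.0 for m in present}
    total = sum(weights[m] for m in present)
    return sum(weights[m] * s for m, s in present.items()) / total
```

Skipping absent modalities rather than scoring them as zero keeps candidates with partial data comparable to candidates with complete data.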
Recent studies have shown that more than 5 million bronchoscopy procedures are performed each year worldwide. The
procedure usually involves biopsy of possible cancerous tissues from the lung. Standard bronchoscopes are too large to
reach into the peripheral lung, where cancerous nodules are often found. The University of Washington has developed an
ultrathin and flexible scanning fiber endoscope that is able to advance into the periphery of the human lungs without
sacrificing image quality. To accompany the novel endoscope, we have developed a user interface that serves as a
navigation guide for doctors when performing a bronchoscopy. The navigation system consists of a virtual surface mesh
of the airways extracted from a computed tomography (CT) scan and an electromagnetic tracking system (EMTS). The
complete system can be viewed as a global positioning system for the lung that provides pre-procedural planning
functionalities, virtual bronchoscopy navigation, and real-time tracking of the endoscope inside the lung. The
real-time virtual navigation is complemented by a particle filter algorithm that compensates for registration errors
and outliers and prevents the virtual view from passing through the surfaces of the virtual lung model. The particle
filter method tracks the endoscope tip from real-time tracking data and attaches the virtual endoscopic view to the
skeleton that runs inside the virtual airway surface.
Experimental results on a dried sheep lung show that the particle filter method converges and accurately tracks the
endoscope tip in real time at both slow and fast insertion speeds.
Unmanned aerial vehicles with high quality video cameras are able to provide videos from 50,000 feet up that show a surprising amount of detail on the ground. These videos are difficult to analyze, because the airplane moves, the camera zooms in and out and vibrates, and the moving objects of interest can be in the scene, out of the scene, or partly occluded. Recognizing both the moving and static objects is important in order to find events of interest to human analysts. In this paper, we describe our approach to object and event recognition using multiple stages of classification.
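The abstract does not specify what the classification stages are, but the multi-stage structure itself can be illustrated with a generic cascade (our own sketch, with hypothetical feature names): cheap stages run first and reject most candidates, so expensive stages only see the survivors.

```python
def classify_cascade(x, stages):
    """`stages` is a list of (score_fn, threshold) pairs ordered from cheap
    to expensive. A candidate is rejected at the first stage whose score
    falls below its threshold, and accepted only if every stage passes."""
    for score_fn, threshold in stages:
        if score_fn(x) < threshold:
            return False  # early rejection: later stages never run
    return True
```

For aerial video, an early stage might threshold a motion score while a later stage evaluates appearance; those particular stages are an assumption for illustration, not the paper's design.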