Differentiating lymphomas from glioblastoma multiforme (GBM) is important for proper treatment planning. A number of methods have been proposed, but several problems remain. For example, many methods depend on thresholding a single feature value, which is susceptible to noise, and non-typical cases that such simple thresholding fails to handle are easily found. In other cases, experienced observers are required to extract the feature values or to interact with the system, which is costly; even when experts are involved, inter-observer variance becomes another problem. In addition, most methods use only one or a few slices because 3D tumor segmentation is difficult and time-consuming. In this paper, we propose a tumor classification system that analyzes the luminance distribution of the whole tumor region. The 3D MRI volumes are segmented within a few tens of seconds by our fast 3D segmentation algorithm, and the luminance histogram of the whole tumor region is then generated. Typical cases are classified by thresholding the histogram range and the apparent diffusion coefficient (ADC); non-typical cases are learned and classified by a support vector machine (SVM). All processing elements except the ADC value extraction are semi-automatic, so even novice users can operate the system easily and obtain nearly the same results as experts. Experiments were conducted on 40 MRI datasets (20 lymphomas and 20 GBMs) that include non-typical cases. The classification accuracy of the proposed method was 91.1% without ADC thresholding and 95.4% with it, whereas the baseline method, conventional ADC thresholding, yielded only 67.5% accuracy.
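The two-stage decision described above (thresholding for typical cases, an SVM for the rest) can be sketched as follows. The threshold values, the margin defining "typical", and the feature vector passed to the SVM are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

# Hypothetical thresholds for illustration only; the paper learns its
# decision boundaries from the data.
RANGE_THRESH = 120.0   # luminance histogram range threshold (assumed)
ADC_THRESH = 0.9       # ADC threshold in 1e-3 mm^2/s (assumed)

def luminance_range(tumor_voxels, lo_pct=1, hi_pct=99):
    """Width of the luminance distribution of the whole tumor region,
    using percentiles for robustness against outlier voxels."""
    lo, hi = np.percentile(tumor_voxels, [lo_pct, hi_pct])
    return hi - lo

def classify(tumor_voxels, adc_value=None, svm=None):
    """Two-stage decision: typical cases by thresholding,
    non-typical cases deferred to ADC thresholding or a trained SVM."""
    r = luminance_range(tumor_voxels)
    if r < RANGE_THRESH * 0.8:       # clearly narrow histogram -> typical lymphoma
        return "lymphoma"
    if r > RANGE_THRESH * 1.2:       # clearly wide histogram -> typical GBM
        return "GBM"
    if adc_value is not None:        # optional ADC thresholding stage
        return "lymphoma" if adc_value < ADC_THRESH else "GBM"
    if svm is not None:              # non-typical case -> SVM on histogram features
        return svm.predict([[r]])[0]
    return "undetermined"
```

A trained classifier (e.g. scikit-learn's `SVC`) can be passed as `svm` to handle the ambiguous middle band.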
This paper presents a very fast segmentation algorithm for 3D medical images, based on the region-growing method GrowCut. Through the combination of four contributions, namely hierarchical segmentation, voxel value quantization, a skipping method, and parallelization, the computation time for tumor segmentation of 256 x 256 x 200 MRI volumes is drastically reduced from 507 seconds to 9.2-14.6 seconds on average.
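The underlying GrowCut automaton can be sketched in 2D as below (the paper's version runs on 3D volumes and adds the four accelerations listed above). The 4-neighborhood, the linear attenuation function g, and the iteration cap are assumptions for this minimal sketch:

```python
import numpy as np

def growcut(image, seeds, strengths, iters=50):
    """Minimal 2D GrowCut sketch. `seeds` is a label map (0 = unlabeled),
    `strengths` holds the initial cell strengths of the seeds."""
    labels = seeds.copy()
    strength = strengths.astype(float)
    max_diff = float(image.max() - image.min()) or 1.0
    h, w = image.shape
    for _ in range(iters):
        changed = False
        for y in range(h):
            for x in range(w):
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w) or labels[ny, nx] == 0:
                        continue
                    # Attack: the neighbor's strength, attenuated by the
                    # intensity difference, must exceed the cell's strength.
                    g = 1.0 - abs(float(image[y, x]) - float(image[ny, nx])) / max_diff
                    if g * strength[ny, nx] > strength[y, x]:
                        labels[y, x] = labels[ny, nx]
                        strength[y, x] = g * strength[ny, nx]
                        changed = True
        if not changed:
            break
    return labels
```

Cells with similar intensities pass strength along almost undiminished, so labels flood homogeneous regions and stop at intensity boundaries; the paper's hierarchical, quantized, skipping, and parallel variants accelerate exactly this iteration.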
Today, multimedia information plays an important role in daily life, and people can use imaging devices to capture their visual experiences. In this paper, we present our personal Life Log system, which records personal experiences in the form of wearable video and environmental data; in addition, an efficient retrieval system is demonstrated for recalling the desired media. We summarize practical video indexing techniques based on Life Log content and context, detecting talking scenes using audio/visual cues and extracting semantic key frames from GPS data. Voice annotation is also demonstrated as a practical indexing method. Moreover, we apply body media sensors to continuously record lifestyle data and use these data to index the semantic key frames. In the experiments, we demonstrate various video indexing results together with their semantic contents and show Life Log visualizations for examining personal life effectively.
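One of the indexing steps mentioned above, extracting semantic key frames from GPS data, can be sketched as selecting a frame whenever the wearer has moved beyond a distance threshold since the last key frame. The threshold value and this particular selection rule are illustrative assumptions, not the paper's exact method:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    R = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def gps_keyframes(track, min_move_m=50.0):
    """Pick a key frame index each time the wearer has moved more than
    `min_move_m` meters since the last key frame (threshold assumed)."""
    keys, last = [], None
    for i, (lat, lon) in enumerate(track):
        if last is None or haversine_m(*last, lat, lon) >= min_move_m:
            keys.append(i)
            last = (lat, lon)
    return keys
```

Content-based cues (talking-scene detection, voice annotation, body media data) would then be attached to the selected key frames as retrieval indices.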