A model of multimodal fusion for medical applications (19 January 2009)
Abstract
Content-based image retrieval has been applied to many different biomedical applications [1]. In almost all cases, a retrieval involves a single query image of a particular modality, and the retrieved images come from that same modality. For example, one system may retrieve color images from eye exams, while another retrieves fMRI images of the brain. Yet real patients have often had tests from multiple modalities, and retrievals based on more than one modality could provide information that single-modality searches fail to reveal. In this paper, we demonstrate medical image retrieval for two different single modalities and propose a model for multimodal fusion that will lead to improved capabilities for physicians and biomedical researchers. We describe a graphical user interface for multimodal retrieval that is being tested by real biomedical researchers in several different fields.
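The fusion idea described above can be illustrated with a minimal late (score-level) fusion sketch: each single-modality retrieval system scores patient records independently, and the scores are combined into one ranking. The function name, weighting scheme, and example scores below are illustrative assumptions, not details from the paper.

```python
def fuse_scores(scores_by_modality, weights=None):
    """Combine per-modality retrieval scores into one ranked list.

    scores_by_modality: dict mapping a modality name to a
        {record_id: similarity_score} dict.
    weights: optional dict of per-modality weights; defaults to
        equal weighting across modalities.
    """
    modalities = list(scores_by_modality)
    if weights is None:
        weights = {m: 1.0 / len(modalities) for m in modalities}
    fused = {}
    for m, scores in scores_by_modality.items():
        for record_id, s in scores.items():
            # Accumulate the weighted score for each patient record.
            fused[record_id] = fused.get(record_id, 0.0) + weights[m] * s
    # Highest fused score first.
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical example: similarity scores from an eye-exam photo
# system and a brain fMRI system for three patient records.
eye = {"p1": 0.9, "p2": 0.4, "p3": 0.7}
fmri = {"p1": 0.2, "p2": 0.8, "p3": 0.6}
ranking = fuse_scores({"eye": eye, "fmri": fmri})
```

A record that scores moderately well in both modalities (here `p3`) can outrank records that excel in only one, which is the kind of cross-modality signal a single-modality search would miss.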
© (2009) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
S. Yang, I. Atmosukarto, J. Franklin, J. F. Brinkley, D. Suciu, L. G. Shapiro, "A model of multimodal fusion for medical applications", Proc. SPIE 7255, Multimedia Content Access: Algorithms and Systems III, 72550H (19 January 2009); doi: 10.1117/12.805490; https://doi.org/10.1117/12.805490
Proceedings paper, 12 pages.

