Medical case-based retrieval: integrating query MeSH terms for query-adaptive multi-modal fusion (17 March 2015)
Advances in medical knowledge give clinicians more objective information on which to base a diagnosis. There is therefore an increasing need for bibliographic search engines that help clinicians find relevant information faster.

The ImageCLEFmed benchmark proposes a medical case-based retrieval task. This task aims at retrieving articles from the biomedical literature that are relevant for the differential diagnosis of query cases, each consisting of a textual description and several images. In the context of this campaign many approaches have been investigated, showing that the fusion of visual and text information can improve the precision of the retrieval. However, fusion does not always lead to better results.

In this paper, a new query-adaptive fusion criterion for deciding when to use a multi-modal (text and visual) approach and when to use a text-only approach is presented. The proposed method integrates the text information contained in the MeSH (Medical Subject Headings) terms extracted from the query with the visual features of the images to find synonym relations between them. Given a text query, the query-adaptive fusion criterion decides whether it is suitable to also use visual information for the retrieval.
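The decision logic described above can be sketched as follows. This is an illustrative sketch only: the function names, the overlap-based decision rule, and the fusion weights are assumptions for exposition, not the paper's actual criterion or parameters.

```python
# Hedged sketch of a query-adaptive fusion rule. Assumption: a set of MeSH
# terms known (offline) to correlate with visual features stands in for the
# paper's learned synonym relations; threshold and weights are illustrative.

def should_fuse(query_mesh_terms, visual_mesh_terms, min_overlap=1):
    """Choose multi-modal retrieval only if enough of the query's MeSH
    terms belong to the visually grounded term set."""
    overlap = set(query_mesh_terms) & set(visual_mesh_terms)
    return len(overlap) >= min_overlap

def fuse_scores(text_scores, visual_scores, alpha=0.7):
    """Weighted late fusion of per-document relevance scores
    (missing scores default to 0.0)."""
    docs = set(text_scores) | set(visual_scores)
    return {d: alpha * text_scores.get(d, 0.0)
               + (1 - alpha) * visual_scores.get(d, 0.0)
            for d in docs}

def retrieve(query_mesh_terms, visual_mesh_terms, text_scores, visual_scores):
    """Return a document ranking, fusing modalities only when the
    query-adaptive criterion says visual information is likely to help."""
    if should_fuse(query_mesh_terms, visual_mesh_terms):
        scores = fuse_scores(text_scores, visual_scores)
    else:
        scores = dict(text_scores)
    return sorted(scores, key=scores.get, reverse=True)
```

For example, a query tagged with "Radiography" (a visually grounded term in this sketch) would trigger fusion, while a query tagged only with terms that have no reliable visual counterpart would fall back to text-only retrieval.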

Results show that this approach can decide whether a text-only or a multi-modal approach should be used with 77.15% accuracy.
© 2015 Society of Photo-Optical Instrumentation Engineers (SPIE).
Alba G. Seco de Herrera, Antonio Foncubierta-Rodríguez, Henning Müller, "Medical case-based retrieval: integrating query MeSH terms for query-adaptive multi-modal fusion", Proc. SPIE 9418, Medical Imaging 2015: PACS and Imaging Informatics: Next Generation and Innovations, 94180S (17 March 2015); doi: 10.1117/12.2082028; https://doi.org/10.1117/12.2082028
