Multimodal search in collections of images and text
1 October 2002
Abstract
This paper presents a data model for images immersed in the world wide web, images that derive their meaning from visual similarity, from their connection with the text of the pages that contain them, and from the link structure of the web. I model images on the web as a graph whose nodes are either text documents or images, and whose edges are links, labeled with measures of the relevance of one document toward the other. The paper briefly presents the features used to characterize the text and the visual aspect of the images, and then goes on to present a data algebra suitable for navigating and querying the database.
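To make the abstract's data model concrete, the following is a minimal sketch (not taken from the paper) of a graph whose nodes are text documents or images and whose directed edges carry a relevance weight of one document toward another. All class names, fields, and the threshold-based navigation method are illustrative assumptions, not the paper's actual data algebra.

```python
from dataclasses import dataclass, field
from typing import Dict, Literal

# Illustrative sketch of the graph model described in the abstract.
# Node and edge structures are assumptions, not the paper's schema.

@dataclass
class Node:
    node_id: str
    kind: Literal["text", "image"]  # a node is either a text document or an image
    features: Dict[str, float] = field(default_factory=dict)  # text or visual features

@dataclass
class WebGraph:
    nodes: Dict[str, Node] = field(default_factory=dict)
    # edges[src][dst] = relevance of dst toward src (directed, weighted)
    edges: Dict[str, Dict[str, float]] = field(default_factory=dict)

    def add_node(self, node: Node) -> None:
        self.nodes[node.node_id] = node
        self.edges.setdefault(node.node_id, {})

    def link(self, src: str, dst: str, relevance: float) -> None:
        """Add a link labeled with a measure of relevance."""
        self.edges[src][dst] = relevance

    def neighbors(self, node_id: str, min_relevance: float = 0.0):
        """Navigate: yield linked nodes whose relevance exceeds a threshold."""
        for dst, weight in self.edges.get(node_id, {}).items():
            if weight >= min_relevance:
                yield self.nodes[dst], weight

# Example: a page containing an image, linked with a relevance label.
g = WebGraph()
g.add_node(Node("page1", "text"))
g.add_node(Node("img1", "image"))
g.link("page1", "img1", relevance=0.8)
for node, weight in g.neighbors("page1", min_relevance=0.5):
    print(node.node_id, node.kind, weight)
```

A query in this setting would combine such graph navigation with similarity over the node features; the sketch only shows the underlying node/edge representation.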
© (2002) Society of Photo-Optical Instrumentation Engineers (SPIE)
Simone Santini, "Multimodal search in collections of images and text," Journal of Electronic Imaging 11(4), (1 October 2002). https://doi.org/10.1117/1.1504104
JOURNAL ARTICLE
14 PAGES


