Techniques for Content-Based Image Retrieval (CBIR) have been intensively explored due to the growing volume of
captured images and the need to retrieve them quickly. The medical field is a notable example, generating a large
flow of information, especially digital images employed for diagnosis. One issue that remains unsolved is how to
capture perceptual similarity: to achieve effective retrieval, one must characterize and quantify the perceptual
similarity as judged by the specialist in the field. This paper aims to fill that gap by providing consistent
support for similarity queries over medical images while preserving the semantics of the query intended by the
user. CBIR systems that rely on relevance feedback techniques usually ask users to label relevant images. In this
paper, we present a simple but highly effective strategy for building user profiles, taking advantage of such
labeling to implicitly gather the user's perceptual similarity. The user profiles maintain each user's preferred
settings, allowing the similarity assessment to be tuned, which includes dynamically changing the distance
function through an interactive process. Experiments using computed tomography lung images show that the proposed
approach is effective in capturing the users' perception.
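The abstract does not detail how the distance function is adapted from the labeled images. A common way to realize such relevance-feedback tuning is classic variance-based re-weighting of a weighted Euclidean distance: feature dimensions on which the relevant images agree receive higher weight. The sketch below is illustrative only, under that assumption; `update_weights` and `weighted_distance` are hypothetical names, not the paper's API.

```python
import numpy as np

def update_weights(relevant_feats):
    """Re-weight feature dimensions from the images the user labeled
    relevant: dimensions with low variance among relevant images are
    assumed more discriminative and receive higher weight (an
    illustrative, MARS-style heuristic; the paper's actual profile
    update is not specified in the abstract)."""
    std = relevant_feats.std(axis=0)
    w = 1.0 / (std + 1e-6)   # small epsilon avoids division by zero
    return w / w.sum()       # normalize so the weights sum to 1

def weighted_distance(query, candidate, weights):
    """Weighted Euclidean distance between a query and a candidate."""
    return float(np.sqrt(np.sum(weights * (query - candidate) ** 2)))
```

Each relevance-feedback round would recompute the weights from the newly labeled images and re-rank the database with the updated distance, which is one way the similarity assessment can change interactively per user profile.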
A challenge in Computer-Aided Diagnosis based on image exams is to provide a timely answer that complies with the specialist's expectation. In many situations, when a specialist receives a new image to analyze, information and knowledge from similar cases can be very helpful. For example, when a radiologist evaluates a new image, it is common to recall similar cases from the past. However, when performing similarity queries to retrieve similar cases, the approach frequently adopted is to extract meaningful features from the images and search the database based on those features. One of the most popular image features is the gray-level histogram, because it is simple and fast to obtain and provides the global gray-level distribution of the image. Moreover, normalized histograms are invariant to affine transformations of the image. Although widely used, gray-level histograms generate a large number of features, increasing the complexity of indexing and searching operations; this high dimensionality degrades the efficiency of processing similarity queries. In this paper we propose a new and efficient method that associates the Shannon entropy with the gray-level histogram to considerably reduce the dimensionality of histogram-based feature vectors. The proposed method was evaluated on a real dataset, and the results showed reductions of up to 99% in feature vector size while providing a gain in precision of up to 125% compared with the traditional gray-level histogram.
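The abstract does not specify how the entropy and the histogram are combined. A minimal sketch of the general idea, under the assumption that the 256-bin histogram is split into a few contiguous bands and each band is summarized by its Shannon entropy (the function name and banding scheme are illustrative, not the paper's method):

```python
import numpy as np

def entropy_features(image, levels=256, bands=8):
    """Shrink a gray-level histogram into a short entropy-based vector.
    The `levels`-bin histogram is split into `bands` contiguous ranges,
    and each range is reduced to the Shannon entropy of its normalized
    counts, cutting the feature vector from 256 values to `bands`
    (an illustrative reduction consistent with, but not taken from,
    the abstract)."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    feats = []
    for band in np.array_split(hist, bands):
        total = band.sum()
        if total == 0:
            feats.append(0.0)  # empty band carries no information
            continue
        p = band / total
        p = p[p > 0]           # 0 * log(0) is taken as 0
        feats.append(float(-np.sum(p * np.log2(p))))
    return np.array(feats)
```

With 8 bands of 32 bins each, every feature lies in [0, log2(32)] = [0, 5], and the vector is small enough to index cheaply with a multidimensional access method.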