Improving the quality of gray level images continues to be a challenging task, and the challenge increases for color images due to the interaction of multiple parameters within a scene. Each color plane or wavelength constitutes an image by itself, and its quality depends on many parameters, such as the absorption, reflectance, and scattering of the object under the lighting source. Non-uniformity of the lighting, the optics and electronics of the camera, and even the environment of the object are all sources of image degradation. Segmentation and interpretation of the image may therefore become very difficult if its quality is not enhanced. The main goal of the present work is to demonstrate an image processing algorithm inspired by concepts of the Human Visual System (HVS). HVS concepts have been widely used in gray level image enhancement, and here we show how they can be successfully extended to color images. The resulting Multi-Scale Spatial Decomposition (MSSD) algorithm is employed to enhance the quality of color images. Of particular interest for medical imaging is the enhancement of retinal images, whose quality is extremely sensitive to imaging artifacts. We show that our MSSD algorithm improves the readability and gradeability of retinal images, and we quantify these improvements using both subjective and objective metrics of image quality.
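The abstract does not detail the MSSD algorithm itself, but the general idea of a multi-scale spatial decomposition can be illustrated with a minimal sketch: split each color plane into blurred base layers and detail layers at several spatial scales, amplify the detail, and recombine. The function names, scales, and gain below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def gaussian_kernel(sigma):
    # 1-D Gaussian kernel, normalized to sum to 1.
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur(channel, sigma):
    # Separable Gaussian blur with edge padding (numpy only).
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    padded = np.pad(channel, r, mode="edge")
    rows = np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="valid"), 0, rows)

def enhance(channel, sigmas=(1.0, 4.0, 16.0), gain=0.5):
    # Illustrative multi-scale enhancement: at each scale, the detail layer
    # (image minus its blur) is amplified and added back; applied per color plane.
    channel = channel.astype(float)
    out = channel.copy()
    for s in sigmas:
        out += gain * (channel - blur(channel, s))
    return np.clip(out, 0.0, 255.0)
```

Applied independently to each color plane, a decomposition of this kind boosts local contrast at several spatial scales while leaving smooth regions unchanged.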
A continuing clinical need exists to find diagnostic tools that will detect and characterize the extent of retinal abnormalities as early as possible with non-invasive, highly sensitive techniques. The objective of this paper was to demonstrate the utility of a Hyperspectral Fundus Imager and related analytical tools to detect and characterize retinal tissues based on their spectral signatures. In particular, the paper shows that this system can measure spectral differences between normal retinal tissue and clinically significant macular edema. Future work will lead to clinical studies focused on spectrally characterizing retinal tissue and its diseases, and on detecting and tracking the progression of retinal disease.
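The abstract does not spell out the analytical tools used to compare spectral signatures. One common measure for such comparisons is the spectral angle between two per-pixel spectra, where a smaller angle indicates more similar tissue; the sketch below is an illustrative assumption, not the paper's method.

```python
import numpy as np

def spectral_angle(a, b):
    # Angle (radians) between two spectra, treated as vectors over wavelength.
    # Insensitive to overall brightness; 0 means identical spectral shape.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

A per-pixel map of the angle to a reference signature (e.g., normal retinal tissue) could then highlight regions whose spectra deviate, such as edematous tissue.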
The fusion of multi-modal medical images provides a new diagnostic tool with clinical applications. Over the years, image fusion has been used in a number of medical disciplines. However, little fusion work in ophthalmic imaging appears in the literature. With the advent of multi-modal digital information of the retina and advanced image registration programs, the possibility of displaying complementary information in one fused retinal image becomes visually and clinically exciting. The objective of this research was to demonstrate that, through fusion of multi-modal retinal information, one could increase the information content of retinal pathologies on a fused image. Two aspects of image fusion were addressed in this study: image registration and image fusion of two distinctly different modalities, Fluorescein Angiography (FA) videos and standard color photography. Quantitative analysis of the fusion results was performed using entropy and an image noise index. Qualitative analysis was performed by simultaneous visual comparison of the two registered modalities (FA and color) in both unfused and fused modes.
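Entropy is named above as one of the quantitative fusion metrics. A minimal sketch of Shannon entropy for an 8-bit image follows; under this metric, a fused image that retains information from both modalities should score higher than either source alone. (The paper's exact image noise index definition is not restated here.)

```python
import numpy as np

def shannon_entropy(img, bins=256):
    # Shannon entropy (bits) of an 8-bit image's gray-level histogram.
    # Higher values indicate a richer gray-level distribution.
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))
```

For example, a uniform image has entropy 0 bits, while an image split evenly between two gray levels has exactly 1 bit.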
Age-Related Macular Degeneration (ARMD) is the leading cause of irreversible visual loss among the elderly in the US and Europe. A computer-based system has been developed to track the position and margin of the ARMD-associated lesion: drusen. Variations in the subject's retinal pigmentation, the size and profusion of the lesions, and differences in image illumination and quality present significant challenges to most segmentation algorithms. An algorithm is presented that first classifies the image in order to optimize the variables of a mathematical morphology algorithm. A binary image is then obtained by applying Otsu's method to the reconstructed image, and lesion size and area distribution statistics are calculated. For training and validation, the University of Wisconsin provided longitudinal images of 22 subjects from their 10-year Beaver Dam Study. Using the Wisconsin Age-Related Maculopathy Grading System, three graders classified the retinal images according to drusen size and area of involvement. The percentages within the acceptable error between the three graders and the computer are as follows: Grader A: area 84%, size 81%; Grader B: area 63%, size 76%; Grader C: area 81%, size 88%. To validate the segmented position and boundary, one grader was asked to digitally outline the drusen boundary. The average accuracy based on sensitivity and specificity was 0.87 for thirty-four marked regions.
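Two of the quantities named above can be sketched directly: Otsu's method, which picks the gray level maximizing the between-class variance of the histogram, and sensitivity/specificity of a binary segmentation against a grader's outline. This is a minimal numpy sketch, not the authors' full classification-and-morphology pipeline.

```python
import numpy as np

def otsu_threshold(img):
    # Otsu's method: return the gray level t that maximizes the
    # between-class variance; pixels > t are treated as foreground.
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = hist.sum()
    levels = np.arange(256)
    cum_count = np.cumsum(hist)          # pixel count at or below each level
    cum_sum = np.cumsum(hist * levels)   # intensity sum at or below each level
    w0 = cum_count / total               # background class weight
    w1 = 1.0 - w0                        # foreground class weight
    mu0 = cum_sum / np.maximum(cum_count, 1)                          # background mean
    mu1 = (cum_sum[-1] - cum_sum) / np.maximum(total - cum_count, 1)  # foreground mean
    between = w0 * w1 * (mu0 - mu1) ** 2
    return int(np.argmax(between))

def sensitivity_specificity(pred, truth):
    # Compare a binary segmentation to a grader's binary outline.
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fn = np.sum(~pred & truth)
    fp = np.sum(pred & ~truth)
    return tp / (tp + fn), tn / (tn + fp)
```

In practice a tested library implementation such as `skimage.filters.threshold_otsu` would normally be used; the explicit form above only illustrates the between-class variance criterion.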