Diabetic retinopathy (DR) is one of the leading causes of blindness in the working-age population of developed countries. It is a complication of diabetes that damages the blood vessels supplying the retina. Deep neural networks have been widely used in automated systems for DR classification on eye fundus images. However, these models need a large number of annotated images. In the medical domain, expert annotations are costly, tedious, and time-consuming, so only a limited number of annotated images are available. This paper presents a semi-supervised method that leverages both unlabeled and labeled images to train a model that detects diabetic retinopathy. The proposed method uses unsupervised pretraining via self-supervised learning, followed by supervised fine-tuning on a small set of labeled images and knowledge distillation to increase performance on the classification task. The method was evaluated on the EyePACS test set and the Messidor-2 dataset, achieving 0.94 and 0.89 AUC, respectively, while using only 2% of the EyePACS training labels.
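A minimal sketch of the three-stage pipeline described above (PyTorch), assuming a ResNet-50 encoder, a SimCLR-style contrastive pretraining loss, and soft-label distillation; the paper's exact architecture and hyperparameters may differ.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def nt_xent_loss(z1, z2, tau=0.5):
    """Contrastive loss between two augmented views of the same batch."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)               # (2B, d)
    sim = z @ z.t() / tau                                     # cosine similarities
    n = z.shape[0]
    sim.fill_diagonal_(float('-inf'))                         # mask self-pairs
    targets = torch.arange(n, device=z.device).roll(n // 2)   # index of positive view
    return F.cross_entropy(sim, targets)

# Stage 1: self-supervised pretraining on unlabeled fundus images, e.g.
# z1, z2 = proj(encoder(view1)), proj(encoder(view2)); loss = nt_xent_loss(z1, z2)
encoder = models.resnet50(weights=None)
encoder.fc = torch.nn.Identity()                              # 2048-d features
proj = torch.nn.Linear(2048, 128)                             # projection head

# Stage 2: supervised fine-tuning with the small labeled subset (~2%).
classifier = torch.nn.Linear(2048, 2)                         # DR vs. no DR

# Stage 3: knowledge distillation from the fine-tuned teacher into a student.
def distill_loss(student_logits, teacher_logits, T=2.0):
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction='batchmean') * T * T
```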
Image acquisition and automatic quality analysis are fundamental stages in supporting an accurate ocular diagnosis. In particular, when eye fundus image quality is inadequate, it can hinder the diagnosis performed by experts. Portable, smartphone-based eye fundus acquisition devices have the advantages of low cost and easy deployment; their main disadvantage is a sacrifice in image quality. This paper presents a deep-learning-based model for assessing eye fundus image quality that is small enough to be deployed on a smartphone. The model was evaluated on a public eye fundus dataset with two sets of annotations, obtaining an accuracy of 0.911 on the binary classification task and 0.856 on the three-class classification task. Moreover, the model has far fewer parameters than other state-of-the-art models, making it a suitable alternative for a mobile eye fundus quality classification system.
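A minimal sketch of a lightweight quality classifier (PyTorch), assuming a MobileNetV2 backbone as the small, mobile-friendly architecture; the paper's actual network may differ.

```python
import torch
from torchvision import models

def build_quality_model(num_classes):
    """Small CNN for fundus image quality grading (2 or 3 classes)."""
    net = models.mobilenet_v2(weights=None)
    in_feats = net.classifier[1].in_features            # 1280 for MobileNetV2
    net.classifier[1] = torch.nn.Linear(in_feats, num_classes)
    return net

model = build_quality_model(num_classes=2)              # binary: good vs. bad quality
example = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model.eval(), example)         # export for mobile deployment
traced.save("fundus_quality.pt")
```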
Prostate cancer diagnosis is performed by pathologists through microscopic analysis of tissue samples from the prostate gland. The development of automatic acquisition and digitization technologies has enabled the construction of large collections of digitized histopathology slides, usually accompanied by clinical information and other metadata. These collections, along with their metadata, have the potential to be an invaluable resource for analyzing new, challenging cases, supporting diagnosis, prognosis, and theragnosis decisions. This paper presents a multimodal retrieval system based on a supervised multimodal kernel semantic embedding model that supports the search for relevant cases in a multimodal database, combining images (histopathology slides) and text (pathologists' reports). The system was tested on a multimodal prostate adenocarcinoma dataset composed of whole-slide images of tissue samples, pathologists' reports, and Gleason score grading information. It shows high performance for multimodal information retrieval, with a Mean Average Precision of 0.6263.
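An illustrative two-branch multimodal embedding for retrieval (PyTorch). This is a simplified stand-in for the supervised multimodal kernel semantic embedding, not the paper's model: image and text features are projected into a shared space whose supervision comes from the Gleason labels, and cases are ranked by cosine similarity. All feature dimensions are assumptions.

```python
import torch
import torch.nn.functional as F

class MultimodalEmbedding(torch.nn.Module):
    def __init__(self, img_dim=2048, txt_dim=300, sem_dim=64, n_grades=5):
        super().__init__()
        self.img_proj = torch.nn.Linear(img_dim, sem_dim)     # WSI feature branch
        self.txt_proj = torch.nn.Linear(txt_dim, sem_dim)     # report feature branch
        self.grade_head = torch.nn.Linear(sem_dim, n_grades)  # Gleason supervision

    def forward(self, img_feats, txt_feats):
        z = self.img_proj(img_feats) + self.txt_proj(txt_feats)
        return F.normalize(z, dim=1), self.grade_head(z)

def retrieve(query_z, db_z, k=10):
    """Rank database cases by cosine similarity to the query embedding."""
    scores = query_z @ db_z.t()
    return scores.topk(k, dim=1).indices
```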
Age-related macular degeneration is a common cause of vision loss in people aged 55 and older. The condition affects the light-sensing cells in the macula, limiting sharp, central vision. Spectral-domain optical coherence tomography (SD-OCT) highlights abnormalities and thickness changes in the retinal layers, which are useful for age-related macular degeneration diagnosis and follow-up. The neurosensory retina (NSR) map is defined as the thickness between the inner limiting membrane and the inner aspect of the retinal pigment epithelium complex. The NSR map has been used to differentiate healthy subjects from subjects with macular disease, but plotting the retinal thickness map depends critically on additional manufacturer software to draw it automatically. This paper therefore presents an end-to-end 3D convolutional neural network that automatically extracts the nine mean thickness values needed to draw the NSR map from an SD-OCT volume.
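A minimal sketch of an end-to-end 3D CNN that regresses the nine mean thickness values from an SD-OCT volume (PyTorch). Volume size, channel widths, and depth are assumptions; the paper's network may differ.

```python
import torch

class ThicknessNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.features = torch.nn.Sequential(
            torch.nn.Conv3d(1, 16, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.MaxPool3d(2),
            torch.nn.Conv3d(16, 32, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.AdaptiveAvgPool3d(1),
        )
        self.regressor = torch.nn.Linear(32, 9)    # nine ETDRS-style sector means

    def forward(self, volume):                     # volume: (B, 1, D, H, W)
        x = self.features(volume).flatten(1)
        return self.regressor(x)

model = ThicknessNet()
out = model(torch.randn(2, 1, 32, 128, 128))       # -> (2, 9) thickness values
```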
Glaucoma is an eye condition that leads to vision loss and blindness. The ophthalmoscopy exam evaluates the shape, color, and proportion between the optic disc and the physiologic cup, but the lack of agreement among experts remains the main diagnostic problem. Deep convolutional neural networks combined with automatically extracted morphological features, such as the cup-to-disc distance in the four quadrants; the perimeter, area, eccentricity, major radius, and minor radius of the optic disc and cup; and all the ratios among these parameters, may enable better automatic grading of glaucoma. This paper presents a strategy that merges morphological features and deep convolutional neural networks as a novel methodology to support glaucoma diagnosis in eye fundus images.
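A minimal sketch of fusing handcrafted morphological measurements with CNN features for glaucoma grading (PyTorch). The ResNet-18 backbone and the dimensionality of the morphological vector (quadrant cup-to-disc distances, perimeter, area, eccentricity, radii, and their ratios) are assumptions; the paper lists the measures but not the exact network.

```python
import torch
from torchvision import models

class FusionNet(torch.nn.Module):
    def __init__(self, morph_dim=24, num_classes=2):
        super().__init__()
        cnn = models.resnet18(weights=None)
        cnn.fc = torch.nn.Identity()                # 512-d deep features
        self.cnn = cnn
        self.head = torch.nn.Sequential(
            torch.nn.Linear(512 + morph_dim, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, num_classes),
        )

    def forward(self, image, morph_feats):
        deep = self.cnn(image)                      # (B, 512)
        fused = torch.cat([deep, morph_feats], dim=1)
        return self.head(fused)
```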
Diabetic retinopathy diagnosis draws on several clinical data sources, but the lack of tools to process these data leads to subjective and inconsistent diagnoses. Convolutional networks that analyze and extract features from eye fundus images can enable automatic detection that supports medical personnel in grading diabetic retinopathy. This paper describes convolutional neural networks as a methodology to detect and discriminate between exudate and healthy regions in eye fundus images.
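A minimal sketch of a patch-level CNN that discriminates exudate from healthy regions in fundus images (PyTorch). Patch size and layer widths are assumptions; the paper's architecture may differ.

```python
import torch

exudate_cnn = torch.nn.Sequential(
    torch.nn.Conv2d(3, 32, kernel_size=3, padding=1), torch.nn.ReLU(),
    torch.nn.MaxPool2d(2),
    torch.nn.Conv2d(32, 64, kernel_size=3, padding=1), torch.nn.ReLU(),
    torch.nn.MaxPool2d(2),
    torch.nn.Flatten(),
    torch.nn.Linear(64 * 8 * 8, 2),                 # exudate vs. healthy patch
)
logits = exudate_cnn(torch.randn(4, 3, 32, 32))     # 32x32 patches -> (4, 2)
```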