This paper discusses quantum machine learning, which exploits superposition and entanglement, for disease categorization using OCT images. To the best of our knowledge, this is the first application of a quantum computing element in a neural network model for classifying ophthalmological disease. The model was built and tested with PennyLane (PennyLane.ai), an open-source software tool based on the concept of quantum differentiable programming. The functioning of the model training circuit was tested on an IBM 5-qubit system ("ibmq_belem") and a 32-qubit simulator ("ibmq_qasm_simulator"). The model is a hybrid quantum-classical model with a 2-qubit QNode converted into a layer; the internal operations of this qlayer are AngleEmbedding, BasicEntanglerLayers, and measurements. Drusen, choroidal neovascularization (CNV), and diabetic macular edema (DME) OCT images formed the abnormal/disease class. The model was trained using 414 normal and 504 abnormal labelled OCT scans, and validation used 97 normal and 205 abnormal OCT scans. The resulting model had an accuracy of 0.95 in this preliminary 2-class classifier. This study aims to develop a 4-class classifier with 4 qubits and explore the potential of quantum computing for disease categorization. A preliminary performance analysis of quantum machine learning, the steps involved, and operational details are discussed.
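The 2-qubit qlayer described above can be sketched in plain NumPy. This is an illustrative simulation, not the PennyLane implementation itself: AngleEmbedding is modelled as one RX rotation per qubit with the input features as angles, and the BasicEntanglerLayers-style layer as trainable RX rotations followed by a single CNOT (for two wires, PennyLane's ring of CNOTs reduces to one); the feature and weight values are arbitrary examples.

```python
import numpy as np

def rx(theta):
    """Single-qubit RX rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

# CNOT with qubit 0 as control and qubit 1 as target
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def qlayer(features, weights):
    """AngleEmbedding (one RX per qubit, angles = input features), then
    a BasicEntanglerLayers-style layer (trainable RX rotations followed
    by a CNOT), then the Pauli-Z expectation value on each qubit."""
    state = np.zeros(4, dtype=complex)
    state[0] = 1.0                                             # start in |00>
    state = np.kron(rx(features[0]), rx(features[1])) @ state  # embedding
    state = np.kron(rx(weights[0]), rx(weights[1])) @ state    # trainable rotations
    state = CNOT @ state                                       # entangling gate
    probs = np.abs(state) ** 2
    z = np.array([1, -1])
    return np.array([probs @ np.kron(z, [1, 1]),   # <Z> on qubit 0
                     probs @ np.kron([1, 1], z)])  # <Z> on qubit 1

out = qlayer(np.array([0.3, 1.2]), np.array([0.5, -0.4]))
```

The two expectation values in `out` lie in [-1, 1] and play the role of the qlayer's outputs that feed the classical part of the hybrid model.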
One of the leading causes of irreversible vision loss is Diabetic Retinopathy (DR). The International Clinical Diabetic Retinopathy Scale (ICDRS) provides grading criteria for DR. Deep Convolutional Neural Networks (DCNNs) achieve high performance in DR grading in terms of classification evaluation metrics; however, these metrics alone are not sufficient for evaluation. The eXplainable Artificial Intelligence (XAI) methodology provides insight into the decisions made by networks by producing sparse, generic heat maps highlighting the most critical DR features. However, XAI still does not satisfy clinical criteria, since it does not explain the number and types of lesions. Hence, we propose a computational toolbox that provides lesion-based explanation according to the grading system criteria for determining severity levels. According to the ICDRS, DR has 10 major lesions and 4 severity levels. Experienced clinicians annotated 143 DR fundus images, and we developed a toolbox containing 9 lesion-specific segmentation networks. These networks detect lesions at high annotation resolution and then compute the DR severity grade according to the ICDRS. The network employed in this study is an optimized version of the Holistically-Nested Edge Detection network (HEDNet). Using this model, lesions such as hard exudates (Ex), cotton wool spots (CWS), microaneurysms (MA), intraretinal haemorrhages (IHE), and vitreous/preretinal haemorrhages (VPHE) were properly detected, but the prediction of lesions such as venous beading (VB), neovascularization (NV), intraretinal microvascular abnormalities (IRMA), and fibrous proliferation (FP) had low specificity. Consequently, this affects the value of the grade, which is computed from the segmented masks of all contributing lesions.
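As a sketch of how per-lesion segmentation masks could be turned into a severity grade, the following illustration counts connected components in each binary mask and applies placeholder decision rules. The thresholds, the lesion subset, and the 4-connectivity flood fill are hypothetical simplifications for illustration only, not the actual ICDRS criteria or the toolbox's implementation.

```python
import numpy as np

def count_lesions(mask):
    """Count connected components in a binary lesion mask with a simple
    flood fill (4-connectivity); a stand-in for post-processing the
    segmentation network's output."""
    mask = mask.copy().astype(bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j]:
                count += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and mask[y, x]:
                        mask[y, x] = False
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

EMPTY = np.zeros((1, 1))

def dr_grade(masks):
    """Map per-lesion masks to a severity level 0-4.
    The rules and thresholds below are illustrative placeholders,
    not the ICDRS grading criteria."""
    if count_lesions(masks.get("NV", EMPTY)) > 0:
        return 4                       # proliferative DR
    n_ihe = count_lesions(masks.get("IHE", EMPTY))
    n_ma = count_lesions(masks.get("MA", EMPTY))
    if n_ihe >= 20:
        return 3                       # severe non-proliferative
    if n_ihe + n_ma >= 5:
        return 2                       # moderate
    if n_ma >= 1:
        return 1                       # mild
    return 0                           # no apparent retinopathy
```

This structure also makes the sensitivity of the grade to each lesion network explicit: a false-positive NV mask, for example, immediately forces the highest grade, which is why low specificity on NV, IRMA, or VB degrades the final grading.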
Deep learning methods for ophthalmic diagnosis have shown considerable success in tasks like segmentation and classification. However, their widespread application is limited because the models are opaque and vulnerable to making wrong decisions in complicated cases. Explainability methods show the features that a system used to make a prediction, while uncertainty awareness is the ability of a system to highlight when it is not sure about its decision. This is one of the first studies using uncertainty and explanations for informed clinical decision-making. We perform uncertainty analysis of a deep learning model for the diagnosis of four retinal diseases - age-related macular degeneration (AMD), central serous retinopathy (CSR), diabetic retinopathy (DR), and macular hole (MH) - using images from a publicly available dataset (OCTID). Monte Carlo (MC) dropout is used at test time to generate a distribution of parameters, and the predictions approximate the predictive posterior of a Bayesian model. A threshold is computed using the distribution, and uncertain cases can be referred to an ophthalmologist, thus avoiding an erroneous diagnosis. The features learned by the model are visualized using a proven attribution method from a previous study. The effects of uncertainty on model performance and the relationship between uncertainty and explainability are discussed in terms of clinical significance. The uncertainty information, along with the heatmaps, makes the system more trustworthy for use in clinical settings.
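The MC dropout procedure can be sketched as follows: keep dropout active at test time, run T stochastic forward passes, and summarize the averaged softmax with an uncertainty score such as predictive entropy. The tiny two-layer network, the random weights, and the referral threshold of 1.0 below are illustrative stand-ins for the trained deep model and for the threshold computed from the score distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pretrained" weights for a 2-layer classifier over 4 retinal classes;
# in the study these would come from the trained deep model.
W1 = rng.normal(size=(16, 32))
W2 = rng.normal(size=(32, 4))

def mc_dropout_predict(x, T=100, p=0.5):
    """T stochastic forward passes with dropout left ON at test time;
    the sample of softmax outputs approximates the predictive posterior."""
    probs = []
    for _ in range(T):
        h = np.maximum(x @ W1, 0)          # ReLU hidden layer
        mask = rng.random(h.shape) > p     # fresh Bernoulli dropout mask
        h = h * mask / (1 - p)             # inverted-dropout scaling
        logits = h @ W2
        e = np.exp(logits - logits.max())  # stable softmax
        probs.append(e / e.sum())
    mean = np.array(probs).mean(axis=0)
    # Predictive entropy of the mean distribution as the uncertainty score
    entropy = -np.sum(mean * np.log(mean + 1e-12))
    return mean, entropy

mean, uncertainty = mc_dropout_predict(rng.normal(size=16))
refer_to_clinician = uncertainty > 1.0  # threshold taken from the score distribution
```

Cases whose entropy exceeds the chosen threshold are flagged for referral rather than auto-diagnosed, which is the triage behaviour the abstract describes.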
Under ideal conditions, the human visual system can perceive differences in the relative positions of spatially localized objects that are smaller than the size or spacing of the foveal cones; this ability is referred to as "hyperacuity". Vernier acuity, a form of hyperacuity, is sensitive to the spatial alignment of lines or dots, and observers can judge misalignments of about 2 arc seconds. However, thresholds depend on the psychophysical method as well as on stimulus parameters such as size, shape, color, and contrast. We are working to standardize the test for clinical use by developing an adaptive staircase psychophysical procedure that involves response-driven positioning of the stimulus in a 3-down, 1-up design. Responses were recorded using a 3-alternative forced-choice technique for 7 different gap sizes (vertical separations) of the stimulus features, ranging from 128 to 2 arc minutes, with a test time of about 15-20 minutes. The standard deviation of the reported aligned responses was defined as the threshold in arc seconds. We performed this pilot study on five normal, healthy subjects for method validation. Thresholds were measured for different stimulus parametric conditions, and the gap size versus threshold functions were plotted. The mean difference between high and reverse contrast was statistically significant only for a gap size of 32 arc minutes. This pilot study aimed to develop and validate an adaptive staircase technique, implemented in user-friendly software, for clinical use in screening diseases such as glaucoma in developing nations.
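A 3-down, 1-up staircase with a 3-AFC response can be sketched in a minimal simulation: the offset shrinks after three consecutive correct responses and grows after any error, and a threshold is estimated from the offsets at the reversal points. The observer model (chance performance of 1/3 rising with offset), start offset, step factor, and reversal-based threshold estimate are illustrative assumptions, not the study's stimulus parameters or its standard-deviation-based threshold definition.

```python
import math
import random
import statistics

random.seed(1)

def simulated_observer(offset_arcsec, true_threshold=4.0):
    """Hypothetical observer: the probability of a correct 3-AFC response
    rises from chance (1/3) toward 1 as the misalignment offset grows."""
    p = 1 / 3 + (2 / 3) * (1 - math.exp(-offset_arcsec / true_threshold))
    return random.random() < p

def staircase(start=64.0, step_factor=0.5, n_reversals=8):
    """3-down 1-up rule: shrink the offset after 3 consecutive correct
    responses, enlarge it after any error, and estimate the threshold
    from the offsets at the reversal points."""
    offset, correct_run, direction = start, 0, -1
    reversals = []
    while len(reversals) < n_reversals:
        if simulated_observer(offset):
            correct_run += 1
            if correct_run == 3:
                correct_run = 0
                if direction == +1:       # was going up, now turning down
                    reversals.append(offset)
                direction = -1
                offset *= step_factor
        else:
            correct_run = 0
            if direction == -1:           # was going down, now turning up
                reversals.append(offset)
            direction = +1
            offset /= step_factor
        offset = min(max(offset, 0.5), 128.0)  # clamp to a displayable range
    return statistics.mean(reversals)

threshold_estimate = staircase()
```

The 3-down, 1-up rule converges on the offset yielding roughly 79% correct, which is why it is a common choice for clinical threshold estimation.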
Optical coherence tomography (OCT) and retinal fundus images are widely used for detecting retinal pathology. In particular, these images are used by deep learning methods for the classification of retinal disease. The main hurdle to widespread deployment of AI-based decision making in healthcare is the lack of interpretability of cutting-edge deep learning methods. Conventionally, decision making by deep learning methods is considered a black box. Recently, there has been a focus on developing techniques for explaining the decisions taken by deep neural networks, i.e., Explainable AI (XAI), to improve their acceptability for medical applications. In this study, a framework for interpreting the decision making of a deep learning network for retinal OCT image classification is proposed. An Inception-v3 based model was trained to detect choroidal neovascularization (CNV), diabetic macular edema (DME), and drusen from a dataset of over 80,000 OCT images. We visualized and compared various interpretability methods for the three disease classes. The attributions from the various approaches are compared and discussed with respect to clinical significance. Results showed successful attribution of the specific pathological regions of the OCT responsible for a given condition, in the absence of any pixel-level annotations.
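One model-agnostic attribution method of the kind such a framework can compare is occlusion sensitivity, which needs no pixel-level annotations: slide a patch over the image and record how much the class score drops when that region is blanked out. The toy image and scoring function below are placeholders for an OCT scan and the Inception-v3 model's class score; the abstract does not specify which attribution methods were used, so this is only a sketch of the general idea.

```python
import numpy as np

def occlusion_map(image, predict_fn, patch=4, baseline=0.0):
    """Occlusion sensitivity: regions whose removal most reduces the
    class score receive the highest attribution."""
    h, w = image.shape
    base_score = predict_fn(image)
    heat = np.zeros((h, w))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline  # blank one patch
            # Attribution = drop in score when this patch is hidden
            heat[i:i + patch, j:j + patch] = base_score - predict_fn(occluded)
    return heat

# Toy "classifier" whose score depends only on a bright lesion-like region
img = np.zeros((16, 16))
img[4:8, 4:8] = 1.0
score = lambda x: x[4:8, 4:8].sum()
heat = occlusion_map(img, score)
```

In this toy setup the heat map is nonzero exactly over the lesion-like region, mirroring how attribution maps localize the pathological regions (e.g. fluid in DME, sub-retinal deposits for drusen) that drive a prediction.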