In this paper, we present the development and deployment of the Fundus Analysis Software Tool (FAST), which enables the analysis of different anatomical features and pathologies within fundus images over time, and demonstrate its usefulness with three use cases. First, we used FAST to acquire 616 fundus images from a remote clinic in a HIPAA-compliant manner. An ophthalmologist at the clinic then used FAST to annotate 190 fundus images containing exudates at the pixelwise level in a time-efficient manner. Compared with publicly available datasets, ours constitutes the largest pixelwise-labeled collection of images and the first exudate segmentation dataset with eye-matched pairs of images for a given patient. Second, we developed a computer-aided detection (CAD) algorithm for optic disk segmentation, which achieved a mean intersection over union of 0.930, comparable to the disagreement between ophthalmologist annotations. We deployed this algorithm into FAST, where it segments the optic disk, renders the segmentation on screen, and simultaneously fills out the specified optic disk fields of a DICOM-SR report on the fundus image. Third, we integrated our software with the open-source EHR framework OpenMRS, so that FAST can upload both automatic and manual analyses of the fundus to a remote server using the HL7 FHIR standard and then retrieve a patient's historical reports in chronological order. Finally, we discuss our design decisions in developing FAST, particularly its treatment of DICOM-SR reports based on fundus images and its use of the FHIR standard, as well as next steps toward enabling effective analyses of fundus images.
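The mean intersection over union (IoU) reported above can be computed per image from binary segmentation masks. This is a minimal NumPy sketch; the function name and toy masks are illustrative, not taken from FAST:

```python
import numpy as np

def iou(pred, gt):
    """Intersection over union between two binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # convention: two empty masks agree perfectly
    return inter / union if union else 1.0

# toy 4x4 masks: prediction covers one extra pixel beyond the ground truth
pred = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
gt   = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(iou(pred, gt))  # 3/4 = 0.75
```

The reported 0.930 would be the mean of this score over the evaluation set.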
Currently, the methods used to develop radiation therapy (RT) treatment plans for head and neck cancers rely on clinician experience and a small set of universal guidelines, which results in inconsistent and variable plans. Data-driven support can assist clinicians by reducing the inconsistency associated with treatment planning and by providing empirical estimates to minimize the radiation delivered to healthy organs near the tumor. We created a database of DICOM-RT objects that stores historical cases; when a new DICOM-RT object is uploaded, the system returns a set of similar treatment plans to assist the clinician in creating the treatment plan for the current patient. The database first extracts features from the DICOM-RT object to quantitatively compare and evaluate the similarity of cases, enabling the system to mine for cases with a defined degree of similarity. The feature extraction methods are based on the spatial relationships between the tumors and the organs at risk (OARs), from which the overlap volume histogram (OVH) and the spatial target similarity (STS) are generated; these capture the volumetric and locational similarity between each OAR and the tumor. Finding cases with similar tumor anatomy is useful because this similarity translates to similarity in radiation dosage. The developed system was applied to three RT sites (University of California, Los Angeles; Technical University of Munich; and the State University of New York at Buffalo / Roswell Park), with a total of 247 cases, to evaluate the system for both inter- and intra-institutional best practices and results. We also discuss a roadmap for correlating outcome results with the decision support system, which will enhance its overall performance and utilization in the RT workflow. Because the database returns historical cases similar to the current one, it could become a worthwhile decision support tool for clinicians as they create new radiation therapy treatment plans.
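The overlap volume histogram can be illustrated with a small brute-force sketch: for each distance r, it reports the fraction of OAR voxels lying within r of the target. The function below is a hypothetical simplification (unsigned distances, small grids only); the actual feature extraction in the system is not specified here:

```python
import numpy as np

def overlap_volume_histogram(target, oar, radii):
    """For each r in radii, return the fraction of OAR voxels whose
    distance to the nearest target voxel is <= r (brute force)."""
    t = np.argwhere(target)  # target voxel coordinates, shape (Nt, ndim)
    o = np.argwhere(oar)     # OAR voxel coordinates, shape (No, ndim)
    # pairwise Euclidean distances, then nearest-target distance per OAR voxel
    diffs = o[:, None, :] - t[None, :, :]
    dmin = np.sqrt((diffs ** 2).sum(-1)).min(axis=1)
    return np.array([(dmin <= r).mean() for r in radii])

# toy 2D example: one target voxel, two OAR voxels at distance 2
target = np.zeros((5, 5), bool); target[2, 2] = True
oar = np.zeros((5, 5), bool); oar[2, 0] = oar[2, 4] = True
print(overlap_volume_histogram(target, oar, [1, 2, 3]))  # [0. 1. 1.]
```

Comparing such curves between a new case and historical cases is one way the system can rank anatomical similarity.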
The increasing incidence of diabetes mellitus (DM) in modern society has become a serious issue, and DM can lead to several secondary clinical complications. One of these complications is diabetic retinopathy (DR), the leading cause of new cases of blindness among adults in the United States. While DR can be treated if screened for and caught early in its progression, the only currently effective method to detect symptoms of DR in the eyes of DM patients is manual analysis of fundus images. Such manual analysis is time-consuming for ophthalmologists and can reduce access to DR screening in rural areas. Effective automatic prescreening tools on a cloud-based platform are therefore a potential solution to this problem. Recently, deep learning (DL) approaches have been shown to achieve state-of-the-art performance in image analysis tasks. In this study, we established a research PACS for viewing DICOMized and anonymized fundus images. We prototyped a deep learning engine in the PACS server to perform prescreening classification of uploaded fundus images into DR grades. We fine-tuned a deep convolutional neural network (CNN) model pretrained on the ImageNet dataset using over 30,000 labeled image samples from the public Kaggle Diabetic Retinopathy Detection fundus image dataset. We linked the PACS repository with the DL engine and displayed the predicted DR result in the PACS worklist. The initial prescreening results were promising, and such applications could serve as a "second reader" in future CAD development for next-generation PACS.
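The prescreening step can be sketched as a post-processing function that maps the CNN's output logits for one image to a DR grade and a referral flag for the PACS worklist. The grade names follow the Kaggle dataset's 0-4 scale; the function name and referral threshold below are hypothetical, not from the described system:

```python
import numpy as np

# Kaggle Diabetic Retinopathy Detection grading scale (0-4)
DR_GRADES = ["No DR", "Mild", "Moderate", "Severe", "Proliferative"]

def prescreen(logits, refer_threshold=2):
    """Convert CNN logits for one fundus image to a DR grade label and
    a referral flag (grade >= threshold flags the study for review)."""
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                 # softmax over the 5 grades
    grade = int(probs.argmax())
    return DR_GRADES[grade], grade >= refer_threshold

print(prescreen(np.array([0.1, 0.2, 2.5, 0.3, 0.1])))
# ('Moderate', True)
```

In a "second reader" workflow, the flag rather than the raw grade would drive worklist prioritization, with the ophthalmologist making the final call.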