Image acquisition and automatic quality assessment are fundamental tasks in supporting an accurate ocular diagnosis. In particular, when eye fundus image quality is inadequate, it can hinder the diagnosis performed by experts. Portable, smartphone-based eye fundus image acquisition devices have the advantages of low cost and easy deployment; their main disadvantage, however, is a sacrifice in image quality. This paper presents a deep-learning-based model for assessing eye fundus image quality that is small enough to be deployed on a smartphone. The model was evaluated on a public eye fundus dataset with two sets of annotations. The proposed method obtained accuracies of 0.911 and 0.856 in the binary and three-class classification tasks, respectively. In addition, the presented method has a small number of parameters compared to other state-of-the-art models, making it an alternative for a mobile eye fundus quality classification system.
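The abstract does not specify the architecture, but the emphasis on a small parameter count for smartphone deployment suggests techniques such as depthwise-separable convolutions (as in MobileNet-style networks). The following sketch, which is purely illustrative and not the authors' model, compares the parameter count of a standard 3x3 convolution with a depthwise-separable one, showing why such layers shrink mobile classifiers; the layer sizes chosen are hypothetical.

```python
# Illustrative sketch (NOT the paper's architecture): parameter counts of a
# standard 3x3 convolution vs. a depthwise-separable one, the common trick
# behind smartphone-sized CNNs such as MobileNet.

def standard_conv_params(c_in: int, c_out: int, k: int = 3) -> int:
    """Weights + biases of a regular k x k convolution."""
    return k * k * c_in * c_out + c_out

def depthwise_separable_params(c_in: int, c_out: int, k: int = 3) -> int:
    """Depthwise k x k conv (one filter per input channel) followed by a
    1x1 pointwise conv, each with its own biases."""
    depthwise = k * k * c_in + c_in
    pointwise = c_in * c_out + c_out
    return depthwise + pointwise

if __name__ == "__main__":
    # Hypothetical layer widths for a small quality classifier.
    for c_in, c_out in [(32, 64), (64, 128), (128, 256)]:
        std = standard_conv_params(c_in, c_out)
        sep = depthwise_separable_params(c_in, c_out)
        print(f"{c_in:>3}->{c_out:<4} standard={std:>7,} "
              f"separable={sep:>6,} ratio={std / sep:.1f}x")
```

Across these example layers the separable variant uses roughly 7–8x fewer parameters, which is the kind of saving that makes on-device fundus quality classification feasible.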