Quantification of biological nitrogen fixation (BNF) in legumes is normally done via analytical methods that require sampling, drying, grinding, and laboratory processing. These methods are time-consuming, expensive, and inaccessible to growers. The correlation between BNF quantity and nodule number and nodule mass can be exploited to develop tools for rapid BNF assessment. In this work, we developed a graphical user interface (GUI)-based deep learning and image processing system for legume nodule segmentation and classification that determines the characteristics of legume nodules from digital images. During image acquisition, legume root samples were imaged with a smartphone camera and a lab-made imaging setup. A total of 1468 digital images were collected from 367 root systems. After the first round of imaging, nodules were separated from the roots, and another image was obtained from the nodules of each sample. For comparison and validation, nodules were manually counted, dried, and weighed. In this study, a categorized image data library was developed and used for deep learning and image processing. Digital image processing filters and an image segmentation method were applied to the digital images of the root systems to count the nodules and extract their characteristics. Deep learning models were used to classify the images into different legume classes. Furthermore, a GUI was developed to simplify the use of the deep learning/digital image processing algorithms. The preliminary results of this study demonstrate that our deep learning/image analysis system has great potential to accurately quantify, characterize, and count nodules, which could be extremely valuable to growers.
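The abstract does not detail the segmentation pipeline, but a common final step in counting segmented objects such as nodules is connected-component labeling of a binary mask. The sketch below is a minimal, hypothetical illustration (not the authors' implementation): it counts 4-connected foreground regions in a thresholded mask and reports their pixel areas, with a `min_size` filter standing in for noise removal.

```python
from collections import deque

def count_blobs(mask, min_size=1):
    """Count connected foreground regions (candidate nodules) in a binary mask.

    mask: 2D list of 0/1 values, e.g. from thresholding a root-system image.
    min_size: ignore components smaller than this many pixels (noise filter).
    Returns (count, list_of_component_areas).
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    areas = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # BFS flood fill over the 4-connected neighborhood
                size, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if size >= min_size:
                    areas.append(size)
    return len(areas), areas
```

The component areas could then feed characteristics such as nodule size distribution; a production pipeline would typically use a library routine (e.g. OpenCV or scikit-image labeling) instead of this explicit flood fill.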
Dental caries remains the most prevalent chronic disease in both children and adults. Optical coherence tomography (OCT) is a noninvasive optical imaging modality used to image oral samples for the diagnosis of carious lesions, but detecting early-stage dental caries with high accuracy remains challenging. Deep learning models have been employed to classify OCT images for various healthcare applications. In this paper, human tooth specimens were imaged ex vivo using OCT imaging systems, and a three-class grading system based on a deep learning model was developed for the detection and classification of carious lesions. This study is a step toward an automated deep learning/OCT imaging system for early dental caries diagnosis.
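In a three-class grading system, the network's final layer typically produces one raw score (logit) per class, which is converted to a predicted grade by softmax and argmax. The sketch below illustrates only that last step; the grade labels are placeholders, since the abstract does not name the three classes.

```python
import math

# Hypothetical labels for the three-class grading system (not from the paper).
GRADES = ["sound", "early_lesion", "advanced_lesion"]

def grade_from_logits(logits):
    """Map a model's raw 3-class output to a grade and its softmax confidence."""
    shift = max(logits)                            # subtract max for stability
    exps = [math.exp(z - shift) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = probs.index(max(probs))
    return GRADES[best], probs[best]
```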
Dental caries is a common chronic infectious oral disease affecting most teenagers and adults worldwide. Optical coherence tomography (OCT) has been studied extensively for the detection of early carious lesions. Deep learning is a rapidly emerging area of biomedical research that has yielded impressive diagnostic and predictive results in oral radiology. Deep learning models, particularly deep convolutional neural networks (CNNs), can be employed along with an OCT imaging system to identify early dental caries more accurately. In this work, after OCT data acquisition, data augmentation was performed to obtain a large amount of training data for effective learning, since collecting such data is often expensive and laborious. For the backpropagation process, seven optimization methods, namely Adadelta, AdaGrad, Adam, AdaMax, Nadam, RMSProp, and stochastic gradient descent (SGD), were used to improve the accuracy of a CNN classifier for diagnosing dental caries. In this study, 75% of the data were used for training and 25% for testing. The diagnostic accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and receiver operating characteristic (ROC) curve were calculated to assess the detection and diagnostic performance of the deep CNN algorithm. This study highlights the performance of various optimization methods for deep CNN models with OCT images to detect dental caries.
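The diagnostic measures named above all derive from the counts of a binary confusion matrix (caries vs. sound). As a minimal sketch of that computation, assuming true/false positive and negative counts `tp`, `fp`, `tn`, `fn` from a test set:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic measures from a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true positive rate (recall)
    specificity = tn / (tn + fp)   # true negative rate
    ppv = tp / (tp + fp)           # positive predictive value (precision)
    npv = tn / (tn + fn)           # negative predictive value
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "ppv": ppv, "npv": npv}
```

The ROC curve additionally requires the classifier's continuous scores, since it traces sensitivity against (1 − specificity) as the decision threshold is swept; a single confusion matrix gives only one operating point on that curve.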