An important point-of-care diagnostic technology for COVID-19 is X-ray imaging of the lungs. Here we present a novel deep learning training method that combines supervised and reinforcement learning methodologies to enable transfer learning in a convolutional neural network (CNN). The method integrates hill-climbing techniques with stochastic gradient descent with momentum to train CNN architectures without overfitting on small datasets. The model was trained on the Kaggle COVID-19 Chest Radiography dataset, which consists of 219 COVID-19-positive images, 1341 normal images, and 1345 viral pneumonia images. Since CNN training can be biased by class imbalance and is limited by available computing power, the dataset was reduced to 219 images per class. From each class, 150 randomly selected images were used to train the CNN, and the model was tested on the remaining 69 independent images. Transfer training was performed on three architectures: VGG-19, DenseNet-201, and NASNet. DenseNet-201 performed best, achieving an accuracy of 96.1%. VGG-19 and DenseNet-201 each had a sensitivity of 91.3%, while NASNet had a slightly higher sensitivity of 92.8%. These results give high confidence in the classifications produced by these models and show that deep learning methodologies can identify COVID-19 patients quickly and accurately.
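The hybrid training scheme is not specified in detail in the abstract, but its core idea can be sketched on a toy one-dimensional loss: take momentum-SGD steps and periodically propose a random hill-climbing perturbation, keeping it only if it lowers the loss. All function names and hyperparameters below are illustrative assumptions, not the paper's actual implementation.

```python
import random

def loss(w):
    # Simple convex surrogate for a training loss, minimized at w = 3.
    return (w - 3.0) ** 2

def grad(w):
    return 2.0 * (w - 3.0)

def train(w=0.0, lr=0.1, momentum=0.9, steps=100, seed=0):
    """Momentum SGD with occasional accepted-only-if-better hill climbing."""
    rng = random.Random(seed)
    v = 0.0
    for t in range(steps):
        # Stochastic-gradient-descent-with-momentum update.
        v = momentum * v - lr * grad(w)
        w = w + v
        # Every few steps, try a random hill-climbing move and keep it
        # only if it reduces the loss (resetting the momentum buffer).
        if t % 5 == 0:
            cand = w + rng.gauss(0.0, 0.1)
            if loss(cand) < loss(w):
                w, v = cand, 0.0
    return w
```

In the paper this idea is applied to the weights of pretrained CNNs (VGG-19, DenseNet-201, NASNet) rather than a scalar; the hill-climbing acceptance rule is one plausible way to combine a search heuristic with gradient descent without destabilizing training.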
With the increasing use of deep learning in biomedical applications, large labeled medical image datasets are needed for training and validation. However, accumulating labeled datasets is expensive and time consuming. Recently, generative adversarial networks (GANs) have been used to generate synthetic datasets. Currently, the fidelity of GAN output is typically assessed with the structural similarity index measure (SSIM), which is inadequate for image comparison because it underestimates distortions near hard edges. In this paper, we compare the real DRIVE dataset with the synthetic FunSyn-Net dataset using Fourier transform techniques and show that the Fourier behavior of the two datasets differs markedly, especially at high frequencies. For real images, the amplitude of the Fourier components decreases exponentially with increasing frequency; for synthesized images, the amplitude decreases much more slowly. When a linear function is fit to the high-frequency components, the slope distributions of the two datasets are completely separated, with no overlap. The average slopes in log scale for the DRIVE and FunSyn-Net datasets were 0.0195 and 0.009, respectively. We also examined autocorrelations of the horizontal cut of the Fourier transform and again found a statistically significant difference between the means for the two datasets. Finally, real images exhibit higher magnitude-squared coherence in the Fourier domain than synthesized images. Fourier analysis has proven effective at finding differences between real and synthesized images and can be used to improve GAN synthesis models.
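The slope analysis described above can be sketched as follows. The exact pipeline is not given in the abstract, so the function below is a hypothetical reconstruction: take the 2D FFT of an image, extract the horizontal cut of the amplitude spectrum through DC, and fit a line to the log amplitude over the high-frequency half. Real images, whose amplitudes fall off quickly, should yield a steeper (more negative) slope than images with flatter spectra.

```python
import numpy as np

def high_freq_slope(img):
    """Fit a line to log10 amplitude vs. frequency on the horizontal cut.

    Hypothetical reconstruction of the paper's analysis: grid sizes,
    the choice of cut, and the high-frequency window are assumptions.
    """
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    row = spec[spec.shape[0] // 2]         # horizontal cut through DC
    half = row[row.shape[0] // 2 + 1:]     # positive frequencies only
    hi = half[half.shape[0] // 2:]         # high-frequency portion
    freqs = np.arange(hi.shape[0])
    slope, _ = np.polyfit(freqs, np.log10(hi + 1e-12), 1)
    return slope
```

Applied to every image in each dataset, this yields the per-dataset slope distributions the abstract compares; a smooth image (fast spectral decay) produces a more negative slope than, say, white noise, whose spectrum is flat.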
A very low-cost multispectral detector is developed and benchmarked against full-spectrometer measurements by measuring internal quality parameters of kiwis. The multispectral detector uses self-referenced reflectance to reduce measurement variations. We demonstrate that, even with only twelve wavelengths, only a small loss of accuracy occurs relative to a spectrometer in measurements of soluble solids content and dry matter. Further, using classification, similar accuracy is achieved in placing the fruit in bins based on these quality parameters. The measurements are rapid (less than 5 seconds) and non-destructive, and the system costs less than $50.
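The prediction-and-binning step can be illustrated with a minimal sketch. The abstract does not state which regression model maps the twelve reflectance bands to soluble solids content (SSC), so ordinary least squares is assumed here, and the bin thresholds are purely illustrative.

```python
import numpy as np

def fit_ssc_model(R, ssc):
    """Least-squares fit from 12-band self-referenced reflectance to SSC.

    R: (n_samples, 12) reflectances normalized by a reference reading.
    Assumption: a linear model stands in for whatever the paper used.
    """
    A = np.hstack([R, np.ones((R.shape[0], 1))])  # add intercept column
    coef, *_ = np.linalg.lstsq(A, ssc, rcond=None)
    return coef

def predict_ssc(coef, R):
    A = np.hstack([R, np.ones((R.shape[0], 1))])
    return A @ coef

def bin_quality(ssc_pred, edges=(12.0, 15.0)):
    # Three bins (low / medium / high SSC); thresholds are illustrative.
    return np.digitize(ssc_pred, edges)
```

Self-referencing (dividing each band by a white-reference reading) is what lets a detector this cheap stay stable across lighting and sensor drift; the regression and binning then run in well under the quoted 5 seconds.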