The burden on doctors reading chest X-ray (CXR) examinations has increased as the number of X-ray images grows. Furthermore, since diagnosis is based on the experience and subjectivity of individual doctors, misdiagnoses can occur. We therefore study Computer-Aided Diagnosis (CAD). In this study, we detect pulmonary nodules using R-CNN (Regions with Convolutional Neural Network features), a deep learning method. First, we build a CNN (Convolutional Neural Network) that classifies image regions into nodule and non-nodule opacities. Next, we extract candidate object regions from the chest X-ray images by Selective Search and apply the CNN to these candidate regions to classify them and estimate the precise position of each object. In this way, we propose a method for detecting pulmonary nodules in chest X-ray images.
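The propose-then-classify pipeline above can be illustrated with a minimal sketch. This is not the paper's implementation: Selective Search is replaced by a dense sliding-window proposer and the trained nodule/non-nodule CNN by a simple mean-intensity score, so only the two-stage structure (propose candidates, score each, suppress overlapping detections) is shown.

```python
import numpy as np

def propose_regions(image, step=16, size=32):
    """Stand-in for Selective Search: dense sliding-window proposals."""
    h, w = image.shape
    return [(x, y, x + size, y + size)
            for y in range(0, h - size + 1, step)
            for x in range(0, w - size + 1, step)]

def score_region(image, box):
    """Placeholder for the trained nodule/non-nodule CNN: score a
    candidate by its mean intensity (nodules appear as bright blobs)."""
    x1, y1, x2, y2 = box
    return float(image[y1:y2, x1:x2].mean())

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2]-a[0])*(a[3]-a[1]) + (b[2]-b[0])*(b[3]-b[1]) - inter)
    return inter / union if union else 0.0

def detect(image, threshold=0.5, iou_thresh=0.3):
    """Score every proposal, keep those above threshold, and merge
    overlaps with non-maximum suppression to localize each nodule."""
    scored = [(score_region(image, b), b) for b in propose_regions(image)]
    scored = sorted((s, b) for s, b in scored if s >= threshold)[::-1]
    kept = []
    for s, b in scored:
        if all(iou(b, k) < iou_thresh for _, k in kept):
            kept.append((s, b))
    return kept

# Toy 128x128 "radiograph" with one bright nodule-like blob.
img = np.zeros((128, 128))
img[40:64, 40:64] = 1.0
detections = detect(img)
```

Non-maximum suppression is what turns many overlapping candidate windows into a single estimated nodule position, mirroring the localization step of the abstract.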
Research on Computer-Aided Diagnosis (CAD), which discriminates the presence or absence of disease by machine learning and supports doctors' diagnoses, has been actively conducted. However, training machine learning models requires a large amount of annotated training data. Since the annotations are made manually by radiologists, annotating hundreds to thousands of images is very hard work. This study proposes classifiers using a convolutional neural network (CNN) with transfer learning for efficient opacity classification of diffuse lung diseases, and analyzes the effects of transfer learning under various conditions. Specifically, classifiers trained under nine different transfer-learning conditions and without transfer learning are compared to identify the best conditions.
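The core transfer-learning mechanic, reusing a network trained on a large source dataset so that only a small head must be fit to the scarce annotated data, can be sketched as follows. Everything here is a hypothetical stand-in: a fixed random projection plays the role of the pretrained CNN backbone, and the "ROI descriptors" are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" backbone stand-in: a frozen random projection + ReLU.
# In the study this would be a CNN trained on a large source dataset.
W_frozen = rng.normal(size=(64, 128)) / np.sqrt(64)

def extract_features(x):
    """Frozen feature extractor: none of its weights are updated."""
    return np.maximum(x @ W_frozen, 0.0)

# Small annotated target set (hypothetical 64-dim ROI descriptors).
X = rng.normal(size=(200, 64))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# Transfer learning: only the new classification head is trained,
# here a logistic-regression head fit by gradient descent.
feats = extract_features(X)
w = np.zeros(feats.shape[1])
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))
    w -= 0.1 * feats.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()

pred = (feats @ w + b) > 0.0
accuracy = (pred == (y > 0.5)).mean()
```

Because the backbone stays frozen, the number of trainable parameters is small, which is what makes training feasible with only hundreds of annotated images.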
Research on computer-aided diagnosis (CAD) for medical images using machine learning has been actively conducted. However, machine learning, and deep learning in particular, requires a large number of annotated training data. Deep learning often requires thousands of training samples, yet it is hard work for radiologists to assign normal and abnormal labels to so many images. In this research, aiming at efficient opacity annotation of diffuse lung diseases, unsupervised and semi-supervised opacity annotation algorithms are introduced. Unsupervised learning forms clusters of opacities based on image features without using any opacity labels, while semi-supervised learning makes efficient use of a small number of annotated training samples to train classifiers. Performance is evaluated on the classification of six kinds of opacities of diffuse lung diseases: consolidation, ground-glass opacity, honeycombing, emphysema, nodular, and normal, and the effectiveness of the methods is clarified.
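The unsupervised and semi-supervised steps can be sketched as follows, assuming pre-extracted ROI feature vectors: k-means groups the features with no labels at all, after which a single annotated example per cluster is enough to name every member of that cluster. The 2-D features, cluster count, and opacity names below are illustrative only, not the paper's setup.

```python
import numpy as np

def init_centers(X, k):
    """Deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    return np.array(centers)

def kmeans(X, k, iters=50):
    """Minimal k-means: clusters ROI feature vectors with no labels
    (the unsupervised step)."""
    centers = init_centers(X, k)
    assign = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(0)
    return assign

# Toy 2-D features: three well-separated opacity groups of 30 ROIs each.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.1, size=(30, 2))
               for c in [(0, 0), (3, 0), (0, 3)]])
assign = kmeans(X, k=3)

# Semi-supervised step: one annotated ROI per cluster names all of
# its members (the opacity names here are illustrative).
label_of = {assign[0]: "normal", assign[30]: "consolidation",
            assign[60]: "ground-glass opacity"}
named = [label_of[a] for a in assign]
```

The annotation saving comes from the last step: instead of labeling all 90 ROIs, the radiologist labels one representative per cluster and the cluster structure propagates that label.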
This research proposes a multi-channel deep convolutional neural network (DCNN) for computer-aided diagnosis (CAD) that classifies normal and abnormal opacities of diffuse lung diseases in Computed Tomography (CT) images. Because CT images are grayscale, a DCNN usually takes a single input channel of image data. In contrast, this research uses a multi-channel DCNN in which each channel corresponds to the original raw image or to an image transformed by a preprocessing technique. The information obtainable from raw images alone is limited, and previous studies have suggested that preprocessing contributes to improving classification accuracy; the combination of original and preprocessed images is therefore expected to yield higher accuracy. The proposed method realizes region-of-interest (ROI)-based opacity annotation. We used lung CT images taken at Yamaguchi University Hospital, Japan, divided into 32 × 32 ROI images. The ROIs contain six kinds of opacities: consolidation, ground-glass opacity (GGO), emphysema, honeycombing, nodular, and normal, and the aim of the proposed method is to classify each ROI into one of these six opacities (classes). The DCNN structure is based on the VGG network, which secured the first and second places in ImageNet ILSVRC-2014. The experimental results show that the classification accuracy of the proposed method was higher than that of the conventional single-channel method, with a statistically significant difference.
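The multi-channel input construction can be sketched as follows: the grayscale ROI is stacked with preprocessed versions of itself so that each DCNN input channel carries a different view of the same region. The particular transforms used here (rank-based histogram equalization and a gradient-magnitude edge map) are assumptions for illustration; the paper's actual preprocessing techniques may differ.

```python
import numpy as np

def make_multichannel(roi):
    """Build a (3, 32, 32) input where each channel is a different
    view of the same ROI; the transforms are illustrative stand-ins."""
    # Channel 1: raw intensities rescaled to [0, 1].
    raw = (roi - roi.min()) / (np.ptp(roi) + 1e-8)
    # Channel 2: contrast stretch via rank-based histogram equalization.
    ranks = raw.ravel().argsort().argsort()
    equalized = (ranks / (ranks.size - 1)).reshape(roi.shape)
    # Channel 3: edge emphasis via gradient magnitude.
    gy, gx = np.gradient(raw)
    edges = np.hypot(gx, gy)
    edges /= edges.max() + 1e-8
    return np.stack([raw, equalized, edges], axis=0)

# One hypothetical 32x32 ROI.
roi = np.random.default_rng(0).normal(size=(32, 32))
x = make_multichannel(roi)  # shape (3, 32, 32), ready for a DCNN
```

A convenient side effect of the three-channel layout is that it matches the RGB input shape expected by standard VGG-style networks, so the multi-channel ROI can be fed to such an architecture without structural changes.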