This paper proposes an automated method for segmenting infection and normal regions in the lung from CT volumes of COVID-19 patients. Since December 2019, the novel coronavirus disease 2019 (COVID-19) has spread across the world, significantly affecting economic activity and daily life. Computer-based diagnosis assistance is needed to diagnose the large number of infected patients. Chest CT is effective for diagnosing viral pneumonia, including COVID-19, and quantitative computer-based analysis of lung condition from CT volumes is required for COVID-19 diagnosis assistance. In the diagnosis of lung diseases including COVID-19, analyzing the condition of normal and infection regions in the lung is important. Our method recognizes and segments normal and infection regions of the lung in CT volumes using a COVID-19 segmentation fully convolutional network (FCN). To segment infection regions of various shapes and sizes, we introduce dense pooling connections and dilated convolutions in the FCN. We applied the proposed method to CT volumes of COVID-19 cases ranging from mild to severe; it correctly segmented normal and infection regions in the lung. The Dice scores of the normal and infection regions were 0.911 and 0.753, respectively.
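The Dice scores reported above are the standard overlap measure between a predicted and a ground-truth mask. A minimal sketch (the function name and the toy 1-D masks are illustrative, not from the paper):

```python
def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks,
    given as flat sequences of 0/1 voxel labels."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy 1-D "masks": 1 = region voxel, 0 = background
pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 1, 1, 0]
print(dice_score(pred, truth))  # 2*3 / (4+4) = 0.75
```

In practice the masks are flattened 3-D volumes; a score of 1.0 means perfect overlap and 0.0 means none.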
Subarachnoid hemorrhage (SAH) detection is a critical and severe problem that has long challenged clinical residents. With the rise of deep learning, SAH detection has made significant breakthroughs over the past ten years. However, performance degrades significantly on imbalanced data, a weakness for which deep learning models have long been criticized. In this study, we present a DenseNet-LSTM network with Class-Balanced Loss and a transfer learning strategy to solve the SAH detection problem on an extremely imbalanced dataset. Compared to previous works, the proposed framework not only effectively integrates greyscale features and spatial information from consecutive CT scans, but also employs Class-Balanced Loss and transfer learning to alleviate the adverse effects of extreme SAH case scarcity and to broaden feature diversity, respectively, mimicking the actual situation of emergency departments. Comprehensive experiments are conducted on a dataset consisting of 2,519 cases without hemorrhage and only 33 cases with SAH. The experimental results demonstrate a remarkable improvement in the F-measure score for SAH detection: the DenseNet121 backbone gained around 33% after transfer learning, and on this basis, adding the Class-Balanced Loss and the LSTM structure further increased the F-measure score by 6.1% and 2.7%, respectively.
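The Class-Balanced Loss named above presumably refers to the class-balanced weighting of Cui et al. (2019), which reweights each class by the inverse of its "effective number of samples". A sketch of that weighting applied to the class counts of this dataset (the β value and normalization choice are illustrative assumptions):

```python
def class_balanced_weights(counts, beta=0.999):
    """Per-class weights from the effective number of samples:
    E_c = (1 - beta**n_c) / (1 - beta); weight w_c = 1 / E_c,
    normalized so the weights sum to the number of classes."""
    raw = [(1.0 - beta) / (1.0 - beta ** n) for n in counts]
    scale = len(counts) / sum(raw)
    return [w * scale for w in raw]

# Counts from the dataset described above: 2,519 negatives, 33 SAH cases
weights = class_balanced_weights([2519, 33], beta=0.999)
print(weights)  # the scarce SAH class receives a much larger weight
```

These weights would then multiply the per-class terms of a standard loss (e.g. cross-entropy), counteracting the negative-class dominance.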
This paper proposes a novel method for segmenting the infected area in clinical CT volumes of COVID-19 (coronavirus disease 2019) infected lungs. COVID-19 spread globally from 2019 to 2020, causing a worldwide health crisis. It is desirable to estimate the severity of COVID-19 by observing the infected area segmented from a clinical computed tomography (CT) volume of a COVID-19 patient. Given the lung field from a COVID-19 clinical CT volume as input, we seek an automated approach that segments the infected area. Since labeling the infected area for supervised segmentation requires substantial labor, we propose a segmentation method that does not require such labels. Our method builds on a baseline method that uses representation learning and clustering. However, the baseline method tends to mis-segment anatomical structures with high HU (Hounsfield unit) intensity, such as blood vessels, as infected area. To solve this problem, we propose a novel pre-processing method that transforms high-intensity anatomical structures into low-intensity structures, preventing them from being mis-segmented as infected area. Given the lung field extracted from a CT volume, our method segments it into normal tissue, GGO (ground-glass opacity), and consolidation. The method consists of three steps: 1) pulmonary blood vessel segmentation, 2) image inpainting of the pulmonary blood vessels based on the segmentation result, and 3) segmentation of the infected area. Experimental results showed that, compared to the baseline method, our method improves segmentation accuracy, especially on tubular structures such as blood vessels, raising the normalized mutual information score from 0.280 (the baseline method) to 0.394.
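The pre-processing idea — turn high-HU structures such as vessels into low-intensity tissue before clustering — can be illustrated with a crude 1-D stand-in. The real method segments vessels in 3-D and inpaints them; here a simple threshold plays the role of vessel segmentation and a mean fill plays the role of inpainting (threshold value and toy data are assumptions for illustration):

```python
def suppress_high_intensity(row, threshold=-200):
    """Replace voxels above `threshold` (in HU) with the mean of the
    remaining low-intensity voxels -- a crude 1-D stand-in for the
    vessel-segmentation + inpainting steps described above."""
    low = [v for v in row if v <= threshold]
    fill = sum(low) / len(low) if low else threshold
    return [fill if v > threshold else v for v in row]

# Toy 1-D lung profile: mostly aerated tissue (around -800 HU) with a
# bright vessel-like spike (+50 HU)
row = [-820, -790, -810, 50, -805, -795]
print(suppress_high_intensity(row))  # the +50 HU spike is filled in
```

After such suppression, a clustering-based segmenter no longer sees a bright tubular structure that it could confuse with consolidation.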
This paper presents a method for extracting the lung and lesion regions from COVID-19 CT volumes using 3D fully convolutional networks. Due to the pandemic of coronavirus disease 2019 (COVID-19), a computer-aided diagnosis (CAD) system for COVID-19 using CT volumes is required. In developing a CAD system, it is important to extract the patient's anatomical structures from the CT volume. We therefore develop a method for extracting the lung and lesion regions from COVID-19 CT volumes for a COVID-19 CAD system. We use a 3D U-Net type fully convolutional network (FCN) to extract the lung and lesion regions, and apply transfer learning to train it with the limited number of available COVID-19 CT volumes. As pre-training, the proposed method trains the 3D U-Net model on an abdominal multi-organ segmentation dataset containing a large number of annotated CT volumes. After pre-training, we train the 3D U-Net model from the pre-trained model using a small number of annotated COVID-19 CT volumes. The experimental results showed that the proposed method could extract the lung and lesion regions from COVID-19 CT volumes.
This paper presents deep learning-based segmentation of multiple organ regions from non-contrast CT volumes, and reports on the usefulness of fine-tuning with a small number of training samples for multi-organ segmentation. In a medical image analysis system, it is vital to recognize patient-specific anatomical structures in medical images such as CT volumes. We have previously studied a multi-organ segmentation method for contrast-enhanced abdominal CT volumes using 3D U-Net. Since non-contrast CT volumes are also commonly used in clinical practice, segmenting multi-organ regions from non-contrast CT volumes is likewise important for a medical image analysis system. In this study, we extract multi-organ regions from non-contrast CT volumes using 3D U-Net and a small number of training samples. We fine-tune a pre-trained model obtained in our previous studies: the pre-trained 3D U-Net model is trained on a large number of contrast-enhanced CT volumes, and fine-tuning is then performed using a small number of non-contrast CT volumes. The experimental results showed that the fine-tuned 3D U-Net model could extract multi-organ regions from non-contrast CT volumes, demonstrating that the proposed fine-tuning scheme is useful for multi-organ segmentation with limited training data.
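The fine-tuning scheme described above starts a new model from pre-trained weights, typically copying every parameter whose name and shape match and leaving the rest (such as an output head whose class count differs) at fresh initialization. A schematic sketch using plain dictionaries in place of real framework state dicts (all names, shapes, and values are hypothetical):

```python
def transfer_weights(pretrained, target):
    """Initialize `target` from `pretrained`: copy every parameter
    whose name and size match; leave the rest (e.g. a re-sized output
    head) at their fresh initialization. Returns the copied names."""
    transferred = []
    for name, value in pretrained.items():
        if name in target and len(target[name]) == len(value):
            target[name] = list(value)
            transferred.append(name)
    return transferred

# Toy "state dicts": the pre-trained model ends in an 8-class head
# (abdominal organs); the target task needs a 3-class head
pretrained = {"enc.w": [0.1] * 16, "dec.w": [0.2] * 16, "head.w": [0.3] * 8}
target     = {"enc.w": [0.0] * 16, "dec.w": [0.0] * 16, "head.w": [0.0] * 3}

copied = transfer_weights(pretrained, target)
print(copied)  # ['enc.w', 'dec.w'] -- the mismatched head is skipped
```

In a real framework this corresponds to partial state-dict loading (e.g. non-strict loading in PyTorch), after which all layers are trained further on the small target dataset.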