Colonoscopy is essential for detecting colorectal polyps and cancers, and it has reduced the incidence and mortality of colorectal cancer through the detection and removal of polyps. However, the missed polyp rate during colonoscopy has been reported to be approximately 24%, and intra- and inter-observer variability in polyp detection rates among endoscopists remains an issue. In this paper, we propose a real-time deep learning-based colorectal polyp detection system called SmartEndo-Net. A ResNet-50 backbone is used to extract polyp information. To enable high-level feature fusion, extra mix-up edges are added at every level of the fusion feature pyramid network (FPN). The fused features are fed to class and box networks that produce object class and bounding box predictions. SmartEndo-Net is compared with Yolo-V3, SSD, and Faster R-CNN. SmartEndo-Net recorded a sensitivity of 92.17%, which is 7.96%, 6.78%, and 10.05% higher than Yolo-V3, SSD, and Faster R-CNN, respectively. SmartEndo-Net showed stable detection results regardless of polyp size, shape, and surrounding structures.
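The abstract does not specify how the extra mix-up edges combine feature maps; a minimal NumPy sketch, assuming a BiFPN-style fast normalized weighted fusion (the weight values and the three toy feature maps are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

def weighted_fusion(features, weights, eps=1e-4):
    """Fuse same-shaped feature maps with normalized non-negative weights,
    as in BiFPN-style fast normalized fusion (an assumed fusion rule)."""
    w = np.maximum(np.asarray(weights, dtype=np.float64), 0.0)
    w = w / (w.sum() + eps)
    return sum(wi * f for wi, f in zip(w, features))

# Toy example: fuse a backbone feature, a top-down pathway feature, and an
# extra mix-up edge, all already resized to the same spatial shape.
p_in = np.ones((4, 4))         # backbone feature at this pyramid level
p_td = 2 * np.ones((4, 4))     # top-down pathway feature
p_mix = 3 * np.ones((4, 4))    # hypothetical extra mix-up edge
fused = weighted_fusion([p_in, p_td, p_mix], [1.0, 1.0, 1.0])
```

With equal weights the fused map is simply the (normalized) average of the three inputs; in a trained network the weights would be learned per edge.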
Volumetric lung tumor segmentation is essential for monitoring tumor response to treatment by tracking lung tumor changes. However, segmentation is difficult because lung tumors vary in size, shape, location, and type (solid, subsolid, and necrotic), and tumors attached to the chest wall or mediastinum are hard to distinguish from nearby structures because of their low contrast. In this study, we propose a coupling-net with a shape-focused prior that segments various types of lung tumor while preventing leakage into nearby structures. First, to extract shape information, a 2D-Net is trained in each of the axial, coronal, and sagittal planes. Second, to generate a shape-focused prior covering suspicious lung tumor areas, the per-plane prediction maps are integrated by maximum voting, and the prior is produced by applying narrow-band distance propagation. Finally, a 3D-Net is trained with the shape-focused prior as a constraint, preventing leakage caused by the low contrast between the lung tumor and adjacent structures. To validate segmentation performance, we divided the tumors into four types according to their location characteristics: non-attached tumors (Type 1), chest wall-attached tumors (Type 2), mediastinum-attached tumors (Type 3), and tumors surrounded by the chest wall or liver at the lung apex or base (Type 4). Our proposed network showed the best segmentation performance, without leakage into adjacent structures, owing to the shape-focused prior.
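The voting and prior-generation steps can be illustrated with a minimal 2D toy sketch; the maximum voting follows the abstract, while the band-growing step is a simplified stand-in for the paper's narrow-band distance propagation (the function names and the one-voxel band width are assumptions):

```python
import numpy as np

def max_voting(pred_axial, pred_coronal, pred_sagittal):
    """Integrate per-plane probability maps by element-wise maximum,
    keeping any region flagged suspicious in at least one plane."""
    return np.maximum(np.maximum(pred_axial, pred_coronal), pred_sagittal)

def narrow_band_prior(mask, band=1):
    """Toy narrow-band prior: mark elements within `band` steps of the
    voted mask by iterative neighbour dilation (a crude stand-in for
    narrow-band distance propagation; np.roll wraps at borders, so the
    object is assumed to sit away from the volume edges)."""
    prior = mask.astype(bool)
    for _ in range(band):
        grown = prior.copy()
        for axis in range(mask.ndim):
            grown |= np.roll(prior, 1, axis=axis)
            grown |= np.roll(prior, -1, axis=axis)
        prior = grown
    return prior.astype(np.uint8)

# Toy example: a single suspicious pixel detected in the axial plane only.
axial = np.zeros((5, 5)); axial[2, 2] = 1
coronal = np.zeros((5, 5))
sagittal = np.zeros((5, 5))
voted = max_voting(axial, coronal, sagittal)
prior = narrow_band_prior(voted, band=1)
```

The resulting prior keeps the voted voxel plus a one-step band around it, which is the region the 3D-Net would then be constrained to focus on.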
The malignancy rate of a ground-glass nodule (GGN) differs according to the presence and size of a solid component. Thus, it is important to differentiate part-solid GGNs, whose solid components vary in size, from pure GGNs. In this paper, we propose a method for classifying GGNs according to the presence and size of a solid component using multiple 2.5-dimensional deep CNNs. First, to consider not only intensity but also texture and shape information, we propose enhanced input images built with image augmentation and background removal. Second, we propose GGN-Net, which classifies GGNs in chest CT images into three classes using multiple input images. Finally, we comparatively evaluate classification performance for different types of input images. In experiments, the accuracy of the proposed method using multiple input images was highest at 82.76%, which is 10.35%, 13.79%, and 6.90% higher than that obtained using a single input image (intensity-based, texture-enhanced, and shape-enhanced, respectively).
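The abstract names three input types (intensity-based, texture-enhanced, and shape-enhanced) with background removal, but not how they are computed; a minimal NumPy sketch under assumed definitions (local variance as the texture channel, the binary nodule mask as the shape channel, and masking as background removal; `enhance_inputs` is a hypothetical helper, not the paper's code):

```python
import numpy as np

def enhance_inputs(patch, mask):
    """Build three assumed input channels for a GGN patch: masked
    intensity, a local-variance texture map, and the binary shape mask."""
    fg = patch * mask                       # remove background outside the nodule
    pad = np.pad(fg, 1, mode="edge")        # pad so every pixel has a 3x3 window
    texture = np.zeros_like(fg, dtype=np.float64)
    for i in range(fg.shape[0]):
        for j in range(fg.shape[1]):
            texture[i, j] = pad[i:i + 3, j:j + 3].var()
    return np.stack([fg, texture, mask.astype(np.float64)], axis=0)

# Toy example: a 4x4 intensity patch with a 2x2 nodule region.
patch = np.arange(16, dtype=np.float64).reshape(4, 4)
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1
channels = enhance_inputs(patch, mask)
```

Each channel (or a stack of them) could then be fed to a separate CNN branch, matching the abstract's comparison of single versus multiple input images.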