Computer-aided diagnosis (CADx) of polyps has shown great potential to extend the computed tomography colonography (CTC) technique with diagnostic capability. Facing numerous uncertainties in CTC, such as polyp size, shape, and orientation, GLCM-CNN has been shown to be an effective deep-learning-based tumor classification method, in which a convolutional neural network (CNN) makes its decision based on the texture patterns encoded in gray level co-occurrence matrices (GLCMs) computed along 13 directions. By their sampling displacements, the 13 directional GLCMs can be divided into 3 subgroups. Based on our evaluation of the information encoded in these three subgroups, we propose a multi-stage fusion CNN model that makes the final decision from two types of features: (1) group-specific features selected by a gate module and (2) fused features learned from the features of all three groups. On our polyp dataset, which contains 87 polyp masses, the proposed method outperforms both the single-subgroup-based and the 13-directional GLCM-based CNN models by at least 1.3% in AUC, averaged over 20 repetitions of a 2-fold cross-validation experiment.
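As a sketch of the input representation described above, the 13 directions correspond to the unique displacement vectors of a 3D 26-neighborhood, and grouping them by displacement magnitude yields the three subgroups (lengths 1, √2, and √3). The following minimal numpy example, under the assumption of a small pre-quantized gray-level volume, computes one normalized GLCM per direction and forms the three subgroups; the number of gray levels (8) and the helper names are illustrative, not taken from the paper.

```python
import numpy as np
from collections import defaultdict

# The 13 unique displacement directions of a 3D 26-neighborhood
# (each direction and its negation produce the same symmetric GLCM).
DIRECTIONS = [
    (1, 0, 0), (0, 1, 0), (0, 0, 1),                       # |d| = 1
    (1, 1, 0), (1, -1, 0), (1, 0, 1), (1, 0, -1),
    (0, 1, 1), (0, 1, -1),                                 # |d| = sqrt(2)
    (1, 1, 1), (1, 1, -1), (1, -1, 1), (-1, 1, 1),         # |d| = sqrt(3)
]

def shifted_pairs(volume, d):
    """Return two aligned views: voxels and their neighbors at offset d."""
    sa, sb = [], []
    for dim, s in zip(volume.shape, d):
        if s >= 0:
            sa.append(slice(0, dim - s))
            sb.append(slice(s, dim))
        else:
            sa.append(slice(-s, dim))
            sb.append(slice(0, dim + s))
    return volume[tuple(sa)], volume[tuple(sb)]

def glcm_3d(volume, d, levels=8):
    """Normalized gray level co-occurrence matrix along one direction."""
    a, b = shifted_pairs(volume, d)
    counts = np.bincount(a.ravel() * levels + b.ravel(),
                         minlength=levels * levels)
    glcm = counts.reshape(levels, levels).astype(np.float64)
    return glcm / glcm.sum()

# Group the 13 directions into 3 subgroups by squared displacement length.
groups = defaultdict(list)
for d in DIRECTIONS:
    groups[sum(c * c for c in d)].append(d)   # keys 1, 2, 3
```

With this grouping, the three subgroups contain 3, 6, and 4 directions respectively, so each polyp volume yields 13 GLCMs organized into the three channels the gate and fusion stages consume.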
Sparse-view computed tomography (CT) is an effective way to reduce radiation exposure, but it introduces streaking artifacts into the reconstructed CT image due to insufficient projection views. Several approaches have been reported for full-view sinogram synthesis by interpolating the missing data into the sparse-view sinogram. However, current interpolation methods tend to generate over-smoothed sinograms, which fail to preserve the sharpness of the image. Such sharpness often corresponds to region boundaries or tissue texture and is of high importance as a clinical indicator. To address this issue, this paper proposes an efficient sharpness-preserving sparse-view CT sinogram synthesis method based on a convolutional neural network (CNN). Sharpness preservation is enforced by a loss function built on zero-order and first-order differences. This study takes advantage of a residual design to overcome the degradation problem in our deep network (20 layers), which is capable of extracting high-level information and handling large sample dimensions (672 × 672). The proposed model design and loss function achieve better performance in both quantitative and qualitative evaluation compared to current state-of-the-art works. This study also includes ablation tests on the effects of different design choices and investigates hyper-parameter settings in the loss function.
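The combined zero-order and first-order difference loss described above can be sketched as follows: a zero-order term penalizes per-pixel intensity error, while a first-order term penalizes error in the finite differences along both sinogram axes, discouraging over-smoothing. This is a minimal numpy illustration; the MSE form and the weighting `lam` are assumptions, since the abstract does not specify the exact norm or hyper-parameter values.

```python
import numpy as np

def sharpness_loss(pred, target, lam=0.1):
    """Zero-order (intensity) MSE plus first-order (finite-difference) MSE.

    lam weights the gradient term; its value here is a hypothetical
    placeholder for the hyper-parameter the paper studies.
    """
    # Zero-order difference: plain per-pixel intensity error.
    l0 = np.mean((pred - target) ** 2)
    # First-order differences along the detector and view axes:
    # matching these preserves edges and texture in the synthesized sinogram.
    l1 = (np.mean((np.diff(pred, axis=-1) - np.diff(target, axis=-1)) ** 2)
          + np.mean((np.diff(pred, axis=-2) - np.diff(target, axis=-2)) ** 2))
    return l0 + lam * l1
```

Note the design rationale: a constant intensity offset leaves the first-order term at zero (only the zero-order term fires), whereas a blurred prediction with correct mean intensity is penalized mainly through the gradient term.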