Purpose: High-resolution cardiac imaging and fiber analysis methods are required to understand cardiac anatomy. Although refraction-contrast x-ray CT (RCT) offers high soft-tissue contrast, it is not widely usable because it requires a synchrotron system. Microfocus x-ray CT (μCT) is a commercially available alternative imaging modality.
Approach: We evaluate the usefulness of μCT for fiber analysis by quantitatively and objectively comparing its results with those of RCT. To do so, we scanned a rabbit heart with both modalities using our original specimen-preparation protocol and compared their image-based analysis results, including fiber orientation estimation and fiber tracking.
Results: Fiber orientations estimated by the two modalities were closely correlated, with a correlation coefficient of 0.63. Fibers tracked from both modalities matched the anatomical knowledge that fiber orientations differ between the inside and outside of the left ventricle. However, the μCT volume produced incorrect tracking around the boundaries introduced by stitched scanning.
Conclusions: Our experimental results demonstrated that μCT scanning can be used for cardiac fiber analysis, although further investigation into the differences between the fiber analysis results on RCT and μCT is required.
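The abstract above does not specify how the correlation coefficient between the two modalities' fiber orientation estimates was computed; a minimal sketch of one plausible approach, assuming a Pearson correlation over doubled-angle vector components (a common trick so that the axial orientations θ and θ + π, which describe the same fiber axis, are treated as identical):

```python
import numpy as np

def orientation_correlation(theta_a, theta_b):
    """Pearson correlation between two fiber-orientation angle maps.

    Angles (radians) are mapped to doubled-angle vector components so
    that theta and theta + pi, which describe the same fiber axis, are
    treated as identical before correlating. This formulation is an
    illustrative assumption, not the paper's exact metric.
    """
    # Represent each axial orientation on the doubled-angle circle.
    ca, sa = np.cos(2 * theta_a).ravel(), np.sin(2 * theta_a).ravel()
    cb, sb = np.cos(2 * theta_b).ravel(), np.sin(2 * theta_b).ravel()
    # Correlate the concatenated vector components of both maps.
    va = np.concatenate([ca, sa])
    vb = np.concatenate([cb, sb])
    return np.corrcoef(va, vb)[0, 1]

# Identical orientation fields correlate perfectly.
rng = np.random.default_rng(0)
theta = rng.uniform(0, np.pi, size=(16, 16))
print(round(orientation_correlation(theta, theta), 3))  # 1.0
```

Because of the doubled-angle mapping, a field compared against itself shifted by π still yields a correlation of 1, which is the desired behavior for unsigned fiber axes.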
This paper introduces a multi-modality loss function for GAN-based super-resolution that can maintain image structure and intensity on an unpaired training dataset of clinical CT and micro-CT volumes. Precise non-invasive diagnosis of lung cancer mainly utilizes 3D multidetector computed tomography (CT) data. On the other hand, μCT images of resected lung specimens can be acquired at a resolution of 50 μm or finer. However, μCT scanning cannot be applied to living human imaging. To obtain highly detailed information, such as the cancer invasion area, from pre-operative clinical CT volumes of lung cancer patients, super-resolution (SR) of clinical CT volumes to the μCT level may be one substitutive solution. While most SR methods require paired low- and high-resolution images for training, it is infeasible to obtain precisely paired clinical CT and μCT volumes. We propose unpaired SR approaches for clinical CT using micro-CT images based on unpaired image translation methods such as CycleGAN and UNIT. Since clinical CT and μCT differ greatly in structure and intensity, directly applying GAN-based unpaired image translation methods to super-resolution tends to generate arbitrary images. To solve this problem, we propose a new loss function, called the multi-modality loss function, to maintain the similarity between input images and their corresponding output images in the super-resolution task. Experimental results demonstrated that the newly proposed loss function enabled CycleGAN and UNIT to successfully perform SR of clinical CT images of lung cancer patients to μCT-level resolution, while the original CycleGAN and UNIT failed.
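The abstract states only that the multi-modality loss maintains structure and intensity between input and output; the exact formulation is not given here. A minimal NumPy sketch, assuming two common terms (both assumptions for illustration): an intensity term comparing the input with the SR output downsampled back to the input grid, and a structure term comparing gradient magnitudes:

```python
import numpy as np

def multi_modality_loss(input_lr, output_sr, alpha=1.0, beta=1.0):
    """Hypothetical structure/intensity-preservation loss (sketch only).

    Assumes the SR output is exactly 2x the input resolution and that
    the loss combines:
      * an intensity term: mean L1 distance between the input and the
        SR output downsampled back by 2x2 block averaging, and
      * a structure term: mean L1 distance between gradient magnitudes.
    The weights alpha and beta are illustrative hyperparameters.
    """
    h, w = input_lr.shape
    # Downsample the SR output back to the low-resolution grid.
    down = output_sr.reshape(h, 2, w, 2).mean(axis=(1, 3))
    intensity = np.abs(down - input_lr).mean()
    # Compare edge structure via gradient magnitudes.
    gy_a, gx_a = np.gradient(input_lr)
    gy_b, gx_b = np.gradient(down)
    structure = np.abs(np.hypot(gy_a, gx_a) - np.hypot(gy_b, gx_b)).mean()
    return alpha * intensity + beta * structure

lr = np.zeros((8, 8)); lr[:, 4:] = 1.0   # toy low-resolution edge image
sr = np.kron(lr, np.ones((2, 2)))        # perfectly consistent 2x upsampling
print(multi_modality_loss(lr, sr))       # 0.0 for a consistent SR output
```

The key design point this illustrates is that, unlike a plain adversarial loss, both terms tie the generated high-resolution image back to its specific input, penalizing the "arbitrary image" failure mode described in the abstract.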
High-resolution cardiac imaging and fiber analysis methods are desired for a deeper understanding of cardiac anatomy. Although refraction-contrast X-ray CT (RCT) has high contrast for soft tissues, its scanning cost is very high. On the other hand, micro-focus X-ray CT (μCT) is a commercially available modality with lower cost, but its soft-tissue contrast is not as high as that of RCT. To investigate the efficacy of μCT for fiber analysis, we scanned a common rabbit heart with both modalities using our original specimen-preparation protocol and compared their image-based analysis results. The results were very similar, with a correlation coefficient of 0.95. We confirmed that μCT volumes prepared by our protocol are as useful for fiber analysis as RCT volumes.
A surgical simulator with elaborate artificial eyeball models has been developed for ophthalmic surgeries, in which sophisticated skills are required. To create elaborate eyeball models that include the microstructures of an eyeball, a database of eyeball models should be compiled by segmenting eye structures in high-resolution medical images. Therefore, this paper presents automated segmentation of eye structures from micro-CT images using fully convolutional networks (FCNs). In particular, we aim to construct a method for accurately segmenting eye structures from sparse annotation data. The method performs end-to-end segmentation of eye structures, covering the workflow from training the FCN on sparse annotations to obtaining the segmentation of the entire eyeball. We use the FCN trained on the sparsely annotated slices of a micro-CT volume to segment the remaining slices of the same volume. To achieve accurate segmentation from fewer annotated images, multi-class segmentation is performed using a network trained on preprocessed and augmented micro-CT images: in preprocessing, we apply filters that remove ring artifacts and random noise, while in data augmentation, rotation and elastic deformation operations are applied to the sparsely annotated training data. From experiments evaluating segmentation performance under sparse annotation, we found that the FCN trained with data augmentation achieved a segmentation accuracy of more than 90% even from a sparse training subset of only 2.5% of all slices.
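The sparse-annotation setting above, training on roughly 2.5% of the slices and augmenting them by rotation, can be sketched as follows. The evenly spaced slice-selection scheme and the 90-degree rotation set are assumptions for illustration (the paper also uses elastic deformation, omitted here):

```python
import numpy as np

def sparse_training_slices(volume, fraction=0.025):
    """Pick an evenly spaced subset of axial slices for annotation.

    Illustrates training on ~2.5% of all slices; the actual
    slice-selection scheme in the paper is an assumption here.
    """
    n = volume.shape[0]
    step = max(1, int(round(1 / fraction)))
    idx = np.arange(0, n, step)
    return idx, volume[idx]

def augment_rotations(slices):
    """Simple rotation augmentation: add 90/180/270-degree rotations
    of each slice. Elastic deformation is omitted in this sketch."""
    out = [slices]
    for k in (1, 2, 3):
        out.append(np.rot90(slices, k=k, axes=(1, 2)))
    return np.concatenate(out, axis=0)

vol = np.random.default_rng(1).random((400, 64, 64))  # toy micro-CT volume
idx, subset = sparse_training_slices(vol)
aug = augment_rotations(subset)
print(len(idx), aug.shape[0])  # 10 40
```

The augmented subset (here 4x the annotated slices) would then be used to train the FCN, which in turn segments the unannotated slices of the same volume.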