The human organism is a highly complex system that is prone to various diseases. Some diseases are more dangerous than others, especially those affecting the circulatory system, and the aorta in particular. The aorta is the largest artery in the human body, and its wall comprises several layers. When the intima, i.e. the innermost layer of the aortic wall, tears, blood enters and propagates between the layers, causing them to separate. This condition is known as aortic dissection (AD). Without immediate treatment, an AD may kill 33% of patients within the first 24 hours, 50% within 48 hours, and 75% within two weeks. Nevertheless, proper treatment remains a subject of research and active discussion. By providing a deeper understanding of aortic dissections, this work aims to contribute to the continuous improvement of AD diagnosis and treatment by presenting AD in a new, immersive visual experience: Virtual Reality (VR). The visualization is based on Computed Tomography (CT) scans of human patients suffering from an AD. Given a scan, relevant visual information is segmented, refined, and placed into a 3D scene. Further enhanced by blood flow simulation and VR user interaction, the visualization helps in better understanding AD. The current implementation serves as a prototype and is intended to be extended by (i) minimizing user interaction when new CT scans are loaded into VR and (ii) providing an interface to feed the visualization with simulation data produced by mathematical models.
Volumetric examinations of the aorta are nowadays of crucial importance for the management of critical pathologies such as aortic dissection, aortic aneurysm, and other conditions that affect the morphology of the artery. These examinations usually begin with the acquisition of a Computed Tomography Angiography (CTA) scan of the patient, which is subsequently post-processed to reconstruct the 3D geometry of the aorta. The first post-processing step is referred to as segmentation. Different algorithms have been suggested for the segmentation of the aorta, including interactive methods as well as fully automatic methods. Interactive methods need to be fine-tuned on every single CTA scan, which lengthens the process, whereas fully automatic methods require a large amount of labeled training data. In this work, we introduce a hybrid approach that combines a deep learning method with a consolidated interaction technique. In particular, we trained a 2D and a 3D U-Net on a limited number of patches extracted from 25 labeled CTA scans. Afterwards, we use an interactive approach that consists of defining a region of interest (ROI) by simply placing a seed point. This seed point is then used as the center of a 2D or 3D patch to be fed to the 2D or 3D U-Net, respectively. Due to the low content variation of these patches, this method makes it possible to segment the ROIs correctly without per-dataset parameter tuning and with a smaller training dataset, while requiring the same minimal interaction as state-of-the-art interactive methods. Later on, the newly segmented CTA scans can be used to train a convolutional network for a fully automatic approach.
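The seed-point interaction described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the patch size (64×64) and the synthetic slice data are assumptions, and the extracted patch stands in for the input that would be passed to the trained 2D U-Net.

```python
import numpy as np

def extract_patch(slice_2d, seed, size=64):
    """Extract a size x size patch centered on the user-placed seed point,
    zero-padding where the patch extends past the image border.
    (Patch size is a hypothetical choice for illustration.)"""
    half = size // 2
    patch = np.zeros((size, size), dtype=slice_2d.dtype)
    y, x = seed
    # Source region, clipped to the image bounds
    y0, y1 = max(y - half, 0), min(y + half, slice_2d.shape[0])
    x0, x1 = max(x - half, 0), min(x + half, slice_2d.shape[1])
    # Destination offsets inside the patch (nonzero when clipping occurred)
    py, px = y0 - (y - half), x0 - (x - half)
    patch[py:py + (y1 - y0), px:px + (x1 - x0)] = slice_2d[y0:y1, x0:x1]
    return patch

# Example: a synthetic 512x512 CTA slice and a seed near the image border
slice_2d = np.arange(512 * 512, dtype=np.float32).reshape(512, 512)
patch = extract_patch(slice_2d, seed=(10, 500))
# `patch` would then be fed to the trained 2D U-Net; the 3D case is
# analogous, with a cubic patch cut from the volume around the seed.
```

Because each patch is small and centered on the structure of interest, its content varies little across datasets, which is why the network can be trained on patches from only 25 labeled scans.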