Segmentation of pulmonary lobes in inspiration and expiration chest CT scan pairs is an important prerequisite for lobe-based quantitative disease assessment. Conventional methods process each CT scan independently, which typically results in lower segmentation performance at expiration than at inspiration. To address this issue, we present an approach that utilizes CT scans at both respiratory states. It consists of two main parts: a base method that processes a single CT scan, and an extended method that utilizes the segmentation result obtained on the inspiration scan as a subject-specific prior for segmentation of the expiration scan. We evaluated the methods on a diverse set of 40 CT scan pairs and compared their performance to a registration-based approach. On inspiration scans, the base method achieved average distance errors of 0.59, 0.64, and 0.91 mm for the left oblique, right oblique, and right horizontal fissures, respectively, when compared with expert-based reference tracings. On expiration scans, the base method's errors were 1.54, 3.24, and 3.34 mm, respectively. In comparison, utilizing the proposed subject-specific priors for segmentation of expiration scans reduced the average distance errors to 0.82, 0.79, and 1.04 mm, a significant improvement (p < 0.05) over all other methods investigated.
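The average distance error used above can be illustrated as a mean closest-point distance between the segmented fissure surface and the expert reference tracing. The sketch below is a minimal brute-force version under the assumption that both surfaces are given as 3D point sets; the function name and representation are illustrative, not the paper's implementation.

```python
# Minimal sketch: one-sided mean surface distance between two point sets.
# Names and the brute-force pairwise computation are assumptions.
import numpy as np

def mean_surface_distance(segmented: np.ndarray, reference: np.ndarray) -> float:
    """Average, over all segmented surface points, of the distance to the
    nearest reference point (one-sided average distance, in mm)."""
    # Pairwise distance matrix of shape (n_segmented, n_reference)
    diffs = segmented[:, None, :] - reference[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)
    return float(dists.min(axis=1).mean())

# Toy example: a fissure surface offset by 1 mm from the reference
seg = np.array([[x, y, 1.0] for x in range(3) for y in range(3)])
ref = np.array([[x, y, 0.0] for x in range(3) for y in range(3)])
print(mean_surface_distance(seg, ref))  # 1.0
```

In practice a spatial index (e.g. a k-d tree) would replace the quadratic pairwise computation for dense surface meshes.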
Surgical planning of liver tumor resections requires detailed three-dimensional (3D) understanding of the complex arrangement of vasculature, liver segments, and tumors. Knowledge about the location and size of liver segments is important for choosing an optimal surgical resection approach and predicting postoperative residual liver capacity. The aim of this work is to facilitate such a surgical planning process by developing a robust method for portal vein tree segmentation. The work also investigates the impact of vessel segmentation on the approximation of liver segment volumes. For segment approximation, smaller portal vein branches are of importance. Small branches, however, are difficult to segment due to noise and partial volume effects. Our vessel segmentation is based on the original gray values and on the result of a vessel enhancement filter. Validation of the developed portal vein segmentation method in computer-generated phantoms shows that, compared to a conventional approach, more vessel branches can be segmented. Experiments with in vivo acquired liver CT data sets confirmed this result. The outcome of a Nearest Neighbor liver segment approximation method applied to phantom data demonstrates that the proposed vessel segmentation approach translates into a more accurate segment partitioning.
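The Nearest Neighbor segment approximation mentioned above can be sketched as assigning each liver voxel the label of the closest point on the segmented portal vein tree. All names and the brute-force distance computation below are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of nearest-neighbour liver segment approximation:
# every liver voxel inherits the segment label of the closest
# portal-vein branch point.
import numpy as np

def approximate_segments(liver_voxels, branch_points, branch_labels):
    """liver_voxels: (n, 3) voxel coordinates inside the liver mask.
    branch_points: (m, 3) points on segmented portal vein branches.
    branch_labels: (m,) segment label of each branch point.
    Returns an (n,) array of per-voxel segment labels."""
    d = np.linalg.norm(liver_voxels[:, None, :] - branch_points[None, :, :], axis=2)
    return branch_labels[d.argmin(axis=1)]

# Two branch points standing in for two liver segments
branches = np.array([[0.0, 0, 0], [10.0, 0, 0]])
labels = np.array([1, 2])
voxels = np.array([[1.0, 0, 0], [9.0, 0, 0], [4.0, 0, 0]])
print(approximate_segments(voxels, branches, labels))  # [1 2 1]
```

This also makes the abstract's point concrete: the more small branches the vessel segmentation recovers, the denser the labeled point set, and the finer the resulting segment partitioning.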
Planning of surgical liver tumor resections based on image data from X-ray computed tomography requires correct segmentation of the liver, the liver vasculature, and pathological structures. Automatic liver segmentation methods frequently fail in cases where the anatomy is degenerated by lesions or other liver diseases. On the other hand, performing a manual segmentation is a tedious and time-consuming task. Therefore, Augmented Reality based segmentation refinement tools are reported that aid radiologists in efficiently correcting faulty segmentations in true 3D using head-mounted displays and tracked input devices. The developed methods facilitate segmentation refinement by interactively deforming a mesh data structure reconstructed from an initial segmentation. All refinement methods are accessible through the intuitive, direct 3D user interface of an Augmented Reality system.
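One refinement interaction of the kind described, dragging a mesh region with a tracked input device, can be sketched as moving the picked vertex fully and its neighbours with a smooth falloff. The Gaussian falloff and all names below are assumptions for illustration, not the paper's actual deformation algorithm.

```python
# Hedged sketch of interactive mesh deformation: the picked vertex
# follows the input device, nearby vertices follow with a weight that
# decays smoothly with distance (Gaussian falloff assumed).
import numpy as np

def drag_vertex(vertices, picked_idx, displacement, radius=5.0):
    """vertices: (n, 3) mesh vertex positions.
    Moves vertices[picked_idx] by `displacement`; neighbours move by a
    fraction that decays with their distance from the picked vertex."""
    d = np.linalg.norm(vertices - vertices[picked_idx], axis=1)
    weights = np.exp(-(d / radius) ** 2)   # 1.0 at the picked vertex
    return vertices + weights[:, None] * displacement

verts = np.array([[0.0, 0, 0], [1.0, 0, 0], [100.0, 0, 0]])
moved = drag_vertex(verts, 0, np.array([0.0, 0, 1.0]), radius=2.0)
# The picked vertex moves fully; the distant vertex stays essentially fixed.
```

A production tool would restrict the falloff to geodesic rather than Euclidean neighbourhoods so that deformation does not leak across thin structures.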
Surgical resection has evolved into an accepted and widely used method for the treatment of liver tumors. In order to elaborate an optimal resection strategy, computer-aided planning tools are required. However, measurements based on 2D cross-sectional images are difficult to perform. Moreover, resection planning with current desktop-based systems using 3D visualization is also a tedious task because of limited 3D interaction. To facilitate the planning process, different tools are presented that allow easy user interaction in an Augmented Reality environment. Methods for quantitative analysis, such as volume calculation and distance measurements, are discussed with focus on the user interaction aspect. In addition, a tool for automatically generating anatomical resection proposals based on knowledge about tumor locations and the portal vein tree is described. The presented methods are part of an evolving liver surgery planning system which is currently being evaluated by physicians.
Surgical resection of liver tumors requires a detailed three-dimensional understanding of the complex arrangement of vasculature, liver segments, and tumors inside the liver. In most cases, surgeons need to develop this understanding by looking at sequences of axial images from modalities like X-ray computed tomography. A system for liver surgery planning is reported that enables physicians to visualize and refine segmented input liver data sets, as well as to simulate and evaluate different resection plans. The system supports surgeons in finding the optimal treatment strategy for each patient and eases the data preparation process. The use of augmented reality contributes to a user-friendly design and simplifies complex interaction with 3D objects. The main function blocks developed so far are: basic augmented reality environment, user interface, rendering, surface reconstruction from segmented volume data sets, surface manipulation, and a quantitative measurement toolkit. The flexible design allows functionality to be added via plug-ins. First practical evaluation steps have shown good acceptance. Evaluation of the system is ongoing, and future feedback from surgeons will be collected and used for design refinements.
Knowledge about the location of the diaphragm dome surface, which separates the lungs and the heart from the abdominal cavity, is of vital importance for applications like automated segmentation of adjacent organs (e.g., liver) or functional analysis of the respiratory cycle. We present a new 3D Active Appearance Model (AAM) approach to segmentation of the top layer of the diaphragm dome. The 3D AAM consists of three parts: a 2D closed curve (reference curve), an elevation image, and texture layers. The first two parts combined represent the 3D shape information, and the third part represents the image intensity of the diaphragm dome and the surrounding layers. Differences in height between dome voxels and a reference plane are stored in the elevation image. The reference curve is generated by a parallel projection of the diaphragm dome outline in the axial direction. Landmark point placement is only done on the (2D) reference curve, which can be seen as the bounding curve of the elevation image. Matching is based on a gradient-descent optimization process and uses the image intensity appearance around the actual dome shape. Results achieved in 60 computer-generated phantom data sets show a high degree of accuracy (positioning error -0.07 +/- 1.29 mm). Validation using real CT data sets yielded a positioning error of -0.16 +/- 2.95 mm. Additional training and testing on in vivo CT image data is ongoing.
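The elevation-image representation described above can be sketched as follows: for each (x, y) column inside the dome outline, store the height of the topmost dome voxel relative to a reference plane. The sketch below assumes the dome is given as a boolean voxel mask; names and the NaN convention for empty columns are illustrative.

```python
# Hedged sketch of building an elevation image from a dome voxel mask.
# Columns without any dome voxel are marked NaN (an assumed convention).
import numpy as np

def elevation_image(dome_mask: np.ndarray, ref_z: int = 0) -> np.ndarray:
    """dome_mask: boolean volume of shape (z, y, x), True on dome voxels.
    Returns a (y, x) image of top-voxel heights relative to the
    reference plane at z = ref_z."""
    z_idx = np.arange(dome_mask.shape[0])[:, None, None]
    # For each column, take the highest z index carrying a dome voxel
    heights = np.where(dome_mask, z_idx, -1).max(axis=0).astype(float)
    heights[heights < 0] = np.nan   # no dome voxel in this column
    return heights - ref_z

vol = np.zeros((4, 2, 2), dtype=bool)
vol[2, 0, 0] = True   # dome voxel at z = 2 in column (0, 0)
vol[3, 1, 1] = True   # dome voxel at z = 3 in column (1, 1)
elev = elevation_image(vol, ref_z=1)   # heights 1.0 and 2.0, NaN elsewhere
```

This makes clear why landmarking can be restricted to the 2D reference curve: once the outline is fixed, the 3D shape is fully encoded by a 2D height map.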
This paper focuses on an adaptive multi-resolution algorithm for generating forest floor digital elevation models by processing the three-dimensional data acquired by a laser scanner. The adaptivity of our algorithm ensures that it can be used successfully in flat, hilly, and mountainous terrain and deliver accurate results. A large set of GPS ground reference points is used to verify the algorithm, along with other commonly used methods. First results show that the average error is between 0.5 and 1 m for an Alpine region in Austria, which is very close to the error the laser scanner data distributor claims for flat terrain. This study is part of the HIGH-SCAN project (EU IV Framework/Center of Earth Observation), a project whose objective is to provide methods to integrate high-resolution satellite imagery and laser scanner data for forest inventory.
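A basic building block of such a forest-floor filter is to take the lowest laser return in each grid cell as a first terrain estimate and keep only returns within a tolerance above it, rejecting canopy hits. In the full adaptive algorithm the cell size and tolerance would vary with terrain type; the fixed values and names below are assumptions for illustration only.

```python
# Hedged sketch of a block-minimum ground filter for laser scanner data.
# Cell size and tolerance are fixed here; the paper's algorithm adapts
# them to flat, hilly, or mountainous terrain.
import numpy as np

def block_minimum_filter(points, cell=10.0, tol=0.5):
    """points: (n, 3) array of laser returns (x, y, z) in metres.
    Returns a boolean mask of points classified as ground."""
    ix = (points[:, 0] // cell).astype(int)
    iy = (points[:, 1] // cell).astype(int)
    ground = np.ones(len(points), dtype=bool)
    for cx, cy in set(zip(ix.tolist(), iy.tolist())):
        in_cell = (ix == cx) & (iy == cy)
        zmin = points[in_cell, 2].min()   # lowest return in this cell
        ground[in_cell] = points[in_cell, 2] <= zmin + tol
    return ground

pts = np.array([[1.0, 1, 100.0],    # ground return
                [2.0, 2, 100.3],    # ground return (within tolerance)
                [3.0, 3, 115.0]])   # canopy return, rejected
print(block_minimum_filter(pts))  # [ True  True False]
```

On steep terrain a large fixed cell cuts into slopes, which is exactly the failure mode the adaptive multi-resolution scheme is designed to avoid.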
A human-in-the-loop, computer-based camouflage assessment approach was already presented at the AeroSense 1998 conference [3]. The same image sets were used for the human photosimulation as well as for the computer assessment method. The human photosimulation results suggested four camouflage classes, which were used to develop and verify the separability measure. Analyzing camouflage effectiveness using separability measures involves a very complex feature space. The best results were obtained using the C4.5 classifier as a separability measure. The size of the objects presented during the photosimulation sessions and the tactical knowledge of the observers had a significant influence on the detection/recognition performance of the human observers. The most important advantage of our method is that it makes camouflage assessment more transparent and deterministic. Results of a selected experiment during a field test are shown in this paper.
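The idea of using the C4.5 classifier as a separability measure rests on its splitting criterion: a feature that separates the camouflage classes well yields high information gain. The sketch below computes plain information gain for one threshold split (C4.5 actually uses the gain ratio); the data and names are made up for illustration.

```python
# Hedged sketch of the decision-tree splitting criterion behind using
# C4.5 as a separability measure: high gain = well-separated classes.
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, feature, threshold):
    """Gain of splitting the samples at feature <= threshold."""
    left = [l for l, f in zip(labels, feature) if f <= threshold]
    right = [l for l, f in zip(labels, feature) if f > threshold]
    n = len(labels)
    remainder = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - remainder

# A toy feature that perfectly separates two camouflage classes
labels  = ["good", "good", "poor", "poor"]
feature = [0.1, 0.2, 0.8, 0.9]
print(information_gain(labels, feature, 0.5))  # 1.0 (perfect separation)
```

C4.5 additionally normalizes the gain by the split information to avoid favouring many-valued attributes, which matters in the complex feature space mentioned above.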
A key point for good camouflage results in the thermal infrared domain lies in the ability of the camouflage system to adapt to the thermal emission behavior of the surrounding background. In order to obtain reliable assessments of camouflage effectiveness, evaluation has to take place under various environmental conditions. The combination of the different results leads to an assessment measure with the demanded reliability. The objective quantification of the individual camouflage effectiveness and the subsequent combination are very difficult to achieve by human operators. Therefore, an Infrared Camouflage Effectiveness Assessment Tool (ICEAT) has been developed, which needs only minor human interaction and supports the automated combination of the results of various test scenes. In a first phase, hot spots of the object and the background are detected. In a second phase, various features are calculated, which are combined into a single assessment measure in the third phase by using fuzzy logic. The fuzzy logic approach has the advantage that the ICEAT can be customized by simply modifying the membership functions used.
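The third phase, combining features into a single measure via fuzzy logic, can be sketched as mapping each feature through a membership function and conjoining the resulting degrees with a t-norm. The triangular memberships, the minimum t-norm, and the two feature names below are assumptions for illustration, not the actual ICEAT design; customizing the tool would then indeed amount to editing the membership function parameters.

```python
# Hedged sketch of fuzzy-logic feature combination: each feature value is
# mapped to a membership degree in [0, 1], and the degrees are combined
# with the minimum t-norm (fuzzy AND). All parameters are illustrative.
def triangular(x, a, b, c):
    """Triangular membership function peaking at b on support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def effectiveness(contrast, hotspot_area):
    """Degree to which the scene counts as 'well camouflaged', in [0, 1]."""
    low_contrast  = triangular(contrast, -0.5, 0.0, 0.5)
    small_hotspot = triangular(hotspot_area, -1.0, 0.0, 1.0)
    # Rule: well camouflaged IF contrast is low AND hot spot is small
    return min(low_contrast, small_hotspot)

print(effectiveness(0.1, 0.2))   # both memberships high -> good score
print(effectiveness(0.6, 0.2))   # high contrast -> score drops to 0
```

Scores from different test scenes could then be aggregated (e.g. by a minimum or an average), matching the tool's automated combination of results across various scenes.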