With the advancement of three-dimensional (3-D) real-time echocardiography in recent years, automatic creation of patient-specific geometric models is becoming feasible and important in clinical decision making. However, the vast majority of echocardiographic segmentation methods presented in the literature focus on the left ventricle (LV) endocardial border, leaving segmentation of the right ventricle (RV) a largely unexplored problem, despite the increasing recognition of the RV’s role in cardiovascular disease. We present a method for coupled segmentation of the endo- and epicardial borders of both the LV and RV in 3-D ultrasound images. To solve the segmentation problem, we propose an extension of a successful state-estimation segmentation framework with a geometrical representation of coupled surfaces, as well as the introduction of myocardial incompressibility to regularize the segmentation. The method was validated against manual measurements and segmentations in images from 16 patients. Mean absolute distances of 2.8±0.4 mm, 3.2±0.7 mm, and 3.1±0.5 mm between the proposed and reference segmentations were observed for the LV endocardium, RV endocardium, and LV epicardium surfaces, respectively. The method was computationally efficient, with a computation time of 2.1±0.4 s.
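The mean absolute distance used for validation can be sketched as a symmetric nearest-neighbour distance between sampled surface points. The function name and the symmetric averaging below are illustrative assumptions, not the authors' exact evaluation protocol:

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_absolute_distance(surface_a: np.ndarray, surface_b: np.ndarray) -> float:
    """Symmetric mean absolute distance between two surfaces, each given
    as an (N, 3) array of sampled surface points (e.g. mesh vertices)."""
    tree_a, tree_b = cKDTree(surface_a), cKDTree(surface_b)
    d_ab, _ = tree_b.query(surface_a)  # nearest-neighbour distances A -> B
    d_ba, _ = tree_a.query(surface_b)  # nearest-neighbour distances B -> A
    return 0.5 * (d_ab.mean() + d_ba.mean())
```

In practice the reference and automatic meshes would be sampled densely enough that the point-based distance approximates the true surface-to-surface distance.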
In this paper, we present an automatic solution for segmentation and quantification of the left atrium (LA) from 3D cardiac ultrasound. A model-based framework is applied, making use of (deformable) active surfaces to model the endocardial surfaces of cardiac chambers, allowing incorporation of <i>a priori</i> anatomical information in a simple fashion. A dual-chamber model (LA and left ventricle) is used to detect and track the atrio-ventricular (AV) plane, without any user input. Both chambers are represented by parametric surfaces and a Kalman filter is used to fit the model to the position of the endocardial walls detected in the image, providing accurate detection and tracking during the whole cardiac cycle. This framework was tested on 20 transthoracic cardiac ultrasound volumetric recordings of healthy volunteers, and evaluated using manual traces of a clinical expert as a reference. The 3D meshes obtained with the automatic method were close to the reference contours at all cardiac phases (mean distance of 0.03±0.6 mm). The AV plane was detected with an accuracy of −0.6±1.0 mm. The LA volumes assessed automatically were also in agreement with the reference (mean ±1.96 SD): 0.4±5.3 ml, 2.1±12.6 ml, and 1.5±7.8 ml at end-diastolic, end-systolic, and pre-atrial-contraction frames, respectively. This study shows that the proposed method can be used for automatic volumetric assessment of the LA, considerably reducing the analysis time and effort when compared to manual analysis.
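The Kalman-filter fitting step can be illustrated with the standard measurement update, in which detected endocardial wall positions z are fused with the predicted model state through a linearized measurement matrix H. This is a generic sketch of that update, not the paper's specific state parameterization or edge-detection scheme:

```python
import numpy as np

def kalman_update(x_prior, P_prior, z, H, R):
    """Standard Kalman measurement update: fuse the predicted model state
    x_prior (covariance P_prior) with edge measurements z, related to the
    state by the linearized measurement matrix H (measurement noise R)."""
    S = H @ P_prior @ H.T + R             # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_post = x_prior + K @ (z - H @ x_prior)
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior
    return x_post, P_post
```

With equal prior and measurement uncertainty, the update moves the state halfway toward the measurement, which is the usual sanity check for this formula.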
In this paper, we present an automatic approach for aligning standard apical and short-axis slices and correcting them for out-of-plane motion in 3D echocardiography. This is enabled by real-time Kalman tracking, which performs automatic left ventricle segmentation with a coupled deformable model consisting of a left ventricle model together with structures for the right ventricle and the left ventricle outflow tract. Landmark points from the segmented model are then used to generate standard apical and short-axis slices. The slices are automatically updated after tracking in each frame to correct for out-of-plane motion caused by longitudinal shortening of the left ventricle.
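Generating a slice from landmark points can be sketched as sampling the volume on a plane spanned by the LV long axis (mitral-valve centre to apex) and an in-plane direction. The sampling grid, spacing, and function signature below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_apical_slice(volume, apex, mv_center, in_plane_dir,
                         size=64, spacing=1.0):
    """Sample a 2-D apical slice from a 3-D volume. The slice plane
    contains the LV long axis (mv_center -> apex) and in_plane_dir."""
    axis = apex - mv_center
    axis = axis / np.linalg.norm(axis)              # unit long axis
    # project out the long-axis component to get an orthogonal in-plane direction
    u = in_plane_dir - np.dot(in_plane_dir, axis) * axis
    u = u / np.linalg.norm(u)
    rows = np.arange(size) * spacing                # along the long axis
    cols = (np.arange(size) - size // 2) * spacing  # across, centred on the axis
    # (3, size, size) grid of sample positions, origin at the MV centre
    pts = (mv_center[:, None, None]
           + axis[:, None, None] * rows[None, :, None]
           + u[:, None, None] * cols[None, None, :])
    return map_coordinates(volume, pts, order=1, mode="nearest")
```

Re-running this sampling with landmarks updated by the tracker in each frame is what keeps the slice following the same anatomy despite out-of-plane motion.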
Results from a dataset of 35 recordings demonstrate the potential for automating apical slice initialization and dynamic short-axis slices. Apical 4-chamber, 2-chamber, and long-axis slices are generated under the assumption of a fixed angle between the slices, and short-axis slices are generated so that they follow the same myocardial tissue over the entire cardiac cycle. The error compared to manual annotation was 8.4 ± 3.5 mm for the apex, 3.6 ± 1.8 mm for the mitral valve, and 8.4 ± 7.4 for the apical 4-chamber view. The high computational efficiency and automatic behavior of the method enable it to operate in real-time, potentially during image acquisition.
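Placing the apical views at a fixed angle around the long axis can be sketched with Rodrigues' rotation of the 4-chamber in-plane direction. The 60-degree spacing below is an assumed illustrative value; the text only states that a fixed inter-slice angle is used:

```python
import numpy as np

def rotate_about_axis(v, axis, angle_deg):
    """Rodrigues' rotation of vector v about a (not necessarily unit) axis."""
    k = axis / np.linalg.norm(axis)
    t = np.radians(angle_deg)
    return (v * np.cos(t)
            + np.cross(k, v) * np.sin(t)
            + k * np.dot(k, v) * (1.0 - np.cos(t)))

long_axis = np.array([0.0, 0.0, 1.0])   # apex - MV centre (illustrative values)
dir_4ch = np.array([1.0, 0.0, 0.0])     # in-plane direction of the 4-chamber view
dir_2ch = rotate_about_axis(dir_4ch, long_axis, 60.0)   # assumed spacing
dir_lax = rotate_about_axis(dir_4ch, long_axis, 120.0)  # assumed spacing
```

Each rotated direction, paired with the long axis, defines one apical slice plane; because the axis endpoints come from the tracked model, the views stay anchored to the anatomy frame by frame.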