We present a new framework to estimate and visualize heart motion from echocardiograms. For velocity estimation, we have developed a novel multiresolution optical flow algorithm. In order to account for typical heart motions like contraction/expansion and shear, we use a local affine model for the velocity in space and time. The motion parameters are estimated in the least-squares sense inside a sliding spatio-temporal window.
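As a rough illustration of the estimation step, the sketch below fits a local affine velocity model to the optical-flow constraint inside a single window by linear least squares. It uses only NumPy; the gradient images Ix, Iy, It and the coordinate grids xs, ys are hypothetical inputs, and the multiresolution and temporal parts of the published method are omitted.

```python
import numpy as np

def affine_flow_window(Ix, Iy, It, xs, ys):
    """Least-squares fit of a local affine velocity model inside one window.

    The optical-flow constraint Ix*u + Iy*v + It = 0 is imposed at every pixel
    of the window, with u = a1*x + a2*y + b1 and v = a3*x + a4*y + b2.
    Returns the six affine parameters (a1, a2, b1, a3, a4, b2).
    """
    ix, iy, it = Ix.ravel(), Iy.ravel(), It.ravel()
    x, y = xs.ravel(), ys.ravel()
    # One linear equation per pixel in the six unknown affine parameters.
    A = np.column_stack([ix * x, ix * y, ix, iy * x, iy * y, iy])
    b = -it
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params
```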
The estimated velocity field is used to track a region of interest which is represented by spline curves. In each frame, a set of sample points on the curves is displaced according to the estimated motion field. The contour in the subsequent frame is obtained by a least-squares spline fit to the displaced sample points. This ensures robustness of the contour tracking. From the estimated velocity, we compute a radial velocity field with respect to a reference point. Inside the time-varying region of interest, the radial velocity is color-coded and superimposed on the original image sequence in a semi-transparent fashion. In contrast to conventional Tissue Doppler methods, this approach is independent of the incident angle of the ultrasound beam.
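A minimal sketch of the tracking and radial-velocity steps follows, using SciPy's periodic smoothing splines as a stand-in for the least-squares spline fit described above; all function and variable names are illustrative.

```python
import numpy as np
from scipy.interpolate import splprep, splev

def track_contour(points, vx, vy, dt=1.0, smoothing=5.0):
    """Advance a closed contour by one frame and re-fit a smoothing spline.

    points : (N, 2) sample points on the current contour
    vx, vy : estimated velocity components at those points
    The displaced points are approximated by a periodic smoothing spline,
    which keeps the tracked contour regular.
    """
    moved = points + dt * np.column_stack([vx, vy])
    closed = np.vstack([moved, moved[:1]])           # close the loop for a periodic fit
    tck, _ = splprep([closed[:, 0], closed[:, 1]], s=smoothing, per=True)
    x, y = splev(np.linspace(0.0, 1.0, len(points), endpoint=False), tck)
    return np.column_stack([x, y])

def radial_velocity(points, vx, vy, ref):
    """Radial velocity of each sample point with respect to a reference point.

    Positive values indicate motion away from the reference point; these values
    would be mapped through a colormap for the semi-transparent overlay.
    """
    d = points - np.asarray(ref)
    r = d / np.linalg.norm(d, axis=1, keepdims=True)
    return vx * r[:, 0] + vy * r[:, 1]
```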
The motion analysis and visualization provide an objective and robust method for the detection and quantification of myocardial dysfunction. Promising results are obtained from synthetic and clinical echocardiographic sequences.
We present a new wavelet-based strategy for autonomous feature extraction and segmentation of cardiac structures in dynamic ultrasound images. Image sequences subjected to a multidimensional (2D plus time) wavelet transform yield a large number of individual subbands, each coding for partial structural and motion information of the ultrasound sequence. We exploited this fact to create a strategy for autonomous analysis of cardiac ultrasound that builds on shape- and motion-specific wavelet subband filters. Subbands were selected automatically based on subband statistics. Such a collection of predefined subbands corresponds to the so-called footprint of the target structure and can be used as a multidimensional multiscale filter to detect and localize the target structure in the original ultrasound sequence. Unequivocal localization is then performed with a peak-finding algorithm, which allows the findings to be compared with a reference standard. Image segmentation is then possible using standard region-growing operations. To test the feasibility of this multiscale footprint algorithm, we attempted to localize, enhance, and segment the mitral valve autonomously in 182 unselected clinical cardiac ultrasound sequences. Correct autonomous localization by the algorithm was feasible in 165 of 182 reconstructed ultrasound sequences, using the experienced echocardiographer as reference. This corresponds to a 91% accuracy of the proposed method in unselected clinical data. Thus, multidimensional multiscale wavelet footprints allow successful autonomous detection and segmentation of the mitral valve with good accuracy in dynamic cardiac ultrasound sequences, which are otherwise difficult to analyse due to their high noise level.
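The sketch below illustrates the general idea of statistics-driven subband selection, using PyWavelets' n-dimensional decomposition and plain subband energy as the ranking statistic; the actual footprint construction with shape- and motion-specific filters is not reproduced here.

```python
import numpy as np
import pywt

def select_subbands(sequence, wavelet="db2", level=3, keep=10):
    """Rank 2D+time wavelet subbands by energy and keep the strongest ones.

    sequence : 3D array (time, y, x) holding the ultrasound sequence
    Returns a list of (stage, subband_key, energy) tuples, sorted by energy.
    This is only a generic illustration of statistics-driven subband selection.
    """
    coeffs = pywt.wavedecn(sequence, wavelet=wavelet, level=level)
    ranked = []
    for stage, details in enumerate(coeffs[1:], start=1):
        for key, band in details.items():        # keys such as 'add', 'dda', 'ddd'
            ranked.append((stage, key, float(np.sum(band ** 2))))
    ranked.sort(key=lambda t: t[2], reverse=True)
    return ranked[:keep]
```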
High-resolution multidimensional image data yield huge datasets. For compression and analysis, 2D approaches are often used, neglecting the information coherence in higher dimensions, which can be exploited for improved compression. We designed a wavelet compression algorithm suited for data of arbitrary dimensions and assessed its ability to compress 4D medical images. In brief, separable wavelet transforms are applied along each dimension, followed by quantization and standard coding. Results were compared with a conventional 2D wavelet approach. We found that in 4D heart images, this algorithm allowed high compression ratios while preserving diagnostically important image features. For similar image quality, compression ratios using the 3D/4D approaches were typically much higher (2-4 times per added dimension) than with the 2D approach. For low-resolution images created with the requirement to keep predefined key diagnostic information (contractile function of the heart), compression ratios up to 2000 could be achieved. Thus, higher-dimensional wavelet compression is feasible and, by exploiting data coherence in higher image dimensions, allows much higher compression than comparable 2D approaches. The demonstrated applicability of this approach to multidimensional medical imaging has important implications especially for image storage and transmission and, specifically, for the emerging field of telemedicine.
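A minimal sketch of the separable n-dimensional pipeline (transform, uniform quantization, reconstruction) is given below, using PyWavelets; the entropy-coding stage and the specific quantizer of the published method are not shown, and all parameter values are placeholders.

```python
import numpy as np
import pywt

def compress_nd(volume, wavelet="bior4.4", level=3, quant_step=0.05):
    """Minimal sketch of separable n-D wavelet compression.

    A separable wavelet transform is applied along every axis of the input
    (2D, 3D, or 4D), the coefficients are uniformly quantized, and the result
    would then be passed to a standard entropy coder (not shown here).
    Returns the reconstruction and the fraction of coefficients quantized to zero.
    """
    coeffs = pywt.wavedecn(volume, wavelet=wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    step = quant_step * np.abs(arr).max()
    q = np.round(arr / step)                      # uniform scalar quantization
    sparsity = float(np.mean(q == 0))             # rough proxy for compressibility
    recon = pywt.waverecn(
        pywt.array_to_coeffs(q * step, slices, output_format="wavedecn"),
        wavelet=wavelet)
    return recon, sparsity
```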
In the last few years, more and more university hospitals as well as private hospitals have changed to digital information systems for patient records, diagnostic files, and digital images. Not only does patient management become easier; it is also remarkable how much clinical research can profit from Picture Archiving and Communication Systems (PACS) and diagnostic databases, especially image databases. While images are available at one's fingertips, difficulties arise when image data need to be processed, e.g. segmented, classified, or co-registered, which usually demands considerable computational power. Today's clinical environments support PACS very well, but actual image processing remains underdeveloped. The purpose of this paper is to introduce a parallel cluster of standard distributed systems and its software components, and to show how such a system can be integrated into a hospital environment. To demonstrate the cluster technique, we present our clinical experience with the crucial but computationally expensive motion correction of routine clinical and research functional MRI (fMRI) data, as it is processed in our lab on a daily basis.
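As a single-machine illustration of the volume-wise parallelism such a cluster exploits, the sketch below distributes per-volume realignment over local worker processes; realign_volume is a hypothetical stub, and in the real system the same decomposition is spread over cluster nodes rather than local processes.

```python
from multiprocessing import Pool
import numpy as np

def realign_volume(args):
    """Placeholder for the per-volume rigid-body realignment to a reference."""
    index, volume, reference = args
    # ... estimate six rigid-body parameters and resample `volume` here ...
    return index, volume  # identity realignment in this stub

def realign_series(series, reference, workers=8):
    """Distribute per-volume motion correction over a pool of worker processes.

    series : 4D array (time, z, y, x); every volume is registered to `reference`.
    """
    jobs = [(i, vol, reference) for i, vol in enumerate(series)]
    with Pool(workers) as pool:
        results = dict(pool.map(realign_volume, jobs))
    return np.stack([results[i] for i in range(len(series))])
```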
The structure of an fMRI time series coregistration algorithm can be divided into modules (preprocessing, minimization procedure, interpolation method, cost function), for each of which there are many different approaches. In our study, we implemented some of the most recent techniques and compared their combinations with regard to both registration accuracy and runtime performance. Bidirectional inconsistency and difference image analysis served as quality measures. The results show that, with an appropriate choice of methods, realignment results can be improved substantially compared with standard solutions. Finally, an automatic parameter adaptation method was incorporated. Additionally, the algorithm was implemented to run on a distributed 48-processor PC cluster, surpassing the performance of conventional applications running on high-end workstations.
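The two quality measures mentioned above could be computed roughly as follows, assuming rigid-body transforms given as 4x4 homogeneous matrices; this is only an illustration, not the implementation used in the study.

```python
import numpy as np

def bidirectional_inconsistency(T_ab, T_ba):
    """Consistency check for a pair of rigid-body registrations.

    T_ab maps volume A into the frame of volume B, T_ba the reverse, both as
    4x4 homogeneous matrices. For a perfectly consistent registration their
    composition is the identity; the Frobenius norm of the residual grows
    with the registration error.
    """
    residual = T_ba @ T_ab - np.eye(4)
    return float(np.linalg.norm(residual))

def difference_image_score(realigned, reference):
    """Mean absolute intensity difference after realignment (lower is better)."""
    return float(np.mean(np.abs(realigned - reference)))
```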