In the last decade, information-theoretic similarity measures, especially mutual information and its derivatives, have proven to be accurate measures for rigid and non-rigid, mono- and multi-modal image registration. However, these measures are sometimes not robust enough, especially when image quality is poor. This is most likely due to the lack of spatial information in the measure, as usually only intensities are employed to assess similarity between images. Spatial information in the form of intensity gradients or second derivatives may be included in information-theoretic similarity measures. This paper presents a novel method for efficiently combining multiple features into the estimation of mutual information. Under certain assumptions on the feature probability distribution, the proposed measure strictly follows information theory, in contrast to a number of heuristic methods that have been proposed to include spatial information in mutual information. The novel approach solves the problem of efficiently estimating multi-feature mutual information from sparse high-dimensional histograms. The proposed measure was tested on the widely used Vanderbilt image database. Results indicate that multi-feature mutual information outperforms the single-feature mutual information measure. The contribution of additional image features to registration is especially significant in cases where the standard mutual information measure fails. Moreover, it is expected that non-rigid registration may also benefit from the proposed multi-feature mutual information measure.
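The single-feature baseline that the abstract extends can be sketched as follows: mutual information estimated from a joint intensity histogram. This is a minimal illustration only, not the paper's multi-feature estimator, which generalizes the histogram to higher-dimensional feature spaces; the bin count and image shapes below are arbitrary choices.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Plug-in estimate of mutual information between two images from
    their joint intensity histogram (single-feature baseline; the paper's
    measure extends this to sparse high-dimensional feature histograms)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of b
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
noise = rng.integers(0, 256, size=(64, 64)).astype(float)
# An image is maximally informative about itself; MI with an
# unrelated image is close to zero.
```

In a registration loop, this score would be maximized over transformation parameters; the plug-in estimate is a KL divergence and hence non-negative.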
Ascertaining the detailed shape and spatial arrangement of anatomical structures is important not only in diagnostic settings but also in planning, simulation, intraoperative navigation, and tracking of pathology. Robust, accurate, and efficient automated segmentation of anatomical structures is difficult because of their complexity and inter-patient variability. Furthermore, the position of the patient during image acquisition, the imaging device and protocol, image resolution, and other factors induce additional variations in shape and appearance. Statistical shape models (SSMs) have proven quite successful in capturing structural variability. A possible approach to obtaining a 3D SSM is to extract reference voxels by precisely segmenting the structure in a single reference image. The corresponding voxels in other images are determined by registering the reference image to each other image. An SSM obtained in this way describes statistically plausible shape variations over the given population as well as variations due to imperfect registration. In this paper, we present a completely automated method that significantly reduces shape variations induced by imperfect registration, thus allowing a more accurate description of the true shape variations. At each iteration, the derived SSM is used for coarse registration, which is then refined to capture finer variations of the structure. The method was tested on 64 lumbar spinal column CT scans, from which 23, 38, 45, 46 and 42 volumes of interest containing vertebrae L1, L2, L3, L4 and L5, respectively, were extracted. Separate SSMs were generated for each vertebra. The results show that the method is capable of reducing the variations induced by registration errors.
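The core SSM construction described above, building a mean shape plus principal modes of variation from corresponding points, can be sketched with PCA. This is a minimal landmark-based illustration under simplifying assumptions: the paper derives correspondences from voxel-level registration, and the function names and the number of modes here are illustrative choices.

```python
import numpy as np

def build_ssm(shapes, n_modes=3):
    """Build a statistical shape model (mean shape + principal modes of
    variation) from corresponding point sets, via PCA of the flattened
    shape vectors. `shapes` is a list of (n_points, 3) arrays with
    point-to-point correspondence across the population."""
    X = np.asarray([s.ravel() for s in shapes])      # one row per shape
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    modes = Vt[:n_modes]                             # principal modes
    variances = (S[:n_modes] ** 2) / (len(shapes) - 1)
    return mean, modes, variances

def synthesize(mean, modes, coeffs):
    """Generate a statistically plausible shape from mode coefficients."""
    return mean + coeffs @ modes
```

In the iterative scheme of the abstract, such a model would first constrain a coarse registration, and the registered shapes would then be fed back to rebuild a tighter model with reduced registration-induced variance.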
This paper describes a novel approach to registering 3D computed tomography (CT) or magnetic resonance (MR) images to a set of 2D X-ray images. Such a registration may be a valuable tool for intraoperative determination of the precise position and orientation of some anatomy of interest defined in preoperative images. The registration is based solely on the information present in the 2D and 3D images. It does not require fiducial markers, X-ray image segmentation, or construction of digitally reconstructed radiographs. The originality of the approach lies in using normals to bone surfaces, preoperatively defined in 3D MR or CT data, and gradients of intraoperative X-ray images, which are back-projected towards the X-ray source. The registration is then concerned with finding the rigid transformation of a CT or MR volume that provides the best match between surface normals and back-projected gradients, considering their amplitudes and orientations. The method is tested on a lumbar spine phantom. Gold standard registration is obtained by fiducial markers attached to the phantom. Volumes of interest, containing single vertebrae, are registered to different pairs of X-ray images from different starting positions, chosen randomly and uniformly around the gold standard position. Target registration errors and rotation errors are on the order of 0.3 mm and 0.35 degrees for CT to X-ray registration and 1.3 mm and 1.5 degrees for MR to X-ray registration. The registration is shown to be fast and accurate.
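The geometric core of the matching criterion, projecting transformed 3D surface points into the X-ray image and comparing surface normals against the image gradients along the viewing rays, can be sketched as below. This is a simplified stand-in, assuming a pinhole source and a detector plane at a fixed z; `grad_field` is a hypothetical callable mapping a 2D detector position to a back-projected gradient vector, and the sum-of-dot-products score is only a proxy for the paper's amplitude-and-orientation match.

```python
import numpy as np

def project_to_detector(x, source, detector_z=0.0):
    """Perspective-project a 3D point onto the detector plane z = detector_z
    along the ray from the X-ray source (pinhole geometry assumption)."""
    s = np.asarray(source, float)
    d = np.asarray(x, float) - s
    lam = (detector_z - s[2]) / d[2]          # ray parameter at the plane
    return (s + lam * d)[:2]

def match_score(points, normals, source, grad_field, detector_z=0.0):
    """Score a candidate rigid pose: sum of dot products between 3D bone
    surface normals and the (hypothetical) back-projected gradient vectors
    looked up at each point's projection. Higher is better aligned."""
    score = 0.0
    for x, n in zip(points, normals):
        u = project_to_detector(x, source, detector_z)
        score += float(np.dot(n, grad_field(u)))
    return score
```

In the full method, this score would be evaluated for each candidate rigid transformation of the CT or MR volume, over all available X-ray views, and maximized by a pose optimizer.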