An established challenge in image analysis is the registration of images with a large initial misalignment. For example, in chemotherapy and Radiation Therapy Planning (RTP), there is often a need to register an image delineating a specific anatomy (usually in the surgery position) with a whole-body image (obtained preoperatively). In such a scenario, a large misalignment can exist between the two images to be aligned. Large misalignments are traditionally handled in two ways: 1) semi-automatically, with a user initialization, or 2) with the help of the origin fields in the image header. The first approach is user dependent, and the second can be used only if the two images were obtained from the same scanner with consistent origins. Our methodology extends a
typical registration framework by selecting components that are capable of searching a large parameter space without
settling on local optima. We have used an optimizer that is based on an Evolutionary Scheme along with an information
theory based similarity metric that can address these needs. The attempt in this study is to convert a large misalignment
problem to a small misalignment problem that can then be handled using application specific registration algorithms.
Further improvements in local areas can be obtained by subjecting the image to a non-rigid transformation. We have
successfully registered the following pairs of images without any user initialization: CTAC-simCT (neuro, lungs); MRPET/CT (neuro, liver); T2-SPGR (neuro).
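The combination described above can be sketched in miniature. The snippet below is an illustrative toy only, not the paper's implementation: it hill-climbs over 2-D integer translations with a simple (1+λ)-style evolutionary loop, scoring each candidate with a joint-histogram mutual information metric. All function names and parameter values are hypothetical.

```python
# Illustrative sketch: evolutionary search over translation parameters,
# scored by histogram-based mutual information. Hypothetical names/values.
import numpy as np

def mutual_information(a, b, bins=32):
    """MI of two equal-shape images from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of b
    nz = pxy > 0                          # skip empty cells to avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def shift(img, dx, dy):
    """Integer-pixel translation (wraparound stands in for real resampling)."""
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def register_translation(fixed, moving, generations=40, lam=8, sigma=4.0, seed=0):
    """Keep a parent (dx, dy); each generation, sample lam Gaussian offspring,
    accept any offspring that raises MI, then slowly shrink the step size."""
    rng = np.random.default_rng(seed)
    best = np.zeros(2)
    best_mi = mutual_information(fixed, moving)
    for _ in range(generations):
        for _ in range(lam):
            dx, dy = (best + rng.normal(0.0, sigma, 2)).round().astype(int)
            mi = mutual_information(fixed, shift(moving, dx, dy))
            if mi > best_mi:
                best, best_mi = np.array([dx, dy], float), mi
        sigma = max(sigma * 0.95, 0.5)    # anneal the search radius
    return int(best[0]), int(best[1]), best_mi
```

Because each candidate is judged only through the similarity metric, the same loop extends to rotations or affine parameters by widening the parameter vector, which is what lets such schemes explore a large parameter space without hand-tuned initialization.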
In this paper, we present a framework for setting optimized parameter values when performing image registration with mutual information as the metric to be maximized. Our experiment details these steps for the registration of X-ray Computed Tomography (CT) images with Positron Emission Tomography (PET) images. Selection of the different parameters that influence the mutual information between two images is crucial for both the accuracy and speed of registration. These implementation issues need to be handled in an orderly fashion by designing experiments over their operating ranges. The conclusions from this study are vital for obtaining allowable parameter ranges for fusion software.
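As one illustration of why such parameters need characterization, the sketch below (synthetic stand-in data, not the paper's experiment) estimates mutual information between a surrogate "CT" image and a correlated surrogate "PET" image at several histogram bin counts; with a fixed number of samples, finer binning systematically inflates the estimate.

```python
# Hypothetical illustration: the histogram bin count changes the mutual
# information estimate for the same image pair. Synthetic data only.
import numpy as np

def mutual_information(a, b, bins):
    """MI in bits from the joint intensity histogram of two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
ct = rng.normal(size=(64, 64))                    # surrogate "CT"
pet = 0.7 * ct + 0.3 * rng.normal(size=(64, 64))  # correlated surrogate "PET"
for bins in (8, 16, 32, 64, 128):
    print(bins, round(mutual_information(ct, pet, bins), 3))
```

The estimate rises with bin count even though the underlying dependence is unchanged, which is exactly the kind of parameter sensitivity that makes designed experiments over the operating range necessary.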
Medical image fusion increasingly enhances diagnostic accuracy by synergizing information from multiple images, obtained by the same modality at different times or from complementary modalities, such as structural information from CT and functional information from PET. An
active, crucial research topic in fusion is validation of the registration (point-to-point correspondence) used. Phantoms and
other simulated studies are useful in the absence of, or as a preliminary to, definitive clinical tests. Software phantoms in particular have the added advantages of robustness, repeatability, and reproducibility. Our virtual-lung-phantom-based scheme can test
the accuracy of any registration algorithm and is flexible enough
for added levels of complexity (addition of blur/anti-alias, rotate/warp, and modality-associated noise) to help evaluate the
robustness of an image registration/fusion methodology. Such a
framework extends easily to different anatomies. The feature of
adding software-based fiducials both within and outside simulated
anatomies prove more beneficial when compared to experiments using
data from external fiducials on a patient. It would help the diagnosing clinician make a prudent choice of registration algorithm.
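A minimal sketch of the software-fiducial idea, with made-up coordinates, transform values, and function names: place known fiducials in the phantom, move them by a known rigid transform, and score a candidate registration by the residual distance at those fiducials.

```python
# Hypothetical sketch of fiducial-based accuracy testing. Coordinates,
# transform, and names are illustrative only, not from the paper.
import numpy as np

def apply_rigid(points, angle_deg, translation):
    """Rotate 2-D points about the origin, then translate."""
    th = np.radians(angle_deg)
    rot = np.array([[np.cos(th), -np.sin(th)],
                    [np.sin(th),  np.cos(th)]])
    return points @ rot.T + translation

def fiducial_rmse(truth, recovered):
    """Root-mean-square distance between matched fiducial positions."""
    return float(np.sqrt(((truth - recovered) ** 2).sum(axis=1).mean()))

# Fiducials placed inside and outside a simulated anatomy (made-up values)
fiducials = np.array([[10.0, 20.0], [40.0, 15.0], [25.0, 50.0], [60.0, 60.0]])
t = np.array([3.0, -2.0])
moved = apply_rigid(fiducials, 5.0, t)            # known ground-truth motion

# A perfect registration inverts the known transform exactly ...
perfect = apply_rigid(moved - t, -5.0, np.array([0.0, 0.0]))
# ... while an imperfect one leaves a measurable residual error.
imperfect = apply_rigid(moved - t, -4.0, np.array([0.0, 0.0]))
print(fiducial_rmse(fiducials, perfect), fiducial_rmse(fiducials, imperfect))
```

Because the ground-truth transform is known by construction, the residual at the fiducials is an exact accuracy score, which is the repeatability advantage a software phantom has over external fiducials on a patient.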