Because virtual environments may be used for training and evaluation in critical real-world navigation tasks, it is important to
investigate the factors influencing navigational performance in virtual environments. We have carried out controlled
experiments involving two visual factors known to induce or sustain vection, the illusory perception of self-motion. The
first experiment had subjects navigate mazes with either a narrow or a wide field of view. We measured the percentage of
wrong turns and the total time taken for each attempt, and we examined subjects' drawings of the mazes. We found that a
wide field of view can have a substantial effect on navigational abilities, even when it offers no additional cues for
the task and merely provides a larger view of the blank walls on either side. The second
experiment evaluated the effect of perspective accuracy in the scene by comparing displays that were corrected
for changing head position against displays that were not. The perspective corrections available through head
tracking did not appear to have any influence on navigational abilities. A further component of our study suggests that,
during navigation in a virtual environment, memory for directions may be less effective than it would be with
supplemental symbolic representations.
Most papers on mammogram registration address the task by matching corresponding control points derived from anatomical landmarks. A drawback of pure point-matching techniques is their reliance on accurately extracted anatomical feature points. This paper proposes an innovative approach to matching mammograms that combines a similarity measure with a point-based spatial transformation. Mutual information is the cost function used to determine the degree of similarity between the two mammograms. An initial rigid registration is performed to remove global differences and bring the mammograms into approximate alignment. The mammograms are then subdivided into smaller regions, and each pair of corresponding subimages is matched independently using mutual information. The centroids of the matched subimages are then used as corresponding control-point pairs with the Thin-Plate Spline radial basis function. The resulting spatial transformation yields a nonrigid match between the mammograms. The technique is illustrated by matching mammograms from the MIAS mammogram database. An experimental comparison is made between mutual information incorporating purely rigid behavior and mutual information incorporating more nonrigid behavior. The effectiveness of the registration process is evaluated using image differences.
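The pipeline above — subdividing the images into blocks, matching each block by maximizing mutual information, and fitting a thin-plate spline through the matched block centroids — can be sketched as follows. This is an illustrative NumPy/SciPy sketch on synthetic data, not the authors' implementation; the block size, search window, and histogram bin count are assumptions, and the exhaustive translation search stands in for whatever MI optimizer the paper uses.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator


def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two equal-size patches."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1)           # marginal of a
    py = p.sum(axis=0)           # marginal of b
    nz = p > 0                   # avoid log(0); px, py > 0 wherever p > 0
    return np.sum(p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz]))


def match_blocks(fixed, moving, block=32, search=4):
    """Subdivide `fixed` into blocks; for each block, search a small window
    of translations in `moving` for the one maximizing MI. Returns the block
    centroids (src) and their matched positions (dst) as control-point pairs."""
    H, W = fixed.shape
    src, dst = [], []
    for i in range(0, H - block + 1, block):
        for j in range(0, W - block + 1, block):
            # skip blocks whose search window would leave the image
            if i - search < 0 or i + block + search > H:
                continue
            if j - search < 0 or j + block + search > W:
                continue
            ref = fixed[i:i + block, j:j + block]
            best, best_mi = (0, 0), -np.inf
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    cand = moving[i + di:i + di + block, j + dj:j + dj + block]
                    mi = mutual_information(ref, cand)
                    if mi > best_mi:
                        best_mi, best = mi, (di, dj)
            c = (i + block / 2, j + block / 2)
            src.append(c)
            dst.append((c[0] + best[0], c[1] + best[1]))
    return np.array(src), np.array(dst)


# Synthetic demo: the "moving" mammogram is the "fixed" one shifted by (2, 3),
# standing in for the residual misalignment left after rigid registration.
rng = np.random.default_rng(0)
fixed = rng.random((128, 128))
moving = np.roll(fixed, shift=(2, 3), axis=(0, 1))

src, dst = match_blocks(fixed, moving, block=32, search=4)

# Thin-plate-spline warp interpolating the recovered control-point pairs;
# evaluating it at any fixed-image coordinate gives the nonrigid mapping.
tps = RBFInterpolator(src, dst, kernel='thin_plate_spline')
```

Using the matched centroids rather than raw landmark points is what lets the method sidestep explicit anatomical feature extraction: the spline only needs a set of reliable point correspondences, wherever they come from. On this synthetic example every block recovers the (2, 3) shift, so the fitted spline reduces to that translation.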