In this paper, a new feature-based approach for efficient automated image registration, with applications to multiple-view or multi-sensor LADAR imaging, is presented. Highly accurate and efficient image registration is in great demand in ground-based and airborne LADAR imaging. The distinguishing characteristic of the proposed approach is that it combines the wavelet transform with moments of inertia to estimate the rigid transformation parameters between two overlapping images. The wavelet transform is applied to extract a set of feature points, where each feature point is an edge point whose edge response is maximal within its neighborhood. Matching points are then found using the normalized cross-correlation technique. We show how the computational complexity of the image comparison process is reduced by restricting the cross-correlation to 75% of each image, namely the right 75% of the left image and the left 75% of the right image, where the overlapping area is assumed to lie. From the matching points, moments of inertia are used to estimate the rigid transformation parameters. As is well known, the cross-correlation similarity measure is, in general, very sensitive to rotation. We show how a modified cross-correlation technique, which incorporates the rotation angle between the points under comparison, is used to estimate the orientation difference between the overlapping images. In particular, for each feature point the orientation with respect to the horizontal Cartesian axis is calculated first; then the orientation difference between the points under comparison is computed. From the resulting pool of orientation differences, the value that occurs most often is selected as the orientation difference between the two images. We demonstrate the robustness and accuracy of the proposed method in comparison with existing state-of-the-art methods.
The method is automatic and works with any pair of partially overlapping images, independently of the size of the rotation angle between them. Finally, experimental results on both significantly rotated and non-rotated images are presented to show the potential of the method.
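To make the two core estimation steps concrete, the following is a minimal sketch (not the authors' implementation) of normalized cross-correlation as a similarity measure, and of estimating the rotation as the most repeated orientation difference and the translation from the first moments of the matched point sets. The function names, the one-degree angle binning, and the assumption that each feature point carries a precomputed orientation with respect to the horizontal axis are illustrative choices, not taken from the paper:

```python
import numpy as np
from collections import Counter

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def estimate_rigid(left_pts, right_pts, left_angles, right_angles, angle_bin=1.0):
    """Estimate rigid transform parameters from matched feature points.

    left_pts[i] matches right_pts[i] (each an (x, y) pair); the angle lists
    hold each feature point's orientation, in degrees, with respect to the
    horizontal Cartesian axis.  The rotation is taken as the most frequently
    occurring (binned) orientation difference; the translation then aligns
    the centroids (first moments) of the two point sets.
    """
    # Pool of orientation differences, binned so that repeats can be counted.
    diffs = [round((al - ar) / angle_bin) * angle_bin
             for al, ar in zip(left_angles, right_angles)]
    theta = Counter(diffs).most_common(1)[0][0]  # most repeated difference

    # Rotate the right-image points by theta, then match the centroids.
    t = np.radians(theta)
    rot = np.array([[np.cos(t), -np.sin(t)],
                    [np.sin(t),  np.cos(t)]])
    left_xy = np.asarray(left_pts, dtype=float)
    right_rot = np.asarray(right_pts, dtype=float) @ rot.T
    translation = left_xy.mean(axis=0) - right_rot.mean(axis=0)
    return theta, translation
```

In this sketch, the overlap restriction described above would be applied beforehand: `ncc` is evaluated only between feature points lying in the right 75% of the left image and the left 75% of the right image, and `estimate_rigid` is then run on the surviving matches.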