Reasoning from a single sensor is inadequate for operation in a complex, dynamic environment, and it cannot detect features that must be inferred from multiple sensing modalities. To reduce uncertainty about the target and to achieve fault tolerance, it is desirable to use multiple sensors to provide more information. Because each sensor captures unique scene attributes and contextual information, data gathered from different sensor sources may have different spatial resolutions and different relative orientations between the target and the sensor. In addition, the dimensionality of the sensor data may vary from one to N dimensions. To correlate data from multiple sensor sources, registration between the sensor data and the world coordinate system is required for both voxel-based sensor fusion and feature-based information fusion. Clearly, the success of voxel-based sensor fusion depends heavily on registration accuracy. In feature-based information fusion, on the other hand, geometric information must be associated with the features extracted from a single sensor source or from multiple sensor sources to obtain an accurate global description.
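As a minimal sketch of the registration step described above, the example below (the frame names, rigid transform, and voxel size are illustrative assumptions, not taken from the original text) maps a point measured in a sensor's local frame into a shared world frame via a rigid transform, then quantizes it into a voxel index as voxel-based fusion would require:

```python
import math

def rotation_z(theta):
    """3x3 rotation matrix about the z-axis (row-major nested lists)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [0.0, 0.0, 1.0]]

def sensor_to_world(p_sensor, R, t):
    """Rigid registration: p_world = R @ p_sensor + t."""
    return [sum(R[i][j] * p_sensor[j] for j in range(3)) + t[i]
            for i in range(3)]

def world_to_voxel(p_world, voxel_size):
    """Quantize a world-frame point into an integer voxel grid index."""
    return tuple(int(math.floor(c / voxel_size)) for c in p_world)

# Hypothetical calibration: sensor rotated 90 degrees about z,
# offset 1 m along the world x-axis.
R = rotation_z(math.pi / 2)
t = [1.0, 0.0, 0.0]

p_world = sensor_to_world([2.0, 0.0, 0.0], R, t)   # ~[1.0, 2.0, 0.0]
voxel = world_to_voxel(p_world, voxel_size=0.5)    # (2, 4, 0)
```

Any error in the assumed transform (R, t) shifts every mapped point, which is why voxel-based fusion is so sensitive to registration accuracy: a small calibration error can place a measurement in the wrong voxel entirely.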