Image registration is a fundamental process with many applications, including tracking, Automatic Target Recognition (ATR), and sensor fusion. Recent publications illustrate the Air Force's desire to exploit the benefits of a layered (altitude) sensing environment. In such an environment, data from different sensors, of different modalities, at different elevations of the same scene are fused to gain a better understanding of the operational environment.
understanding of the operational environment. This research, sponsored by the Air Force Research Lab (AFRL),
builds on classical registration techniques to explore novel registration algorithms applied to data under the new
layered sensing environment. Our main focus herein is to register large-scale aerial Electro-Optical (EO) images collected from cameras at different altitudes. In particular, we propose a method to jointly segment and register the same object in two layered images, combining a multiphase, region-based active contour method with an adapted joint segmentation-registration technique. This method provides a <i>Level 0</i> mechanism in the data fusion hierarchy to preprocess (segment) and align data for later fusion stages. In this paper, the theory of multi-view geometry is reviewed to understand the possible forms of registration, our proposed method is described, and substantiating examples and results are provided.
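As a rough illustration of the region-based active contour component, the following sketch (our own simplification, not the paper's implementation) evolves a two-phase Chan-Vese-style segmentation by region competition alone; the curvature/length regularization term and the registration step are omitted for brevity:

```python
import numpy as np

def two_phase_segment(img, n_iter=50, dt=0.5):
    """Two-phase region-based segmentation in the spirit of Chan-Vese.

    Simplification (ours): the curvature regularization is dropped, so the
    level set function phi evolves purely under the region competition
    term. Pixels with phi < 0 are "inside" the contour.
    """
    h, w = img.shape
    phi = np.ones((h, w), dtype=float)
    phi[h // 4:3 * h // 4, w // 4:3 * w // 4] = -1.0  # initial contour: centered box
    for _ in range(n_iter):
        inside = phi < 0
        c1 = img[inside].mean() if inside.any() else 0.0     # mean intensity inside
        c2 = img[~inside].mean() if (~inside).any() else 0.0  # mean intensity outside
        # Region competition: a pixel closer to c1 than to c2 is driven
        # toward phi < 0 (inside), and vice versa.
        phi += dt * ((img - c1) ** 2 - (img - c2) ** 2)
    return phi < 0

# Toy example: a bright square on a dark background is recovered exactly,
# even though the initial contour is misaligned with it.
img = np.zeros((32, 32))
img[10:26, 10:26] = 1.0
mask = two_phase_segment(img)
print(np.array_equal(mask, img > 0.5))  # → True
```

In a joint segmentation-registration setting, the same region statistics would be computed over both layered images, with a transformation linking the two contours; here only the single-image segmentation step is sketched.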
In this paper, we propose two methods to determine the canonical views of 3D objects: the minimum description length (MDL) criterion and a compressive sensing method. The MDL criterion searches for the description length that balances model accuracy against parsimony. It takes the form of the sum of a likelihood term and a penalty term: the likelihood favors model accuracy, in that more views assist the description of an object, while the penalty discourages lengthy descriptions to prevent overfitting of the model. To devise the likelihood term, we propose a model that represents a 3D object as the weighted sum of multiple range images; this model is also used in the second method to determine the canonical views. The compressive sensing method offers a principled way of sampling an object parsimoniously. We draw directly on the work of Donoho<sup>1</sup> and Candes<sup>2</sup> and adapt it to our model. Each range image is viewed as a projection, or sample, of a 3D model, and by compressive sensing theory we are able to reconstruct the object with overwhelming probability from only a few randomly chosen measurements. Compressive sensing differs from traditional compression in that the former compresses at the sampling stage, whereas the latter first collects a large number of samples and then applies a compression mechanism. The compressive sensing scheme is particularly useful when the number of sensors is limited or when sampling costs significant resources or time.