This paper addresses the problem of stitching gigapixel images from airborne images acquired over multiple flight paths of Costa Rica in 2005. The set of input images contains 10,158 images, each approximately 4072×4072 pixels, with very coarse georeferencing information (the latitude and longitude of each image). Given the spatial coverage and resolution of the input images, the final stitched color image is 294,847 by 269,195 pixels (79.3 gigapixels) and corresponds to 238.2 gigabytes. Assembling such large images requires either hardware with large shared memory or algorithms that use disk access in tandem with the available RAM to provide data for local image operations. In addition to
I/O operations, the computations needed to stitch together image tiles involve at least one image transformation and
multiple comparisons to place the pixels into a pyramid representation for fast dissemination. The motivation of our
work is to explore the utilization of multiple hardware architectures (e.g., multicore servers, computer clusters) and
parallel computing to minimize the time needed to stitch gigapixel images.
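The scale quoted above can be sanity-checked with a short calculation. In this minimal sketch, the 3 bytes/pixel figure (uncompressed 8-bit RGB) and the 256-pixel pyramid tile size are our illustrative assumptions, not values taken from the text:

```python
import math

# Dimensions of the final stitched color image (from the text).
width, height = 294_847, 269_195

pixels = width * height              # total pixel count
gigapixels = pixels / 1e9            # ~79.37 gigapixels

# Assuming 3 bytes per pixel (uncompressed 8-bit RGB).
total_bytes = pixels * 3
gigabytes = total_bytes / 1e9        # ~238 gigabytes

# Pyramid depth if each level halves the resolution until the
# coarsest level fits within a single tile (assumed 256x256 tiles).
tile = 256
levels = math.ceil(math.log2(max(width, height) / tile)) + 1

print(f"{gigapixels:.2f} GPix, {gigabytes:.1f} GB, {levels} pyramid levels")
```

The pyramid-depth estimate illustrates why such a representation enables fast dissemination: a dozen or so levels let a viewer fetch only the tiles covering the requested region at the requested zoom, rather than the full image.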
Our approach is to utilize the coarse georeferencing information for an initial image grouping, followed by intensity-based stitching of the groups of images. This group-based stitching is highly parallelizable. The stitching process results in
image patches that can be cropped to fit a tile of an image pyramid frequently used as a data structure for fast image
access and retrieval. We report experimental results obtained when stitching a four-gigapixel image from the input images at one fourth of their original spatial resolution using a single core of our eight-core server, as well as preliminary results for the entire 79.3-gigapixel image obtained using a 120-core computer cluster.
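As one way to picture the georeference-based grouping step, the sketch below buckets images into grid cells by their coarse (latitude, longitude) tags so that each cell's images can be stitched independently on a separate core. The function name, the tuple format, and the 0.05-degree cell size are hypothetical choices for illustration, not the paper's actual parameters:

```python
from collections import defaultdict

def group_by_georeference(images, cell_deg=0.05):
    """Bucket images into grid cells by their coarse (lat, lon) tag.

    `images` is a list of (image_id, lat, lon) tuples; `cell_deg` is a
    hypothetical grid-cell size in degrees. Each resulting group can
    then be stitched independently (e.g., one group per worker core).
    """
    groups = defaultdict(list)
    for image_id, lat, lon in images:
        cell = (int(lat // cell_deg), int(lon // cell_deg))
        groups[cell].append(image_id)
    return groups

# Example: three images with coarse geotags; two fall in the same cell.
imgs = [("a", 9.934, -84.081), ("b", 9.936, -84.083), ("c", 10.50, -83.90)]
groups = group_by_georeference(imgs)
```

Each resulting group is an independent stitching task, which is what makes the group-based approach naturally parallel across the cores of a cluster.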