Comparison of mosaicking techniques for airborne images from consumer-grade cameras

29 March 2016
Images captured from airborne imaging systems can be mosaicked for diverse remote sensing applications. The objective of this study was to identify appropriate mosaicking techniques and software to generate mosaicked images for use by aerial applicators and other users. Three software packages—Photoshop CC, Autostitch, and Pix4Dmapper—were selected for mosaicking airborne images acquired from a large cropping area. Ground control points were collected for georeferencing the mosaicked images and for evaluating the accuracy of eight mosaicking techniques. Analysis and accuracy assessment showed that Pix4Dmapper can be the first choice if georeferenced imagery with high accuracy is required. The spherical method in Photoshop CC can be an alternative for cost considerations, and Autostitch can be used to quickly mosaic images with reduced spatial resolution. The results also showed that the accuracy of image mosaicking techniques can be greatly affected by the size of the imaging area or the number of images and that accuracy is higher for a small area than for a large area. The results from this study will provide useful information for the selection of image mosaicking software and techniques for aerial applicators and other users.



Recent advances in imaging technologies have made consumer-grade digital cameras an attractive option for remote sensing applications due to their low cost, small size, compact data storage, and ease of use.1 Consequently, consumer-grade digital cameras have been increasingly used for remote sensing applications.2–4 Agricultural aircraft provide a readily available and versatile platform for airborne remote sensing. Equipping these aircraft with appropriate imaging systems can help aerial applicators acquire airborne images for agricultural applications.1

Airborne remote sensing images are widely used in agriculture for monitoring crop growing conditions, detecting pests (e.g., weeds, diseases, and insect damage), and estimating crop yields over large agricultural areas.1,5 For some applications, such as resource monitoring and inspection of particular land features, multiple images may be taken but analyzed separately, so image mosaicking may not be needed.6 For precision agriculture, if individual fields are to be treated with variable-rate prescription maps, individual images may likewise be processed and analyzed separately. However, for other applications, such as monitoring crop growing conditions, detecting pests, and estimating crop yields over large areas, individual images may need to be mosaicked for further analysis. Thus, the user or the service provider should determine whether image mosaicking is really necessary to satisfy specific requirements.

Image mosaicking, also known as image stitching, is the technique of combining multiple images into a single mosaicked image covering a large area. Substantial research has been conducted on remote sensing image mosaicking, and many efforts have been devoted to generating mosaicked images for various applications, such as homeland security demonstration,7 forest fire monitoring,8,9 quick-response measurements for emergency disasters,10 earth science research,11 and monitoring of gas pipelines.11–14 The algorithms can be broadly divided into two categories: content-based15–17 and feature-based.18–20 Content-based methods have the advantage that they use all of the available image data (intensity, color, texture, and so on) to achieve accurate registration; however, they require a close initialization for aligning images or for multimodal registration. By extracting feature points from image patches, feature-based methods can achieve robust matching of image sequences and generate reliable mosaicking results.21–23

Small lightweight unmanned aerial vehicles (UAVs) are increasingly used for collecting remote sensing information. UAVs have several advantages, such as small size, portability, ease of operation, low cost, and the ability to obtain high-resolution images. Furthermore, UAVs are less affected by cloud cover because they can fly at low altitudes.24 Despite these advantages, UAVs can have difficulty capturing high-quality images because their light weight makes them unstable. Furthermore, as a side effect of high spatial resolution, a single image can cover only a limited area, so a large number of images must be captured and the image mosaicking process becomes much more difficult. Despite the promising use of UAVs in many applications, the U.S. Federal Aviation Administration allows only very restricted use of small commercial UAVs. Consequently, most remote sensing work in the United States is still conducted by manned aircraft and satellites.

For panoramic image mosaicking, there are numerous commercial software programs, and most of them are designed for specific purposes.25–27 Wikipedia lists more than 40 software programs for specific image mosaicking uses;27 some of the programs are limited to no more than four input images or have output size limits, while others are designed specifically for fisheye lenses. Images captured by airborne imaging systems are distorted by aircraft motion variation, overlaps between images are inconsistent, and the number of images to be mosaicked can be very large. All of these factors make mosaicking airborne images a great challenge. Therefore, commonly used image mosaicking software should be evaluated for performance and ease of use for agricultural applications.

Recently, scientists at the Aerial Application Technology Research Unit of the U.S. Department of Agriculture-Agricultural Research Service’s Southern Plains Agricultural Research Center in College Station, Texas, assembled a low-cost, single-camera imaging system using off-the-shelf electronics. The imaging system was installed on an Air Tractor AT-402B aircraft for image acquisition. To address the imaging needs for aerial applicators, it was necessary to identify appropriate image mosaicking software and techniques. The objectives of this study were to evaluate multiple image mosaicking techniques and software packages and to identify free or inexpensive software to meet the imaging needs of agricultural aerial applications.


Materials and Methods


Study Sites

The aerial images for this study were collected from a rectangular cropping area of 38.9  km2 (3891 ha) near College Station, Texas, on July 15, 2015 (upper right corner: 30°33′36.16″N and 96°25′11.74″W, upper left corner: 30°32′26.58″N and 96°26′26.58″W, lower right corner: 30°29′45.52″N and 96°23′31.12″W and lower left corner: 30°31′04.22″N and 96°22′08.16″W). The study area is adjacent to the Brazos River with a good mixture of agricultural crops and other land cover types. A smaller area of 2.19  km2 (219 ha) within the imaging area was selected to evaluate the effect of imaging area on the performance of different image mosaicking techniques. The whole imaging area and the smaller area are designated as sites 1 and 2, respectively.


Low-Cost Airborne Imaging System and Image Acquisition

A low-cost, single-camera imaging system described by Yang and Hoffmann1 was used in this study. The system consisted of a Nikon D90 camera with a 24-mm prime lens to capture RGB images and a GPS receiver to geotag the images. Each captured image had an array of 4288×2848 pixels. The camera was mounted on the right step of the aircraft, and a wireless remote control attached to the GPS receiver automatically triggered the camera for image acquisition. To obtain consistent images, the camera was set to manual mode and the lens focus was set to infinity. To obtain high-quality images, the exposure time, aperture, and ISO speed were set to 1/1000 s, f/6.3, and 200, respectively; all other parameters were left at their defaults. Because small overlaps between images may cause mismatches in the mosaicked image, images were acquired at 5-s intervals at a flight speed of 225 km/h (140 mph) along six flight lines spaced at 1066-m (3500-ft) intervals to achieve at least 50% image overlap along and between flight lines. Images were stored as 12-bit RAW (NEF format) and 8-bit JPEG files on a memory card. A total of 70 continuously captured images from the six adjacent flight lines were selected for mosaicking site 1, and 15 images from four adjacent flight lines were selected for site 2.
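The overlap geometry implied by these acquisition settings can be sketched with simple pinhole scaling. The sensor dimensions below are the Nikon D90's published 23.6 × 15.8 mm; the flying height and the camera orientation (long sensor side across track) are assumed values for illustration, since neither is stated in this section.

```python
# Sketch of along-track (forward) and across-track (side) overlap for the
# flight plan described above. Flying height and camera orientation are
# assumptions, not values reported in the study.

def ground_footprint(height_m: float, focal_mm: float, sensor_mm: float) -> float:
    """Ground distance covered by one sensor dimension (pinhole scaling)."""
    return height_m * sensor_mm / focal_mm

def overlap_fraction(footprint_m: float, spacing_m: float) -> float:
    """Fractional overlap between successive footprints separated by spacing_m."""
    return 1.0 - spacing_m / footprint_m

speed_ms = 225.0 / 3.6       # 225 km/h flight speed in m/s
photo_base = speed_ms * 5.0  # 5-s trigger interval -> distance between exposures
height_m = 1500.0            # assumed height above ground (not given in the text)

along_track = ground_footprint(height_m, 24.0, 15.8)   # short sensor side along track
across_track = ground_footprint(height_m, 24.0, 23.6)  # long sensor side across track

forward_overlap = overlap_fraction(along_track, photo_base)
side_overlap = overlap_fraction(across_track, 1066.0)  # 1066-m flight-line spacing

print(f"photo base: {photo_base:.1f} m")
print(f"forward overlap: {forward_overlap:.0%}, side overlap: {side_overlap:.0%}")
```

Under these assumed values, the forward overlap comfortably exceeds 50%, while the side overlap is much more sensitive to the flying height, which is why the flight-line spacing matters so much in the plan above.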


Image Mosaicking

ERDAS Imagine is commonly used to create a mosaicked image from georeferenced individual images, and such a mosaic can serve as a reference image to evaluate the performance of other image stitching software. However, this is a time-consuming process: a well-trained technician would need more than a month to georeference all the individual images and generate a mosaicked image for this study, rendering the approach unsuitable for real-time or near-real-time use by aerial applicators or other users. In this study, Pix4Dmapper was therefore used to generate mosaicked images and was compared with Photoshop and Autostitch.

Image mosaicking was carried out by using Adobe Photoshop CC (Adobe Systems Incorporated, San Jose, California), Pix4Dmapper software (Pix4D, Lausanne, Switzerland), Autostitch (University of British Columbia, Vancouver, Canada), and some other software listed in Wikipedia.27 ERDAS Imagine (Intergraph Corporation, Madison, Alabama) was used to georeference the mosaicked images for accuracy assessment.


Using Pix4Dmapper

Pix4Dmapper converts aerial and oblique images taken by UAVs or manned aircraft into georeferenced two-dimensional orthomosaics. It also offers three-dimensional (3-D) information and is easy to use. Users can assess, edit, and improve projects directly in the software using the rayCloud and Mosaic Editor, and seamlessly import the results into professional geographic information system (GIS), computer-aided design (CAD), or traditional photogrammetric software.

Image mosaicking in Pix4Dmapper is based on fundamental principles of photogrammetry combined with robust algorithms from computer vision. One of the most commonly used and most rigorous methods is bundle adjustment based on structure-from-motion techniques. Features extracted from individual images are matched to their corresponding features in other images, and the whole solution is computed with an incremental approach in which bundle adjustment of an initial image pair is repeated, incorporating more images at each iteration, until a seamless panorama is produced.21,28


Using Photoshop CC

Adobe Photoshop CC, which includes a tool known as Photomerge, is commonly used for image stitching. It can create a seamless mosaic from a large number of images and provides a complete set of useful tools for image processing and analysis. Photomerge offers six methods for image mosaicking: (1) perspective, (2) cylindrical, (3) spherical, (4) auto, (5) collage, and (6) reposition. The perspective method creates a mosaicked image by selecting one of the input images as the center and then stretching and skewing the other images around it as needed. For the cylindrical method, the source images are overlapped as on an unfolded cylinder. The spherical method aligns and transforms the images as if they were mapping the inside of a sphere; it is suitable for 360-deg panoramas and is also useful for producing mosaics from other images. The auto method first analyzes the input images, creates a layout of the source images automatically, and then applies the perspective, cylindrical, or spherical method to mosaic the images. Because this automatic layout step differs from simply choosing one of those three methods, the auto option can be treated as a distinct method for generating mosaicked images. The collage method only rotates or scales the source images to overlap their content. The reposition method only aligns the overlapping content of the images without stretching or skewing any of them. All six methods were tested to generate mosaicked images in this study.


Using software listed in Wikipedia

Wikipedia summarizes 40 mosaicking software packages used for different applications (Table 1). We downloaded all of them and tried to mosaic the 70 selected images. Some packages were designed to stitch full 360-deg spherical or partial cylindrical panoramas, some used video as input, and some required uploading images to a server to generate a panorama. Most were designed for entertainment rather than remote sensing applications. The images captured along the six flight lines had various degrees of geometric distortion, which made some of the software difficult to use for image mosaicking. After comparing the performance of all these packages, a total of eight methods (Autostitch, the six Photoshop CC-based methods, and Pix4Dmapper) were selected.

Table 1

Software listed on Wikipedia for image mosaicking.

No. | Software | Developer
1 | Panoweaver | Easypano Holdings Inc.
2 | Dermandar | Dermandar
3 | 360 Panorama Professional | 360 Degrees of Freedom
4 | VRstitcher Fisheye Pro | 360 Degrees of Freedom
5 | COOL 360 | Ulead/Corel Corp.
6 | Pixtra PanoStitcher | Pixtra Corp.
7 | Panorama Maker 6 | Arcsoft, Inc.
8 | ADG Panorama Tools/Pro | Albatross Design Group
9 | Photoshop CS5 | Adobe Systems
10 | PhotoStitcher | Maxim Gapchenko
11 | i2Align Quickage Express/Pro | DualAlign LLC
12 | PhotoFit Harmony/Premium | Tekmate, Inc.
13 | PixMaker Lite/Home/Business | —
14 | Autopano | Kolor
15 | Photovista Panorama | iseemedia, Inc.
16 | Hugin | Pablo d’Angelo
17 | Panorama Composer 3 | FirmTools
18 | PTgui/PTgui Pro | New House Internet Services
19 | PanoramaMaker | STOIK Imaging
20 | SharpStitch | Libor Tinka
21 | Stitcher 4 | 3DVista
22 | PanoramaPlus X4 | Serif (Europe) Ltd.
23 | PTAssembler | Max Lyons
24 | Pix4Dmapper (a) | Pix4D
25 | Pixtra OmniStitcher | Pixtra Corp.
26 | PanoramaStudio 2/Pro | Tobias Hüllmandel Software
27 | Image Composite Editor | Microsoft
28 | AIPR Lite 1 | Mayachitra, Inc.
29 | Montage Image Mosaic | Caltech/IPAC, JPL, CACR, ISI
30 | StitchUp | Michel Mürner
31 | The Panorama Factory v5 | Smoky City Design, LLC
32 | PanoramaBuilder 7.0 | 3cim, Inc.
33 | Photoshop CS6 | Adobe Systems
34 | Calico Panorama | Kekus
35 | D Joiner | D Vision Works, Ltd.
36 | Stitcher Unlimited 2009 | Autodesk
37 | Panorama Perfect | Michal Pohanka
38 | Gigapan Stitch 1.0 | Gigapan Systems
39 | Panorama Stitcher | Alexander Boltnev, Olga Kacher
40 | Autostitch | Matthew Brown & David Lowe
41 | Agisoft PhotoScan Pro (a) | Agisoft LLC


Note: (a) Pix4Dmapper and Agisoft PhotoScan Pro are not listed on Wikipedia.

Autostitch advances panoramic image mosaicking by automatically recognizing unordered collections of images and automatically finding matches between them. For the unordered input images, the software first extracts scale-invariant feature transform (SIFT) features from all the images and finds the k nearest neighbors of each feature using a k-dimensional tree. Each image is then matched by selecting several candidate images that share the most feature matches with it, finding geometrically consistent feature matches with the random sample consensus (RANSAC) algorithm to solve for the homography between image pairs, and verifying the image matches with a probabilistic model. After the connected components of image matches are found, bundle adjustment is performed on each connected component to solve for the camera rotations and focal lengths, and the mosaicked image is rendered with multiband blending to form a seamless panorama.21
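The core geometric step in this pipeline, RANSAC over a direct linear transform (DLT) homography fit between a pair of images, can be illustrated with a self-contained numpy sketch. This is a simplified stand-in, not Autostitch's actual implementation: it omits SIFT extraction, the probabilistic verification, bundle adjustment, and blending, and all point correspondences are synthetic.

```python
import numpy as np

def fit_homography(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Direct linear transform (DLT): least-squares homography mapping src -> dst."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def project(h: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Apply a homography to an (N, 2) array of points."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ h.T
    return ph[:, :2] / ph[:, 2:3]

def ransac_homography(src, dst, n_iters=500, thresh=3.0, seed=0):
    """RANSAC: repeatedly fit on 4 random correspondences, keep the model with
    the most inliers, then refit on all inliers of the best model."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(src), size=4, replace=False)
        h = fit_homography(src[idx], dst[idx])
        err = np.linalg.norm(project(h, src) - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return fit_homography(src[best], dst[best]), best
```

With exact correspondences plus a handful of gross outliers, the recovered homography reproduces the inlier points while the outliers are rejected, which is exactly the property that makes the subsequent probabilistic match verification workable.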


Accuracy Assessment of Different Mosaicking Techniques

Image mosaicking software and techniques should be evaluated for accuracy, time consumption, ease of use, and cost. To assess the accuracy of the mosaicked images, a total of 107 land markers within the imaging area were selected and located as ground control points (GCPs) using a Trimble GPS Pathfinder ProXRT receiver (Trimble Navigation Limited, Sunnyvale, California). The device had an accuracy range of 20 to 65 cm. The mosaicked images were georeferenced (rectified) to the Universal Transverse Mercator (UTM) coordinate system, World Geodetic System 1984 (WGS-84) datum, Zone 14, based on selected GCPs. A second-order polynomial transformation was used for image rectification. The accuracy of each method was evaluated using the following formula:


error = √[(x1 − x2)² + (y1 − y2)²],

where (x1, y1) and (x2, y2) are the UTM coordinates of the GCPs selected for evaluating the accuracies in the mosaicked and georeferenced images, respectively.
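As a concrete illustration, the per-GCP error and the summary statistics reported later can be computed from paired UTM coordinates as follows; the coordinate values below are made-up examples, not actual GCPs from the study.

```python
import math
import statistics

def gcp_errors(mosaic_xy, reference_xy):
    """Euclidean distance (m) between each GCP's position in the mosaicked
    image and its reference position, both in UTM coordinates."""
    return [math.hypot(x1 - x2, y1 - y2)
            for (x1, y1), (x2, y2) in zip(mosaic_xy, reference_xy)]

# Hypothetical UTM Zone 14 coordinates (easting, northing) in metres.
mosaic = [(746210.0, 3380105.0), (746504.0, 3380200.0), (745998.0, 3379750.0)]
reference = [(746213.0, 3380101.0), (746504.0, 3380197.0), (746000.0, 3379750.0)]

errors = gcp_errors(mosaic, reference)
print(f"errors: {errors}")
print(f"average: {statistics.mean(errors):.2f} m, "
      f"std dev: {statistics.stdev(errors):.2f} m")
```

Averaging these per-point errors and taking their standard deviation is how summary figures like those in Table 4 are formed.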

Figure 1 shows the 107 GCPs along with the 70 images in six adjacent flight lines selected for the image mosaicking task for this study. The GCPs marked with different colors were used for image georeferencing (red), for accuracy assessment (yellow), and as manual control points specified for Pix4Dmapper (green), respectively.

Fig. 1

Aerial images plotted in Google Earth and GCPs within site 1 (3891 ha; blue box) and site 2 (219 ha; green box) near College Station, Texas.


Manual control points were chosen for Pix4Dmapper because they improve the georeferencing accuracy of the mosaicked image: the software performs mosaicking and georeferencing simultaneously. The other methods do not need manual control points for mosaicking but require separate software, such as ERDAS Imagine, for georeferencing. The number of selected GCPs in each category and their usage are listed in Table 2. For site 1, 20 GCPs marked in green were selected as manual control points for Pix4Dmapper to generate the mosaic. For the other seven methods, these 20 points were used as georeferencing points (11 GCPs) and accuracy assessment points (9 GCPs). For site 2, the 22 GCPs were divided equally: 11 GCPs were used for image georeferencing and 11 GCPs for accuracy assessment of the mosaicked images.

Table 2

Numbers of GCPs and their usage for different image mosaicking methods for site 1 and site 2.

Manual control points, site 1/site 2 (green) | Georeferencing points, site 1/site 2 (red) | Accuracy assessment points, site 1/site 2 (yellow) | Total points, site 1/site 2


Results and Discussion


Image Mosaicking Results

Because site 1 covered a very large area, 70 images were used for image mosaicking. Of the eight methods listed in Table 2, only five were able to create seamless mosaics: Pix4Dmapper, Autostitch, and three Photoshop-based methods (auto, spherical, and cylindrical). Figure 2 presents the mosaicked images for these five methods. Site 2 covered a small area with 15 images and was used to illustrate how the imaging area could affect the mosaicking results. For site 2, all eight methods were able to create seamless mosaicked images, as shown in Fig. 3. However, the mosaics produced by the Photoshop-collage and Photoshop-reposition methods had obvious mismatches and could not be used for further analysis. Moreover, the mosaicked images differed in appearance among the methods, so accuracy assessment is critical to discern their performance.

Fig. 2

Mosaicked images from 70 images captured in six flight lines within the green box shown in Fig. 1 (site 1) using (a) Pix4Dmapper, (b) Autostitch, (c) Photoshop-auto, (d) Photoshop-spherical, and (e) Photoshop-cylindrical.


Fig. 3

Mosaicked images from 15 images captured in four flight lines within the yellow box shown in Fig. 1 (site 2) using (a) Pix4Dmapper, (b) Autostitch, (c) Photoshop-auto, (d) Photoshop-spherical, (e) Photoshop-cylindrical, (f) Photoshop-perspective, (g) Photoshop-collage, and (h) Photoshop-reposition.


The mosaicked image from Pix4Dmapper was already georeferenced for site 1, whereas the other four mosaicked images were not; those images were therefore georeferenced before accuracy assessment using the georeferencing points listed in Table 2. The georeferenced images for sites 1 and 2 are shown in Figs. 4 and 5, respectively.

Fig. 4

Georeferenced mosaics for site 1 using (a) Pix4Dmapper, (b) Autostitch, (c) Photoshop-auto, (d) Photoshop-spherical, and (e) Photoshop-cylindrical.


Fig. 5

Georeferenced mosaics for site 2 using (a) Pix4Dmapper, (b) Autostitch, (c) Photoshop-auto, (d) Photoshop-spherical, (e) Photoshop-cylindrical, and (f) Photoshop-perspective.


Table 3 summarizes the pixel resolution, root mean square (RMS) errors, and overall estimated accuracy of the mosaicked images for sites 1 and 2. As can be seen from Table 3, similar pixel resolutions of 0.40 to 0.48 m were obtained with Pix4Dmapper and Photoshop, whereas Autostitch produced a much lower resolution of 4.0 m. Although the difference in pixel resolution is not easy to see because of the large imaging area, the Autostitch-based mosaic retained much less image detail. The overall estimated accuracy was obtained by multiplying the pixel resolution by the overall RMS error.
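The relation just described is a one-line computation. In the check below, the 0.40-m resolution is Pix4Dmapper's site 1 value from Table 3, but the RMS of 3.9 pixels is back-calculated from the 1.56-m accuracy reported later, not a value given directly in the text.

```python
def total_accuracy_m(pixel_resolution_m: float, overall_rms_px: float) -> float:
    """Overall estimated accuracy (m) = pixel resolution (m) x overall RMS error (px)."""
    return pixel_resolution_m * overall_rms_px

# Pix4Dmapper, site 1: 0.40-m pixels; an RMS of 3.9 px reproduces the 1.56-m figure.
print(total_accuracy_m(0.40, 3.9))
```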

Table 3

Pixel resolutions, RMS errors, and total estimated accuracy of the mosaicked images using six different methods for sites 1 and 2.

Pixel resolution (m), site 1/site 2 | RMS errors (pixels), site 1/site 2 (X coordinate / Y coordinate / Total) | Total estimated accuracy (m), site 1/site 2

Note: “—” means that no mosaicked image was obtained or the mosaicked image was not usable.

Pix4Dmapper had the highest accuracy of 1.56 m for site 1, and Autostitch had an accuracy of 12.68 m. For Photoshop, the spherical and cylindrical methods had similar accuracy values of 24.30 and 27.88 m, respectively, whereas the auto method had the lowest accuracy of 103.19 m. For site 2, the same pixel resolutions as for site 1 (0.4 and 4.0 m) were obtained with Pix4Dmapper and Autostitch, respectively, while the pixel resolutions were 0.41 m for the auto, spherical, and perspective methods and 0.25 m for the cylindrical method. Pix4Dmapper again had the highest accuracy of 1.02 m, and Autostitch had the lowest accuracy of 7.40 m. The estimated accuracies for the four methods in Photoshop ranged from 2.60 to 4.51 m.

After comparing the performance of these eight mosaicking methods for the large area (site 1) and the small area (site 2), we found that Pix4Dmapper consistently had the highest accuracy. Some methods in Photoshop, such as perspective, may not achieve the desired results for large areas like site 1 but may be usable for a much smaller area like site 2. The perspective method selects the middle image as the center of the mosaicked image and transforms the other images around it; if the area is very large, as in site 1, the images far from the center will be distorted or mismatched. The Photoshop-auto method had the lowest accuracy for site 1 but produced a good result for site 2, probably because the automatic layout of the input images is neither accurate nor efficient when a large number of images are involved; for site 1, the auto method had much lower accuracy than the spherical and cylindrical methods. Because the collage and reposition methods only rotate or scale the source images and do not stretch or skew them, they cannot correct the distortion in the input images, resulting in mismatches in the mosaicked image or, as for site 1, failure to mosaic the images at all. The results in Table 3 show that all these methods achieved much higher accuracy when mosaicking images from a small area than from a large area, as the small area had smaller variability and more uniform imaging conditions.

The accuracy results shown in Table 3 are based on the GCPs that were used to georeference the mosaicked images. To validate the accuracy of the mosaicked images, the yellow GCPs shown in Fig. 1 and listed in Table 2 were used to assess the accuracy of the mosaicked images. The average and standard deviation over all these GCPs are listed in Table 4 for the two sites, and the accuracy assessment results for all the GCPs are shown in Fig. 6 for sites 1 and 2.

Table 4

Average error and standard deviation of different image mosaicking methods.

Method | Average (m), site 1/site 2 | Standard deviation (m), site 1/site 2
Pix4Dmapper | 3.31/2.61 | 2.96/1.88
Autostitch | 11.72/10.06 | 12.43/6.50
Photoshop-auto | 80.46/9.05 | 41.40/9.77
Photoshop-spherical | 9.55/5.49 | 6.21/5.40
Photoshop-cylindrical | 19.79/8.36 | 11.34/6.15
Photoshop-perspective | —/10.06 | —/14.14

Note: “—” means that Photoshop-perspective failed to create a mosaic for site 1.

Fig. 6

Accuracy assessment results of georeferenced mosaics by different image mosaicking methods for (a) site 1 with 51 GCPs and (b) site 2 with 11 GCPs.


Positional accuracy for site 1 using Pix4Dmapper ranged from 0.10 to 15.10 m, with an average of 3.31 m and a standard deviation of 2.96 m. The accuracy of Autostitch varied from 1.17 to 86.08 m, with an average of 11.72 m and a standard deviation of 12.43 m. The average accuracy over the 51 points was 9.55, 19.79, and 80.46 m for the spherical, cylindrical, and auto methods, respectively. These results show that Pix4Dmapper had the highest accuracy among the five image mosaicking methods for site 1. Autostitch and the Photoshop-spherical method had similar performance, but the Photoshop-spherical method had a much smaller standard deviation and could provide much more consistent results than Autostitch. The Photoshop-auto method could not produce a reliable mosaicked image, as it had the largest error of the five methods; furthermore, mismatches existed in some portions of its mosaicked images [Figs. 2(c) and 3(c)]. Therefore, the Photoshop-auto method may not be appropriate for generating mosaicked images for large areas.

For site 2, the positional accuracy of Pix4Dmapper ranged from 0.25 to 5.32 m, with an average of 2.61 m and a standard deviation of 1.88 m. The accuracy of Autostitch varied from 3.24 to 23.76 m, with an average of 10.06 m and a standard deviation of 6.50 m. The average accuracy was 5.49, 8.36, 9.05, and 10.06 m for the spherical, cylindrical, auto, and perspective methods, respectively. Again, Pix4Dmapper had the highest accuracy among the six image mosaicking methods for site 2, and the spherical method had the best performance among the four methods in Photoshop. Autostitch had the lowest accuracy of the six methods.

The results from Table 4 show that mosaicking accuracy was much better for the small area (site 2) than for the large area (site 1). The average error of Pix4Dmapper changed from 3.31 m for site 1 to 2.61 m for site 2, a reduction of 21.15%. The error for Autostitch was 11.72 m for site 1 and 10.06 m for site 2, a 14.16% reduction. Photoshop-auto had the largest error reduction of 88.75% from 80.46 m for site 1 to 9.05 m for site 2. The Photoshop-spherical and Photoshop-cylindrical methods also had error reductions of 42.51% and 57.76%, respectively, from site 1 to site 2. These results showed that all the methods performed better for the small area than for the large area. Moreover, Pix4Dmapper and Autostitch were more consistent than the three Photoshop methods between the two very different sites.
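The error reductions quoted above follow directly from the Table 4 averages; a quick check:

```python
# Average errors (m) from Table 4 for site 1 (large area) and site 2 (small area).
site1 = {"Pix4Dmapper": 3.31, "Autostitch": 11.72, "Photoshop-auto": 80.46,
         "Photoshop-spherical": 9.55, "Photoshop-cylindrical": 19.79}
site2 = {"Pix4Dmapper": 2.61, "Autostitch": 10.06, "Photoshop-auto": 9.05,
         "Photoshop-spherical": 5.49, "Photoshop-cylindrical": 8.36}

def pct_reduction(large_area_err: float, small_area_err: float) -> float:
    """Percentage reduction in average error from site 1 to site 2."""
    return 100.0 * (large_area_err - small_area_err) / large_area_err

for method in site1:
    print(f"{method}: {pct_reduction(site1[method], site2[method]):.2f}%")
```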


Time Consumption of Different Image Mosaicking Methods

Another important factor that should be considered is the time consumption of different image mosaicking methods. All the programs were run on a Dell Optiplex 9010 with an i7 processor (3.4 GHz) and 8 GB RAM. The time consumption of all the image mosaicking methods for sites 1 and 2 is listed in Table 5. Autostitch was much faster than all the other methods because it significantly reduced the pixel resolution to about 4 m (Table 3). Thus, Autostitch can be used to quickly examine the quality of the captured image right after image acquisition. It can also be used for analysis if the coarser resolution is sufficient for the application. For the other methods, the running time ranged from 29.12 min for Photoshop-spherical to 50.05 min for Photoshop-auto for site 1. For site 2, the running time varied from 0.38 min for Autostitch to 10.92 min for Pix4Dmapper.

Table 5

Time consumption (min) of all the image mosaicking methods.

Method | Site 1 | Site 2
Pix4Dmapper | 30.90 | 10.92
Autostitch | 1.87 | 0.38
Photoshop-auto | 50.05 | 1.15
Photoshop-spherical | 29.12 | 1.53
Photoshop-cylindrical | 37.92 | 5.65
Photoshop-perspective | — | 0.87
Photoshop-collage | — | 1.12
Photoshop-reposition | — | 0.82

Note: “—” means that Photoshop-perspective, Photoshop-collage, and Photoshop-reposition failed to create mosaicked images for site 1.


Software Cost

The cost of the software is one factor in the selection of a method. The subscription-based Photoshop CC software cost $19.99 per month with an annual plan. Autostitch can be downloaded free for noncommercial use.29 Pix4Dmapper had a one-time cost of $8700, including free support and upgrades for the first year and optional support from the second year at $870 per year. Another option is renting Pix4Dmapper at $350 for 30 consecutive days or $3500 for one full year. Users must balance the cost of the software against its accuracy and computational efficiency, depending on their own needs.


Available Documentation and Tutorials

The ease of use of the software is another consideration for users. Photoshop CC is widely used in many fields, and detailed documentation and numerous tutorials are available online. Autostitch is simple to use, and instructions and examples for image mosaicking can be found on the Internet. Pix4Dmapper provides detailed user manuals and tutorials for different applications on its website, and new documentation continues to be added as the software is more widely applied. Both Photoshop CC and Autostitch can be easily used by beginners or experts with the online documentation and tutorials. Pix4Dmapper is designed for creating high-quality professional orthomosaics and 3-D surface models, so some user interaction (e.g., adding GCPs to improve positional accuracy) is necessary. Beginners may have some difficulty, but intermediate and advanced users should have no problem using the software with the available documentation and tutorials.

All the image mosaicking methods could be carried out by aerial applicators or practitioners not specialized in image processing. In this study, the mosaicked images created by Pix4Dmapper, Autostitch, and the Photoshop-spherical method had respective errors of 3.31, 11.72, and 9.55 m for large areas like site 1 and therefore could be used for aerial application, as the swath of typical aerial applicators for pesticide application is 12 to 20 m. All the methods listed in Table 4, with a minimum accuracy of 10.06 m, could be used for small areas like site 2. If accuracy is the most important factor, Pix4Dmapper is the best choice because it offers the highest accuracy and the most consistent standard deviation among all the methods. Another advantage of Pix4Dmapper is that the mosaicked image is already georeferenced and can be used to generate prescription maps. If cost is the most important consideration, Photoshop CC or Autostitch could be selected. Furthermore, with the highest mosaicking speed of 1 min and 52 s for 70 images, Autostitch can always be used to check the quality of images immediately after image acquisition.

For most agricultural applications, mosaicked images by using Pix4Dmapper, Autostitch, Photoshop-spherical, and Photoshop-cylindrical with an accuracy of about 20 m could meet the needs of monitoring crop growing conditions, detecting pests, and estimating crop yields over large geographic areas. For small areas or individual fields, the same methods can be used to achieve much better accuracy for precision aerial or ground-based applications.


Summary and Conclusions

To compare and identify suitable mosaicking techniques for images from consumer-grade cameras, 70 images acquired by a low-cost, single-camera imaging system were selected in this research. The selected images were mosaicked using Pix4Dmapper, Autostitch, and six methods in Photoshop CC (spherical, cylindrical, auto, perspective, collage, and reposition). The mosaicked images were then georeferenced, and their accuracy was assessed using GCPs. Among all these methods, Pix4Dmapper provided the highest accuracy (3.31 m for site 1 and 2.61 m for site 2) with the smallest standard deviation (2.96 m for site 1 and 1.88 m for site 2). Therefore, Pix4Dmapper can be the first choice if georeferenced imagery with high accuracy is required. Considering its lower software cost, Photoshop CC-spherical (with an accuracy of 9.55 m for site 1 and 5.44 m for site 2) can be an alternative for mosaicking images for some agricultural applications. Furthermore, with the highest mosaicking speed (1 min and 52 s for site 1 and 23 s for site 2), Autostitch can be used to quickly evaluate the quality of the images immediately after the flight mission is completed.
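The accuracy and standard deviation figures above come from comparing surveyed GCP coordinates against the positions of the same points located in each georeferenced mosaic. A minimal sketch of such a computation is shown below; the coordinates are hypothetical and the paper's exact assessment procedure may differ.

```python
import math

def positional_errors(gcps, mosaic_pts):
    """Euclidean offsets (in meters) between surveyed GCP coordinates
    and the corresponding points located in the mosaicked image."""
    return [math.hypot(gx - mx, gy - my)
            for (gx, gy), (mx, my) in zip(gcps, mosaic_pts)]

def mean_and_std(errors):
    """Mean positional error and sample standard deviation."""
    n = len(errors)
    mean = sum(errors) / n
    std = math.sqrt(sum((e - mean) ** 2 for e in errors) / (n - 1))
    return mean, std

# Hypothetical easting/northing pairs (m) for two GCPs:
gcps = [(0.0, 0.0), (10.0, 0.0)]
mosaic_pts = [(3.0, 4.0), (10.0, 5.0)]
errs = positional_errors(gcps, mosaic_pts)   # [5.0, 5.0]
mean_err, std_err = mean_and_std(errs)       # 5.0, 0.0
```

With many GCPs spread across the imaging area, the same computation yields the per-site mean error and standard deviation reported in the summary.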

The results from this study also showed that the accuracy of image mosaicking techniques could be greatly affected by the size of the imaging area or the number of the images and that the accuracy would be higher for a small area than for a large area.

The quality of a mosaicked image depends largely on the quality of the individual images used for mosaicking. Therefore, to minimize distortion and discontinuity in the mosaicked images, images should be taken with sufficient overlap under sunny and relatively calm conditions to reduce geometric distortion. More research is needed to evaluate these image mosaicking techniques with images acquired from different imaging systems and under different weather conditions.


This project was conducted as part of a visiting scholar research program, and the first and third authors were financially supported by the China Scholarship Council. The authors wish to thank Fred Gomez and Lee Denham of USDA-ARS in College Station, Texas, for acquiring the images and their assistance in collecting the GCPs for this study. The authors also wish to thank the developers of all the software and the authors of the papers we cited in this research for their excellent work. Mention of trade names or commercial products in this article is solely for the purpose of providing specific information and does not imply recommendation or endorsement by the U.S. Department of Agriculture. USDA is an equal opportunity provider and employer.


1. C. Yang and W. C. Hoffmann, “Low-cost single-camera imaging system for aerial applicators,” J. Appl. Remote Sens. 9(1), 096064 (2015). http://dx.doi.org/10.1117/1.JRS.9.096064

2. D. Akkaynak et al., “Use of commercial off-the-shelf digital cameras for scientific data acquisition and scene-specific color calibration,” J. Opt. Soc. Am. A 31(2), 312–321 (2014). http://dx.doi.org/10.1364/JOSAA.31.000312

3. T. Sakamoto et al., “An alternative method using digital cameras for continuous monitoring of crop status,” Agric. For. Meteorol. 154, 113–126 (2012). http://dx.doi.org/10.1016/j.agrformet.2011.10.014

4. C. Yang et al., “An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing,” Remote Sens. 6(6), 5257–5278 (2014). http://dx.doi.org/10.3390/rs6065257

5. Y. Afek and A. Brand, “Mosaicking of orthorectified aerial images,” Photogramm. Eng. Remote Sens. 64(2), 115–124 (1998).

6. D. Booth et al., “Precision measurements from very-large scale aerial digital imagery,” Environ. Monit. Assess. 112, 293–307 (2006). http://dx.doi.org/10.1007/s10661-006-1070-0

7. S. Herwitz et al., “UAV homeland security demonstration,” in AIAA 3rd “Unmanned Unlimited” Technical Conf., Workshop and Exhibit (2004).

8. G. Zhou et al., “High-resolution UAV video data processing for forest fire surveillance,” Tech. Rep., National Science Foundation, Old Dominion Univ., Norfolk, Virginia (2006).

9. J. Wu et al., “Geo-registration and mosaic of UAV video for quick-response to forest fire disaster,” in Int. Symp. on Multispectral Image Processing and Pattern Recognition, pp. 678810 (2007).

10. G. Postell and P. Thomas, “Wallops flight facility uninhabited aerial vehicle (UAV) user’s handbook,” Suborbital and Special Orbital Projects Directorate, NASA Wallops Flight Facility, Wallops Island, Virginia (2005).

11. R. Sugiura, N. Noguchi and K. Ishii, “Remote-sensing technology for vegetation monitoring using an unmanned helicopter,” Biosyst. Eng. 90(4), 369–379 (2005). http://dx.doi.org/10.1016/j.biosystemseng.2004.12.011

12. M. Bryson, M. Johnson-Roberson and S. Sukkarieh, “Airborne smoothing and mapping using vision and inertial sensors,” in IEEE Int. Conf. on Robotics and Automation, pp. 2037–2042, IEEE Press (2009). http://dx.doi.org/10.1109/ROBOT.2009.5152678

13. H. Eisenbeiss and L. Zhang, “Comparison of DSMs generated from mini UAV imagery and terrestrial laser scanner in a cultural heritage application,” in Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVI-5, pp. 90–96 (2006).

14. D. Hausamann et al., “Monitoring of gas pipelines—a civil UAV application,” Aircr. Eng. Aerosp. Technol. 77(5), 352–360 (2005). http://dx.doi.org/10.1108/00022660510617077

15. M. Irani and P. Anandan, “About direct methods,” in Vision Algorithms: Theory and Practice, pp. 267–277, Springer, Berlin (2000).

16. H.-Y. Shum and R. Szeliski, “Systems and experiment paper: construction of panoramic image mosaics with global and local alignment,” Int. J. Comput. Vis. 36(2), 101–130 (2000). http://dx.doi.org/10.1023/A:1008195814169

17. R. Szeliski and S. B. Kang, “Direct methods for visual scene reconstruction,” in Proc. of the IEEE Workshop on Representation of Visual Scenes, pp. 26–33, IEEE Computer Society (1995). http://dx.doi.org/10.1109/WVRS.1995.476849

18. D. Capel and A. Zisserman, “Automated mosaicing with super-resolution zoom,” in Proc. of the IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, pp. 885–891, IEEE Computer Society (1998). http://dx.doi.org/10.1109/CVPR.1998.698709

19. G. Drettakis, L. Robert and S. Bougnoux, “Interactive common illumination for computer augmented reality,” Rendering Tech. 97, 45–56 (1997). http://dx.doi.org/10.1007/978-3-7091-6858-5

20. P. F. McLauchlan and A. Jaenicke, “Image mosaicing using sequential bundle adjustment,” Image Vision Comput. 20(9), 751–759 (2002). http://dx.doi.org/10.1016/S0262-8856(02)00064-1

21. M. Brown and D. G. Lowe, “Automatic panoramic image stitching using invariant features,” Int. J. Comput. Vis. 74(1), 59–73 (2007). http://dx.doi.org/10.1007/s11263-006-0002-3

22. C. Harris, “Geometry from visual motion,” in Active Vision, pp. 263–284, MIT Press, Cambridge, Massachusetts (1993).

23. J. Shi and C. Tomasi, “Good features to track,” in Proc. of the Computer Society Conf. on Computer Vision and Pattern Recognition, pp. 593–600, IEEE Computer Society (1994). http://dx.doi.org/10.1109/CVPR.1994.323794

24. T. Suzuki, Y. Amano and T. Hashizume, “Vision based localization of a small UAV for generating a large mosaic image,” in SICE Annual Conf., pp. 2960–2964 (2010).

25. S. E. Chen, “Quicktime VR: an image-based approach to virtual environment navigation,” in Proc. of the 22nd Annual Conf. on Computer Graphics and Interactive Techniques, pp. 29–38 (1995).

26. Microsoft, “Image composite editor,” https://research.microsoft.com/en-us/um/redmond/projects/ice (15 August 2015).

27. Wikipedia, “Comparison of photo stitching software,” https://en.wikipedia.org/wiki/Comparison_of_photo_stitching_software (20 July 2015).

28. F.-J. Mesas-Carrascosa et al., “Assessing optimal flight parameters for generating accurate multispectral orthomosaicks by UAV to support site-specific crop management,” Remote Sens. 7, 12793–12814 (2015). http://dx.doi.org/10.3390/rs71012793

29. M. Brown, “AutoStitch: a new dimension in automatic image stitching,” July 31, 2015, http://matthewalunbrown.com/autostitch/autostitch.html (12 August 2015).


Huaibo Song is an associate professor at the Northwest A&F University in Yangling, China. He received his BSc degree in transportation engineering from Shandong Jiaotong University in 2004 and his PhD degree in electronic engineering from Shandong University in 2009. He is the author of more than 20 journal papers. His current research interests include digital image processing, pattern recognition, and remote sensing image mosaicking. He is currently a visiting scholar at the USDA-ARS Southern Plains Agricultural Research Center in College Station, Texas, USA.

Chenghai Yang is an agricultural engineer with the U.S. Department of Agriculture-Agricultural Research Service’s Aerial Application Technology Research Unit in College Station, Texas, USA. He received his PhD degree in agricultural engineering from the University of Idaho in 1994. His current research is focused on the development and evaluation of remote sensing technologies for detecting and mapping crop pests for precision chemical applications. He has authored or coauthored more than 120 peer-reviewed journal articles and serves on a number of national and international professional organizations.

Biographies for the other authors are not available.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Huaibo Song, Chenghai Yang, Jian Zhang, Wesley C. Hoffmann, Dongjian He, and J. Alex Thomasson, “Comparison of mosaicking techniques for airborne images from consumer-grade cameras,” Journal of Applied Remote Sensing 10(1), 016030 (29 March 2016). https://doi.org/10.1117/1.JRS.10.016030
