Many UAS carry an integrated GPS receiver together with some kind of INS device that acquires the sensor orientation, and these two jointly provide the georeference. However, both GPS and INS data can easily become unavailable for a period of time during a flight, e.g. due to sensor malfunction, transmission problems or jamming. Imagery gathered during such times lacks a georeference. Moreover, even in datasets not affected by such problems, GPS and INS inaccuracies, together with potentially poor knowledge of the ground elevation, can render the location information insufficiently accurate for a given task.
To provide or improve the georeference of an image affected by this, an image-to-reference registration can be performed if a suitable reference is available, e.g. a georeferenced orthophoto covering the area of the image to be georeferenced. Registration, and thus georeferencing, is achieved by determining a transformation between the image to be referenced and the reference which maximizes the coincidence of relevant structures present in both.
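The principle of maximizing coincidence under a transformation can be illustrated with a minimal sketch. The example below is not one of the methods evaluated in this contribution; it is a deliberately simplified, hypothetical illustration that restricts the transformation to an integer translation and uses normalized cross-correlation (NCC) as the coincidence measure, exhaustively searching all offsets of a small image within a synthetic reference. Real registration pipelines would additionally estimate rotation and scale and typically use more robust structures than raw intensities.

```python
# Minimal sketch of intensity-based registration (hypothetical illustration):
# estimate the translation that best aligns an image with a reference by
# maximizing normalized cross-correlation (NCC). All data are synthetic.

def ncc(a, b):
    """Normalized cross-correlation of two equally sized 2-D lists."""
    n = len(a) * len(a[0])
    ma = sum(sum(row) for row in a) / n
    mb = sum(sum(row) for row in b) / n
    num = va = vb = 0.0
    for ra, rb in zip(a, b):
        for x, y in zip(ra, rb):
            num += (x - ma) * (y - mb)
            va += (x - ma) ** 2
            vb += (y - mb) ** 2
    return num / ((va * vb) ** 0.5 + 1e-12)

def register_translation(image, reference):
    """Exhaustively search integer offsets; return the one maximizing NCC."""
    h, w = len(image), len(image[0])
    H, W = len(reference), len(reference[0])
    best_offset, best_score = None, -2.0
    for dy in range(H - h + 1):
        for dx in range(W - w + 1):
            window = [row[dx:dx + w] for row in reference[dy:dy + h]]
            score = ncc(image, window)
            if score > best_score:
                best_offset, best_score = (dy, dx), score
    return best_offset, best_score

# Synthetic reference containing one bright square; the image to be
# "georeferenced" is a crop of the reference taken at offset (2, 3).
reference = [[0.0] * 8 for _ in range(8)]
for y in range(3, 6):
    for x in range(4, 7):
        reference[y][x] = 1.0
image = [row[3:7] for row in reference[2:6]]

offset, score = register_translation(image, reference)
print(offset, round(score, 3))  # recovers the crop offset (2, 3) with NCC 1.0
```

Because the image here is an exact crop of the reference, the NCC peaks at 1.0 at the true offset; the methods compared later differ precisely in how well their coincidence measures cope when image and reference are *not* this similar in appearance.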
Many methods have been developed to accomplish this task. Regardless of their differences, they usually perform better the more similar the image and the reference are in appearance. This contribution evaluates a selection of such methods, all differing in the type of structure they use to assess coincidence, with respect to their ability to tolerate dissimilarity in appearance. Similarity in appearance depends mainly on the following aspects:
the similarity of abstraction levels (Is the reference e.g. an orthophoto or a topographical map?),
the similarity of sensor types and spectral bands (Is the image e.g. a SAR image and the reference a passively sensed one? Was e.g. a NIR sensor used capturing the image while a VIS sensor was used in the reference?),
the similarity of resolutions (Is the ground sampling distance of the reference comparable to the one of the image?),
the similarity of capture parameters (Are e.g. the viewing angles comparable in the image and in the reference?) and
the similarity of image content (Was there e.g. snow coverage present when the image was captured while this was not the case when the reference was captured?).
The evaluation is done by determining the performance of each method on a set of pairs, each consisting of an image to be referenced and a reference, representing various degrees of dissimilarity with respect to each of the above-mentioned aspects of similarity.