1. Introduction

Currently, low-altitude photogrammetry and remote sensing are two of the most rapidly evolving fields.1–3 There are many studies on the radiometric quality of imagery obtained from low altitudes and used in remote sensing products.4–6 In photogrammetry, most of the research is focused on the geometry of images acquired from low altitudes.7–9 Research concerning the accuracy of low-altitude photogrammetric studies does not take into account the impact of radiometric quality.10–12 However, as shown in many studies,13–15 the processing and evaluation of low-altitude data obtained in the visible and near-infrared (NIR) range are still current and significant research problems. Low-altitude NIR imagery can include numerous distortions and radiometric inhomogeneities caused by changes in the radiation source (the Sun), terrain relief, the directionality of radiation reflection or emission from the Earth's surface, and absorption and scattering in the atmosphere. In practice, the weather conditions (clouds and precipitation) and lighting conditions during imaging, as well as the sensitivity of the NIR camera sensor, are also important. However, so far the radiometric quality, which depends mainly on the factors mentioned above, has not been analyzed, especially in the case of NIR images. Previous research proves that there is a need to develop objective radiometric quality indices for images acquired using UAVs.16 Moreover, the radiometric artifacts mentioned above often make the visual analysis and interpretation of images difficult or impossible. In addition, low radiometric quality results in a deterioration of the final accuracy of photogrammetric and remote sensing products. Contemporary photogrammetry and computer vision complement each other: together they provide solutions for three-dimensional modeling, Earth surface mapping, navigation, and camera calibration. Contemporary photogrammetric software relies on computer vision solutions such as dense image matching, which to a large extent enable the automation of low-altitude imagery data processing (Match-AT, Pix4D, Trimble UAS Master, Agisoft Photoscan, and open-source software, e.g., SURE17 and VisualSFM18). However, the available tools are still limited to matching optical images and perform poorly on nonoptical images (e.g., thermal) and on images of low radiometric quality. So far, traditional methods based on matching portions of the image (area-based matching) or matching image features (feature-based matching)19,20 have been commonly used. Newer methods are based on matching every pixel of the image (dense image matching), e.g., using the semiglobal matching-based stereo method.21 Currently applied algorithms are based on structure from motion,22,23 using point descriptors that support the automated detection of corresponding features in the images.24 The following detectors and estimators are usually used to match images: smallest univalue segment assimilating nucleus (SUSAN),25 maximally stable extremal regions (MSER),26 random sample consensus (RANSAC),27 SIFT (scale-invariant feature transform), which finds homologous points that are shifted with respect to each other and differ in scale,28 ASIFT (affine SIFT),29 and SURF (speeded up robust features).30
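To make this pipeline concrete, the sketch below (Python; assuming OpenCV 4.4+, which ships the SIFT implementation, and placeholder file names) shows SIFT detection, Lowe's ratio-test matching, and RANSAC outlier rejection, the building blocks the tools above rely on.

```python
import cv2
import numpy as np

# Two overlapping UAV frames, loaded as grayscale (paths are placeholders).
img1 = cv2.imread("frame_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and compute their descriptors.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with Lowe's ratio test to discard ambiguous matches.
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# RANSAC rejects the remaining outliers while a homography is estimated.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print(f"{int(inliers.sum())} inliers out of {len(good)} candidate matches")
```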
Unfortunately, all these algorithms are inefficient if the scale differences are significant, the images are rotated with respect to each other,31 or there are significant differences in radiometric quality. Development of these algorithms is supported by open-source implementations, e.g., in the OpenCV library.

2. Related Works

Contemporary UAV photogrammetry research is concerned mainly with issues of low-altitude image geometry. The radiometric quality of multispectral photogrammetric images is currently the main topic of numerous research projects worldwide. The best-known project dealing with these issues was initiated by EuroSDR and concerns the essence of the radiometric quality of photogrammetric images.32 In the case of classic multispectral aerial images, the problem of radiometric quality was solved by applying large-format digital aerial cameras with a radiometric resolution of 12 bits, equipped with advanced low-aberration optics. Custom compact cameras on UAVs are usually equipped with arrays of 8-bit radiometric resolution and optics that introduce aberrations. Moreover, the lenses of sensors acquiring NIR images are fitted with orange longpass filters or black IR-only longpass filters, which significantly reduce the radiometric quality of the images. Image quality may be defined using various parameters: radiometric resolution and accuracy represented by the noise level, or ground resolution and sharpness described by the modulation transfer function (MTF).33 The radiometric quality of digital images can be defined as a detailed mapping of local irradiation changes recorded by the imaging system while maintaining a continuum of brightness adequate for the mapped scene. The internal radiometric quality of digital images is formed by the local image contrast, tonal range, random noise, and radiometric resolution.32,34 The radiometric quality of an image is also influenced by the sensor's features, determined by sharpness, contrast, or resolution. The research carried out so far concerns the radiometric quality of images from UAVs only to a limited degree. Published results cover applications of the signal-to-noise ratio of images and the radiometric accuracy and stability of the sensor in relation to the quality of aerial triangulation and digital surface models (DSMs).35–38 Image quality is also described by the MTF,39 the point spread function, and the line spread function.40–42 However, these parameters cannot fully describe image quality,43 and they are mainly used in industrial image processing, satellite imagery quality assessment, and optoelectronic sensor production.44 In digital image processing, image quality assessment is a multiaspect problem that depends on the intended application of the processed image. Images represented as arrays are subject to objective and subjective quality assessments. Most classic methods of image quality assessment are comparative, i.e., the image reconstructed after compression is compared to the original image. Thus, reliable measures of image quality assessment should be investigated.45 The peak signal-to-noise ratio (PSNR) is commonly used as the quality measure.
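For reference, a minimal sketch (Python with NumPy; the array names are placeholders) of how MSE and PSNR are conventionally computed for images on an 8-bit DN scale:

```python
import numpy as np

def mse(reference: np.ndarray, distorted: np.ndarray) -> float:
    """Mean square error between two images of equal shape."""
    diff = reference.astype(np.float64) - distorted.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in decibels for 8-bit images (peak DN = 255)."""
    err = mse(reference, distorted)
    if err == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / err)
```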
The mean square error (MSE),46 i.e., the average squared difference between corresponding pixels of two images, is another index often used for the quality assessment of video data.47 The structural similarity (SSIM) index,46 which takes into account three types of image distortion (luminance, contrast, and structure), is often used for the assessment of image signal quality. For an original signal $x$ and a distorted signal $y$, applying the SSIM index to image fragments defined by an $N \times N$ pixel mask yields an image quality map whose resolution is smaller than that of the image by $N-1$ rows and columns. It is recommended to apply a two-dimensional Gaussian window of size $11 \times 11$.46
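Rather than reimplementing the windowed statistics, the SSIM map can be obtained with an existing implementation; a minimal sketch (Python, assuming scikit-image is available), with the Gaussian-weighted 11 × 11 window of Wang et al.46 enabled explicitly:

```python
import numpy as np
from skimage.metrics import structural_similarity

# Two 8-bit grayscale images of equal shape (placeholder random arrays).
reference = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
distorted = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

# Mean SSIM plus the per-pixel quality map; gaussian_weights=True with
# sigma=1.5 reproduces the windowing recommended by Wang et al.
score, ssim_map = structural_similarity(
    reference,
    distorted,
    data_range=255,
    gaussian_weights=True,
    sigma=1.5,
    use_sample_covariance=False,
    full=True,
)
print(f"mean SSIM: {score:.3f}; map shape: {ssim_map.shape}")
```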
3. Proposed Method

The paper presents a new index, $W_{NIR}$, for the objective quality assessment of NIR images acquired from low altitudes in various weather and lighting conditions. Since there are no objective methods for the quality assessment of digital NIR images acquired from low altitudes, the authors claim that the developed method will increase the reliability of photogrammetric studies based on NIR data from UAVs. To support this claim, the authors designed an experiment for determining and analyzing statistical values of images acquired in the NIR range (690 to 1050 nm). Image processing and statistical algorithms were used to analyze the images and determine the objective quality index, and the analysis of the results was based on traditional measures applied in image quality assessment. The proposed method makes it possible to develop an objective image quality index, and it was shown that the authors' index allows images to be classified into three groups of radiometric quality.

4. Data Acquisition

4.1. Test Areas

The low-altitude NIR images were acquired from four test areas (Fig. 1). The first test area (Liwiec) was located in north-eastern Poland. The terrain is flat and partly forested, with scattered buildings. The low-altitude imagery data were acquired during several flight missions in various weather and lighting conditions in July 2015. The second test area (Opatow) was covered in agricultural fields. The terrain around the fields is undulating and covered by numerous cultivated fields, grassland, and low shrubs, with single buildings. There is a lake of about 2.5 ha in the central part of the test area. The northern, western, and southern shores of the lake are steep with rocky cliffs; the eastern shore is gentle and situated close to a local road. There is a small forest and single trees north of the lake. The imagery data were acquired in July 2015. The third test area (Nadarzyce) was a flat area with a low level of urbanization, characterized by moderate amounts of forest, sparsely scattered buildings, agricultural fields, and a lake in the central part of the imaged area. The fourth test area (Tylicz) was located close to a village. The terrain there has significantly more diverse orography (greater altitude differences). The acquired images encompassed a fragment of a hill with a ski lift (southern part) and a village with a closely built-up area (northern part). There are forests along the southern side of the river bed. The hill was covered with low grass, while the ski slope was covered with snow. The built-up area in the northern part of the study area has a low level of urban development, with detached houses, road infrastructure, single trees, shrubs, and prevailing grass. The photogrammetric flight was carried out in March 2016. All the images of the four test areas were taken between 11 am and 3 pm.

4.2. Description of the Platform and Sensors

The NIR images were acquired using the fixed-wing Trimble UX5, which provides fully automated take-off, flight, and landing control. Its flight time (endurance) is 50 min. It is a UAV customized for high-ground-resolution imagery data acquisition; the ground dimensions of the pixel and the area covered in the image increase with the flight altitude. The UX5 is highly rated among mini UAVs. The onboard GPS/INS system allows fully autonomous flight at a tasked altitude and coverage along and across the flight route, and the flight route may be monitored in real time by the flight controller. The UX5 platform automatically releases the camera shutter for the acquisition of imagery of the Earth's surface. The UAV takes off from a dedicated launcher. The system may be operated at wind speeds up to the manufacturer-specified limit and in weather conditions no worse than light precipitation. A Sony NEX-5T camera was installed onboard the Trimble UX5 for imagery data acquisition. It is a compact digital camera equipped with a 16.1-megapixel CMOS array. It enables a ground sampling distance (GSD) of 0.024 to 0.24 m depending on the flight altitude of 75 to 750 m. The images are recorded in JPEG format with lossy data compression. The camera was equipped with a Voigtlander lens with a fixed focal length of 15 mm.48 The NEX-5T camera was modified to acquire images in the full range of the array's sensitivity: the filter originally located in front of the array, which limited the spectral range to 690 nm, was removed, so the sensor records radiation up to a maximum wavelength of about 1050 nm. A black filter (B+W 092) that blocks visible light, restricting sensor sensitivity to the NIR range of 690 to 1050 nm only,49,50 was applied.

4.3. Flight Campaigns

The photogrammetric flights were conducted in various weather and lighting conditions at altitudes of 75 to 650 m over four test areas of diverse topography. Application of the filter means that the images contain only NIR wavelengths: the blue pixels (band 3) record nothing but pure NIR (roughly 800 to 1050 nm), while the red band (band 1) in the images obtained with this filter is, in fact, the red edge, roughly from 690 to 770 nm.49 All the acquired images were recorded with an 8-bit radiometric resolution. Table 1 describes the imaging data used to develop the quality index, and Table 2 the image data used to verify it.

Table 1 Characteristics of the four flight campaigns: experiment data.
Table 2 Characteristics of the four flight campaigns: verification data.
All flights were planned using the Trimble Aerial Imaging software. The lens focus was set to infinity, while the ISO sensitivity was set to AUTO. The weather and lighting conditions for the specific measurement campaigns were characterized according to criteria summarized in Tables 1 and 2.
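As a cross-check of the GSD range quoted in Sec. 4.2, the familiar nadir relation GSD = pixel pitch × H / f reproduces the stated values; in the sketch below (Python), the ~4.8-μm pixel pitch is an assumption derived from the NEX-5T's roughly 23.5-mm-wide APS-C array and an assumed 4912-pixel image width.

```python
# Nadir ground sampling distance: GSD = pixel_pitch * altitude / focal_length.
SENSOR_WIDTH_M = 23.5e-3   # assumed APS-C sensor width
IMAGE_WIDTH_PX = 4912      # assumed NEX-5T image width
FOCAL_LENGTH_M = 15e-3     # Voigtlander 15-mm lens (Sec. 4.2)

pixel_pitch = SENSOR_WIDTH_M / IMAGE_WIDTH_PX  # ~4.8e-6 m

for altitude_m in (75, 300, 650, 750):
    gsd = pixel_pitch * altitude_m / FOCAL_LENGTH_M
    print(f"H = {altitude_m:3d} m -> GSD = {gsd:.3f} m")
# Prints ~0.024 m at 75 m and ~0.239 m at 750 m, matching Sec. 4.2.
```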
5. Experiments

The research consisted of three phases. The first phase of the experiment was based on the method proposed by Kędzierski and Wierzbicki.16 Each test image was divided into 100 equal segments. According to the characteristics of the data recording, band 1 is the red band (roughly 690 to 770 nm), band 2 is the green band, and band 3 contains only NIR, where the blue pixels record nothing but pure NIR (roughly 800 to 1050 nm).49 In this configuration, pixels covered by a blue filter in the Bayer CFA receive only NIR.50 The mean value and standard deviation (SD) of the pixel brightness were determined for each band of the images. In the second phase, the same statistics, the mean value and SD, were determined for each of the 100 image fragments. The SD values are related to (but not limited to) the state of crops, spatial variability, and shadows. In the third phase, the value of the proposed new image quality index, $W_{NIR}$, was determined for all test images, and its immunity to a false assessment of the radiometric quality of images was studied.

5.1. Examination of the Gridded Standard Deviation Maps: Analysis of the SD Value Distribution

Each segment measured one-tenth of the image width and height. The distribution of standard deviation values is presented as a grid schema in which each fragment is represented by a square.
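A minimal sketch (Python with NumPy; a placeholder array stands in for a real band) of how such a 10 × 10 grid of per-segment brightness statistics can be computed:

```python
import numpy as np

def gridded_stats(band: np.ndarray, grid: int = 10):
    """Mean and SD of pixel brightness (DN) for each cell of a grid x grid partition."""
    h, w = band.shape
    h, w = h - h % grid, w - w % grid            # trim so both sides divide evenly
    cells = band[:h, :w].reshape(grid, h // grid, grid, w // grid)
    return cells.mean(axis=(1, 3)), cells.std(axis=(1, 3))

# Example: SD map of the NIR band (band 3) of one 8-bit test image.
nir = np.random.randint(0, 256, (3264, 4912), dtype=np.uint8)  # placeholder image
_, sd_map = gridded_stats(nir)
print(sd_map.shape)   # (10, 10): one SD value per segment
```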
Figures 2–4 present the digital number (DN) and SD value distributions as graphical representations of these calculations; the colors result from an interpolation of the SD values calculated for the center of each segment. Figure 2 contains an analysis of the NIR images acquired in favorable lighting conditions; all of these images were acquired during sunny days with clear skies. The analysis of the SD value distribution shows that the SD values range from 11 to 50 DN over more than 30% of the image area for flight mission I, from 16 to 50 DN over more than 50% of the image area for flight mission VII, and from 18 to 45 DN over the majority of the image area, with a maximum value of 54 DN, for flight mission VIII. Figure 3 contains an analysis of the NIR images acquired in moderate lighting conditions. The SD values range from 11 to 37 DN over more than 50% of the image area for flight mission II, and from 16 to 47 DN over more than 50% of the image area for flight mission IV (with 2 to 6 DN over most of the remaining area). For flight mission VI, over 50% of the SD values were within 12 to 30 DN for the majority of the image area, with a maximum value of 38 DN. Figure 4 contains an analysis of the NIR images acquired in adverse lighting conditions: the SD values range from 7 to 13 DN over more than 50% of the image area for flight mission III. The gridded statistics refer to the initial research and show only the dependence of the SD on the fragment of the test image; based on previous studies, the assumed dependence of image quality on the SD was transferred to the whole image.16

5.2. Examination of the SD of G-R-NIR by Image Sequence

Using the initial results, the SD values were then determined for all images in the three bands. The image number is the successive number of the image obtained during a photogrammetric flight over a study area (Figs. 5–8). For images of flight mission I (Fig. 5), the SD values were 10.1 to 36.9 DN (mean 16.3 DN) for band 3 (pure NIR), 14.1 to 50.4 DN (mean 24.1 DN) for band 2, and 2.2 to 60.4 DN (mean 18.0 DN) for band 1. For images of flight mission II (Fig. 6), the SD values were 20.8 to 67.9 DN (mean 33.2 DN) for band 3 (pure NIR), 18.8 to 59.7 DN (mean 27.6 DN) for band 2, and 19.0 to 66.3 DN (mean 32.0 DN) for band 1. For images of flight mission III (Fig. 7), the SD values were 4.3 to 45.7 DN (mean 28.1 DN) for band 3 (pure NIR), 1.9 to 18.0 DN (mean 6.1 DN) for band 2, and 5.4 to 66.0 DN (mean 34.8 DN) for band 1. For images of flight mission IV (Fig. 8), the SD values were 21.6 to 75.1 DN (mean 31.9 DN) for band 3 (pure NIR), 18.9 to 61.2 DN (mean 26.6 DN) for band 2, and 20.0 to 77.7 DN (mean 30.1 DN) for band 1. The conducted experiments indicated that the SD values (Figs. 5, 6, and 8) for images acquired in favorable weather conditions (clear sky or few clouds) are high (about 30 DN) for band 3 (pure NIR), whereas the values for images acquired in moderate or adverse weather conditions are lower by about 30%. Additionally, the SD values for band 2 are significantly lower (by about 50%) than for the other bands, which is an additional confirmation that the images were acquired in worse weather and lighting conditions (Figs. 6 and 8). The SD thus indicates images of potentially poor radiometric quality. The regular peaks visible in the plots (Figs. 5–8) result from the turning maneuvers of the UAV: at the moment of such a maneuver, there is a sudden change in the direction and angle of imaging, which leads to a sudden deterioration of image quality. The observed relations are closely related to Rayleigh scattering, for which the scattering coefficient, and hence the scattered light intensity, is inversely proportional to the fourth power of the wavelength. Scattering of radiation at the 1000-nm wavelength in the NIR band is therefore about 40 times weaker than that of blue light at the 400-nm wavelength [$(1000/400)^4 \approx 39$]. Observations of the test images indicate that the influence of Rayleigh scattering is reduced in the NIR band, and therefore the amount of information is increased, whereas information in the visible band is scattered by fog or precipitation. Using these observations and experience from previous experiments,16 the authors developed the following equation for determining the value of the radiometric quality index of NIR images acquired from low altitudes:

$$W_{NIR} = \frac{\sum_{i=1}^{n} \frac{SD_i}{\mu_i}\, w_i}{\sum_{i=1}^{n} w_i}, \quad (1)$$

where $W_{NIR}$ is the NIR radiometric quality index, $\mu_i$ is the average pixel value of a given band [DN on an 8-bit scale (0 to 255)], $SD_i$ is the standard deviation of the pixel brightness of a given band, $w_i$ is the weight of a given band determined empirically on the basis of the relative luminance value, $i$ is the band sequence number, and $n$ is the number of bands. The relative luminance of an image may be defined as the result of combining the three bands; a conversion of the color information from RGB to luminance was made.
The luminance $Y$ can be calculated from linear RGB components as follows:51

$$Y = 0.2126\,R + 0.7152\,G + 0.0722\,B, \quad (2)$$

where $Y$ is the image luminance and $R$, $G$, and $B$ are the red, green, and blue bands of the image. For further considerations, it was assumed that the red and NIR components carry the most information. Figures 5–8 illustrate the relationship between the SD values in the individual bands and the content of the images acquired in each area. Based on this analysis and the luminance value, the image quality index $W_{NIR}$ was formulated. For the test images, the weights were therefore assumed in a different order than in the classic relative luminance distribution: the values of the three weights (0.2126, 0.0722, and 0.7152 for bands 1, 2, and 3, respectively) were determined empirically using the modified luminance value. The weight values result from the transmission of each of the bands in the Sony NEX-5T camera equipped with the black IR-only longpass filter (Fig. 9): with this filter, the transmission is lowest for band 2 (G), which therefore receives the lowest weight, followed by band 1 (R), and is highest for band 3 (NIR). The graph in Fig. 9 gives a rough idea of the cut-on and cut-off wavelengths of the camera bands and the black filter; in practice, the black filter may transmit residual amounts of green and red radiation. The modified equation for the image quality index then takes the following form:

$$W_{NIR} = 0.2126\,\frac{SD_R}{\mu_R} + 0.0722\,\frac{SD_G}{\mu_G} + 0.7152\,\frac{SD_{NIR}}{\mu_{NIR}}. \quad (3)$$

Note that the denominator in Eq. (1) equals 1 because $\sum_{i=1}^{n} w_i = 1$.
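A short sketch (Python with NumPy) of the per-image index computation, assuming Eq. (3) as reconstructed above and an 8-bit three-band array ordered band 1 (red edge), band 2 (green), band 3 (pure NIR):

```python
import numpy as np

# Empirical weights from the modified relative luminance (bands 1, 2, 3).
WEIGHTS = np.array([0.2126, 0.0722, 0.7152])

def w_nir(image: np.ndarray) -> float:
    """Radiometric quality index per Eq. (3) for an (H, W, 3) R-G-NIR image."""
    bands = image.reshape(-1, 3).astype(np.float64)
    mu = bands.mean(axis=0)   # average DN per band, 8-bit scale (0 to 255)
    sd = bands.std(axis=0)    # SD of the pixel brightness per band
    # The denominator sum(WEIGHTS) equals 1 by construction, as noted above.
    return float(np.sum(WEIGHTS * sd / mu) / WEIGHTS.sum())

# Example on a placeholder image.
img = np.random.randint(0, 256, (3264, 4912, 3), dtype=np.uint8)
print(f"W_NIR = {w_nir(img):.2f}")
```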
5.3. Examination of the $W_{NIR}$ Metric Using the "Experimentation" Image Sequence

Figures 10–13 present the $W_{NIR}$ index diagrams for each sample of images. Analysis of Fig. 10 shows that the index value for this data sample (images acquired in favorable weather conditions) is in the range from 3.4 to 29.1 and does not exceed 10.0 for the majority of the images. The index value for images acquired in moderate weather conditions (clouds and mist) is in the range from 2.2 to 8.4, with an average of 5.4 (Fig. 11). The index value for images acquired in adverse weather conditions (clouds, mist, and light rain) is in the range from 1.2 to 5.0, with an average of 2.9 (Fig. 12). The index value for images acquired in moderate weather conditions (few clouds) is in the range from 1.9 to 8.2, with an average of 5.6 (Fig. 13). The results of a visual analysis of the images and of their histograms for the specific bands were used to determine the index value ranges for classifying the images as being of good, medium, or low radiometric quality; images of low radiometric quality should be rejected from further photogrammetric processing. The index value ranges (Table 3) were determined empirically, using the results compiled in the graphs (Figs. 10–13) to set the intervals.

Table 3 Classification of the radiometric quality of images with respect to the $W_{NIR}$ value.

The analysis showed that it is difficult to define a single-value limit between images of good and medium radiometric quality. This is probably a result of the way the sensor records NIR images, which leads to the conclusion that the NIR information in the test images was recorded not only in band 3 but probably, in part, also in the two other bands. Moreover, the type of ground surface in the photographed areas (many of which can be characterized as highly humid) affected the signal in the NIR range; as a result, lower DN values occurred in the images, i.e., there were local decreases in brightness.

6. Results

6.1. Examination of the $W_{NIR}$ Metric Using the "Validation" Image Sequence

In order to verify the proposed NIR image quality index, a separate set of test data was used to calculate the index values. A set of data acquired during flight mission V in the area of Nadarzyce at an altitude of 650 m was used as the first verification sample. Most of the images were acquired in favorable weather conditions with few clouds (moderate weather). The index value for images acquired in these conditions (Fig. 14) is in the range from 4.4 to 17.0. Using the index, 212 images of the test sample may be assigned to the good radiometric quality category, but half may also be classified as medium radiometric quality images. This ambiguity may be caused by the properties (heterogeneous texture) of the imaged area and demonstrates a vulnerability of the index. Another verification sample contained imagery data acquired during flight mission VI from an altitude of 300 m in the area of Opatow in moderate weather conditions. The index value was determined for a sample of 121 images and is in the range from 2.9 to 7.5, with an average of 5.1 (Fig. 15). In this sample, 76% of the images may be classified as being of medium radiometric quality, 20% of low quality, and the remaining 4% of good quality. The index value for images acquired during flight mission VII is in the range from 1.2 to 8.9 (Fig. 16). The results of the visual analysis of the images and the determined index values classify 1010 images as medium, 29 as good, and 260 as low radiometric quality; in most cases, the images classified into the last group covered water or forested areas. The index value for images acquired in favorable lighting conditions (Fig. 17) is in the range from 2.0 to 7.5, with an average of 4.8. The results of the test data analysis and the comparison with the determined index values classify 291 images as medium, 4 as good, and 50 as low radiometric quality. As in the previous sample, the low radiometric quality images covered forested areas and water zones.

6.2. Verification of the $W_{NIR}$ Metric Using Another Image Quality Index

The SSIM index was chosen for the verification of the $W_{NIR}$ index because it is similar to the method of visual quality assessment and is partly based on statistical values of the compared images. Twenty sample images for each of three representative areas were selected for the tests: agricultural fields (I), urban area (II), and forest (III). The selected images were obtained as part of flight mission I.
The first group of images (I) contained images acquired over cultivated areas, fields, and wasteland; acquisition angles ranged from 1.1 deg to 3.1 deg. The second group (II) contained data acquired over inhabited areas with terraced and scattered detached housing; acquisition angles ranged from 2.2 deg to 4.3 deg. The third group (III) contained images acquired over forested, wooded, and bushy areas; acquisition angles ranged from 1.3 deg to 4.7 deg. These areas were selected on the basis of the visual assessment and the $W_{NIR}$ index value. The SSIM index values (Fig. 18) determined for the three categories of 20 images each confirm the efficiency of the authors' index for classifying images into specific groups. The determined SSIM index values confirmed the assumption that the quality of images of forested areas would be the lowest. An image of very good quality was selected as the reference for the tests. The average similarity between the reference image and the images classified into the three representative groups was 73% (agricultural), 62% (low urban area), and 56% (forest). This is convergent with the predictions of the new NIR image quality index, $W_{NIR}$.

7. Discussion

The presented experiments and their results prove the efficiency of the developed index for the radiometric quality assessment of images acquired from low altitudes. Analyzing the numerical value of the SD, the authors observed that as it increased, the radiometric quality of the images improved. As proven on the basis of the conducted research, the developed index of radiometric quality assessment may be applied to low-altitude NIR images of agricultural areas. In order to confirm the validity of the proposed index, the authors compared the obtained results with similar analyses performed using the SSIM index. The new index is dedicated to assessing the quality of images in photogrammetric and remote sensing UAV studies. The classification of UAV image quality according to the new index has the potential to increase the accuracy of the UAV workflow. The proposed method of quality assessment of NIR imagery can help identify images that yield a smaller number of tie points or a lower accuracy of remote sensing classification, which is directly applicable in the field of precision agriculture. The developed method is limited to NIR images, and it is time consuming for data samples containing a few thousand images. The research results and the work of other researchers35,52,53 indicate that the proposed method of quality assessment of NIR images for UAV photogrammetry and remote sensing will help reject low radiometric quality images from photogrammetric processing. This will make it possible to increase the quality and accuracy of photogrammetric studies, especially for precision agriculture and other remote sensing applications. The acquired imagery data may be subject to an initial assessment of whether a valid photogrammetric or remote sensing study is feasible or whether a repeated flight in better weather conditions is required. Based on the defined index value ranges, it will be possible to identify images of potentially low radiometric quality, which in turn can have an impact on the accuracy of photogrammetric products, especially in precision agriculture. The topic of radiometric quality is of great concern for the implementation of photogrammetric and digital image processing.
The proposed index can be used in a vast number of ways; one of these is to detect and eliminate poor quality images acquired using a UAV before photogrammetric processing. The quality index has been tested for typical agricultural areas; the effectiveness of the indicator for highly urbanized areas requires further research.

8. Conclusions

The results of developing an objective index for the quality assessment of NIR images acquired from low altitudes for photogrammetric and remote sensing studies are presented and discussed in the paper. Almost 5000 images acquired in various weather and lighting conditions from altitudes of 75 to 650 m were analyzed in the experiments. The results of the tests made it possible to develop an objective index for the quality assessment of NIR images acquired from low altitudes on the basis of statistical analyses of the images and a derived relationship between image quality and relative luminance. Further research by the authors will focus on developing image quality assessment indices for images acquired from UAVs in other spectral ranges.

Acknowledgments

This paper was supported by a grant cofinanced by the Military University of Technology, Faculty of Civil Engineering and Geodesy, Geodesy Institute.

References
1. G. S. C. Avellar et al., "Multi-UAV routing for area coverage and remote sensing with minimum time," Sensors 15, 27783–27803 (2015). http://dx.doi.org/10.3390/s151127783
2. M. Shahbazi et al., "Development and evaluation of a UAV-photogrammetry system for precise 3D environmental modeling," Sensors 15, 27493–27524 (2015). http://dx.doi.org/10.3390/s151127493
3. I. Colomina and P. Molina, "Unmanned aerial systems for photogrammetry and remote sensing: a review," ISPRS J. Photogramm. Remote Sens. 92, 79–97 (2014). http://dx.doi.org/10.1016/j.isprsjprs.2014.02.013
4. R. Hruska et al., "Radiometric and geometric analysis of hyperspectral imagery acquired from an unmanned aerial vehicle," Remote Sens. 4(9), 2736–2752 (2012). http://dx.doi.org/10.3390/rs4092736
5. J. Kelcey and A. Lucieer, "Sensor correction and radiometric calibration of a 6-band multispectral imaging sensor for UAV remote sensing," in Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 393–398 (2012).
6. A. Orych et al., "Impact of the cameras radiometric resolution on the accuracy of determining spectral reflectance coefficients," in Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 347 (2014).
7. D. Turner, A. Lucieer, and C. Watson, "An automated technique for generating georectified mosaics from ultra-high resolution unmanned aerial vehicle (UAV) imagery, based on structure from motion (SfM) point clouds," Remote Sens. 4, 1392–1410 (2012). http://dx.doi.org/10.3390/rs4051392
8. M. Kedzierski et al., "Detection of gross errors in the elements of exterior orientation of low-cost UAV images," in Baltic Geodetic Congress (Geomatics), 95–100 (2016). http://dx.doi.org/10.1109/BGC.Geomatics.2016.26
9. A. Fryskowska et al., "Calibration of low cost RGB and NIR UAV cameras," in Int. Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 817–821 (2016).
10. S. Mikrut, "Classical photogrammetry and UAV–selected aspects," in Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 947–952 (2016).
11. P. Burdziakowski et al., "A modern approach to an unmanned vehicle navigation," in 16th Int. Multidisciplinary Scientific GeoConf. SGEM 2016, 747–758 (2016).
12. M. Przyborski et al., "Photogrammetric development of the threshold water at the dam on the Vistula River in Wloclawek from unmanned aerial vehicles (UAV)," in SGEM2015 Conf. Proc. (2015).
13. E. R. Hunt et al., "Acquisition of NIR-green-blue digital photographs from unmanned aircraft for crop monitoring," Remote Sens. 2(1), 290–305 (2010). http://dx.doi.org/10.3390/rs2010290
14. F. Bachmann et al., "Micro UAV based georeferenced orthophoto generation in VIS + NIR for precision agriculture," in Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 11–16 (2013).
15. F. Garcia-Ruiz et al., "Comparison of two aerial imaging platforms for identification of Huanglongbing-infected citrus trees," Comput. Electron. Agric. 91, 106–115 (2013). http://dx.doi.org/10.1016/j.compag.2012.12.002
16. M. Kędzierski and D. Wierzbicki, "Radiometric quality assessment of images acquired by UAV's in various lighting and weather conditions," Measurement 76, 156–169 (2015). http://dx.doi.org/10.1016/j.measurement.2015.08.003
17. M. R. K. Wenzel, "SURE—photogrammetric surface reconstruction from imagery," http://www.ifp.uni-stuttgart.de/publications/software/sure/index.en.html (accessed January 2017).
18. C. Wu, "VisualSFM: a visual structure from motion system," http://ccwu.me/vsfm/ (accessed January 2017).
19. W. Förstner and E. Gülch, "A fast operator for detection and precise location of distinct points, corners and centers of circular features," in Proc. of the ISPRS Intercommission Workshop on Fast Processing of Photogrammetric Data, 281–305 (1987).
20. A. Grün, "Adaptive least squares correlation: a powerful image matching technique," S. Afr. J. Photogramm. Remote Sens. Cartography 14(3), 175–187 (1985).
21. H. Hirschmuller, "Stereo processing by semiglobal matching and mutual information," IEEE Trans. Pattern Anal. Mach. Intell. 30(2), 328–341 (2008). http://dx.doi.org/10.1109/TPAMI.2007.1166
22. B. K. P. Horn, "Relative orientation," Int. J. Comput. Vision 4, 59–78 (1990). http://dx.doi.org/10.1007/BF00137443
23. D. P. Robertson and R. M. Varga, "Structure from motion," in Practical Image Processing and Computer Vision, Ch. 13, pp. 1–49, Halsted Press, New York (2009).
24. D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vision 60(2), 91–110 (2004). http://dx.doi.org/10.1023/B:VISI.0000029664.99615.94
25. S. M. Smith and J. M. Brady, "SUSAN—a new approach to low level image processing," Int. J. Comput. Vision 23(1), 45–78 (1997). http://dx.doi.org/10.1023/A:1007963824710
26. J. Matas et al., "Robust wide-baseline stereo from maximally stable extremal regions," Image Vision Comput. 22(10), 761–767 (2004). http://dx.doi.org/10.1016/j.imavis.2004.02.006
27. M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Commun. ACM 24, 381–395 (1981). http://dx.doi.org/10.1145/358669.358692
28. D. G. Lowe, "Object recognition from local scale-invariant features," in Proc. of the Seventh IEEE Int. Conf. on Computer Vision, 1150–1157 (1999). http://dx.doi.org/10.1109/ICCV.1999.790410
29. J. M. Morel and G. Yu, "ASIFT: a new framework for fully affine invariant image comparison," SIAM J. Imag. Sci. 2(2), 438–469 (2009). http://dx.doi.org/10.1137/080732730
30. H. Bay et al., "Speeded-up robust features (SURF)," Comput. Vision Image Understanding 110(3), 346–359 (2008). http://dx.doi.org/10.1016/j.cviu.2007.09.014
31. S. Zancajo-Blazquez et al., "An automatic image-based modelling method applied to forensic infography," PLoS One 10, e0118719 (2015). http://dx.doi.org/10.1371/journal.pone.0118719
32. E. Honkavaara et al., "The EuroSDR project 'Radiometric aspects of digital photogrammetric images'—results of the empirical phase," in Int. Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (2011).
33. M. Crespi and L. De Vendictis, "A procedure for high resolution satellite imagery quality assessment," Sensors 9, 3289–3313 (2009). http://dx.doi.org/10.3390/s90503289
34. K. Pyka, The Use of Wavelets for Evaluation of Loss in Radiometric Quality in the Orthophoto Mosaicking Process, AGH UWND, Kraków, Poland (2005).
35. N. Haala, M. Cramer, and M. Rothermel, "Quality of 3D point clouds from highly overlapping UAV imagery," in Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 183–188 (2013).
36. N. Haala, "Comeback of digital image matching," in Photogrammetric Week '09, 289–301, Wichmann Verlag, Heidelberg, Germany (2009).
37. T. Rosnell and E. Honkavaara, "Point cloud generation from aerial image data acquired by a quadrocopter type micro unmanned aerial vehicle and a digital still camera," Sensors 12(1), 453–480 (2012). http://dx.doi.org/10.3390/s120100453
38. M. Rothermel and N. Haala, "Potential of dense matching for the generation of high quality digital elevation models," in ISPRS Hannover Workshop 2011: High-Resolution Earth Imaging for Geospatial Information (2011).
39. F. D. Java, F. Samadzadegan, and P. Reinartz, "Spatial quality assessment of pan-sharpened high resolution satellite imagery based on an automatically estimated edge based metric," Remote Sens. 5, 6539–6559 (2013). http://dx.doi.org/10.3390/rs5126539
40. E. Honkavaara et al., "Hyperspectral reflectance signatures and point clouds for precision agriculture by light weight UAV imaging system," in ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences (2012).
41. E. Honkavaara et al., "Processing and assessment of spectrometric, stereoscopic imagery collected using a lightweight UAV spectral camera for precision agriculture," Remote Sens. 5, 5006–5039 (2013). http://dx.doi.org/10.3390/rs5105006
42. E. Honkavaara, Calibrating Digital Photogrammetric Airborne Imaging Systems Using a Test Field, Finnish Geodetic Institute, Helsinki University of Technology, Espoo, Finland (2008).
43. T. Kim, H. Kim, and H. Kim, "Image-based estimation and validation of NIIRS for high-resolution satellite images," in Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 1–4 (2008).
44. L. Li, H. Luo, and H. Zhu, "Estimation of the image interpretability of ZY-3 sensor corrected panchromatic nadir data," Remote Sens. 6(5), 4409–4429 (2014). http://dx.doi.org/10.3390/rs6054409
45. Q. Huynh-Thu and M. Ghanbari, "Scope of validity of PSNR in image/video quality assessment," Electron. Lett. 44(13), 800 (2008). http://dx.doi.org/10.1049/el:20080522
46. Z. Wang et al., "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Process. 13, 600–612 (2004). http://dx.doi.org/10.1109/TIP.2003.819861
47. A. Bhat, I. Richardson, and S. Kannangara, "A new perceptual quality metric for compressed video," in IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP '09) (2009). http://dx.doi.org/10.1109/ICASSP.2009.4959738
48. M. Kedzierski, A. Fryśkowska, and D. Wierzbicki, Photogrammetric Studies from Low Altitude, pp. 13–14, WAT, Warsaw (2014).
49. Trimble UAS, "Trimble UX5 aerial imaging solution vegetation monitoring frequently asked questions," http://surveypartners.trimble.com (2013, accessed April 2017).
50. K. Pauly, "Towards calibrated vegetation indices from UAS-derived orthomosaics," in Proc. of the 13th Int. Conf. on Precision Agriculture (2016).
51. S. A. Genchi et al., "Structure-from-motion approach for characterization of bioerosion patterns using UAV imagery," Sensors 15(2), 3593–3609 (2015). http://dx.doi.org/10.3390/s150203593
52. F. J. Mesas-Carrascosa et al., "Accurate ortho-mosaicked six-band multispectral UAV images as affected by mission planning for precision agriculture proposes," Int. J. Remote Sens. 38(8–10), 2161–2176 (2016). http://dx.doi.org/10.1080/01431161.2016.1249311
53. Y. Yang, Z. Lin, and F. Liu, "Stable imaging and accuracy issues of low-altitude unmanned aerial vehicle photogrammetry systems," Remote Sens. 8(4), 316 (2016). http://dx.doi.org/10.3390/rs8040316
Biography

Damian Wierzbicki is an assistant professor at the Military University of Technology. He received his PhD in photogrammetry from the Military University of Technology in 2015. His current research interests include UAV photogrammetry and digital image processing. He is the author of several articles and papers presented at national and international conferences.

Anna Fryskowska is an assistant professor at the Military University of Technology. She received her PhD in photogrammetry from the Military University of Technology in 2013. Her current research interests include terrestrial laser scanning and UAV photogrammetry. She is the author of several articles and papers presented at national and international conferences.

Michal Kedzierski is a professor at the Military University of Technology. His research interests are UAV photogrammetry, digital image processing, and airborne and terrestrial laser scanning. He is the author of a few dozen articles and papers presented at national and international conferences.

Michalina Wojtkowska is an assistant professor at the Military University of Technology. She received her PhD in photogrammetry from the Military University of Technology in 2014. Her current research interests include terrestrial laser scanning.

Paulina Delis is an assistant professor at the Military University of Technology. She received her PhD in photogrammetry from the Military University of Technology in 2016. Her current research interests include UAV photogrammetry and digital image processing. She is the author of several articles and papers presented at national and international conferences.