Commercial off-the-shelf systems of UAVs and sensors are touted as being able to collect remote-sensing data on crops, including spectral reflectance and plant height. Historically, a great deal of effort has gone into quantifying and reducing error in the geometry of UAV-based orthomosaics, but little effort has gone into quantifying and reducing error in reflectance and plant height. We have been developing systems and protocols involving multifunctional ground-control points (GCPs) to produce crop phenotypic data that are as repeatable as possible. These multifunctional GCPs aid not only geometric correction but also image calibration of reflectance and plant height. The GCPs have known spectral-reflectance characteristics that enable reference-based digital-number-to-reflectance calibration of multispectral images, and known platform heights that enable reference-based calibration of digital surface models into height maps. Results show that using these GCPs for reflectance and plant-height calibration significantly reduces error in reflectance (ca. 50% reduction) and plant-height (ca. 20% reduction) measurements.
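The digital-number-to-reflectance step described above is commonly done with an empirical line fit; the sketch below illustrates that general idea under assumed panel values, not the abstract's actual system or data.

```python
import numpy as np

# Hypothetical sketch of reference-based empirical-line calibration:
# GCP reference panels with known reflectance define, per band, a
# linear model mapping digital numbers (DN) to reflectance.

def fit_empirical_line(panel_dn, panel_reflectance):
    """Fit gain/offset so that reflectance ~= gain * DN + offset."""
    gain, offset = np.polyfit(panel_dn, panel_reflectance, deg=1)
    return gain, offset

def calibrate(dn_image, gain, offset):
    """Apply the fitted linear model to a whole DN image."""
    return gain * np.asarray(dn_image, dtype=float) + offset

# Illustrative values only: two panels of known reflectance (0.05 and
# 0.50) observed at DN 20 and 200 in one band.
gain, offset = fit_empirical_line([20.0, 200.0], [0.05, 0.50])
refl = calibrate([[20.0, 110.0], [200.0, 20.0]], gain, offset)
```

With only two reference panels the fit is exact at the panels; more panels spanning dark to bright targets make the fit robust to sensor noise.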
Unmanned Aerial Systems (UAS) are becoming a popular choice for acquiring fine-spatial-resolution images for precision agriculture applications. Compared to other remote sensing platforms, UAS can acquire image data at relatively lower cost, with finer spatial resolution and a more flexible schedule. In recent years, multispectral sensors that capture near-infrared (NIR) and red-edge reflectance have been successfully integrated with UAS, offering more versatility in soil and field analysis, crop monitoring, and plant health assessment. In this study, we investigate the capability of a UAS-based crop monitoring system to determine best management practices for three tomato varieties, comparing different planting dates, plant densities, use of plastic mulch, and fertilization rates. Field and UAS data were acquired during Spring 2016, 2017, and 2018 at a site in Weslaco, TX. To compare the effects of the treatments, physiological parameters and vegetation indices (canopy cover, canopy height, canopy volume, and Excess Greenness) were extracted from red-green-blue (RGB) data and correlated with final yield to identify the practices that maximize tomato yield. In Spring 2016, we observed the highest yield from the early-March planting date using white plastic mulch. The results also indicated that the higher-yielding variety showed slower canopy decay toward the end of the season. In Spring 2017, yield differences among the three varieties depended on fertilization rate: DRP-8551 performed better at the low nitrogen level, Mykonos performed better at the two higher nitrogen rates, and TAM-Hot-Ty showed no significant difference among treatments. Finally, in Spring 2018, the early-March planting again produced the best yields, and varieties that slowed canopy decay toward the end of the season performed better.
No significant difference was observed among plant densities. The proposed system is expected to provide reliable data for developing variety- and environment-specific management practices that increase marketable yield and reduce production cost.
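One of the RGB-derived indices named above, Excess Greenness, can be sketched as follows; this uses the normalized-chromatic-coordinate formulation, which is one common variant and not necessarily the exact formula used in the study.

```python
import numpy as np

# Sketch of the Excess Greenness (ExG) index on an RGB image, using
# normalized chromatic coordinates: ExG = 2g - r - b, where r, g, b
# are each channel divided by the per-pixel channel sum.

def excess_greenness(rgb):
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=-1)
    total[total == 0] = 1.0  # avoid division by zero on black pixels
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2.0 * g - r - b

# A pure-green pixel gives ExG = 2; a gray pixel gives ExG = 0.
exg = excess_greenness([[[0, 255, 0], [100, 100, 100]]])
```

Thresholding ExG above zero is one common way to separate canopy from soil, from which canopy cover per plot can be computed as the vegetated pixel fraction.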
Recent years have witnessed enormous growth in Unmanned Aircraft System (UAS) and sensor technology, making it possible to collect data at high spatial and temporal resolution over crops throughout the growing season. The objective of this research is to develop a novel machine learning framework for marketable tomato yield estimation using multi-source, spatio-temporal remote sensing data collected from UAS. The proposed model is based on an Artificial Neural Network (ANN); it takes UAS-based multi-temporal features such as canopy cover, canopy height, canopy volume, and the Excess Greenness index, along with weather information such as humidity, precipitation, temperature, solar radiation, and crop evapotranspiration (ETc), as input and predicts the corresponding marketable yield. The predicted yield is validated against the actual harvested yield. Breeders may be able to use predicted yield as a parameter for genotype selection, allowing them not only to increase their experiment size for faster selection but also to make efficient, informed decisions on the best-performing genotypes. Moreover, yield prediction maps can be used to develop within-field management zones to optimize field management practices.
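To make the input/output structure of such a model concrete, here is an illustrative-only forward pass of a single-hidden-layer feed-forward network mapping UAS plus weather features to a yield estimate. The feature count, layer size, and weights are placeholders, not the study's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed layout: e.g., 4 UAS features + 5 weather features per plot.
n_features = 9
n_hidden = 16

# Placeholder weights; a real model would learn these from harvested yield.
W1 = rng.normal(size=(n_features, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, 1))
b2 = np.zeros(1)

def predict_yield(x):
    """Forward pass: ReLU hidden layer, linear output (one yield per plot)."""
    h = np.maximum(0.0, x @ W1 + b1)
    return h @ W2 + b2

x = rng.normal(size=(5, n_features))  # feature vectors for 5 plots
y_hat = predict_yield(x)              # one predicted yield per plot
```

Training (backpropagation against harvested yield) is omitted; the point is only the shape of the mapping from per-plot feature vectors to a scalar yield prediction.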
Unmanned Aerial Systems (UAS) have become an important tool for precision agriculture and high-throughput phenotyping (HTP). Attributes of the sorghum panicle, in particular, provide critical information for assessing overall crop condition, irrigation, and yield estimation. In this study, we propose a method to extract phenotypes of sorghum panicles from UAS data. UAS data were acquired with 85% overlap at an altitude of 10 m above ground to generate very-high-resolution data. An orthomosaic, a digital surface model (DSM), and a 3D point cloud were generated by applying the Structure from Motion (SfM) algorithm to the UAS imagery. Sorghum panicles were identified in the orthomosaic and DSM using color ratios and circle fitting. Cylinder-fitting and disk-stacking methods were proposed to estimate panicle volume. Yield prediction models were built between field-measured yield data and the UAS-measured panicle attributes.
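The two volume approximations mentioned above can be sketched geometrically; these formulas illustrate the general idea only, with made-up dimensions, and the paper's actual fitting details will differ.

```python
import math

def panicle_volume_cylinder(radius_m, height_m):
    """Cylinder fitting: fitted circle radius from the orthomosaic times
    panicle height from the DSM gives a single-cylinder approximation."""
    return math.pi * radius_m ** 2 * height_m

def panicle_volume_disks(radii_m, slice_thickness_m):
    """Disk stacking: sum thin-disk volumes along the panicle axis,
    one fitted radius per DSM height slice."""
    return sum(math.pi * r ** 2 * slice_thickness_m for r in radii_m)

# Illustrative: a panicle ~5 cm in radius and 20 cm tall.
vol_cyl = panicle_volume_cylinder(0.05, 0.20)
vol_disks = panicle_volume_disks([0.05, 0.045, 0.04, 0.03], 0.05)
```

The disk-stacking estimate can follow a tapering panicle profile, while the cylinder fit trades that fidelity for robustness to noisy per-slice radii.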
Field-based high-throughput phenotyping is a bottleneck to future breeding advances. Remote sensing with unmanned aerial vehicles (UAVs) can change the way agricultural research operates by increasing the spatiotemporal resolution at which plant growth is monitored. A fixed-wing UAV (Tuffwing) was flown to collect images of a sorghum breeding research field with 70% overlap at an altitude of 120 m. The study site was located at Texas A&M AgriLife Research's Brazos Bottom research farm near College Station, Texas, USA. Relatively high-resolution (>2.7 cm/pixel) images were collected from May to July 2017 over 880 sorghum plots (including six treatments with four replications). The collected images were mosaicked using structure from motion (SfM), which includes construction of a digital surface model (DSM) by interpolation of a 3D point cloud. Maximum plant height for each genotype (plot) was estimated from the DSM, and height calibration was implemented using aerially measured values of ground-control points of known height. Correlations and RMSE values between actual and estimated height were evaluated across all genotypes and flight dates. The results indicate that the proposed height calibration method has potential to improve the accuracy of plant height estimates from UAVs.
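The height-calibration step described above amounts to a linear correction anchored on GCPs of known height; the sketch below uses invented GCP values to illustrate that idea, not the study's measurements.

```python
import numpy as np

# Hypothetical sketch of reference-based height calibration: compare the
# known heights of ground-control points against the heights read for
# them from the DSM, fit a linear correction, and apply it to plot
# heights. All numbers are illustrative.

gcp_true = np.array([0.5, 1.0, 1.5])    # known GCP heights (m)
gcp_dsm = np.array([0.62, 1.18, 1.74])  # heights estimated from the DSM (m)

slope, intercept = np.polyfit(gcp_dsm, gcp_true, deg=1)

def calibrate_height(dsm_height):
    """Map a DSM-derived height onto the GCP-referenced scale."""
    return slope * dsm_height + intercept

corrected = calibrate_height(1.18)
```

A per-flight fit like this absorbs systematic DSM offsets (e.g., from SfM scale or ground-reference drift) that would otherwise bias every plot's height the same way.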
The objective of this research is to develop a novel machine learning framework for automatic cotton genotype selection using multi-source, spatio-temporal remote sensing data collected from an Unmanned Aerial System (UAS). The proposed model is based on an Artificial Neural Network (ANN); it takes UAS-based multi-temporal features such as canopy cover, canopy height, canopy volume, Normalized Difference Vegetation Index (NDVI), and the Excess Greenness index, along with non-temporal features such as cotton boll count, boll size, and boll volume, as input and predicts the corresponding yield. Testing the model against actual yield resulted in an R² value of approximately 0.9. The proposed genotype selection model is expected to revolutionize cotton breeding research by providing breeders with valuable tools, so that they can not only increase their experiment size for faster genotype selection but also make efficient, informed decisions on the best-performing genotypes.
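The R² validation reported above is the coefficient of determination between predicted and actual yield; a minimal sketch, with toy numbers rather than the study's data:

```python
import numpy as np

# Coefficient of determination: 1 - (residual sum of squares) /
# (total sum of squares about the mean of the actual values).

def r_squared(actual, predicted):
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy example: close predictions give an R^2 near 1.
r2 = r_squared([2.0, 3.0, 4.0, 5.0], [2.1, 2.9, 4.2, 4.8])
```

An R² near 0.9, as reported, means the model explains roughly 90% of the variance in harvested yield across the tested genotypes.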
Land leveling is the initial step for increasing irrigation efficiencies in surface irrigation systems. The objective of this paper was to evaluate the potential of an unmanned aerial system (UAS) equipped with a digital camera to map ground elevations of a grower's field and compare them with field measurements. A secondary objective was to use UAS data to obtain a digital terrain model before and after land leveling. UAS data were used to generate orthomosaic images and three-dimensional (3-D) point cloud data by applying the structure from motion algorithm to the images. Ground control points (GCPs) were established around the study area and surveyed using a survey-grade dual-frequency GPS unit for accurate georeferencing of the geospatial data products. A digital surface model (DSM) was then generated from the 3-D point cloud data before and after laser leveling to determine the topography at each stage. The UAS-derived DSM was compared with terrain elevation measurements acquired from land surveying equipment for validation. Although an error of 0.3%, or a root mean square error of 0.11 m, was observed between the UAS-derived and ground-measured elevation data, the results indicated that UAS can be an efficient method for determining terrain elevation with acceptable accuracy when no plants are present on the ground, and it can be used to assess the performance of a land leveling project.
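The validation comparison described above reduces to computing the RMSE between DSM elevations and surveyed elevations at check points; the sketch below uses toy values, not the study's 0.11 m result.

```python
import numpy as np

# Root mean square error between UAS-derived DSM elevations and
# ground-surveyed elevations at matching check points.

def rmse(dsm_elev, surveyed_elev):
    diff = np.asarray(dsm_elev, dtype=float) - np.asarray(surveyed_elev, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Toy check points (elevations in meters).
err = rmse([10.12, 10.50, 11.05], [10.00, 10.60, 11.00])
```

Because RMSE squares the residuals, a few large discrepancies (e.g., at vegetated or shadowed check points) dominate the statistic, which is why the paper notes the method works best on bare ground.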
Generally, an acquired stereoscopic image pair needs to be pre-processed geometrically before 3D viewing, because the parallax and alignment of the pair are not optimal for binocular vision. A stereo pair obtained without a specialized stereo camera system can have several problems that disrupt comfortable 3D viewing, such as an insufficient or excessive baseline between the two images. We present a reconstruction technique for stereo pair images that maximizes visual comfort. First, a disparity map is generated from the stereo pair by a multiple-footprint stereo algorithm, and then a synthetic stereomate is created using the disparity map and the right image of the given pair. At this stage, we adjust the disparity map to create a more realistic 3D effect: the most frequent disparity is reassigned to zero, and the maximum disparity is revised to a parallax comfortable for human eyes. Occlusions in the synthetic stereomate are corrected by an inpainting method. Through experiments, we obtained a registered stereoscopic image with optimized parallax. To evaluate the proposed technique, our results were compared with the original stereo pairs by viewing the 3D stereo anaglyphs.
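The disparity-adjustment step can be sketched as a shift-and-clamp operation; the comfort threshold below is an arbitrary illustrative value, not the paper's parameter.

```python
import numpy as np

# Sketch of the disparity adjustment: shift the disparity map so its
# most frequent value maps to zero parallax (placing the dominant scene
# depth at the screen plane), then clamp the remaining range to a
# comfort limit. The limit of 30 is illustrative only.

def adjust_disparity(disparity, max_comfortable=30):
    disparity = np.asarray(disparity, dtype=int)
    values, counts = np.unique(disparity, return_counts=True)
    mode = values[np.argmax(counts)]   # most frequent disparity
    shifted = disparity - mode         # reassign the mode to zero
    return np.clip(shifted, -max_comfortable, max_comfortable)

# Toy map: the dominant value 5 becomes 0; outliers are clamped.
d = adjust_disparity([[5, 5, 5, 60], [5, -40, 5, 5]])
```

Centering on the mode keeps most of the scene at zero parallax, while the clamp bounds how far any point can appear in front of or behind the screen.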
Anaglyph is the simplest and most economical method for 3D visualization. However, anaglyphs have several drawbacks, such as loss of color and visual discomfort, e.g., region merging and the ghosting effect. In particular, the ghosting effect, caused by green light penetrating to the left eye, can bring on a slight headache, dizziness, and vertigo. Ghosting therefore has to be reduced to improve visual quality and make viewing the anaglyph comfortable. Since red lightness is increased by the penetration of green, the lightness of the red band has to be compensated. In this paper, a simple deghosting method is proposed using the red-lightness difference between the left and right images. We detect the ghosting area with a criterion calculated from the statistics of the difference image, and then brighten or darken the red lightness of the anaglyph according to the degree of the difference. The amount of change in red lightness was determined empirically. These adjustments simultaneously reduce the ghosting effect and preserve color lightness in the non-ghosting area. The proposed method, which detects the ghosting area automatically and reduces the ghosting, works well.
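The detect-then-compensate rule described above can be sketched as follows; the statistical threshold and compensation gain here are illustrative assumptions, standing in for the paper's empirically determined values.

```python
import numpy as np

# Sketch of the deghosting rule: where the red-lightness difference
# between the left and right images exceeds a threshold derived from the
# difference image's statistics, adjust the anaglyph's red band in
# proportion to that difference. Threshold and gain are illustrative.

def deghost_red(anaglyph_red, left_red, right_red, gain=0.5):
    left = np.asarray(left_red, dtype=float)
    right = np.asarray(right_red, dtype=float)
    red = np.asarray(anaglyph_red, dtype=float)
    diff = left - right
    thresh = diff.mean() + diff.std()   # criterion from difference stats
    ghost = np.abs(diff) > thresh       # detected ghosting area
    red[ghost] -= gain * diff[ghost]    # compensate red lightness there
    return np.clip(red, 0, 255)

# Toy 1-D example: only the last pixel exceeds the criterion.
r = deghost_red([100, 100, 100, 200], [100, 100, 100, 200], [100, 100, 100, 100])
```

Because the criterion is statistical, pixels where the left and right red bands agree are left untouched, which is how the method preserves lightness in the non-ghosting area.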