Cotton growth modeling and assessment using unmanned aircraft system visual-band imagery
Tianxing Chu, Ruizhi Chen, Juan A. Landivar, Murilo M. Maeda, Chenghai Yang, Michael J. Starek
Open Access | Published 23 August 2016
Abstract
This paper explores the potential of using unmanned aircraft system (UAS)-based visible-band images to assess cotton growth. By applying the structure-from-motion algorithm, cotton plant height (ph) and canopy cover (cc) information were retrieved from point cloud-based digital surface models (DSMs) and orthomosaic images. Both UAS-based ph and cc follow a sigmoid growth pattern, as established by ground-based studies. By applying an empirical model that converts cotton ph to cc, the estimated cc shows strong correlation (R² = 0.990) with the observed cc. An attempt at modeling cotton yield was carried out using the ph and cc information obtained on June 26, 2015, the date when the sigmoid growth curves for both ph and cc tended to decline in slope. In a cross-validation test, the correlations between the ground-measured yield and the estimates derived from ph and/or cc were compared. In general, yield estimated from the combination of ph and cc agrees most closely with the observed yield, and cc-based estimation produces the second strongest correlation, regardless of the complexity of the models.

1.

Introduction

Monitoring crop growth is of crucial importance for effective crop management. Crop growth factors, such as plant height (ph), canopy cover (cc), canopy temperature, and water stress, are usually strong indicators for irrigation scheduling, harvest, fertilization, pesticide application, and production.1,2 Remote sensing has proven to be an appropriate technology for accessing abundant, detailed, and quantitative crop information.3,4 For remotely monitoring growth and health status, satellites provide mature imagery products; however, their temporal and spatial resolutions often lag behind the requirements of crop growth monitoring, especially during the early stages of crop growth. In this regard, piloted aerial vehicles carrying portable remote sensors offer a complementary means of monitoring a designated area.5–7 Nevertheless, the airborne sensing approach usually entails high cost and specialized expertise, and may suffer from lengthy product delivery and limited temporal frequency.8 Moreover, airborne imagery is not capable of offering sufficient detail to detect crop disease symptoms on individual plant leaves.9 As an alternative, ground-based sensing systems collect various in-situ data using equipment such as global positioning system receivers, multispectral optical cameras, ultrasonic ranging sensors, infrared radiometers, and temperature and relative humidity probes.10 The quality and performance of the data collected by ground-based sensing systems are usually satisfactory,11 although the data collection process is complex and time-consuming.

On the other hand, commercially available unmanned aircraft system (UAS) platforms are now capable of carrying different types of remote sensors and have recently been explored for precision agriculture applications.12–14 UAS platforms allow sensing of crop growth with low flying altitude, flexible revisit frequency, and reasonable price. Recent UAS-based precision agriculture studies have used fixed-wing15,16 and multirotor17,18 platforms carrying various types of equipment.8,19–22

Discrete-return airborne LiDAR systems provide multireturn capabilities and have been extensively utilized in forestry and vegetation studies for structure characterization and biomass estimation.23,24 However, over short cotton plants, such as those discussed in this study, the value of multireturn LiDAR is debatable because of limitations in range resolution. Range resolution in multireturn LiDAR refers to the minimum discrimination distance between consecutive returns (e.g., canopy top versus ground) along the slant path of the laser pulse; it is largely a function of the laser pulse length and the response time of the receiver electronics for a given LiDAR system. The pulse widths of most commercially available airborne LiDAR systems are too long to consistently capture more than one return from a single cotton plant. For example, a 10-ns pulse length equates to a 3-m "blind zone" for target discrimination with a single-channel detector.25 LiDAR systems with shorter pulse widths or full-waveform digitization may enhance the potential for effective cotton analysis, yet airborne LiDAR remains a relatively expensive method due to sensor costs and payload weights necessitating the use of larger aircraft. Although small UAS are now being equipped with lightweight, low-cost LiDAR sensors, the ranging characteristics of these sensors, coupled with the direct georeferencing solutions from miniaturized onboard aiding technology, limit performance.26 Point clouds produced from such systems are currently not equivalent in fidelity to those produced from expensive survey-grade airborne LiDAR systems, although this gap will close as technology evolves.
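For reference, the 3-m figure quoted above follows directly from the pulse duration: the blind zone is approximately the distance light travels during the pulse,

```latex
\Delta r \approx c\,\tau = (3 \times 10^{8}\ \mathrm{m/s}) \times (10 \times 10^{-9}\ \mathrm{s}) = 3\ \mathrm{m}.
```

A cotton canopy much shorter than this distance therefore cannot be separated from the underlying ground return by a single-channel detector.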

As a sound and economical alternative to state-of-the-art LiDAR systems, new investigations have revealed great potential for precision agriculture using visual-band consumer-grade (red, green, and blue, RGB) cameras.27–29 As opposed to using costly LiDAR scanners, the structure-from-motion (SfM) technique, developed by the photogrammetry community, has paved the way for utilizing low-cost visual-band cameras to access high-resolution three-dimensional (3-D) first-surface-return point clouds and plant profile information.

Recent studies, such as Refs. 16, 17, 19, and 27–29, have investigated the potential for obtaining precise and reliable 3-D models of plants and achieved vegetation monitoring based on UAS imagery. Compared with previous works, this study makes the following contributions: (1) comprehensive cotton growth quantification was conducted in a test field from UAS imagery. While previous studies mainly focused on the assessment of 3-D crop parameters from a single-date flight campaign, comprehensive experimental campaigns at ultralow flight altitude were conducted in this study, enabling precise geospatial quantification and full growth estimation over the life cycle of the cotton plants. (2) Cotton growth modeling and assessment using UAS visual-band imagery were introduced. While related works primarily investigated trees and grain crops such as barley, the potential of UAS imagery for cotton growth and vegetation monitoring has rarely been addressed. Furthermore, the relationship between cotton yield and plant growth parameters has not yet been sufficiently investigated; therefore, the potential of using plant growth parameters for yield prediction prior to the maturation stage was examined in this study.

Therefore, a case study of comprehensive cotton growth modeling and assessment is presented in this paper. A lightweight UAS platform was set up and flown frequently over the designated cotton field. The study primarily focuses on the use of low-cost UAS visible imagery and geospatial computing methods for estimating cc and ph information and analyzing their relationship with cotton yield by testing regression models. The date when plant growth stabilizes was selected for developing and assessing the regression models.

2.

Material and Methods

2.1.

Study Site

A field trial was established at the Texas A&M AgriLife Research and Extension Center (27° 46.948′ N, 97° 33.605′ W) at Corpus Christi, Texas, during the summer of 2015. The field was 85 m long and 54 m wide. Seeds of 35 cotton varieties were planted on April 1, 2015, in north-to-south oriented rows at a rate of 13 seeds/m. Each plot consisted of two rows that were 10.7 m long, spaced at 0.96 m. Each variety was replicated four times, as indicated by different colors in Fig. 1. In the first replication, the variety numbers were arranged in ascending order, while in the later replications, plots were arranged in a randomized complete block design. The border plots shown in dark blue in Fig. 1 represent filler rows and were excluded from subsequent processing and analysis. Table 1 shows the cotton varieties and types planted in the study field. Cotton emergence was observed on April 6, 2015, and the cotton was harvested on August 17, 2015. Four ground control points (GCPs) were set up at the corners of the test field and used as geodetic benchmarks for image georeferencing purposes. The coordinates of the GCPs were measured using an Altus APS-3 receiver (Altus Positioning Systems, Torrance, California) supported by the TxDOT virtual reference station network, which enables instant access to centimeter-level positioning accuracy in the WGS-84 frame.

Fig. 1

Geographical illustration of the cotton study site.

JARS_10_3_036018_f001.png

Table 1

Cotton varieties and the types planted in the study field.

Variety number | Variety type
1 | ST 4946 GLB2
2 | PHY 333 WRF
3 | PHY 495 W3RF
4 | PHY 499 WRF
5 | PHY 312 WRF
6 | PHY 552 WRF
7 | PHY 444 WRF
8 | NG 3405 B2XF
9 | NG 3406 B2XF
10 | NG 5007 B2XF
11 | AMDG 7824
12 | DG 3385 B2XF
13 | UA 103
14 | UA 222
15 | HQ 210 CT
16 | ST 4747 GLB2
17 | DP 1219 B2RF
18 | DP 1044 B2RF
19 | DP 1359 B2RF
20 | DP 1555 B2RF
21 | DP 1549 B2XF
22 | DP 1522 B2XF
23 | DP 1518 B2XF
24 | MON15R934XR
25 | ST 6182 GLT
26 | DP 1553 B2XF
27 | MON15R525 B2XF
28 | MON15R551 B2XF
29 | 12WSTR307-2 B2RF
30 | FM 2007 GLT
31 | CT15426 B2XF
32 | CT15545 B2XF
33 | CT15444 B2XF
34 | CT15425 B2XF
35 | CT15634 B2RF
Filler | ST 4946 GLB2

2.2.

Ground Data Collection

Routine management practices such as fertilization, disease prevention, and weed and insect control followed the guidelines provided by the Texas A&M AgriLife Extension Service for the region. To determine population and plant density, stand counts were conducted on 0.0004 ha (4 m²) per plot on April 28, 2015. All plots were harvested using a custom two-row cotton spindle picker, model 9900 (Deere & Company, Moline, Illinois), on August 17, 2015. This equipment was modified for small-plot research and allowed yield to be established on a per-plot basis.

2.3.

Unmanned Aircraft System Platform and Imagery Data Collection

The UAS platform used for this study was a Phantom 2 Vision+ multirotor copter (DJI, Shenzhen, Guangdong, China). Its integrated fisheye-lens camera provided 14-megapixel images with RGB channels.30 Its gimbal enabled stabilized nadir observation of the field during the flight experiments. As with conventional airborne photogrammetry, the UAS platform requires images with a high degree of spatial overlap for a favorable processing outcome.27 To select an appropriate flight strategy, the optimal image resolution, the camera response time per shot, and the battery life were taken into consideration. After balancing these factors, it was determined that the UAS would fly at an average height of 15 m with an average horizontal speed of 1 m/s, with the camera capturing a shot for every 1 m the UAS traveled horizontally.

Raw images were collected throughout the cotton growth and development phases from early April to late July 2015, for a total of 16 datasets. Specifically, during the first week after emergence, data were collected daily to monitor the germination process. Afterward, the routine flight experiment was conducted every week or every other week, depending on local weather conditions.

The flight time for each mission was ~20 min to cover the whole test field, and nadir-view images were taken with 70% along-track and 60% across-track overlap. All flights were conducted around 12:00 PM local time to ensure consistent light intensity. To minimize human intervention and maintain consistency between flights, the camera settings (e.g., white balance, ISO rating, sharpness, and exposure parameters) were kept on automatic or default mode.

2.4.

Growth Information Generation

At this stage, the goal is to extract cotton ph and cc estimates for each flight. Figure 2 shows the overall processing workflow using RGB images obtained from our lightweight UAS platform. Our study first utilized the Pix4Dmapper Pro software (Pix4D SA, Lausanne, Switzerland) to generate initial mapping products. The software uses the scale-invariant feature transform,31 or a similar descriptor algorithm, to find key points and match a large set of images; it geometrically describes the projection between two corresponding points in a pair of images representing the same 3-D object. The GCPs were then loaded into the process to create georeferenced products. After importing the matched key points into the SfM algorithm, two-dimensional (2-D) and 3-D products (georeferenced orthomosaic images and 3-D point clouds) were generated. For generating 3-D models, SfM is the essential stage that establishes the correspondence between images and reconstructs the 3-D objects under study. It requires multiple overlapping images as input to extract the 3-D point cloud from common 2-D image features. Detailed descriptions of the SfM algorithm and workflow can be found in Chapter 10 of Ref. 32. The photogrammetric techniques implemented in Pix4Dmapper are also available in other commercial software packages and open-source solutions such as Agisoft PhotoScan Professional,33 Correlator3D UAV,34 insight3d,35 and VisualSFM.36
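The core geometric step of this workflow can be illustrated with a minimal two-view reconstruction. The sketch below, using OpenCV, is an illustrative simplification of what Pix4Dmapper performs at scale: the image paths and camera matrix K are placeholders, and a production pipeline would additionally calibrate the fisheye lens, run bundle adjustment over all images, and georeference against the GCPs.

```python
import cv2
import numpy as np

# Placeholder inputs: two overlapping nadir images and an assumed
# pinhole camera matrix K (illustrative values, not a real calibration).
img1 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_002.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[2300.0, 0.0, 2304.0],
              [0.0, 2300.0, 1728.0],
              [0.0, 0.0, 1.0]])

# 1. Detect and describe key points (SIFT, as referenced in the text).
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 2. Match descriptors between the two images (Lowe's ratio test).
matcher = cv2.BFMatcher()
raw_matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in raw_matches if m.distance < 0.7 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 3. Recover the relative camera pose from the essential matrix (RANSAC).
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 4. Triangulate matched points into a sparse 3-D point cloud (up to scale).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T  # N x 3 array of scene points
print(cloud.shape)
```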

Fig. 2

Diagram of the cotton growth estimation using RGB images obtained from the lightweight UAS platform.

JARS_10_3_036018_f002.png

The orthomosaic images generated had an average file size of 410 megabytes. Figure 3 shows an example of the point cloud data from the flight experiment carried out on June 26, 2015, focusing on a small part of the whole test field. The legend shows negative heights because the heights are referenced to the WGS-84 ellipsoid. A 0.9-m height difference is observable from the bare soil to the peak (highest point) of the cotton plants. In our study, over 18 million points were generated in the point cloud file covering the whole field for each flight, making the file size usually larger than 650 megabytes.

Fig. 3

An example of the point cloud data, based on the flight experiment carried out on June 26, 2015, focusing on a small part of the whole test field.

JARS_10_3_036018_f003.png

Cotton ph was estimated using Quick Terrain Modeler 8.0.5 (Applied Imagery, Chevy Chase, Maryland) after loading the point clouds. As a first step, a bare-soil digital surface model (DSM) representing terrain elevation, DSM0, would ideally be created from a point cloud dataset collected before cotton emergence. In this study, however, the first flight experiment was completed on April 7, 2015, the day after cotton emergence was observed. In spite of that, its DSM was still taken as DSM0 because the cotton hypocotyls were too small to be resolved by the UAS camera. The DSM0 was then subtracted from each subsequent digital surface model DSMi after emergence, producing the cotton surface profile CSPi = DSMi − DSM0 at the ith day after emergence (DAE). To estimate the uncertainty of the height fields, the bare-soil areas of the DSMs after emergence were compared against DSM0. In general, the DSM uncertainty in the bare-soil areas was ~5 cm across all flight experiments in this study; it was mainly introduced by routine tractor tillage and weather changes, as well as image processing errors.
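The DSM differencing and bare-soil uncertainty check described above reduce to simple raster arithmetic. The following is a minimal sketch in Python/NumPy, assuming the DSM rasters have already been exported and co-registered as arrays; the function and variable names are illustrative, not part of the Quick Terrain Modeler workflow.

```python
import numpy as np

def cotton_surface_profile(dsm_i, dsm0):
    """Per-pixel cotton surface profile: CSP_i = DSM_i - DSM_0.

    Both rasters are assumed co-registered with the same shape and cell
    size, heights in meters. Because both are referenced to the same
    ellipsoid, the datum offset cancels in the difference.
    """
    return dsm_i - dsm0

def bare_soil_uncertainty(dsm_i, dsm0, soil_mask):
    """Empirical height uncertainty over known bare-soil pixels.

    soil_mask is a boolean array marking pixels that remained bare soil;
    over these pixels CSP should be ~0, so the spread of the residuals
    gives an uncertainty estimate (reported as ~5 cm in this study).
    """
    residuals = (dsm_i - dsm0)[soil_mask]
    return np.sqrt(np.mean(residuals ** 2))

# Hypothetical usage with pre-loaded rasters (e.g., GeoTIFFs read
# with rasterio and resampled to a common grid):
# csp = cotton_surface_profile(dsm_june26, dsm_april07)
# print(bare_soil_uncertainty(dsm_june26, dsm_april07, soil_mask))
```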

In a given CSP model, the ph of each variety was determined by averaging the height samples around the centerline of the cotton profile. More specifically, in Fig. 4, the two rows in the cyan frame represent a variety plot, and the red dots depict the samples used for calculating the ph of this variety. The red dots are cloud points within a 0.10-m-wide band around the centerline of the cotton profile at each row. The ph was finally calculated by averaging all the samples from these two rows.
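The per-plot sampling of Fig. 4 can be sketched as follows, assuming the CSP points for a plot are available as an N × 3 array and the two row-centerline coordinates are known; the names and plot geometry handling are illustrative.

```python
import numpy as np

def plot_height(points, row_x, band=0.10):
    """Mean plant height for one two-row variety plot.

    points : (N, 3) array of CSP points (x, y, height above soil), meters.
    row_x  : the two row-centerline x-coordinates of the plot, meters.
    band   : full width of the sampling band around each centerline
             (0.10 m in this study, i.e., the red dots in Fig. 4).
    Returns the average height of all samples falling in either band.
    """
    x, h = points[:, 0], points[:, 2]
    in_band = np.zeros(len(points), dtype=bool)
    for cx in row_x:
        in_band |= np.abs(x - cx) <= band / 2.0
    return h[in_band].mean()
```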

Fig. 4

Point cloud illustration of ph calculation for each cotton variety plot.

JARS_10_3_036018_f004.png

For assessing cc, the georeferenced orthomosaic images were loaded into the ArcMap 10.3.1 platform (Esri, Redlands, California) for classification. The study area was classified into two categories (i.e., cotton canopy and bare soil). After a few samples of each category were manually trained, the software performed interactive supervised classification, and cc was calculated as the proportion of the ground area covered by the vertical projection of crown perimeters, as described in Ref. 37.

3.

Results

3.1.

Plant Height

The UAS flight experiments ranged from April 7 to July 23, 2015, covering the cotton growth and development cycle from emergence to early harvest stages. Due to the low flight altitude in this study, the DSMs generated in Sec. 2.4 yielded ~7.3 mm/pixel horizontal resolution. The ph and cc were estimated from April 12, 2015 (7 DAE) onward; the datasets collected earlier were excluded because of the small size of the cotton seedlings during the early germination stage.

Cotton development follows a sigmoid curve pattern characterized by a slow growth rate during emergence, a geometric increase in growth during the leaf production, blooming, and boll development phases, and a slowdown and decline during maturation.38 Figure 5 illustrates the sigmoid regression curve of cotton ph against DAE based on the visible UAS imagery, where SSE is the sum of squared errors, R² the coefficient of determination, adjusted R² the adjusted coefficient of determination, and RMSE the root-mean-square error. Each green dot represents the estimated ph on a specific data collection day, obtained by averaging the height statistics of all 35 varieties in 4 replications.
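For readers who wish to reproduce this type of fit, the sketch below fits a logistic curve with SciPy. The paper does not specify the exact sigmoid parameterization, so the functional form here is one plausible assumption, and the DAE/ph arrays are illustrative placeholders rather than the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(dae, k, dae0, h_max):
    """A common sigmoid parameterization: asymptote h_max (m),
    inflection point dae0 (days), growth rate k (1/day)."""
    return h_max / (1.0 + np.exp(-k * (dae - dae0)))

# Illustrative placeholder observations: mean ph per flight vs. DAE.
dae = np.array([7, 14, 26, 40, 54, 68, 75, 82, 88, 96, 103, 108])
ph = np.array([0.03, 0.05, 0.09, 0.25, 0.48, 0.72, 0.82, 0.87,
               0.90, 0.89, 0.88, 0.87])

popt, _ = curve_fit(logistic, dae, ph, p0=[0.1, 55.0, 0.9])
residuals = ph - logistic(dae, *popt)
sse = np.sum(residuals ** 2)
r2 = 1.0 - sse / np.sum((ph - ph.mean()) ** 2)
rmse = np.sqrt(np.mean(residuals ** 2))
print(f"k={popt[0]:.3f}, dae0={popt[1]:.1f}, h_max={popt[2]:.2f} m")
print(f"SSE={sse:.4f}, R2={r2:.3f}, RMSE={rmse:.3f} m")
```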

Fig. 5

Sigmoid regression of cotton ph against DAE based on UAS imagery.

JARS_10_3_036018_f005.png

The cotton grew slowly before May 1, 2015 (26 DAE), and then exponentially until June 19, 2015 (75 DAE). The maximum crop height was observed on July 2, 2015 (88 DAE). The increased variation in crop height after 40 DAE is likely caused by phenotypic differences between the varieties.

3.2.

Canopy Cover

In addition to ph, cc is also closely related to crop growth and development. To assess cc, the orthomosaic images created by the SfM workflow were loaded into the ArcMap engine. Because the camera settings were kept on automatic or default mode, the image brightness values for both bare soil and canopy varied from day to day. Therefore, an individual set of training samples, representing either canopy or soil, was generated for each orthomosaic image, and interactive supervised classification, specifically maximum likelihood classification using the sample set,39 was performed to produce the classified raster image. After a raster-to-polygon conversion, cc was obtained by summing the area of all polygons over the investigated area. It is worth noting that the presence of weeds in the field may result in overestimation of cc. To eliminate the interference of weeds, particularly during the early growing stages, polygons in between two adjacent rows were removed from the cc estimation. This was done by examining the 2-D coordinates of the polygon centroids and filtering out any polygons located off the cotton rows.
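A minimal stand-in for this classification and weed-filtering chain is sketched below. The ArcMap interactive supervised classification is approximated with a per-class Gaussian (maximum likelihood) classifier from scikit-learn; the training pixels, region geometry, and the 0.3-m centroid tolerance are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def classify_canopy(rgb, train_pixels, train_labels):
    """Per-pixel canopy/soil classification of an orthomosaic.

    rgb          : (H, W, 3) orthomosaic array.
    train_pixels : (n, 3) RGB values of manually trained samples.
    train_labels : (n,) 1 = canopy, 0 = soil.
    QDA fits one Gaussian per class, i.e., a maximum likelihood
    classifier comparable in spirit to the ArcMap tool used here.
    """
    qda = QuadraticDiscriminantAnalysis()
    qda.fit(train_pixels, train_labels)
    return qda.predict(rgb.reshape(-1, 3)).reshape(rgb.shape[:2])

def canopy_cover(region_areas, centroids_x, row_x, field_area, tol=0.3):
    """cc after the centroid-based weed filter described above.

    region_areas : area (m^2) of each classified canopy polygon.
    centroids_x  : x-coordinate of each polygon centroid (m).
    row_x        : known row-centerline x-coordinates (m).
    field_area   : total investigated ground area (m^2).
    tol          : assumed maximum centroid distance (m) from a row for
                   a polygon to count as cotton, not an inter-row weed.
    """
    dist = np.min(np.abs(np.asarray(centroids_x)[:, None]
                         - np.asarray(row_x)[None, :]), axis=1)
    return np.sum(np.asarray(region_areas)[dist <= tol]) / field_area
```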

The sigmoid curve in Fig. 6 depicts the cc regression against DAE, confirming a growth pattern similar to that of ph. The cc was calculated over the whole test field, rather than for a specific area.

Fig. 6

Sigmoid regression of cc against DAE based on UAS imagery.

JARS_10_3_036018_f006.png

3.3.

Relationship Between Plant Height and Canopy Cover

In this study, ground measurements of cotton ph were not available, which precluded direct validation of the height model estimated from UAS imagery. Alternatively, an indirect method was used in this section to verify the UAS-based ph information.

As shown in Ref. 40, there is a linear relationship between ph and cc, which satisfies

Eq. (1)

cc = 1.12 × (ph / row spacing).
By applying Eq. (1), the estimated ph values shown in Fig. 5 were converted to estimated cc values against DAE.

A comparison between the observed cc derived from the orthomosaic image classification and the estimated cc calculated using Eq. (1) is shown in Fig. 7. The results show a strong linear relationship between the two sets and demonstrate that the estimated and observed cc coincide well, with a coefficient of determination R² of 0.990. This, in turn, supports the reliability of the crop height estimated from UAS imagery. The validity of the observed cc, for which an average accuracy of 88% was achieved, will be addressed in a separate paper.
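The conversion and the agreement check behind Fig. 7 amount to a few lines. This sketch assumes per-date mean ph values as input and uses the 0.96-m row spacing from Sec. 2.1; the function names are illustrative.

```python
import numpy as np

def estimate_cc_from_ph(ph, row_spacing=0.96):
    """Eq. (1): cc = 1.12 * (ph / row spacing), with the 0.96-m
    row spacing of this field (Sec. 2.1)."""
    return 1.12 * np.asarray(ph) / row_spacing

def r_squared(obs, est):
    """Coefficient of determination of the linear regression between
    the two series (the dashed line in Fig. 7)."""
    return np.corrcoef(obs, est)[0, 1] ** 2

# Hypothetical usage with per-date means from Secs. 3.1 and 3.2:
# cc_est = estimate_cc_from_ph(ph_mean_by_date)
# print(r_squared(cc_obs_by_date, cc_est))  # ~0.990 reported in text
```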

Fig. 7

Comparison between the observed cc derived from orthomosaic classification and the estimated cc calculated using Eq. (1). The 1:1 relationship is represented by the solid line, and the dashed line depicts the linear regression between the observed cc and its estimated equivalent.

JARS_10_3_036018_f007.png

4.

Yield Modeling and Discussion

4.1.

Yield Modeling and Analysis

In this paper, an investigation was conducted to model and assess the relationship of the cotton yield with the ph and/or cc on June 26, 2015, the approximate date of transition between rapid growth and crop maturation.

Three regression types (linear, quadratic, and exponential) were established for modeling cotton yield (Table 2). The independent variables were ph and/or cc on June 26, 2015. The yield models were calibrated using all 35 cotton varieties (samples) in replications 2, 3, and 4 as shown in Fig. 1; therefore, a total of 105 samples were used to calibrate model Eqs. (2)–(10). For each sample, the plot yield, height, and cc information were calculated independently. The remaining 35 samples from replication 1 were used for model validation.
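As an illustration of the calibration step, the sketch below fits the linear ph + cc model (the form of Eq. (4) in Table 2) by ordinary least squares; the sample arrays are placeholders for the 105 calibration and 35 validation samples.

```python
import numpy as np

def fit_linear_yield_model(ph, cc, yld):
    """Least-squares fit of yld = b0 + b1*ph + b2*cc, the form of
    Eq. (4), using the calibration samples from replications 2-4."""
    X = np.column_stack([np.ones_like(ph), ph, cc])
    beta, *_ = np.linalg.lstsq(X, yld, rcond=None)
    return beta

def predict_yield(beta, ph, cc):
    """Apply the fitted coefficients to validation samples."""
    return beta[0] + beta[1] * np.asarray(ph) + beta[2] * np.asarray(cc)

# Hypothetical usage: calibrate on replications 2-4, validate on 1.
# beta = fit_linear_yield_model(ph_cal, cc_cal, yld_cal)
# yld_est = predict_yield(beta, ph_val, cc_val)
```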

Table 2

Regression models established for modeling cotton yield against the ph and/or cc in terms of linear, quadratic, and exponential fits.

Dependent variable: yield (10³ kg/ha). Independent variables: ph (m) and/or cc (%).

Linear fits:

Eq. (2): yld = 0.4951 + 2.93 × ph

Eq. (3): yld = 0.02329 + 4.841 × cc

Eq. (4): yld = −0.3881 + 2.038 × ph + 2.693 × cc

Quadratic fits:

Eq. (5): yld = −2.978 + 11.43 × ph − 5.158 × ph²

Eq. (6): yld = 5.452 − 13.33 × cc + 15.14 × cc²

Eq. (7): yld = 1.162 + 19.69 × ph − 26.95 × cc + 0.7514 × ph² − 31.84 × ph × cc + 46.93 × cc²

Exponential fits:

Eq. (8): yld = 1.289 × exp(0.9852 × ph)

Eq. (9): yld = 1.071 × exp(1.671 × cc)

Eq. (10): yld = 1.36 × exp(0.7691 × ph) + 0.002249 × exp(8.261 × cc)

A comparison of the estimated yield against the observed yield for the 35 samples from replication 1 was made to evaluate the regression models. The correlations vary according to the models applied, and Figs. 8–10 present the validation results for estimating cotton yield using the models established in Table 2. Specifically, the estimated yields in subfigures (a)–(c) of Fig. 8 are derived from fitting Eqs. (2)–(4) in Table 2; similarly, the estimated yields in the subfigures of Figs. 9 and 10 are derived from Eqs. (5)–(7) and Eqs. (8)–(10), respectively. The blue asterisks in each subfigure depict the 35 validation samples (varieties) in the first cotton replication. The dashed lines illustrate the linear relationship between estimated and observed yield, while the solid lines indicate the 1:1 relationship. The angle θ denotes the included angle between the dashed line and the solid line in degrees; a smaller θ indicates a smaller separation from the 1:1 line, which represents the ideal estimation-observation relationship.
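The θ statistic can be computed directly from the validation pairs. A minimal sketch, assuming obs and est are arrays of the observed and estimated yields for the 35 validation samples:

```python
import numpy as np

def theta_angle(obs, est):
    """Included angle (deg) between the estimated-vs-observed regression
    line and the 1:1 line, as used in Figs. 8-10. A regression slope of
    1 gives theta = 0; slopes below 1 indicate systematic
    underestimation at the high end of the yield range."""
    slope, _ = np.polyfit(obs, est, 1)  # least-squares line est = a*obs + b
    return abs(np.degrees(np.arctan(slope)) - 45.0)
```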

Fig. 8

Validation of cotton yield estimation using model Eqs. (2)–(4) established in Table 2. Estimated yields in subfigures (a), (b), and (c) are derived from fitting Eqs. (2), (3), and (4), respectively.

JARS_10_3_036018_f008.png

Fig. 9

Validation of cotton yield estimation using model Eqs. (5)–(7) established in Table 2. Estimated yields in subfigures (a), (b), and (c) are derived from fitting Eqs. (5), (6), and (7), respectively.

JARS_10_3_036018_f009.png

Fig. 10

Validation of cotton yield estimation using model Eqs. (8)–(10) established in Table 2. Estimated yields in subfigures (a), (b), and (c) are derived from fitting Eqs. (8), (9), and (10), respectively.

JARS_10_3_036018_f010.png

Generally, the yield estimated from both ph and cc produces better linear agreement than that obtained using only ph or cc, regardless of the fit type. In other words, Figs. 8(c), 9(c), and 10(c) outperform the other solutions, using fitting Eqs. (4), (7), and (10), respectively, in terms of SSE, R², RMSE, and θ. From the perspective of fit type, all the models combining ph and cc, i.e., Figs. 8(c), 9(c), and 10(c), generate similar correlation results, as depicted in the corresponding subfigures. Likewise, the ph-only models, i.e., Figs. 8(a), 9(a), and 10(a), are not sensitive to the applied fit type according to the in-figure evaluation statistics, yet they generate the worst correlations. The performance of the models using only cc falls between that of the ph-only models and the ph/cc combination models; for example, Fig. 8(b) performs worse than Fig. 8(c) but better than Fig. 8(a), and the same pattern appears in the exponential models. As an exception, however, Fig. 9(b), which estimates yield using only cc, shows slightly better correlation statistics than Fig. 9(c) and generates the least dispersed correlation results among the quadratic models, Eqs. (5)–(7).

According to this analysis, cotton yield was found to be correlated with both ph and cc through the linear, quadratic, and exponential models in Table 2. These models generally agree with the expected behavior of larger yield at larger ph and cc. In contrast to previous studies of the physiological progression of cotton growth and yield, these results provide a promising opportunity to predict cotton yield as the rapid growth phase ends, before maturation and boll opening, by considering the 2-D and 3-D growth statistics. Building on these results, physiological rules might be acquired by agricultural producers to quantitatively manage and schedule plant care activities (such as irrigation, fertilization, and pest control) that, in turn, benefit healthy and robust growth of the cotton plants. To better understand the variability that may arise in estimating cotton growth and yield patterns, producers and researchers are advised to arrange long-term observations over multiple years. It is also worth noting that the estimation-observation validation lines, as shown in Figs. 8(c), 9(c), and 10(c), are still 2 to 4 deg off the 1:1 relationship, which makes most estimated yields lower than the observations. This phenomenon deserves further examination in follow-up studies.

In this specific case, the independent variables used for estimating cotton yield, i.e., ph and cc, are all based on the image dataset collected on June 26, 2015, at the end of the rapid increase in growth. Models established using other dates may produce different results given the nature of cotton growth, and this possibility will be investigated in a future study. Moreover, both the model training and validation phases used cotton samples of various varieties; the models proposed in this study therefore ignore differences in the growth characteristics of these varieties, and more refined work is needed to estimate the yield of a specific variety.

4.2.

LiDAR-Derived Vegetation Metrics for Unmanned Aircraft System-Structure-From-Motion Point Cloud Data

UAS-based SfM photogrammetry to derive 3-D point cloud data of crop structure, as performed here, represents an alternative to LiDAR. Although a comparison of airborne LiDAR versus UAS-SfM for crop monitoring is beyond the scope of this discussion, it is important to emphasize some key differences between the methods in order to assess the potential of LiDAR-based metrics adapted to SfM point cloud data for progressing future results in this work. SfM relies on image-to-image pixel correspondence and collinearity to reconstruct the 3-D scene. As such, it generates what is called a first (or single) return point cloud, whereas modern discrete-return (or full-waveform) airborne LiDAR systems provide multireturn detection capability.41 This multireturn capability has made LiDAR widely applied in forestry because it enables canopy and below-canopy measurement. As mentioned in Sec. 1, however, the potential benefits of multireturn LiDAR data over short vegetation such as cotton are debatable due to limitations in the range resolution of many LiDAR systems. Another key difference in point cloud phenomenology between the two methods over vegetation stems from LiDAR being a pulsed ranging technique, whereas SfM is photogrammetric and susceptible to false parallax induced by vegetation moving between overlapping images (e.g., from wind). This can result in noisier point cloud data over vegetation, depending on weather conditions and vegetation structure.42 However, SfM computed from hyperspatial-resolution imagery collected from a low-flying UAS, as done here, can provide upward of a two-order-of-magnitude increase in point density relative to traditional airborne LiDAR collected at higher altitudes above ground. This high point density enables noise to be easily smoothed and provides a high-definition point cloud for reconstructing vegetation structure.

In this work, SfM point cloud data were used to extract a single biophysical metric, mean ph, with which to assess cotton growth rates and predict yield performance. The results showed that mean ph values enabled high-accuracy cc estimation, validating the SfM data integrity. Given the hyperspatial point density of the UAS-SfM approach, this opens the door to applying a wide array of LiDAR vegetation metrics, adapted to SfM point clouds, for improving crop phenotyping and yield prediction. The literature provides numerous examples of LiDAR-based metrics for estimating biophysical attributes of vegetation, primarily from the forestry community.23,43–45 Multireturn LiDAR metrics often employ descriptive structure statistics calculated from height-normalized LiDAR point clouds. These include height measures such as percentiles of height, mean height, or maximum height; height variability measures such as the coefficient of variation; canopy return density measures; and cc measures such as cc above mean height.44 Full-waveform LiDAR metrics describe the radiometric and geometric attributes of the return waveforms over the canopy.46 Waveform shape fitting methods are used to extract metrics such as height of median energy, standard deviation of pulse width, waveform distance, number of peaks, and roughness of the outermost canopy.44 With SfM being photogrammetric, full-waveform digitization is not possible; however, several studies, such as Refs. 46 and 47, have shown the potential of using small-footprint, discrete-return LiDAR data to generate pseudowaveforms for reconstructing waveform composition over forested terrain. Such methods are well suited for adaptation to the very high point density of the UAS-SfM point cloud for biophysical parameter estimation at the plant and plot level.
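To illustrate how such metrics transfer to UAS-SfM data, the sketch below computes a few of the cited descriptive structure statistics from a height-normalized point cloud. The metric set and the default cover threshold are assumptions drawn from the cited forestry literature (e.g., Ref. 44), not metrics used in this paper.

```python
import numpy as np

def structure_metrics(heights, cover_threshold=None):
    """Descriptive LiDAR-style metrics for one plot's point cloud.

    heights : 1-D array of point heights above ground (m), i.e., a
              height-normalized (CSP-like) point cloud subset.
    cover_threshold : height (m) above which a point counts as canopy;
              defaults to the mean height, mirroring the 'cc above
              mean height' measure cited from Ref. 44.
    """
    h = np.asarray(heights, dtype=float)
    if cover_threshold is None:
        cover_threshold = h.mean()
    return {
        "h_mean": h.mean(),
        "h_max": h.max(),
        "h_p25": np.percentile(h, 25),
        "h_p50": np.percentile(h, 50),
        "h_p75": np.percentile(h, 75),
        "h_p95": np.percentile(h, 95),
        "cv": h.std() / h.mean(),               # coefficient of variation
        "cover_above": np.mean(h > cover_threshold),
    }
```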

The aforementioned LiDAR metrics open an array of possibilities for enhancing UAS-SfM approaches to monitoring crop performance, such as for cotton. Furthermore, by expanding the input feature set, advanced machine learning approaches can be applied to improve the predictive performance of yield and growth estimation. Such exploration is currently being investigated as an extension of the results presented here.

5.

Conclusions

UAS platforms offer flexible, labor-efficient, cost-effective, and nondestructive approaches to monitoring crop growth and development with adequate resolution and revisit frequency. In this study, the feasibility of monitoring and modeling life-cycle cotton growth using a UAS imagery approach was examined. The imaging sensor used on the UAS was a commercial-grade fisheye RGB camera. A total of 12 image datasets, covering the germination to early maturation phases, were used. By applying the SfM algorithm, individual images were combined into georeferenced orthomosaic images with dense 3-D point clouds. The ph and cc information derived from the point clouds and orthomosaic images follow the sigmoid growth curve established by ground-based studies. Statistical analysis confirmed the reliability of the UAS-based plant growth information by cross validating the ph and cc using an empirical model. Potential was found for predicting cotton yield using the ph and cc obtained on June 26, 2015, the date when the sigmoid growth curve tended to decline in slope. However, the yield predicted using both ph and cc is prone to slight underestimation compared with the ground-measured yields. Moreover, the cc-based estimation produces the second strongest correlation with the observed yield, regardless of the complexity of the models. In contrast, the ph-only models for the same day were found to be the least correlated with the observed yield. This preliminary study seeks to help cotton producers acquire physiological rules for improving the management practices implemented today, with the ultimate goal of promoting sustainable cotton production.

Future studies will involve collecting more datasets over multiple years and improving the yield estimation models by taking into account more parameters and the differences between varieties. In addition, future studies will investigate data fusion from additional sensors, such as multispectral, hyperspectral, and thermal imaging sensors.

Acknowledgments

This work was partially funded by the Cotton Incorporated Project (No. 15-669TX) and the National Science Foundation Project (No. 1429518).

References

1. A. Srinivasan, Handbook of Precision Agriculture: Principles and Applications, Food Products Press, Binghamton, New York (2006).

2. M. A. Oliver, Geostatistical Applications for Precision Agriculture, Springer, Dordrecht, Netherlands (2010).

3. M. D. Johnson et al., "Crop yield forecasting on the Canadian Prairies by remotely sensed vegetation indices and machine learning methods," Agric. For. Meteorol., 218-219, 74-84 (2016). http://dx.doi.org/10.1016/j.agrformet.2015.11.003

4. G. M. Richter et al., "Assessing on-farm productivity of Miscanthus crops by combining soil mapping, yield modelling and remote sensing," Biomass Bioenergy, 85, 252-261 (2016). http://dx.doi.org/10.1016/j.biombioe.2015.12.024

5. C. Yang and W. C. Hoffmann, "Low-cost single-camera imaging system for aerial applicators," J. Appl. Remote Sens., 9(1), 096064 (2015). http://dx.doi.org/10.1117/1.JRS.9.096064

6. C. Yang et al., "An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing," Remote Sens., 6(6), 5257-5278 (2014). http://dx.doi.org/10.3390/rs6065257

7. H. Li et al., "'Extended spectral angle mapping (ESAM)' for citrus greening disease detection using airborne hyperspectral imaging," Precis. Agric., 15(2), 162-183 (2013). http://dx.doi.org/10.1007/s11119-013-9325-6

8. C. Zhang and J. M. Kovacs, "The application of small unmanned aerial systems for precision agriculture: a review," Precis. Agric., 13(6), 693-712 (2012). http://dx.doi.org/10.1007/s11119-012-9274-5

9. F. Garcia-Ruiz et al., "Comparison of two aerial imaging platforms for identification of Huanglongbing-infected citrus trees," Comput. Electron. Agric., 91, 106-115 (2013). http://dx.doi.org/10.1016/j.compag.2012.12.002

10. R. Sui and J. A. Thomasson, "Ground-based sensing system for cotton nitrogen status determination," Trans. ASABE, 49(6), 1983-1991 (2006). http://dx.doi.org/10.13031/2013.22279

11. Y. Huang et al., "Development and prospect of unmanned aerial vehicle technologies for agricultural production management," Int. J. Agric. Biol. Eng., 6(3), 1-10 (2013).

12. F. López-Granados et al., "Early season weed mapping in sunflower using UAV technology: variability of herbicide treatment maps against weed thresholds," Precis. Agric., 17, 183-199 (2016). http://dx.doi.org/10.1007/s11119-015-9415-8

13. C. M. Gevaert et al., "Generation of spectral-temporal response surfaces by combining multispectral satellite and hyperspectral UAV imagery for precision agriculture applications," IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 8(6), 3140-3146 (2015). http://dx.doi.org/10.1109/JSTARS.2015.2406339

14. J. Primicerio et al., "A flexible unmanned aerial vehicle for precision agriculture," Precis. Agric., 13(4), 517-523 (2012). http://dx.doi.org/10.1007/s11119-012-9257-6

15. C. Lelong et al., "Assessment of unmanned aerial vehicles imagery for quantitative monitoring of wheat crop in small plots," Sensors, 8(5), 3557-3585 (2008). http://dx.doi.org/10.3390/s8053557

16. P. J. Zarco-Tejada et al., "Tree height quantification using very high resolution imagery acquired from an unmanned aerial vehicle (UAV) and automatic 3D photo-reconstruction methods," Eur. J. Agron., 55, 89-99 (2014). http://dx.doi.org/10.1016/j.eja.2014.01.004

17. J. Bendig et al., "Combining UAV-based plant height from crop surface models, visible, and near infrared vegetation indices for biomass monitoring in barley," Int. J. Appl. Earth Obs. Geoinf., 39, 79-87 (2015). http://dx.doi.org/10.1016/j.jag.2015.02.012

18. F.-J. Mesas-Carrascosa et al., "Assessing optimal flight parameters for generating accurate multispectral orthomosaicks by UAV to support site-specific crop management," Remote Sens., 7(10), 12793-12814 (2015). http://dx.doi.org/10.3390/rs71012793

19. H. Aasen et al., "Generating 3D hyperspectral information with lightweight UAV snapshot cameras for vegetation monitoring: from camera calibration to quality assurance," ISPRS J. Photogramm. Remote Sens., 108, 245-259 (2015). http://dx.doi.org/10.1016/j.isprsjprs.2015.08.002

20. P. J. Zarco-Tejada, V. González-Dugo, and J. A. J. Berni, "Fluorescence, temperature and narrow-band indices acquired from a UAV platform for water stress detection using a micro-hyperspectral imager and a thermal camera," Remote Sens. Environ., 117, 322-337 (2012). http://dx.doi.org/10.1016/j.rse.2011.10.007

21. F. Agüera, F. Carvajal, and M. Pérez, "Measuring sunflower nitrogen status from an unmanned aerial vehicle-based system and an on the ground device," Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XXXVIII-1/C22, 33-37 (2011). http://dx.doi.org/10.5194/isprsarchives-XXXVIII-1-C22-33-2011

22. A. I. De Castro et al., "Detection of laurel wilt disease in avocado using low altitude aerial imaging," PLoS One, 10(4), e0124642 (2015). http://dx.doi.org/10.1371/journal.pone.0124642

23. M. A. Wulder et al., "Lidar sampling for large-area forest characterization: a review," Remote Sens. Environ., 121, 196-209 (2012). http://dx.doi.org/10.1016/j.rse.2012.02.001

24. L. Wallace et al., "Development of a UAV-LiDAR system with application to forest inventory," Remote Sens., 4(6), 1519-1543 (2012). http://dx.doi.org/10.3390/rs4061519

25. K. C. Slatton et al., "Airborne laser swath mapping: achieving the resolution and accuracy required for geosurficial research," Geophys. Res. Lett., 34(23), L23S10 (2007). http://dx.doi.org/10.1029/2007GL031939

26. R. A. Chisholm et al., "UAV LiDAR for below-canopy forest surveys," J. Unmanned Veh. Syst., 1, 61-68 (2013). http://dx.doi.org/10.1139/juvs-2013-0017

27. D. Gatziolis et al., "3D tree dimensionality assessment using photogrammetry and small unmanned aerial vehicles," PLoS One, 10(9), e0137765 (2015). http://dx.doi.org/10.1371/journal.pone.0137765

28. R. Díaz-Varela et al., "High-resolution airborne UAV imagery to assess olive tree crown parameters using 3D photo reconstruction: application in breeding trials," Remote Sens., 7(4), 4213-4232 (2015). http://dx.doi.org/10.3390/rs70404213

29. J. Torres-Sánchez et al., "High-throughput 3-D monitoring of agricultural-tree plantations with unmanned aerial vehicle (UAV) technology," PLoS One, 10(6), e0130479 (2015). http://dx.doi.org/10.1371/journal.pone.0130479

30. DJI, "Phantom 2 Vision+," http://www.dji.com/product/phantom-2-vision-plus/feature (May 2016).

31. D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vis., 60, 91-110 (2004). http://dx.doi.org/10.1023/B:VISI.0000029664.99615.94

32. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed., Cambridge University Press, Cambridge, UK (2004).

33. Agisoft, "PhotoScan Professional Edition, Version 1.2.4," http://www.agisoft.com/downloads/installer/ (May 2016).

35. "insight3d v0.3.2," http://insight3d.sourceforge.net/ (May 2016).

36. C. Wu, "VisualSFM: a visual structure from motion system," http://ccwu.me/vsfm/ (May 2016).

37. S. Jennings, N. Brown, and D. Sheil, "Assessing forest canopies and understorey illumination: canopy closure, canopy cover and other measures," Forestry, 72(1), 59-74 (1999). http://dx.doi.org/10.1093/forestry/72.1.59

38. D. Zhao et al., "Growth and physiological responses of cotton (Gossypium hirsutum L.) to elevated carbon dioxide and ultraviolet-B radiation under controlled environmental conditions," Plant Cell Environ., 26(5), 771-782 (2003). http://dx.doi.org/10.1046/j.1365-3040.2003.01019.x

39. J. J. Settle and S. S. Briggs, "Fast maximum likelihood classification of remotely sensed imagery," Int. J. Remote Sens., 8(5), 723-734 (1987). http://dx.doi.org/10.1080/01431168708948683

40. D. N. Baker, J. R. Lambert, and J. M. McKinion, "GOSSYM: a simulator of cotton crop growth and yield," 134, Clemson, South Carolina (1983).

41. F. Leberl et al., "Point clouds: Lidar versus 3D vision," Photogramm. Eng. Remote Sens., 76(10), 1123-1134 (2010). http://dx.doi.org/10.14358/PERS.76.10.1123

42. M. J. Starek et al., "Small-scale UAS for geoinformatics applications on an island campus," in IEEE Int. Conf. on Ubiquitous Positioning, Indoor Navigation and Location-Based Service (UPINLBS), 120-127 (2014). http://dx.doi.org/10.1109/UPINLBS.2014.7033718

43. S. C. Popescu, "Estimating biomass of individual pine trees using airborne Lidar," Biomass Bioenergy, 31(9), 646-655 (2007). http://dx.doi.org/10.1016/j.biombioe.2007.06.022

44. L. Cao et al., "Using small-footprint discrete and full-waveform airborne LiDAR metrics to estimate total biomass and biomass components in subtropical forests," Remote Sens., 6(8), 7110-7135 (2014). http://dx.doi.org/10.3390/rs6087110

45. S. G. Zolkos, S. J. Goetz, and R. Dubayah, "A meta-analysis of terrestrial aboveground biomass estimation using Lidar remote sensing," Remote Sens. Environ., 128, 289-298 (2013). http://dx.doi.org/10.1016/j.rse.2012.10.017

46. T. Hermosilla et al., "Deriving pseudo-vertical waveforms from small-footprint full-waveform LiDAR data," Remote Sens. Lett., 5(4), 332-341 (2014). http://dx.doi.org/10.1080/2150704X.2014.903350

47. J. B. Blair and M. A. Hofton, "Modeling laser altimeter return waveforms over complex vegetation using high-resolution elevation data," Geophys. Res. Lett., 26(16), 2509-2512 (1999). http://dx.doi.org/10.1029/1999GL010484

Biography

Tianxing Chu received his PhD in photogrammetry and remote sensing from Peking University, China. He is a postdoctoral research associate at Texas A&M University-Corpus Christi (TAMU-CC), Corpus Christi, Texas. His current research interests include applying remote sensing and geographical information system (GIS) techniques in the study of precision agriculture and unmanned aircraft system (UAS)-based remote sensing techniques for sustainable agriculture and precise vegetation monitoring, as well as ubiquitous navigation and positioning.

Ruizhi Chen received his MSc in computer science and engineering from Helsinki University of Technology and his PhD in geodesy from the University of Helsinki, both in Finland. He is a professor at Wuhan University, China. He has published 2 books, 5 book chapters, and more than 150 research articles. His current research interests include UAS-based remote sensing, precision agriculture, and mobile geospatial computing.

Juan A. Landivar received a doctorate in crop physiology from Mississippi State University in 1987. He is a professor and Texas A&M AgriLife Research and Extension Center director in Weslaco and Corpus Christi, Texas. His research interests include developing improved cropping systems for better land and water management decisions, and computer models to evaluate crop management decisions.

Murilo M. Maeda received his MSc and PhD degrees in agronomy from TAMU in 2012 and 2015, respectively. He is an assistant research scientist at Texas A&M AgriLife Research and Extension Center, Corpus Christi, Texas. His position focuses on the management of a cropping systems and remote sensing program for agricultural research applications and crop precision management.

Chenghai Yang received his PhD in agricultural engineering from the University of Idaho in 1994. He is an agricultural engineer with the U.S. Department of Agriculture, Agricultural Research Service’s Aerial Application Technology Research Unit, College Station, Texas, USA. His current research is focused on the development and evaluation of remote sensing technologies for detecting and mapping crop pests for precision chemical applications.

Michael J. Starek received his PhD in civil engineering from the University of Florida. He is an assistant professor at the School of Engineering and Computing Sciences, TAMU-CC. He was formerly a National Research Council Postdoctoral Fellow of the US Army Research Office in affiliation with North Carolina State University. His research focuses on the application of emergent remote sensing and geomatics techniques for measurement and study of natural and built system dynamics.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Tianxing Chu, Ruizhi Chen, Juan A. Landivar, Murilo M. Maeda, Chenghai Yang, and Michael J. Starek "Cotton growth modeling and assessment using unmanned aircraft system visual-band imagery," Journal of Applied Remote Sensing 10(3), 036018 (23 August 2016). https://doi.org/10.1117/1.JRS.10.036018