Assessment of deep learning methods for classification of cereal crop growth stage pre and post canopy closure

Abstract. Growth stage (GS) is an important crop growth metric commonly used in commercial farms. We focus on wheat and barley GS classification based on in-field proximal images using convolutional neural networks (ConvNets). For comparison purposes, use of a conventional machine learning algorithm was also investigated. The research includes extensive data collection of images of wheat and barley crops over a 3-year period. During data collection, videos were recorded during field walks at two camera views: downward looking and 45 deg angled. The resulting dataset contains 110,000 images of wheat and 106,000 of barley, spanning 34 and 33 GS classes, respectively. Three methods were investigated as candidate technologies for the problem of GS classification. These methods were: (I) feature extraction and support vector machine, (II) ConvNet with learning from scratch, and (III) ConvNet with transfer learning. The methods were assessed for classification accuracy using test images taken (a) in fields on days imaged in the training data (i.e., seen field-days GS classification) and (b) in fields on days not imaged in the training data (i.e., unseen field-days principal GS classification). Of the three methods investigated, method III achieved the best accuracy for both classification tasks. The model achieved 97.3% and 97.5% GS classification accuracy for seen field-day test data for wheat and barley, respectively. The model also achieved accuracies of 93.5% and 92.2% for the principal GS classification task for wheat and barley, respectively. We provide a number of key research contributions: the collection, curation, and exposure of a unique GS-labeled proximal image dataset of wheat and barley crops; GS classification and principal GS classification of cereal crops using three different machine learning methods; and a comprehensive evaluation and comparison of the obtained results.


Introduction
It is projected that in the period 2005 to 2050, food production must increase by 100% to 110% to meet rising demand due to population growth. 1 Moreover, there is increasing pressure on producers to reduce the area of land cleared and the cost of food production. As a result, there is a need for improved crop production management and more efficient utilization of resources.
Enhanced food production requires better decision making for crop husbandry and automated crop growth monitoring. Remote sensing can provide useful data on crop growth at the sub-field level. However, remote sensing is currently limited in terms of spatial and temporal accuracy, particularly in regions that are often cloudy. 2 Recently, the use of in-field proximal images coupled with computer vision 3 techniques has shown promise for automatic crop growth monitoring. 4 Growth stage (GS) is a key metric for quantifying cereal crop growth in production fields. 5 GS indicates the development stage of the crop by means of a predefined numeric scale, such as Agriculture and Horticulture Development Board (AHDB), 6 Zadoks, 7 or Biologische Bundesanstalt, Bundessortenamt and CHemical industry (BBCH). 8 The ability to routinely estimate the GS provides crucial input to crop growth models and helps inform novel crop husbandry practices. Typically, GS is determined in the field by means of visual inspection by an agricultural scientist (agronomist), or operator, who has sufficient knowledge of GS metrics.
Cereal crop GS estimation can benefit from the application of image processing techniques in a number of ways. First, image data could be recorded at low cost without damaging the crops. Field GS surveys could be collected by cameras affixed to vehicles traversing the field for the purposes of input application, 9 by low flying drones, 10 or by ground-based robots. 11 A point GS estimate could be obtained from a smartphone. This GS information can then be utilized by the farmer for decision making in regard to field inputs.
The research reported herein addresses the problem of estimation of cereal crop GS based on in-field proximal images. The study investigates the use of machine learning algorithms for GS classification of wheat and barley from images. The work focuses on images that are collected from wheat and barley crops in downward looking and 45-deg-angled modes at a height of around 2 m above the ground. Ground truth data are collected at field level and labeled using the Zadoks GS scale metric. 7 Due to the visual complexity of crop images and growth development stages in cereals, GS estimation by means of image processing is a challenging research problem. 12 Moreover, variations in seed rate, crop variety, and soil density, and dynamic weather conditions such as wind or changes in natural lighting, add to the difficulty of GS estimation from images.
For this study, data were collected from fields in Ireland. Wheat image data were collected for two cultivars of Costello and JB Diego winter wheat during their growing season from early October to mid-August. Barley data were collected from two cultivars of Cassia and Infinity winter barley during their growing season from early November to end of July. Image data with frost or unwanted objects/particles that visually occluded the crops were manually removed from the dataset.
To the best of the authors' knowledge, this is the first paper to investigate GS classification of cereal crops for a wide range of GSs, including images pre and post canopy closure. In addition, the paper investigates classification accuracy when test images taken on the same day and in the same field as the training images are excluded. Herein, we refer to this as testing on unseen field-day data. Due to limitations in the number of GSs in the dataset, the GSs are classified into principal GSs rather than individual GSs; in other words, the GSs are grouped into principal GSs as classes. The impact on classification accuracy of employing principal GSs and of lacking images with matching GSs in the training and test sets is studied. Various experiments were carried out, and the best-performing algorithm is evaluated for GS classification and principal GS classification of downward and 45-deg-angled looking images.
The remainder of this paper is structured as follows. Section 2 presents the background and existing approaches to the problem. Section 3 presents details of the collected image dataset for wheat and barley. The experimental methods and results are presented in Sec. 4. A comprehensive discussion and the conclusions of the work are presented in Secs. 5 and 6, respectively.

Background and Existing Work
A comprehensive survey in Ref. 4 presents image processing techniques reported in the literature for extracting key cereal crop growth metrics from proximal images. One of the dominant crop growth metrics is cereal GS. To date, little research has been done on automated estimation of GS.
An automated image-based scheme was proposed in Ref. 12 to detect two principal GSs of corn: emergence and the three-leaf stage. The study involved a small number of training samples and employed an image segmentation method combined with affinity propagation clustering for classification. The work achieved a classification accuracy of 96.68% for classifying two GSs.
A study reported in Ref. 13 investigated estimation of two distinct GSs of six wheat cultivars. The authors employed scale invariant, low-level feature extraction, mid-level representation (bag-of-visual-words), and a support vector machine (SVM) for classifying two GSs of wheat. Their algorithm achieved on average 91% accuracy.
In a study described in Ref. 14, rice panicles were modeled from 2D rice images. The study targeted mainly one stage of growth, when the panicle attributes were developed. Using a morphological operation, the grain area of rice panicles was extracted. The grain weight and the correlation between the grain area and weight parameters were determined. Their algorithm achieved 90% accuracy.
A drone-based approach was proposed in Ref. 15 for classifying four different GSs of rice. The targeted stages were the early phase of rice growth, the vegetative growth phase, the generative growth phase, and the harvest phase. The authors employed a color histogram (leaf color chart feature) and SVM for classifying GS. They achieved 93% accuracy for classifying four different GSs.
Corn sprout GS estimation was investigated in Ref. 16 using red, green, and blue images over a short period of 6 days of growth. The algorithm consisted of cropping the plant region and using a region growing approach as a function of length and time. Moreover, the plant length was measured continuously in real time as ground truth. The authors reported measurement accuracy by comparing the result of image processing to manual measurement counterparts in centimetres. They achieved an average accuracy of 0.2 cm.
A recently published study by the authors of this paper 17 presented GS estimation of wheat and barley crops for stages prior to canopy closure. The study used 138,000 images from 12 GSs of wheat and 11 GSs of barley. The GS classification task was carried out employing three different machine learning methods: (a) a convolutional neural network (ConvNet) model with learning from scratch, (b) a ConvNet model with transfer learning, and (c) a conventional SVM classifier. The authors reported a classification accuracy of 99.8% on average using the ConvNet with transfer learning method. Although this work was promising, it was limited to GSs prior to canopy closure. Moreover, the classification results achieved in this research were based only on seen field-day data, i.e., the test images were taken on the same days and in the same fields as the training images.
The research reported herein addresses the problem of GS classification for a wide range of GSs. The study reports the results of principal GS classification on unseen field-day test data for both wheat and barley crops; the unseen field-day data are considered an unbiased test set for the classifier. The results achieved for principal GS classification through an extensive series of experiments add substantial value to the existing literature on automating crop GS classification.

Dataset
The aim of this study is to classify wheat and barley GS using images of crops and state-of-the-art deep neural network models. 18 It has been shown that deep neural network models require very large image datasets for training to achieve high accuracy. 19 As part of this research, extensive data collection was undertaken for wheat and barley crops. Overall, there are 216,000 images in the dataset from 15 different fields within Ireland. The data collection protocol is presented in Sec. 3.1 and details of the wheat and barley datasets are provided in Secs. 3.2 and 3.3, respectively.

Data Collection Protocol
Cereal crop GS is categorized by means of pre-defined scales. Each scale assigns a value to a recognizable crop stage. The most frequently adopted scale is Zadoks. 7 The principal GSs of the Zadoks scale are listed in Table 1.
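In the Zadoks scale, each growth stage is a two-digit decimal code whose tens digit identifies the principal stage. The grouping of individual GSs into principal GS classes used later in the paper can therefore be sketched as below; the helper names are ours, and the stage labels follow the usual Zadoks terminology.

```python
# Zadoks principal growth stages (tens digit of the two-digit code).
PRINCIPAL_STAGES = {
    0: "germination",
    1: "seedling growth",
    2: "tillering",
    3: "stem elongation",
    4: "booting",
    5: "ear emergence",
    6: "anthesis (flowering)",
    7: "milk development",
    8: "dough development",
    9: "ripening",
}

def principal_stage(zadoks_code: int) -> int:
    """Return the principal GS (0-9) for a Zadoks decimal code (0-99).

    Example: GS32 (second node detectable) belongs to principal stage 3,
    stem elongation.
    """
    if not 0 <= zadoks_code <= 99:
        raise ValueError(f"Zadoks codes are 0-99, got {zadoks_code}")
    return zadoks_code // 10
```

This is the mapping implied when the paper groups, e.g., several tillering-stage classes under one principal GS.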
Ground truth was determined in the field by an agricultural scientist, or operator, who had sufficient knowledge of cereal GS metrics. GS was determined manually by comparing the plants to the objective visual features defined in the scale. Images were recorded with a DJI Osmo+ camera. 20 The DJI Osmo+ includes a camera, gimbal, and a supporting mobile device handle. The recording was captured at 1080p resolution and 30 frames/s. At each visit, the operator walked the field along the tramlines for 3 to 6 min, recording a video file of the crops. Two camera poses were used: vertically downward looking at the field and at a 45-deg declination from the horizon. The camera was held parallel to the sowing rows of the field at a height of 2 m above the ground. In the post-processing stage, the video frames were extracted as image files for training and testing the network. A series of images was extracted and indexed sequentially. To ensure that no two images were the same, frames were extracted with a minimum of 120 ms between each. The data collected included the ground truth GS, crop cultivar, seed rate, sowing date, date of capture, field global positioning system (GPS) coordinates, brightness level, and wind speed. The data were captured over three growing seasons, from 2017 to 2019.
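The frame-sampling rule above (30 frames/s video, at least 120 ms between extracted frames) amounts to keeping every fourth frame. A minimal sketch of that index arithmetic is below; the video decoding itself is omitted and the function names are ours.

```python
import math

def frame_stride(fps: float, min_gap_ms: float) -> int:
    """Smallest frame step whose temporal spacing is at least min_gap_ms.

    At 30 fps with a 120 ms minimum gap this gives a stride of 4,
    i.e. consecutive extracted frames are ~133 ms apart.
    """
    return max(1, math.ceil(fps * min_gap_ms / 1000.0))

def frame_indices(total_frames: int, fps: float, min_gap_ms: float) -> list:
    """Indices of frames to extract so no two kept frames are closer
    than min_gap_ms in time."""
    step = frame_stride(fps, min_gap_ms)
    return list(range(0, total_frames, step))
```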

Wheat Dataset
The seen field-day wheat training dataset consists of 21 GS classes where each class includes 2000 images for training, 600 images for validation, and 1400 images for test purposes. These 21 classes include four classes in the seedling stage, four classes in the tillering stage, two classes in stem elongation, two classes in ear emergence, one in anthesis, three in milk development, two in dough development, and three in ripening. Overall, there are 84,000 wheat images in this dataset. The wheat training images are from five distinct fields in Ireland and include two different cultivars, Costello and JB Diego. The brightness range in the wheat training dataset varies between 73.0 and 156.2 (AV). There are five different seed rates in the wheat training data. The wind speed at wheat data capture time varied between 6 and 27 km/h. Figure 1 shows sample wheat images from these two cultivars at various GSs.
The unseen field-day dataset, which is separate from the training data, consists of 13 GS classes with 2000 images per class. These classes include two from the seedling stage, three from tillering, one from stem elongation, two from ear emergence, two from milk development, one from dough development, and two from ripening. Each class of test data includes 1000 downward looking and 1000 45-deg-angled looking images. Overall, 26,000 images of unseen field-day wheat data are in the test dataset. The data include brightness variation from 75.9 to 168.2 (AV) and three different seed rates. The wind speed at the time of capturing the wheat test data varied between 16 and 28 km/h. The unseen field-day data are used only for testing, not training.
Details of the wheat dataset are provided in Table 2 and the seen and unseen field-day split is listed in Table 4(a). Information about the fields and their GPS coordinates can be found in Table 5.

Barley Dataset
The barley seen field-day dataset includes 20 GS classes where each class includes 2000 images for training, 600 images for validation, and 1400 images for test. These 20 classes include four classes in the seedling stage, four classes in the tillering stage, two in stem elongation, one in booting, two in ear emergence, one in milk development, three in dough development, and three in ripening. Overall, there are 80,000 barley images in this dataset. The barley training, validation, and test images are from four fields and include two different cultivars, Cassia and Infinity. The brightness range in the barley training data varies between 76.4 and 159.6 (AV). There are four different seed rates in the barley training data. The wind speed at the time of capturing the barley training data varied between 9 and 27 km/h. Figure 2 shows sample barley images from these two cultivars at various GSs. The unseen field-day barley dataset, which is separate from the seen field-day data, includes 13 GS classes with 2000 images per class. These classes include two from seedling, four from tillering, two from stem elongation, two from ear emergence, one from dough development, and two from ripening. Each class of test data includes 1000 downward looking and 1000 45-deg-angled looking images. Overall, 26,000 images of unseen field-day barley data are in the test dataset, which was collected from three distinct fields in Ireland. These images include brightness variation from 81.2 to 157.8 (AV) and three different seed rates. The wind speed at the time of capturing the barley test data varied between 14 and 23 km/h.
The unseen field-day data are only used for testing and not training. Details of the barley dataset are provided in Table 3 and the seen/unseen field-day split is listed in Table 4(b). Information about the fields and their GPS can be found in Table 5.

Methods and Evaluation
In this section the methods for GS estimation are described and the achieved results are presented. Section 4.1 presents the conventional machine learning algorithm with a SVM classifier. The ConvNet with learning from scratch approach and the ConvNet with transfer learning are presented in Secs. 4.2 and 4.3, respectively.

SVM Classifier
GS classification of wheat and barley crops was investigated using feature extraction and an SVM classifier. 21 Blurry images were detected using a Laplacian kernel and were removed from the dataset when the response fell below a threshold of 120. 22 Data were pre-processed by brightness correction. 23 The best results using the SVM classifier were obtained by training on a mix of downward and 45-deg-angled looking images in each class of data. Excess Green index features 24 were extracted from the images. Data dimensionality was reduced by employing principal component analysis. The SVM classifier was equipped with a radial basis function kernel, 25 a regularization parameter of C = 1.0, and a kernel coefficient of γ = 0.1. A five-fold cross validation scheme was applied and 1400 images per class were utilized for testing purposes. Moreover, for each crop (wheat/barley), 13 classes of unseen field-day test data were used for principal GS classification.
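The pipeline above can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the variance of the Laplacian response is a common blur measure (the paper's exact implementation and threshold application may differ), the Excess Green index is computed on chromaticity-normalised channels, and the classifier mirrors the stated PCA + RBF-SVM configuration with C = 1.0 and γ = 0.1. Function names are ours.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def laplacian_variance(gray):
    """Variance of the 3x3 Laplacian response over the image interior.

    Low values indicate blur; the paper discards images below a
    threshold of 120 (units depend on the pixel scale used).
    """
    g = np.asarray(gray, dtype=float)
    resp = (-4.0 * g[1:-1, 1:-1]
            + g[:-2, 1:-1] + g[2:, 1:-1]
            + g[1:-1, :-2] + g[1:-1, 2:])
    return float(resp.var())

def excess_green(rgb):
    """Excess Green index ExG = 2g - r - b on chromaticity-normalised channels."""
    rgbf = np.asarray(rgb, dtype=float)
    total = rgbf.sum(axis=-1, keepdims=True)
    total = np.where(total == 0, 1.0, total)  # avoid division by zero
    r, g, b = np.moveaxis(rgbf / total, -1, 0)
    return 2.0 * g - r - b

def make_gs_classifier(n_components=20):
    """PCA followed by an RBF-kernel SVM with C = 1.0 and gamma = 0.1,
    as reported in the text. The PCA dimensionality is an assumption."""
    return make_pipeline(StandardScaler(),
                         PCA(n_components=n_components),
                         SVC(kernel="rbf", C=1.0, gamma=0.1))
```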
GS classification using the SVM classifier with input pre-processing and a mix of downward and 45-deg-angled looking images in each class, resulted in 63.8% and 59.8% accuracy for wheat and barley, respectively. Principal GS classification using the same classifier on unseen field-day test data resulted in 26.4% and 29.3% accuracy rates for wheat and barley, respectively. Table 6 presents a summary of the experimental results obtained using the SVM classifier.

ConvNet with Learning from Scratch
Two ConvNet models were trained from scratch for GS image classification and principal GS classification of wheat and barley crops.
The first ConvNet includes five trainable layers: three Conv layers and two dense layers. The Conv layers (Conv2D, Conv2D-1, Conv2D-2) have 32, 64, and 64 filters, respectively, with a filter size of 3 × 3, and the dense layers have 1024 and 21/20 neurons for wheat/barley, respectively.
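The per-layer parameter counts for the Conv stack described above follow directly from the standard Conv2D formula (filters × (kernel² × input channels + 1)). The helper below illustrates this for RGB input; we count only the Conv layers, since the dense-layer input size depends on the pooling layout, which is not fully specified here.

```python
def conv2d_params(n_filters: int, kernel: int, in_channels: int) -> int:
    """Trainable parameters of a Conv2D layer: weights plus one bias per filter."""
    return n_filters * (kernel * kernel * in_channels + 1)

def dense_params(n_in: int, n_out: int) -> int:
    """Trainable parameters of a fully connected layer."""
    return n_in * n_out + n_out

# The three Conv layers described above, for 3-channel RGB input:
conv_params = [conv2d_params(32, 3, 3),    # Conv2D:   32 filters on RGB
               conv2d_params(64, 3, 32),   # Conv2D-1: 64 filters on 32 maps
               conv2d_params(64, 3, 64)]   # Conv2D-2: 64 filters on 64 maps
```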
The second ConvNet is almost identical to the first, apart from two batch normalization layers added to the network after the max-pooling layers of the first and third trainable layers. For all ConvNet experiments, an image size of 256 × 256 was employed; an image size of 125 × 125 was tested, but the results were not satisfactory. The image pixel values were rescaled to the [0, 1] interval. The training data were pre-processed in the hue, saturation, value (HSV) color space, employing the brightness correction function.
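The exact brightness correction function used in the HSV pre-processing is not given in the text; one simple, common choice is to scale the V (value) channel so its mean matches a fixed target, which is what the hypothetical sketch below does.

```python
import numpy as np

def correct_brightness(hsv, target_v=128.0):
    """Scale the V channel of an HSV image so its mean equals target_v.

    `hsv` is an H x W x 3 array with V in [0, 255]. Mean-matching the V
    channel is an assumption standing in for the paper's (unspecified)
    brightness correction; H and S are left untouched.
    """
    out = np.asarray(hsv, dtype=float).copy()
    mean_v = out[..., 2].mean()
    if mean_v > 0:
        out[..., 2] = np.clip(out[..., 2] * (target_v / mean_v), 0.0, 255.0)
    return out
```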
Since data augmentation has proven effective in training deep learning algorithms, 26 three different data augmentation schemes were applied to the input of the network. First, the data were augmented with various in-range brightness values. 27 To this end, while reading the images into the training data generator, the brightness range was set to produce either darker images (uniform distribution values <1.0) or brighter images (uniform distribution values >1.0). The brightness transform is

I′ = βI, β ∼ U(β_min, β_max), (1)

where I is the input image and β is the brightness factor drawn uniformly from the range listed in Table 7. In this work, the brightness range was set to [0.7, 1.3]. Second, the network was made robust 28 to rotations of up to 90 deg by data rotation augmentation. 27 This method randomly rotates the image by an angle drawn from the given range. The affine transformation for rotation is given in Eq. (2), and the rotation range parameter setting employed in this work is listed in Table 7.
x′ = z_x x cos(θ) − z_y y sin(θ), y′ = z_x x sin(θ) + z_y y cos(θ), (2)

where (x, y) and (x′, y′) are the original and transformed pixel coordinates, θ is the rotation angle, and z_x and z_y are the horizontal and vertical scale factors. Third, zoom augmentation 27 was applied to the training data using a scale range of [0.7, 1.3]. The affine transform for zoom augmentation also follows Eq. (2), with the horizontal and vertical scale parameters obtained from Table 7. This function randomly produces images that are zoomed in for values <1.0 (interpolating original image pixel values) and zoomed out for values >1.0 (adding new pixel values around the original image).
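The rotation/zoom map of Eq. (2) can be sketched as a small coordinate transform; this is an illustrative implementation (the function name is ours), with θ drawn from [0, 90] deg and z_x, z_y from [0.7, 1.3] to match the augmentation ranges quoted above.

```python
import math
import numpy as np

def affine_rotate_zoom(points, theta_deg, zx=1.0, zy=1.0):
    """Apply the rotation/zoom affine map of Eq. (2) to an N x 2 array of
    (x, y) coordinates about the origin. zx and zy are the horizontal and
    vertical scale factors; values < 1.0 zoom in and values > 1.0 zoom out."""
    t = math.radians(theta_deg)
    m = np.array([[zx * math.cos(t), -zy * math.sin(t)],
                  [zx * math.sin(t),  zy * math.cos(t)]])
    return np.asarray(points, dtype=float) @ m.T

# Random parameter draws matching the ranges used for training (Table 7).
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 90.0)
zx, zy = rng.uniform(0.7, 1.3, size=2)
```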
The input for each class of data includes 50% downward and 50% angled images. The test data were classified in the principal GS range without any pre-processing.
An extensive series of experiments was carried out to find the best performing ConvNet with learning from scratch model and input format for the GS classification and principal GS classification tasks. The results demonstrate an improvement when employing the ConvNet with batch normalization layers. Moreover, including input pre-processing and data augmentation proved to play an important role for both the GS classification and principal GS classification tasks using this network; see Table 8.
The best average results for barley and wheat using the ConvNet learned from scratch, including batch normalization, input pre-processing, and data augmentation, are 95.9% GS classification accuracy and 75.3% principal GS classification accuracy.

ConvNet with Transfer Learning
The ConvNet with transfer learning approach seeks to transfer knowledge from a source task to a target task. The network's pre-trained parameters from the source task are re-purposed for a target task within a similar or related domain. The concept of transfer learning relaxes the requirement of training the ConvNet on a large independent and identically distributed dataset.
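The freeze-the-base, train-the-head idea behind transfer learning can be illustrated with a deliberately tiny stand-in: a frozen random projection plays the role of the pre-trained convolutional base (this is not the paper's VGG-19 setup), and only a new softmax head is trained on the extracted features. All names and the toy task are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pre-trained" base: a frozen random projection plus tanh.
# In the real setting this is a ConvNet base whose weights are reused as-is.
W_frozen = rng.normal(size=(16, 8))
W_before = W_frozen.copy()  # kept to verify the base is never updated

def extract_features(x):
    """Frozen base: no gradient updates ever touch W_frozen."""
    return np.tanh(x @ W_frozen)

def train_head(x, y, n_classes, lr=0.5, steps=300):
    """Train only the new softmax classification head on frozen features."""
    feats = extract_features(x)
    w = np.zeros((feats.shape[1], n_classes))
    onehot = np.eye(n_classes)[y]
    for _ in range(steps):
        logits = feats @ w
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)
        w -= lr * feats.T @ (p - onehot) / len(x)  # softmax cross-entropy grad
    return w

# Toy target task: two Gaussian blobs in the 16-d input space.
x = np.vstack([rng.normal(-2.0, 1.0, (30, 16)), rng.normal(2.0, 1.0, (30, 16))])
y = np.repeat([0, 1], 30)
w_head = train_head(x, y, n_classes=2)
accuracy = (np.argmax(extract_features(x) @ w_head, axis=1) == y).mean()
```

The point of the sketch is that only the head parameters `w` are updated; the base weights remain byte-for-byte identical after training, which is exactly what the non-trainable layers of the experiments below enforce.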
The Visual Geometry Group (VGG) ConvNets have provided a reliable base for numerous image recognition systems since their introduction in 2014. 29 In this work, VGG-19 was employed as the basis for transfer learning, with 19 weight layers comprising 16 Conv and three fully connected (FC) layers. The network is pre-trained on the ImageNet dataset, and the knowledge can be transferred at any layer of the network for the new classification task. To find the best architecture in terms of trainable and non-trainable layers for the classification problem at hand, five different experiments were conducted. The experimental models E1 to E5 are listed in Table 9, including their non-trainable and trainable layers and parameters.
The input for each class of data includes 50% downward and 50% angled images. As with the data preparation for the ConvNet with learning from scratch (presented in Sec. 4.2), pre-processing and data augmentation were applied to the training data of this network. The training data were augmented for brightness 27 in the range [0.7, 1.3], rotation in the range 90 deg, 28 and zoom with scale in the range [0.7, 1.3]. 27 The test data were classified in the principal GS range without any pre-processing or data augmentation. Among the aforementioned experiments, Experiment E4 achieved the best GS classification accuracy, with 15 non-trainable Conv layers, including the last Conv layer, and four FC layers of 1024, 512, 256, and 21/20 nodes as trainable layers. The input setting for this configuration includes data pre-processing, data augmentation, and a mix of downward and 45-deg-angled images in each class of data.

Table 8 Overall performance of the ConvNet trained from scratch for GS classification on test data and principal GS classification on unseen field-day test data for wheat and barley crops. There are three different experiments: whether (a) the network includes batch normalization layers, (b) the input data is a mix of downward and 45-deg-angled looking images, and (c) training includes pre-processing and data augmentation.
The results from Experiments E1 to E5 were compared to choose the best transfer learning model (Fig. 3). The accuracies for experiment E1 are 76.2% and 73.4% for wheat and barley, respectively; these increased to 99.1% and 99.7% in experiment E4. Moving deeper by training the network with another trainable Conv layer, accuracy drops slightly to 98.1% and 98.3% for wheat and barley, respectively. This drop in accuracy is costly, as the number of trainable parameters also roughly doubles, from ∼3.5 to ∼6 million. Experiment E4 thus resulted in the best performance with a reasonable number of trainable parameters. Hence, it was used as the base model for training and testing GS classification.
The results obtained for the various methods of GS classification and principal GS classification employing experiment E4 are presented in Table 10.
The first two rows of Table 10 present the results of training with single-mode data (either angled or downward looking images) with no pre-processing or data augmentation. The result of GS classification using these inputs is fairly good, with 93.6% and 92.4% accuracy for wheat and barley, respectively. However, principal GS classification using the same trained network does not yield good results for unseen field-day data, generating principal GS classification accuracies of 73.1% and 70.4% for wheat and barley, respectively.
The next series of experiments involved training the transfer learning network using pre-processed data with brightness correction. The network classifies GSs with almost the same accuracy rates as the previous experiment. However, principal GS classification improved noticeably, reaching 77.6% and 75.3% accuracy for wheat and barley, respectively.
Including input data augmentation as well as pre-processing brings about higher GS classification and principal GS classification accuracies. The results show 95.3% and 93.1% accuracy for wheat and barley GS classification. Moreover, principal GS classification improved further, achieving 83.8% and 85.2% accuracy for wheat and barley, respectively.
Finally, including both downward and 45-deg-angled images in each class of data for training was considered; the input data were pre-processed and augmented as well. A significant improvement was noticed in both classification accuracy rates and the number of images from each class correctly classified in their corresponding principal GSs. With this input setting, the transfer learning network achieved GS classification accuracy rates of 97.3% and 97.5%, and principal GS classification accuracies of 93.5% and 92.2%, for wheat and barley, respectively; see Table 10.
For both GS classification and principal GS classification tasks, the network architecture of experiment E4 trained on a mix of downward and 45-deg-angled looking images in each class, including pre-processing and data augmentation, achieved the best results. The confusion matrices for principal GS classification of wheat and barley unseen field-days data are presented in Figs. 4(a) and 4(b), respectively.

Discussion
Of the three methods considered, the ConvNet with transfer learning, including data pre-processing and a mix of downward and 45-deg-angled looking images for training, resulted in the best GS classification accuracy for both wheat and barley crops. As shown in Fig. 5, image pre-processing together with a mix of downward and 45-deg-angled looking images for training produced the best classification accuracy for each method. The evaluation of principal GS classification shows that, of the three methods, the ConvNet with transfer learning achieved the highest accuracy. The principal GS classification accuracies achieved for wheat and barley crops using this method were 93.5% and 92.2%, respectively. As shown in Fig. 6, image pre-processing together with a mix of downward and 45-deg-angled looking images as the input for training produced the best principal GS classification for each method.
The result of principal GS classification for 13-classes (26,000 images of unseen field-day) of wheat is presented in the confusion matrix, Fig. 4(a). The accuracy achieved for wheat principal GS classification was 93.5%.
Likewise, the result of principal GS classification for 13-classes (26,000 images of unseen field-day) of barley is presented in the confusion matrix in Fig. 4(b). The accuracy achieved for barley principal GS classification was 92.2%.

Conclusion
An evaluation of three different machine learning methods for crop GS classification and principal GS classification of wheat and barley is presented in this paper.
Of the three methods, the ConvNet with transfer learning, including data pre-processing and a mix of downward and 45-deg-angled looking images for training, resulted in the best GS classification accuracy for both wheat and barley crops. As shown in Fig. 5, image pre-processing together with a mix of downward and 45-deg-angled looking images for training produced the best classification accuracy for each method.

Fig. 6 Comparison of the principal GS classification accuracy for three different methods: (I) feature extraction and SVM classifier, (II) ConvNet with learning from scratch, and (III) ConvNet with transfer learning (experiment E4), employed on wheat and barley unseen field-day data. The accuracy achieved for each method is averaged over downward and 45-deg-angled test images and is reported for three different practices: (a) experiment with no data pre-processing, (b) experiment with data pre-processing, and (c) experiment with data pre-processing and a mix of downward and 45-deg-angled looking images.
Moreover, the evaluation of principal GS classification showed that, of the three methods, the ConvNet with transfer learning achieved the highest accuracy. The principal GS classification accuracies achieved for wheat and barley crops using this method were 93.5% and 92.2%, respectively. As shown in Fig. 6, image pre-processing together with a mix of downward and 45-deg-angled looking images as the input for training produced the best principal GS classification for each method.
The results of classification of downward and 45-deg-angled images show that the ConvNets yielded higher classification accuracy for angled looking images, see Fig. 7(a). Principal GS classification also demonstrated a similar trend of yielding better principal GS classification accuracy for angled looking images while employing ConvNet models, see Fig. 7(b).
Detailed evaluation of the principal GS classification results for various GSs of wheat shows that, except for downward images of GS32, unseen field-day data were classified in their principal GSs with acceptable accuracy. Likewise, evaluation of the principal GS classification results for various GSs of barley shows that downward images of GS32 have the worst principal GS classification results. GS32 is the stage at which canopy closure occurs and the leaf area index (LAI) reaches its saturation point.
The key novel contributions of this work include the development of a unique labeled dataset of proximal images of wheat and barley crop GSs, and GS classification of cereal crops using a ConvNet with transfer learning, a ConvNet with learning from scratch, and an SVM classifier. Moreover, this research is the first comparison of these methods for the problem of cereal GS classification.
In future work, the existing image dataset could be augmented by employing state-of-the-art image synthesis algorithms, such as texture synthesis, image super resolution, 30 and generative adversarial networks. 31 Although our existing dataset is large enough for training neural networks, it includes only two varieties of each crop. A larger dataset including more crop varieties could further improve the principal GS classification.
The comparison of results with different data types to determine the best performing model shows that including images with two different camera views boosts the performance of principal GS classification dramatically. Hence, a more robust trained network may be obtained using training images from several camera view angles.
An unsupervised learning algorithm, such as an unsupervised deep learning algorithm, 32 could be used for GS classification. The trained network for GS classification of wheat and barley crops may be applicable to GS classification of other cereal crops with similar visual GSs, such as rye, triticale, and oats.

Data Availability
The data that support the findings of this study are available from the "CONSUS Program and Origin Enterprises Plc" repository, but restrictions apply to the availability of these data, which were used under license number 16/SPP/3296 for the current study and so are not publicly available. Data are, however, available from the authors upon reasonable request and with permission from the "CONSUS Program and Origin Enterprises Plc" authorities. If you require any further information, please do not hesitate to contact the authors by email.