Discriminating crop and other canopies by overlapping binary image layers

25 January 2013
For optimal management of agricultural fields by remote sensing, it is essential to discriminate the crop canopy from weeds and other objects. In a digital photograph, a rice canopy was discriminated from a variety of weed and tree canopies and other objects by overlapping binary image layers of red-green-blue and other color components, each indicating the pixels whose (intensity) values fell within the range of the mean ±(3×) standard deviation for the target canopy. By overlapping and merging the binary image layers, the target canopy specificity improved to 0.0015 from 0.027, the value for the yellow 1× standard deviation binary image layer, which was the best among all combinations of color components and mean ±(3×) standard deviation ranges. The most likely target rice canopy pixels were further identified by thresholding the merged image at different luminosity values, which also visually demonstrated the discriminatory power of the method.



Observation of changes in the leaf spectral profile (color) is important for crop management.1 Spatial unevenness in leaf color abnormalities is common in actual agricultural fields, and observation is facilitated by remote sensing of the whole field. Precise observation of crop canopies, as assemblies of leaves, is becoming more feasible as a result of recent developments in remote sensing technologies, such as balloons carrying remotely controlled digital cameras in addition to the most advanced remote sensing sensors. A prerequisite for the precise observation of crop canopies is the ability to discriminate them from weeds and other plant canopies.2 Zheng et al.3 succeeded in distinguishing crop leaves from bare soil in digital photographs of agricultural fields, based on the color differences between greenish plant leaves and bare soil. Discrimination between crop canopies and other plant species was expected to be more difficult because the color differences may be less pronounced than those between crop leaves and bare soil or other non-greenish objects. However, the ranges of (intensity) values of some color components may be crop-specific when the components of red-green-blue (RGB) and other color models are used.4 Overlapping binary image layers, in which pixels with (intensity) values within crop-specific ranges are shown as black, was therefore expected to separate more crop-likely pixels from less crop-likely ones. Hence, this study examined the binarization and overlapping method to distinguish a rice canopy from canopies of weed and tree species and other objects in a digital photograph acquired in Shiga prefecture, Japan.
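The core binarization idea above can be sketched in a few lines. This is a minimal illustration, not the authors' Photoshop workflow; the function name `binary_layer` and the toy arrays are assumptions for demonstration only.

```python
import numpy as np

def binary_layer(channel, frame, k=1.0):
    """Mark as black (True) the pixels whose (intensity) value lies within
    mean +/- k*SD of the values observed inside the reference frame."""
    vals = channel[frame]                      # intensities inside the crop frame
    lo = vals.mean() - k * vals.std()
    hi = vals.mean() + k * vals.std()
    return (channel >= lo) & (channel <= hi)

# Toy 4x4 channel: the framed (crop) region reads 70, the rest 150.
channel = np.full((4, 4), 150.0)
channel[:2, :2] = 70.0
frame = np.zeros((4, 4), dtype=bool)
frame[:2, :2] = True
layer = binary_layer(channel, frame, k=1.0)    # True only where channel == 70
```

A pixel falls inside the layer exactly when its value lies within the crop-specific range, so a well-chosen color component yields few black pixels outside the crop area.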





Acquisition of the Digital Photograph

A photograph was acquired in Koka city, Shiga prefecture, Japan (34°55′N, 136°09′E), at 14:41 on 16 June 2012 using a digital camera (Cyber-shot DSC-T700, Sony, Tokyo). The photographed scene included paddy fields, various weed and tree canopies, and other objects. An image of 2048×1536 pixels was acquired as a JPEG file (Fig. 1).

Fig. 1

The photographed site in Shiga prefecture, Japan (a). The area surrounded by the red rhombus frame (b) was used for preparation of the mean ±(3×) SD binary image layers. The area outside of the rhombus frame was also used to find the most target rice canopy-specific color components.



Processing the Digital Image

Adobe Photoshop 7.0 was used to extract and combine information on the pixels in Fig. 1. After opening the photograph, a rhombus-shaped frame was drawn on a single paddy field [Fig. 1(b)]. Grayscale layers showing the intensity values of red (R), green (G), blue (B), cyan (C), magenta (M), yellow (Y), key black (K), and lightness (L*), and the values of a* and b*, were prepared.5 The RGB, CMYK, and L*a*b* color models each describe color along distinct axes.6 For each grayscale layer, the (intensity) values of the color component were read for the pixels inside the rhombus frame, and the pixels with (intensity) values within the mean ±1× or 3× standard deviation (SD) range were identified to obtain a binary image layer: pixels with values in the range were colored black, while the others were white. The percentages of black pixels inside and outside the rhombus frame were then determined and used to find the color components that most specifically indicate the rice canopy. The binary image layers of the chosen target rice canopy-specific color components were merged, yielding an image of white and gray pixels in which the gray intensity of each pixel indicated how likely that pixel was to represent the rice canopy. More and less target rice canopy-likely pixels were then shown by setting a threshold on the gray intensity.
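The per-layer bookkeeping described above can be sketched as follows. This is a hedged approximation: `yellow_channel` uses a naive device-independent CMYK formula rather than Photoshop 7.0's color-managed conversion, and the arrays are toy data.

```python
import numpy as np

def yellow_channel(rgb):
    """Naive RGB (0-1) -> CMYK yellow component; Photoshop's color-managed
    conversion differs, so treat this as an approximation."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    k = 1.0 - np.maximum.reduce([r, g, b])     # key black
    denom = np.where(k < 1.0, 1.0 - k, 1.0)    # avoid dividing by zero on black
    return (1.0 - b - k) / denom

def black_pixel_percentages(layer, frame):
    """Percent of black (True) pixels inside vs. outside the rhombus frame."""
    return 100.0 * layer[frame].mean(), 100.0 * layer[~frame].mean()

rgb = np.array([[[1.0, 1.0, 0.0], [1.0, 1.0, 1.0]]])  # a yellow and a white pixel
y = yellow_channel(rgb)                               # yellow pixel -> 1, white -> 0
layer = y > 0.5                                       # crude binary layer
frame = np.array([[True, False]])                     # "rhombus" covers pixel 0
inside, outside = black_pixel_percentages(layer, frame)
```

The pair of percentages returned by `black_pixel_percentages` corresponds to the inside/outside statistics reported for each layer in Fig. 2.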


Results and Discussion

Figure 2 shows the binary image layers for the color components. Each binary image layer indicates the pixels with (intensity) values between the mean ±(3×) SD for the rhombus frame in Fig. 1. Among the color components, Y appeared to be the most advantageous for specifically finding the pixels that represent the rice canopy. In the Y 1×SD (hereafter Y 1SD) binary image layer, most of the pixels outside the rhombus frame were recognized as different from those of the target rice canopy: a majority of the rice canopy pixels had Y values between 67 and 77, and outside the rhombus frame the percentage of pixels with (intensity) values within the mean ±SD range was the smallest of all layers (2%). The 3SD binary image layers tended to have more black pixels both inside and outside the rhombus frame.

Fig. 2

Binary image layers representing the pixels with (intensity) values between the mean ±(3×) SD in the rhombus frame [Fig. 1(b)] for each color component. The percentages indicate the proportions of black-colored pixels inside and outside the rhombus frame, respectively. The range indicates the mean ±(3×) SD.


Figure 3 shows the target rice canopy specificity of the binary image layers in Fig. 2. According to the relationship between the percentages of black-colored pixels inside and outside the rhombus frame in Fig. 2, the Y 1SD layer was the most target canopy-specific. By setting a target canopy-specificity threshold (Fig. 3), 13 binary image layers were selected, then overlapped and merged to find the most target rice canopy-likely pixels (Fig. 4). The most target canopy-likely pixels were distributed within the area of the rhombus frame, while the second-most target-likely pixels were mainly distributed over a few other paddy fields. In the merged image (Fig. 4), the percentage of the darkest pixels (luminosity=0) inside the rhombus frame was 27%, while that outside was 0.04%. The outside/inside ratio thus dropped to 0.0015 (0.04/27) from 0.027 (2/72, Fig. 2), the best target canopy-specific value among the single binary image layers (Figs. 2, 3). The target rice canopy specificity defined in Fig. 3 therefore improved more than 10-fold by overlapping the binary image layers chosen in Fig. 3.
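The overlap-and-merge step and the specificity ratio can be sketched as below. This assumes equal weighting of the selected layers (the exact Photoshop merge mode is not specified in the text), and the arrays are toy data.

```python
import numpy as np

def merge_layers(layers):
    """Overlap binary layers: a pixel's luminosity falls with the number of
    layers marking it black, so pixels matched by every layer become 0."""
    hits = np.stack(layers).sum(axis=0)
    return np.round(255.0 * (1.0 - hits / len(layers))).astype(np.uint8)

def specificity(merged, frame, luminosity=0):
    """Fraction-outside / fraction-inside for pixels at or below the
    luminosity cut-off; lower means more target-specific (cf. 0.04/27)."""
    dark = merged <= luminosity
    return dark[~frame].mean() / dark[frame].mean()

frame = np.zeros((4, 4), dtype=bool)
frame[:2, :2] = True
layer_a = frame.copy()
layer_a[3, 3] = True                   # one stray black pixel outside the frame
layer_b = frame.copy()
merged = merge_layers([layer_a, layer_b])
ratio = specificity(merged, frame)     # only the frame is fully dark here
```

Pixels matched by both layers go to luminosity 0, the stray pixel matched by one layer lands at mid-gray, and the ratio of dark pixels outside to inside quantifies the specificity, as in the 0.0015 figure above.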

Fig. 3

Target rice canopy specificity of the mean ±(3×) SD ranges for the color components. The broken line was used as the threshold to choose the most target canopy-specific color components.


Fig. 4

The grayscale image generated by merging the selected rice canopy-specific binary image layers (top left) according to the criterion adopted in Fig. 3. The most target canopy-likely pixels were revealed by identifying the pixels with luminosity values smaller (i.e., darker) than 200, 150, and 100.
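The luminosity thresholding used for Fig. 4 amounts to simple cut-offs on the merged grayscale image. The values below are toy data standing in for the real merged image.

```python
import numpy as np

# Progressively tighter luminosity cut-offs keep only the darker, more
# target canopy-likely pixels (cf. Fig. 4).
merged = np.array([[0, 90], [160, 230]], dtype=np.uint8)
masks = {t: merged < t for t in (200, 150, 100)}
counts = {t: int(m.sum()) for t, m in masks.items()}
```

Tightening the threshold from 200 to 100 discards the lighter pixels first, so the surviving set converges on the darkest, most target-likely pixels.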


The visually perceivable differences in the distribution patterns of the black-colored pixels among the binary image layers of the color components (Fig. 2) are related to the absence of correlation among the (intensity) axes of the color components R, G, B, C, M, Y, a*, and b*.4,7 In this study, this absence of correlation clearly favored the discrimination of the rice canopy. Conversely, the similarity among the binary image layers of G, K, and L*, whose intensity axes are significantly correlated,8 further evidences the advantage of uncorrelated components. A possible future development of the current method is to incorporate infrared reflectance and hyperspectral imaging in the preparation of binary image layers. This would increase the number of binary image layers available to the method and thus maximize the chances of discriminating the target crop canopy from weeds and other objects that are common in actual agricultural fields.



Conclusion

The current method successfully discriminated the rice canopy from a large variety of greenish weed and tree canopies that produced a great diversity of color in the photographed scene. Overlapping and merging the binary image layers improved the target canopy specificity markedly, to 0.0015 from 0.027 for the Y 1SD binary image layer, the best among all single binary image layers; this improvement was supported by the absence of correlation among the (intensity) axes of the color components R, G, B, C, M, Y, a*, and b*. The method is thus expected to aid agricultural field management, as photographs of crop canopies can be used to confirm normal growth and/or detect abnormalities in leaf color. The current method is therefore worth considering, developing, and improving.


References

1. R. Doi, "Quantification of leaf greenness and leaf spectral profile in plant diagnosis using an optical scanner," Cienc. Agrotec. 36(3), 309–317 (2012). http://dx.doi.org/10.1590/S1413-70542012000300006

2. K. R. Thorp and L. F. Tian, "A review on remote sensing of weeds in agriculture," Precis. Agric. 5(5), 477–508 (2004). http://dx.doi.org/10.1007/s11119-004-5321-1

3. L. Zheng, D. Shi, and J. Zhang, "Segmentation of green vegetation of crop canopy images based on mean shift and Fisher linear discriminant," Pattern Recogn. Lett. 31(9), 920–925 (2010). http://dx.doi.org/10.1016/j.patrec.2010.01.016

4. R. Doi, "Simple luminosity normalization of greenness, yellowness and redness/greenness for comparison of leaf spectral profiles in multi-temporally acquired remote sensing images," J. Biosci. 37(4), 723–730 (2012). http://dx.doi.org/10.1007/s12038-012-9241-3

5. Adobe Systems, Adobe Photoshop 7.0 Classroom in a Book, Adobe Press, San Jose, CA (2002).

6. R. Doi et al., "Semiquantitative color profiling of soils over a land degradation gradient in Sakaerat, Thailand," Environ. Monit. Assess. 170(1–4), 301–309 (2010). http://dx.doi.org/10.1007/s10661-009-1233-x

7. D. J. Hayes and S. A. Sader, "Comparison of change-detection techniques for monitoring tropical forest clearing and vegetation regrowth in a time series," Photogramm. Eng. Rem. Sens. 67(9), 1067–1075 (2001).

8. K. Kirk et al., "Estimation of leaf area index in cereal crops using red-green images," Biosyst. Eng. 104(3), 308–317 (2009). http://dx.doi.org/10.1016/j.biosystemseng.2009.07.001

© 2013 Society of Photo-Optical Instrumentation Engineers (SPIE)
Ryoichi Doi, "Discriminating crop and other canopies by overlapping binary image layers," Optical Engineering 52(2), 020502 (25 January 2013). https://doi.org/10.1117/1.OE.52.2.020502
