This study presents a real-world application of current unmanned aerial vehicle (UAV) capabilities. Sugar beet is an important industrial crop in many countries, and weeds in sugar beet fields are estimated to dramatically reduce both the quantity and quality of the crop. Because of the spectral similarity between weeds and sugar beet seedlings, visual identification of weeds in these fields is extremely difficult. In the present study, a lightweight, end-to-end trainable, guided-feature-based deep learning method, called DeepMultiFuse, is developed to improve weed segmentation performance in sugar beet fields using multispectral UAV images. The proposed architecture combines five basic concepts: guided features, a fusion module, dilated convolution, a modified inception module, and a gated encoder–decoder network that extracts object-level image representations for different scenes. The network was trained on a generated dataset comprising four multispectral orthomosaic reflectance maps acquired with the RedEdge-M sensor in Rheinbach and three acquired with the Sequoia sensor in Eschikon for mapping weed segmentation in the field. Experimental results demonstrate that the proposed network, taking advantage of the rich features produced by the fusion module, outperforms state-of-the-art networks.
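One of the building blocks named above, dilated convolution, enlarges a filter's receptive field without adding weights by spacing the kernel taps apart. A minimal one-dimensional sketch of the idea is shown below; the kernel, signal, and dilation rates are hypothetical illustrations, not values from the DeepMultiFuse architecture.

```python
# Illustrative sketch only: a valid-mode 1-D dilated convolution in pure
# Python. The edge-detecting kernel and dilation rates are hypothetical,
# chosen to show how dilation widens the receptive field for free.

def dilated_conv1d(signal, kernel, dilation=1):
    """Convolve `signal` with `kernel` taps spaced `dilation` samples apart."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # effective receptive field in samples
    return [
        sum(kernel[j] * signal[i + j * dilation] for j in range(k))
        for i in range(len(signal) - span + 1)
    ]

x = [1, 2, 3, 4, 5, 6, 7, 8]
w = [1, 0, -1]  # simple difference kernel
out_d1 = dilated_conv1d(x, w, dilation=1)  # receptive field of 3 samples
out_d2 = dilated_conv1d(x, w, dilation=2)  # same 3 weights, field of 5 samples
print(out_d1, out_d2)
```

With dilation 2 the same three weights compare samples four positions apart, which is why stacking dilated layers lets a segmentation network aggregate multi-scale context cheaply.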
Volumetric soil moisture (Mv) and surface roughness (S) are key parameters for numerous agricultural and hydrological applications. Although these two parameters can be effectively retrieved from synthetic aperture radar (SAR) data, the presence of vegetation can negatively affect the results. A method is proposed to accurately estimate Mv and S over vegetated agricultural areas. The method applies a machine learning inversion approach, together with SAR data, to invert a combination of the parameterized water cloud model (PWCM) and the calibrated integral equation model (CIEM). The soil backscatter component of the water cloud model (WCM) was generated by CIEM for use in the WCM parameterization and dataset simulation. Three machine learning algorithms, including support vector regression (SVR), multi-output SVR (MSVR), and an artificial neural network (ANN), were employed to model the relationship between the simulated dataset variables. A genetic algorithm was also applied to optimize the models' parameters. The inversion results demonstrated that the MSVR and ANN achieved the highest accuracy in estimating Mv and S, owing to their better structures. The SMAPVEX-16 in situ dataset, along with three Sentinel-1 images, was used to evaluate the accuracy of the WCM parameterization and of the proposed method for Mv and S estimation. The accuracy of the PWCM in the VV and VH polarizations of Sentinel-1 C-band data was reasonable for vegetation water content (VWC) < 2.5 kg/m² [root-mean-square error (RMSE) = 1.44 and 1.77 dB, respectively]. Additionally, the trained SVR, MSVR, and ANN produced similar results across different VWC values. In summary, the proposed method showed high potential in vegetated agricultural areas with VWC < 2.5 kg/m², for which the RMSEs in retrieving Mv and S were 4 to 7 vol. % and 0.35 to 0.46 cm, respectively, depending on the VWC value.
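The pairing of a genetic algorithm with a regression model, as described above, can be sketched in miniature: evolve a candidate parameter by minimizing validation RMSE. The toy one-parameter regressor, fitness function, and GA settings below are hypothetical stand-ins; the study tunes SVR/MSVR/ANN hyperparameters against the PWCM/CIEM-simulated dataset, not this toy problem.

```python
# Illustrative sketch only: a minimal genetic algorithm that tunes one
# scalar model parameter by minimizing RMSE, standing in for GA-based
# hyperparameter optimization of SVR/MSVR/ANN models.
import math
import random

def rmse(pred, true):
    """Root-mean-square error between two equal-length sequences."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true))

def ga_minimize(fitness, bounds, pop_size=20, generations=30, seed=0):
    """Evolve a scalar within `bounds` to minimize `fitness` (lower is better)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2                    # arithmetic crossover
            child += rng.gauss(0, 0.2)             # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return min(pop, key=fitness)

# Toy inversion task: recover the slope of y = 2.5 * x from samples.
xs = [i / 10 for i in range(50)]
ys = [2.5 * x for x in xs]
fitness = lambda slope: rmse([slope * x for x in xs], ys)
best = ga_minimize(fitness, bounds=(0.0, 10.0))
print(round(best, 2))
```

The elitist selection step guarantees the best candidate found so far is never lost, a common design choice when the fitness evaluation (here a cheap RMSE, in the study a full model fit) is the dominant cost.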