Thermal infrared (TIR) imagery is normally acquired at a coarser pixel resolution than that of the shortwave sensors on the same satellite platform. Often, TIR resolution is not suitable for monitoring the crop conditions of individual fields or the impacts of land cover changes that occur at significantly finer spatial scales. Consequently, thermal sharpening techniques have been developed to sharpen TIR imagery to shortwave band pixel resolutions. One of the most classic thermal sharpening techniques is TsHARP, which uses a relationship between land surface temperature and the normalized difference vegetation index (NDVI). However, several studies have shown that a single relationship between TIR and NDVI may only exist for a limited class of landscapes. Our working hypothesis is that it is possible to improve the spatial resolution of TIR imagery by considering the relationships between TIR data, vegetation, and several soil spectral indices, as well as spatial context information. In this work, the potential of superpixels (SP) combined with regression random forests (RRF) is exploited to enhance the spatial resolution of Landsat 8 TIR imagery (Bands 10 and 11) to its visible (VIS) spatial resolution. The SP allow the contextual information of the land cover to be considered, while the RRF integrates the relationships between five spectral indices and the TIR data into a single model. The results obtained with the SP-RRF approach show the potential of this methodology compared with the classical TsHARP method.
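As a point of comparison, the baseline TsHARP idea can be sketched in a few lines: fit a linear LST-NDVI relationship at the coarse thermal resolution, predict with it at the fine-resolution NDVI, and add back the coarse-scale residuals. The function below is a minimal illustration under simplified assumptions (a plain linear fit on NDVI rather than fractional vegetation cover, block-aligned pixels, and coarse NDVI equal to the block mean of the fine NDVI); it is not the SP-RRF method of this work.

```python
import numpy as np

def tsharp(lst_coarse, ndvi_coarse, ndvi_fine, scale):
    """Minimal TsHARP-style sharpening: fit LST ~ NDVI at coarse
    resolution, predict at the fine-resolution NDVI, then add back
    the coarse-scale residuals so coarse-pixel means are preserved."""
    # 1) Linear fit between coarse NDVI and coarse LST
    a, b = np.polyfit(ndvi_coarse.ravel(), lst_coarse.ravel(), 1)
    # 2) Residuals of the fit at the coarse scale
    residual = lst_coarse - (a * ndvi_coarse + b)
    # 3) Apply the model at the fine scale and redistribute residuals
    #    (each residual is replicated over its scale x scale block)
    residual_fine = np.kron(residual, np.ones((scale, scale)))
    return a * ndvi_fine + b + residual_fine
```

Adding the residual field back guarantees that the sharpened LST re-aggregates to the original coarse values when the coarse NDVI is the block mean of the fine NDVI.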
Recently, there has been a noteworthy increase in the use of images acquired by unmanned aerial vehicles (UAVs) in different remote sensing applications. Sensors on board UAVs have lower operational costs and complexity than other remote sensing platforms, quicker turnaround times, and higher spatial resolution. Concerning this last aspect, particular attention has to be paid to the limitations of classical pixel-based algorithms when they are applied to high-resolution images. The objective of this study is to investigate the capability of an OBIA methodology, developed for the automatic generation of a digital terrain model of an agricultural area, from a Digital Elevation Model (DEM) and multispectral images registered by a Parrot Sequoia multispectral sensor on board an eBee SQ agricultural drone. The proposed methodology uses a superpixel approach to obtain context and elevation information, which is used to merge superpixels and, at the same time, eliminate objects such as trees in order to generate a Digital Terrain Model (DTM) of the analyzed area. The results obtained show the potential of the approach, in terms of accuracy, when compared with a DTM generated by manually eliminating objects.
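For intuition about the object-removal step, a crude baseline for deriving terrain from a surface model is a moving-minimum filter, which flattens any above-ground object narrower than the filter window down to the surrounding ground level. This morphological stand-in is only a sketch and is not the superpixel merging procedure of the study; the window size is illustrative.

```python
import numpy as np
from scipy import ndimage

def crude_dtm(dsm, window=5):
    """Crude terrain estimate from a surface model: a moving-minimum
    filter suppresses above-ground objects (trees, crops) narrower
    than the window, leaving the local ground elevation."""
    return ndimage.minimum_filter(dsm, size=window, mode='nearest')
```

In practice the window must be larger than the widest object to remove, which is exactly the kind of scale decision the superpixel context information helps to automate.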
Efficient water management in agriculture requires an accurate estimation of evapotranspiration (ET). Several surface energy balance models are available that provide daily ET estimates (ETd), spatially and temporally distributed, for different crops over wide areas. These models need a thermal infrared spectral band (gathered by remote sensors) to estimate the sensible heat flux from the surface temperature. However, this spectral band is not available for most current operational remote sensors. Despite the good results provided by machine learning (ML) methods in many different areas, few works have applied these approaches to forecasting spatially and temporally distributed ETd when the aforementioned information is missing. Moreover, these methods do not exploit the land surface characteristics and the relationships among land covers, which produces estimation errors. In this work, we have developed and evaluated a methodology that provides spatially distributed estimates of ETd without thermal information by means of convolutional neural networks (CNNs).
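The core operation of such a network is a convolution over the stack of shortwave input bands, which lets the model exploit each pixel's spatial neighborhood rather than treating pixels independently. The toy forward pass below (plain NumPy, hypothetical kernel shapes, no trained weights) illustrates how a single convolutional layer turns multi-band reflectance patches into feature maps; the actual architecture of the study is not reproduced here.

```python
import numpy as np

def conv2d(x, kernels, bias):
    """Toy valid-mode convolutional layer with ReLU activation.
    x: input of shape (bands, H, W); kernels: (filters, bands, k, k);
    bias: (filters,). Returns (filters, H-k+1, W-k+1)."""
    f, c, k, _ = kernels.shape
    _, h, w = x.shape
    out = np.zeros((f, h - k + 1, w - k + 1))
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            patch = x[:, i:i + k, j:j + k]
            # contract each filter against the (bands, k, k) patch
            out[:, i, j] = np.tensordot(kernels, patch, axes=3) + bias
    return np.maximum(out, 0.0)  # ReLU
```

Stacking several such layers, with learned kernels, is what allows the network to map shortwave reflectance patterns to ETd without a thermal band.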
Drought is one of the most complex natural hazards because of its slow onset and long-term impact; it has the potential to negatively affect many people. There are several advantages to using remote sensing to monitor drought, especially in developing countries with limited historical meteorological records and a low weather station density. In the present study, we assessed agricultural drought in the croplands of the BioBio Region in Chile. The vegetation condition index (VCI) allows the temporal and spatial variations of vegetation conditions associated with stress caused by rainfall deficit to be identified. The VCI was derived at 250 m spatial resolution for the 2000-2015 period from the Moderate Resolution Imaging Spectroradiometer (MODIS) MOD13Q1 product. We evaluated the VCI for cropland areas using the MCD12Q1 version 5.1 land cover product and compared it to the in situ Standardized Precipitation Index (SPI) at six time scales (1-6 months) from 26 weather stations. Results showed that the 3-month SPI (SPI-3), calculated for the modified growing season (Nov-Apr) instead of the regular growing season (Sept-Apr), has the best Pearson correlation with VCI values, with an overall correlation of 0.63 and values between 0.40 and 0.78 for the administrative units. These results show a very short-term vegetation response to the rainfall deficit in September, which is reflected in the vegetation in November, and which also explains to a large degree the variation in vegetation stress. Over the last 16 years in the BioBio Region, the 2007/2008, 2008/2009, and 2014/2015 seasons were identified as the three most important drought events; this is reflected in both the overall regional and the administrative unit analyses. These results concur with drought emergencies declared by the regional government.
Future studies are needed to associate the remote sensing values observed at high resolution (250 m) with measured crop yields to identify individual crop responses in more detail.
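The VCI itself is a simple per-pixel rescaling of NDVI against its multi-year range, VCI = 100 (NDVI - NDVI_min) / (NDVI_max - NDVI_min), where the min and max are taken over the years for the same composite period. A minimal sketch (array shapes are illustrative):

```python
import numpy as np

def vci(ndvi_series):
    """Vegetation Condition Index. ndvi_series has shape (years, H, W),
    holding NDVI for the same composite period across years. The most
    recent year is scaled between each pixel's multi-year minimum and
    maximum: 0 = worst observed condition, 100 = best."""
    nmin = ndvi_series.min(axis=0)
    nmax = ndvi_series.max(axis=0)
    current = ndvi_series[-1]
    return 100.0 * (current - nmin) / (nmax - nmin)
```

Because the normalization is per pixel, the index removes the local ecosystem signal and isolates the weather-driven component of vegetation condition, which is what makes it comparable across land covers.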
Lately, different methods for the fusion of multispectral and panchromatic images based on the Wavelet transform have been proposed. Although most of them provide satisfactory results, one of them, the à trous algorithm, presents some advantages over the other Wavelet-based fusion methods. Its computation is very simple, involving only elementary algebraic operations such as products, differences, and convolutions; moreover, it yields better spatial and spectral quality than the others. On the other hand, it is well known that standard fusion methods do not allow control of the spatial and spectral quality of the fused image: if the spectral quality is very high, the spatial quality is low, and vice versa. In this sense, a new version of a Wavelet-based fusion method, computed through the à trous algorithm, is proposed here; it allows the trade-off between the spectral and spatial quality of the fused image to be customized through the evaluation of two quality indices: one spectral index, the ERGAS index, and one spatial index. For the latter, a new spatial index based on ERGAS concepts, translated to the spatial domain, has been defined. Moreover, in this work, several different architectures for computing the investigated fusion method have been evaluated, in order to determine the optimal degradation level of the source image required to perform the fusion process. The performance of the proposed fusion method has been compared with a fusion method based on the Hierarchical Wavelet transform.
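The spectral ERGAS index mentioned above has a standard closed form, ERGAS = 100 (h/l) sqrt((1/N) Σ_k RMSE_k² / μ_k²), where h and l are the pixel sizes of the high- and low-resolution images, N is the number of bands, and μ_k is the mean of reference band k. A direct sketch of that formula:

```python
import numpy as np

def ergas(fused, reference, h, l):
    """Spectral ERGAS quality index. fused and reference have shape
    (bands, H, W); h and l are the pixel sizes of the high- and
    low-resolution images (e.g. 15 and 30 for Landsat pan/MS).
    Lower values indicate better spectral quality."""
    n = reference.shape[0]
    acc = 0.0
    for k in range(n):
        rmse2 = np.mean((fused[k] - reference[k]) ** 2)   # squared RMSE
        acc += rmse2 / np.mean(reference[k]) ** 2         # relative to band mean
    return 100.0 * (h / l) * np.sqrt(acc / n)
```

An identical fused and reference image gives ERGAS = 0; normalizing each band's error by its mean keeps bright and dark bands comparable, and the h/l factor accounts for the resolution ratio of the fusion.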
The main purpose of this work is to develop a new technique for image fusion based on the optimization of the Linear Mixture Model (LMM) through the algorithm known as Simulated Annealing (SA). The final result given by the algorithm is a fused image (FI) distinguished by high spatial and spectral resolution. The proposed algorithm is evaluated on multispectral images registered by the Landsat 7 ETM+ sensor. In this study, the high spatial resolution image (HSRI) corresponds to the panchromatic image of this sensor, with a spatial resolution of 15 m, and the low spatial resolution image (LSRI) corresponds to the spectral bands TM1, TM2, TM3, TM4, TM5, and TM7, with a spatial resolution of 30 m. As a result, images with the spectral resolution of the 6 bands and a spatial resolution of 15 m have been obtained. The improvement in the quality of the fused images has allowed the identification of new, more homogeneous spectral classes.
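Simulated annealing itself is a generic stochastic minimizer: perturb the current solution, always accept improvements, and occasionally accept worse candidates with a probability that shrinks as a temperature parameter decays, which lets the search escape local minima early on. The sketch below shows the bare loop on a user-supplied cost function; the LMM reconstruction cost actually optimized in this work is not reproduced, and all parameter values are illustrative.

```python
import numpy as np

def simulated_annealing(cost, x0, step=0.5, t0=1.0, cooling=0.95,
                        iters=500, seed=0):
    """Bare-bones simulated annealing. `cost` maps a parameter vector
    to a scalar; a worse candidate is accepted with probability
    exp(-delta / T), and T decays geometrically each iteration."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = cost(x)
    best_x, best_f = x.copy(), fx
    t = t0
    for _ in range(iters):
        cand = x + rng.normal(0.0, step, size=x.shape)  # random perturbation
        fc = cost(cand)
        # Metropolis acceptance rule
        if fc < fx or rng.random() < np.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        t *= cooling  # geometric cooling schedule
    return best_x, best_f
```

In the fusion setting, the parameter vector would encode the sub-pixel spectral contributions of the LMM and the cost would measure how well the candidate fused image reproduces both source images; the loop above only illustrates the optimizer itself.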