Thermal infrared (TIR) imagery is normally acquired at a coarser pixel resolution than that of shortwave sensors on the same satellite platform. Often, TIR resolution is not suitable for monitoring crop conditions in individual fields or the impacts of land cover changes that occur at significantly finer spatial scales. Consequently, thermal sharpening techniques have been developed to sharpen TIR imagery to shortwave band pixel resolutions. One of the most classic thermal sharpening techniques is TsHARP, which exploits a relationship between land surface temperature and the normalized difference vegetation index (NDVI). However, several studies have shown that a single relationship between TIR and NDVI may only exist for a limited class of landscapes. Our working hypothesis is that it is possible to improve the spatial resolution of TIR imagery by considering the relationship between TIR and several vegetation and soil spectral indices, as well as spatial context information. In this work, the potential of superpixels (SP) combined with regression random forests (RRF) is exploited to increase the spatial resolution of Landsat 8 TIR imagery (bands 10 and 11) to that of its visible (VIS) bands. SP allow the contextual information of the land cover to be considered, while RRF integrate the relationships between five spectral indices and the TIR data into a single model. The results obtained by the SP-RRF approach show the potential of this methodology compared with the classical TsHARP method.
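The core idea of TsHARP-style sharpening can be sketched in a few lines: fit a relationship between land surface temperature and NDVI at the coarse TIR resolution, apply it on the fine-resolution NDVI grid, and add back the coarse-scale residuals so the result stays consistent with the observed TIR image. The sketch below assumes a simple global linear fit and an integer resolution ratio; the SP-RRF method described above would replace the linear model with a random forest over several indices and superpixel context.

```python
import numpy as np

def tsharp_sketch(lst_coarse, ndvi_coarse, ndvi_fine, scale):
    """Sharpen coarse LST to the NDVI grid via a linear LST-NDVI fit.

    lst_coarse, ndvi_coarse : 2D arrays at coarse resolution.
    ndvi_fine               : 2D array whose shape is the coarse shape * scale.
    """
    # 1. Fit a global linear relationship LST = a*NDVI + b at coarse scale.
    a, b = np.polyfit(ndvi_coarse.ravel(), lst_coarse.ravel(), 1)
    # 2. Coarse-scale residuals capture whatever NDVI alone cannot explain.
    residual = lst_coarse - (a * ndvi_coarse + b)
    # 3. Apply the model at fine scale and replicate the residual, so that
    #    re-aggregating the sharpened image recovers the coarse observation.
    residual_fine = np.kron(residual, np.ones((scale, scale)))
    return a * ndvi_fine + b + residual_fine
```

A useful sanity check is that averaging the sharpened image back to coarse resolution reproduces the original coarse LST.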
Recently, there has been a noteworthy increase in the use of images acquired by unmanned aerial vehicles (UAVs) in different remote sensing applications. Sensors on board UAVs have lower operational costs and complexity than other remote sensing platforms, quicker turnaround times, and higher spatial resolution. Concerning this last aspect, particular attention has to be paid to the limitations of classical pixel-based algorithms when they are applied to high-resolution images. The objective of this study is to investigate the capability of an OBIA methodology developed for the automatic generation of a digital terrain model of an agricultural area from a Digital Elevation Model (DEM) and multispectral images acquired by a Parrot Sequoia multispectral sensor on board an eBee SQ agricultural drone. The proposed methodology uses a superpixel approach to obtain context and elevation information, which is used for merging superpixels while eliminating objects such as trees in order to generate a Digital Terrain Model (DTM) of the analyzed area. The results obtained show the potential of the approach, in terms of accuracy, when compared with a DTM generated by manually eliminating objects.
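A toy sketch of the object-elimination step may help: given a DEM and a precomputed segment label image (which, in the actual workflow, would come from a superpixel algorithm such as SLIC run on the multispectral bands), segments whose mean elevation stands well above the terrain level are treated as above-ground objects (e.g. trees) and flattened. The threshold rule and the `dtm_from_segments` helper below are illustrative assumptions, not the paper's exact criterion.

```python
import numpy as np

def dtm_from_segments(dem, labels):
    """Toy DTM generation: flatten segments that rise far above the
    median terrain level. `labels` assigns a segment id to each pixel."""
    seg_ids = np.unique(labels)
    means = np.array([dem[labels == s].mean() for s in seg_ids])
    terrain_level = np.median(means)
    # Robust spread of segment elevations (median absolute deviation).
    spread = np.median(np.abs(means - terrain_level)) + 1e-9
    dtm = dem.astype(float).copy()
    for s, m in zip(seg_ids, means):
        # Segments far above the median terrain are treated as objects
        # (trees, buildings) and replaced by the terrain level.
        if m - terrain_level > 3.0 * spread:
            dtm[labels == s] = terrain_level
    return dtm
```

In the real methodology, flagged segments would instead be filled by interpolating from neighbouring ground segments rather than by a single global level.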
Efficient water management in agriculture requires an accurate estimation of evapotranspiration (ET). Several surface energy balance models are available that provide daily ET estimates (ET<sub>d</sub>), spatially and temporally distributed, for different crops over wide areas. These models need a thermal infrared spectral band (gathered by remote sensors) to estimate the sensible heat flux from the surface temperature. However, this spectral band is not available for most current operational remote sensors. Despite the good results provided by machine learning (ML) methods in many different areas, few works have applied these approaches to estimating spatially and temporally distributed ET<sub>d</sub> when the aforementioned information is missing. Moreover, these methods do not exploit the land surface characteristics and the relationships among land covers, which produces estimation errors. In this work, we have developed and evaluated a methodology that provides spatially distributed estimates of ET<sub>d</sub> without thermal information by means of Convolutional Neural Networks.
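The convolutional building block that makes this spatially aware is easy to illustrate: a kernel slides over a stack of spectral bands, so each output pixel is computed from a neighbourhood rather than a single pixel, letting the network exploit land cover context. The numpy implementation below is a minimal single-layer sketch with valid padding and ReLU; a real ET<sub>d</sub> model would stack several such layers in a deep-learning framework and train them against energy-balance ET<sub>d</sub> targets.

```python
import numpy as np

def conv2d_relu(bands, kernels, bias):
    """One convolutional layer over a band stack.

    bands   : (C, H, W) input spectral bands.
    kernels : (F, C, k, k) filters; each filter sees all C bands at once.
    bias    : (F,) per-filter bias.
    Returns (F, H-k+1, W-k+1) feature maps (valid padding, ReLU activation).
    """
    F, C, k, _ = kernels.shape
    _, H, W = bands.shape
    out = np.zeros((F, H - k + 1, W - k + 1))
    for f in range(F):
        for i in range(H - k + 1):
            for j in range(W - k + 1):
                # Each output pixel aggregates a k x k neighbourhood
                # across all input bands: this is the spatial context
                # that per-pixel ML regressors cannot use.
                out[f, i, j] = np.sum(bands[:, i:i + k, j:j + k] * kernels[f]) + bias[f]
    return np.maximum(out, 0.0)
```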
Image fusion is the process of combining information from two or more images into a single composite image that is
more informative for visual perception or additional processing. Pan-sharpening algorithms work either in the spatial or
in the transform domain and the most popular and effective methods include arithmetic combinations (Brovey
transform), the intensity-hue-saturation transform (IHS), principal component analysis (PCA) and different multiresolution
analysis-based methods, typically wavelet transforms. In recent years, a number of image fusion quality
assessment metrics have been proposed. Automatic quality assessment is necessary to evaluate the possible benefits of
fusion, to determine an optimal setting of parameters, as well as to compare results obtained with different algorithms to
check the improvement of spatial resolution while preserving the spectral content of the data. This work addresses the
challenging topic of the quality evaluation of pan-sharpening methods. In particular, a database with a synthetic image
and real GeoEye satellite data was created and several pan-sharpening methods were implemented and tested. Some
interesting results about the color and the spatial distortions of each method were presented, and it was demonstrated that
some color bands are more affected than others depending on the fusion technique. After the evaluation of these fusion
algorithms, we can conclude that, in general, the à trous wavelet-based methods achieve the best spectral performance
while the IHS-based techniques attain the best spatial accuracy.
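Among the arithmetic-combination methods named above, the Brovey transform is the simplest to state: each multispectral band (already resampled to the panchromatic grid) is rescaled by the ratio of the panchromatic band to the bands' intensity (their mean). The sketch below assumes a plain band mean as the intensity; operational variants often use weighted band combinations.

```python
import numpy as np

def brovey(ms, pan, eps=1e-12):
    """Brovey-transform pan-sharpening.

    ms  : (C, H, W) multispectral bands resampled to the pan grid.
    pan : (H, W) panchromatic band.
    Each band is multiplied by pan / intensity, where the intensity
    is the per-pixel mean of the multispectral bands.
    """
    intensity = ms.mean(axis=0)
    return ms * (pan / (intensity + eps))
```

By construction, the per-pixel mean of the sharpened bands equals the panchromatic band, which injects its spatial detail; the spectral distortion this ratio rescaling introduces is exactly what the quality metrics discussed above are meant to quantify.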