The performance of environmental monitoring depends heavily on the availability of continuous observation data, and there is a growing demand in the remote sensing community for satellite imagery with sufficient resolution in both the spatial and temporal dimensions, requirements that conflict and force difficult trade-offs. Deploying multiple constellations could be a solution if cost were no concern, so it remains attractive but very challenging to develop a method that improves spatial and temporal detail simultaneously. Research efforts have addressed the problem from several directions. One class of approaches enhances spatial resolution with techniques such as super-resolution and pan-sharpening, which produce good visual results but mostly fail to preserve spectral signatures and therefore lose analytical value. Another class fills temporal gaps by time interpolation, which adds no real information. In this paper we present a novel method to generate satellite images with higher spatial and temporal detail, which further enables the simulation of satellite image time series. Our method starts with a pair of high- and low-resolution data sets, and performs spatial registration by introducing an LDA model to map high- and low-resolution pixels to each other. Temporal change information is then captured by comparing low-resolution time series data, projected onto the high-resolution data plane, and assigned to each high-resolution pixel according to predefined temporal change patterns for each type of ground object, yielding a simulated high-resolution image. A preliminary experiment shows that our method can simulate high-resolution data with good accuracy. The contribution of our method is to enable timely monitoring of temporal changes through analysis of low-resolution image time series alone, so that the use of costly high-resolution data can be reduced as far as possible; this offers an efficient, cost-effective way to build an economically operational monitoring service for environment, agriculture, forest, land-use investigation, and other applications.
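The projection step described above can be sketched as follows. This is a minimal illustration under assumed interfaces: the function and parameter names are hypothetical, and a simple nearest-neighbour pixel mapping stands in for the LDA-based registration described in the abstract.

```python
import numpy as np

def simulate_high_res(high_t1, low_t1, low_t2, labels, gain):
    """Project low-resolution temporal change onto high-resolution pixels.

    high_t1        : (H, W) high-resolution image at time t1
    low_t1, low_t2 : (h, w) low-resolution images at t1 and t2
    labels         : (H, W) integer class map (ground-object type per pixel)
    gain           : dict class -> scale factor standing in for that class's
                     assumed temporal change pattern (illustrative only)
    """
    H, W = high_t1.shape
    h, w = low_t1.shape
    # Nearest-neighbour mapping of each high-res pixel to its low-res parent
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    # Low-resolution temporal change, upsampled to the high-resolution grid
    delta = (low_t2 - low_t1)[np.ix_(rows, cols)]
    # Per-pixel scaling according to the class of ground object
    scale = np.vectorize(gain.get)(labels)
    return high_t1 + scale * delta
```

In practice the registration and the per-class change patterns would come from the LDA mapping and the predefined pattern library; here they are reduced to a grid mapping and a scalar gain to keep the sketch self-contained.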
Self-calibration is a fundamental technique for estimating the relative posture of the cameras used for environment recognition in unmanned systems. We focused on the loss of recognition accuracy caused by platform vibration and conducted this research to achieve on-line self-calibration using feature-point registration and robust estimation of the fundamental matrix. Three key factors need to be improved. First, feature mismatching degrades the estimation accuracy of the relative posture. Second, conventional estimation methods cannot satisfy both estimation speed and calibration accuracy at the same time. Third, some intrinsic system noises also contribute greatly to the deviation of the estimation results. To improve calibration accuracy, estimation speed, and system robustness for practical implementation, we analyze and improve the algorithms of the stereo camera system to achieve on-line self-calibration. Based on epipolar geometry and 3D image parallax, two geometric constraints are proposed that confine the search for corresponding feature points to a small range, improving both matching accuracy and search speed. Two conventional estimation algorithms are then analyzed and evaluated for estimation accuracy and robustness. Third, a rigorous posture calculation method is proposed that accounts for the relative posture deviation of each separate part of the stereo camera system. Validation experiments were performed with the stereo camera mounted on a pan-tilt unit for accurate rotation control, and the evaluation shows that our proposed on-line self-calibration method is fast, highly accurate, and robust.
Thus, as the main contribution, we propose methods that solve on-line self-calibration quickly and accurately, and we envision their practical implementation on unmanned systems as well as other environment recognition systems.
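The epipolar constraint used to narrow the correspondence search can be illustrated with a small sketch. The names are hypothetical and this is only the standard point-to-epipolar-line test, not the paper's full two-constraint scheme; the usage in the test assumes a rectified stereo geometry for simplicity.

```python
import numpy as np

def epipolar_candidates(F, p_left, pts_right, band=2.0):
    """Keep right-image points lying within `band` pixels of the epipolar
    line induced by a left-image point p_left under fundamental matrix F.

    F         : (3, 3) fundamental matrix
    p_left    : (x, y) point in the left image
    pts_right : (N, 2) candidate points in the right image
    """
    # Epipolar line l = F @ x_left, in homogeneous form a*x + b*y + c = 0
    l = F @ np.array([p_left[0], p_left[1], 1.0])
    pts_h = np.column_stack([pts_right, np.ones(len(pts_right))])
    # Perpendicular distance of each candidate to the line
    d = np.abs(pts_h @ l) / np.hypot(l[0], l[1])
    return pts_right[d <= band]
```

Restricting matching to this narrow band is what shrinks the search range and rejects many mismatches before the fundamental matrix is re-estimated.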
It is essential but challenging in the current satellite imagery industry to retrieve spectral features in as much detail as possible. In this research, based on a physical model of the sensor response function, we present a method that iteratively recovers the reflective spectrum at the front end of the sensor and greatly enhances the spectral detail of satellite imagery. Our method can substantially increase the cost-performance ratio of current satellite multispectral imagery and reveals great potential for satellite imagery across various disciplines.
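A common way to invert a sensor response model iteratively is a Landweber-type iteration; the sketch below illustrates that general idea under assumed names and a non-negativity constraint, and should not be read as the authors' exact algorithm.

```python
import numpy as np

def recover_spectrum(R, m, n_iter=500, step=None):
    """Iteratively recover a reflective spectrum s from band measurements
    m = R @ s, where R is the sensor spectral response matrix.

    R : (bands, wavelengths) response matrix
    m : (bands,) measured band values

    Landweber iteration: s <- s + step * R.T @ (m - R @ s),
    which converges for step < 2 / ||R||_2^2.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(R, 2) ** 2
    s = np.zeros(R.shape[1])
    for _ in range(n_iter):
        s = s + step * R.T @ (m - R @ s)
        s = np.clip(s, 0.0, None)  # reflectance cannot be negative
    return s
```

Each iteration pushes the current spectrum estimate toward agreement with the measured band values through the sensor model, which is the sense in which the spectrum is recovered "at the front end of the sensor".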
Information from disaster damage assessment is vital for disaster mitigation, aid, and post-disaster redevelopment planning. Remotely sensed data, especially very high resolution imagery from aircraft and satellites, have long been recognized as an essential and objective source for disaster mapping. However, feature extraction from these data remains very challenging. In this paper, we present a method to extract building damage caused by an earthquake from two pairs of WorldView-2 high-resolution satellite images. Aiming at a practically operational system, we develop a novel framework that integrates semi-automatic building extraction with a machine learning mechanism to maximize the automation level of the system. We also present a rectilinear building model to handle a wide variety of rooftops. Through a case study of the Haiti earthquake, we demonstrate that our method is highly effective for detecting building damage from high-resolution satellite imagery.
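A rectilinear building model constrains rooftop outlines to axis-parallel edges. A minimal check of that property might look like the following; this is a hypothetical helper for illustration, not the paper's implementation.

```python
def is_rectilinear(vertices, tol=1e-6):
    """Return True if every edge of the closed polygon given by `vertices`
    (a sequence of (x, y) pairs) is parallel to a coordinate axis."""
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]  # wrap around to close the polygon
        # An axis-parallel edge changes only x or only y
        if abs(x1 - x0) > tol and abs(y1 - y0) > tol:
            return False
    return True
```

In a rectilinear model, extracted rooftop outlines that violate this constraint can be snapped to the nearest axis-aligned configuration before matching against the pre-event building inventory.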
Assessing the damage caused by natural disasters requires fast and reliable information. Satellite imagery, especially high-resolution imagery, is recognized as an important source for wide-area and immediate data acquisition, and disaster assessment using satellite imagery is needed worldwide. To assess damage caused by an earthquake, house changes or
landslides are detected by comparing images taken before and after the earthquake. We have developed a method that performs this comparison using vector data instead of raster data. The proposed method can detect house changes without depending on varying image acquisition conditions or the shapes of house shadows. We also developed a house-position detection method based on machine learning. It uses local features that include not only pixel-by-pixel differences but also the shape information of the object area. The result of the house-position detection method indicates the likelihood of a house existing in that area, and changes in this likelihood between multi-temporal images indicate the damaged house area.
We evaluated our method by testing it on two WorldView-2 panchromatic images taken before and after the 2010
earthquake in Haiti. The highly accurate results demonstrate the effectiveness of the proposed method.
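The likelihood-change idea can be sketched in a few lines; the thresholds and names below are illustrative assumptions, not values from the evaluation.

```python
import numpy as np

def damaged_houses(likelihood_pre, likelihood_post,
                   house_thresh=0.5, drop_thresh=0.3):
    """Flag areas where a house likely existed before the event and its
    detection likelihood dropped sharply afterwards.

    likelihood_pre, likelihood_post : arrays of per-area house likelihoods
    house_thresh : minimum pre-event likelihood to count as a house
    drop_thresh  : minimum likelihood decrease to count as damage
    """
    was_house = likelihood_pre >= house_thresh
    drop = likelihood_pre - likelihood_post
    return was_house & (drop >= drop_thresh)
```

A stable house keeps a high likelihood in both images, so only a genuine collapse or major change produces the combination of "house before" and "large drop after" that the mask captures.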
We have developed a house detection method based on machine learning for classifying houses and non-houses. To achieve precise classification, it is important to select features and to choose both a dimensionality reduction method and a learning method. We first applied Gabor wavelet filters to generate the feature vectors and then developed a new method using the AdaBoost algorithm to reduce the dimensionality of the feature space. If a linear classifier built from a single element of the feature vector is treated as a weak classifier in AdaBoost, the dimensions with higher contributions can be selected. We used a support vector machine (SVM) as the learning method. We evaluated our method on QuickBird panchromatic images. Despite significant variations in house shape, rooftop color, and background clutter, our algorithm achieved high accuracy in house detection.
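Using one-dimensional decision stumps as AdaBoost weak classifiers to pick high-contribution dimensions can be sketched as follows. This is a generic illustration with assumed names, not the paper's exact procedure: the indices of the features chosen by successive boosting rounds form the reduced feature subset.

```python
import numpy as np

def adaboost_select(X, y, n_rounds):
    """Select informative feature dimensions via AdaBoost with stumps.

    X : (n_samples, n_features) feature matrix
    y : (n_samples,) labels in {-1, +1}
    Returns the list of feature indices chosen by each boosting round.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # sample weights
    chosen = []
    for _ in range(n_rounds):
        best = None                  # (weighted error, feature, threshold, sign)
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        err, j, thr, sign = best
        err = max(err, 1e-12)        # avoid division by zero below
        alpha = 0.5 * np.log((1 - err) / err)
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w *= np.exp(-alpha * y * pred)   # up-weight misclassified samples
        w /= w.sum()
        chosen.append(j)
    return chosen
```

Because each stump uses exactly one coordinate of the feature vector, the rounds that reduce the weighted error most point directly at the dimensions with the highest contribution, which is the dimensionality reduction effect described above.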
Proc. SPIE 4898, Image Processing and Pattern Recognition in Remote Sensing
KEYWORDS: Image fusion, Data modeling, Satellites, 3D modeling, Satellite imaging, Earth observing sensors, 3D image processing, High resolution satellite images, Airborne laser technology, Data fusion
High-resolution satellite imagery has recently become widely available, enabling urban remote sensing not only to classify land use but also to map the details of the urban environment. However, due to high object density and scene complexity, it is normally extremely difficult to extract urban objects automatically from images alone. This paper describes our approach to building detection by fusing high-resolution IKONOS satellite images with airborne laser scanning data. Exploiting the high spatial resolution and rich spectral signature of IKONOS imagery together with the very accurate positioning information of the laser data, our data fusion method shows an efficient way to use the complementary characteristics of the two data sets for building detection. To reduce processing complexity, a top-down strategy extracts object features from coarse to fine, and multiple cues are derived and fused at different processing levels. The paper describes the developed framework and experimental results in detail, and discusses both the advantages and the deficiencies of the approach.
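A coarse first stage of such a fusion, combining laser-derived object height with a spectral vegetation cue, might be sketched as below; the function names and thresholds are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def building_mask(dsm, dtm, ndvi, min_height=2.5, max_ndvi=0.3):
    """Coarse building candidates from laser height plus a spectral cue.

    dsm  : digital surface model from airborne laser scanning
    dtm  : digital terrain model (bare ground)
    ndvi : normalized difference vegetation index from the imagery

    Objects taller than `min_height` metres above ground that are not
    vegetation (NDVI at or below `max_ndvi`) are kept as candidates.
    """
    height = dsm - dtm          # normalized object height above ground
    return (height >= min_height) & (ndvi <= max_ndvi)
```

The laser data separates elevated objects from the ground, while the spectral cue removes trees of similar height; finer cues (edges, shape, shadow) would then refine these candidates at the later processing levels of the top-down strategy.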
classify land-use, but also map the details in urban environment. However, due to high object density and scene complexity, normally it is extremely difficult to automatically extract urban objects solely based on images. This paper describes our approach to detect buildings by fusing high-resolution IKONOS satellite images and airborne laser scanning data. With the high spatial resolution, rich spectral signature of IKONOS images and the very accurate positioning information of laser data, our data fusion methods show an efficient way to exploit the complementary characteristics of these two kinds of dataset for the purpose of building detection. In order to simplify the complexity of processing, a top to down strategy is generally applied to extract features of objects from coarsely to finely, and multiple cues are also derived and fused at different processing levels. The paper describes the developed framework and experimental results in detail, and also discusses both the advantage and deficiencies of the approach.