COTDR sensors based on Rayleigh interference pattern demodulation can be used for dynamic or static measurement of strain and temperature distributions. Conventionally, one-dimensional cross-correlation methods are applied: the frequency shift, and thus the local strain or temperature, is obtained by locating the resulting correlation peak in the frequency domain. However, these methods offer poor accuracy when the spectral range available for cross-correlation is comparable to or smaller than the frequency shift induced by large strain or temperature changes. This substantially limits the dynamic range of the sensing system and degrades the quality of the demodulated strain or temperature. In this paper, a spectrally efficient Rayleigh interference pattern demodulation algorithm for COTDR sensors, based on a two-dimensional image cross-correlation technique, is proposed. To validate the proposal, simulations were built to generate measured and reference images of a 50 m sensing fiber for a COTDR sensing system with a frequency tuning range of 200 to 1000 MHz. The simulation results indicate that the distance width of the image is inversely proportional to the demodulation error rate, and that the maximum measurable frequency shift can reach 100% of the spectral range. The strain resolution is determined by the spline interpolation applied after cross-correlation, and the distance width of the image does not affect the spatial resolution of the system but does reduce the minimum measurable length of strained fiber. The proposed two-dimensional image cross-correlation algorithm has the potential to be applied to other distributed optical fiber sensors based on frequency demodulation, including φ-OTDR, OFDR, and BOTDA.
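The two-dimensional image cross-correlation idea above can be sketched in a few lines: cross-correlate the measured and reference distance-frequency images along the frequency axis only, then refine the peak location for sub-bin resolution. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation; the array sizes, function name, and the three-point parabolic refinement (standing in for the paper's spline interpolation) are all illustrative.

```python
import numpy as np

def freq_shift_2d(reference, measured, df):
    """Estimate the frequency shift between two distance-frequency images.

    reference, measured : 2D arrays of shape (distance bins, frequency bins)
    df : frequency step per bin (e.g. in MHz)

    Correlates the whole 2D image at each frequency lag, summing over the
    distance axis, so every distance bin contributes to one correlation curve.
    """
    n = reference.shape[1]
    # Zero-mean both images so the DC component does not bias the peak
    ref = reference - reference.mean()
    mea = measured - measured.mean()
    lags = np.arange(-(n - 1), n)
    corr = np.empty(lags.size)
    for i, k in enumerate(lags):
        if k >= 0:
            # measured[:, j] is compared against reference[:, j + k]
            corr[i] = np.sum(ref[:, k:] * mea[:, :n - k])
        else:
            corr[i] = np.sum(ref[:, :n + k] * mea[:, -k:])
    p = int(np.argmax(corr))
    # Three-point parabolic interpolation around the peak for a
    # sub-bin estimate (a simple stand-in for spline interpolation)
    delta = 0.0
    if 0 < p < corr.size - 1:
        denom = corr[p - 1] - 2.0 * corr[p] + corr[p + 1]
        if denom != 0.0:
            delta = 0.5 * (corr[p - 1] - corr[p + 1]) / denom
    return (lags[p] + delta) * df
```

With a synthetic speckle image and a copy shifted by a few frequency bins, the estimator recovers the imposed shift; summing over the distance axis is what makes a wider image suppress correlation noise, consistent with the abstract's claim that a larger distance width lowers the demodulation error rate.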
To obtain sharp images of space targets, high-accuracy restoration of the degraded images corrected by an adaptive optics (AO) system is necessary. Existing algorithms are mainly based on physical constraints on both the image and the point-spread function (PSF), which are usually estimated in an alternating, iterative manner and therefore take a long time to restore blurred images. We propose an end-to-end blind restoration method for ground-based space target images based on a conditional generative adversarial network, without estimating the PSF. The network consists of two parts, a generator network and a discriminator network, which learn the atmospheric degradation process and thereby produce restored images. To train the network, a simulated AO image dataset containing 4800 sharp–blur image pairs was constructed from 80 three-dimensional models of space targets combined with atmospheric-turbulence degradation. Experimental results demonstrate that the proposed method not only enhances the restoration accuracy but also improves the restoration efficiency for single-frame object images.
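The generator/discriminator split described above can be sketched as a toy PyTorch skeleton. This is a minimal illustration of the conditional-GAN structure only; the layer sizes, residual connection, and class names are assumptions for the sketch, since the abstract does not specify the actual architecture.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy generator mapping a blurred AO image to a restored image
    (hypothetical layer sizes; not the paper's architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, blurred):
        # Residual formulation: learn the correction to the blurred
        # input rather than regenerating the whole image from scratch
        return blurred + self.net(blurred)

class Discriminator(nn.Module):
    """Conditional discriminator: scores a (blurred, candidate) pair,
    so the critic sees the degraded input alongside the restoration."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.LazyLinear(1),  # one real/fake score per pair
        )

    def forward(self, blurred, candidate):
        # Conditioning: concatenate input and candidate along channels
        return self.net(torch.cat([blurred, candidate], dim=1))
```

Conditioning the discriminator on the blurred input is what makes the setup "conditional": the critic judges whether a restoration is plausible *for that particular degraded observation*, which is how the adversarial loss can stand in for an explicit PSF estimate.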