Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 820001 (2011) https://doi.org/10.1117/12.921662
This PDF file contains the front matter associated with SPIE Proceedings Volume 8200, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 820002 (2011) https://doi.org/10.1117/12.902981
An adaptive optics scanning laser ophthalmoscope (AO-SLO) using a liquid-crystal spatial light modulator was developed. For routine clinical applications, the long-term stability of the AO system is very important because unavoidable eye movement may degrade the instrument's performance. We studied the long-term performance of the aberration correction with healthy human eyes. Retinal image acquisition and AO data collection were performed simultaneously for periods of several minutes. We confirmed that, for more than 90% of these periods, the root-mean-square error of the residual wavefront was below the Maréchal criterion. Drifts and microsaccades of fixational eye movement were examined using the retinal images and residual aberrations. The results showed a significant correlation between the transverse shift of the retinal image and the low-order residual wavefront aberration during drifts.
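As an illustrative aside (not part of the paper), the Maréchal criterion referenced above is commonly stated as a residual RMS wavefront error of at most λ/14, equivalent to a Strehl ratio of roughly 0.8. A minimal Python sketch of such a check, with hypothetical numbers:

```python
import numpy as np

def meets_marechal(rms_error_um, wavelength_um):
    """True if the RMS residual wavefront error satisfies the Marechal
    criterion (RMS <= lambda / 14)."""
    return rms_error_um <= wavelength_um / 14.0

def strehl_estimate(rms_error_um, wavelength_um):
    """Extended Marechal approximation of the Strehl ratio:
    S ~ exp(-(2*pi*sigma/lambda)^2)."""
    sigma_rad = 2.0 * np.pi * rms_error_um / wavelength_um
    return np.exp(-sigma_rad**2)

# Hypothetical example: 0.05 um residual RMS at an 840 nm imaging wavelength.
print(meets_marechal(0.05, 0.84), strehl_estimate(0.05, 0.84))
```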
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 820003 (2011) https://doi.org/10.1117/12.903668
We demonstrate tomographic imaging of the intracellular activity of living cells with a low-coherence quantitative phase microscope. Intracellular organelles, such as the nucleus, nucleolus, and mitochondria, move around inside living cells, driven by cellular physiological activity. To visualize this intracellular motility in a label-free manner, we developed a reflection-type quantitative phase microscope that employs a phase-shifting interferometric technique with a low-coherence light source. Phase-shifting interferometry enables quantitative measurement of the intensity and phase of the optical field, and low-coherence interferometry makes it possible to selectively probe a specific sectioning plane within the cell volume. The results quantitatively revealed the depth-resolved fluctuations of intracellular surfaces, so the plasma membrane and the membranes of intracellular organelles were measured independently. The transverse and axial spatial resolutions were 0.56 μm and 0.93 μm, respectively, and the mechanical sensitivity of the phase measurement was 1.2 nm. The mean-squared displacement was used as a statistical tool to analyze the temporal fluctuation of the intracellular organelles. To the best of our knowledge, our system is the first to visualize depth-resolved intracellular organelle motion at sub-micrometer resolution without contrast agents.
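For readers unfamiliar with the statistic mentioned above: the mean-squared displacement (MSD) of a fluctuation trace z(t) is MSD(τ) = ⟨(z(t+τ) − z(t))²⟩. A generic sketch (not the authors' code; the trace is hypothetical):

```python
import numpy as np

def mean_squared_displacement(z, max_lag):
    """MSD of a 1-D trace z[t] (e.g. membrane height in nm) for lags 1..max_lag."""
    z = np.asarray(z, dtype=float)
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        diff = z[lag:] - z[:-lag]
        msd[lag - 1] = np.mean(diff**2)
    return msd

# Hypothetical trace: 1000 samples of membrane-height fluctuation (nm).
rng = np.random.default_rng(0)
trace = np.cumsum(rng.normal(0.0, 1.2, 1000))   # random-walk-like motion
print(mean_squared_displacement(trace, 10))
```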
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 820004 (2011) https://doi.org/10.1117/12.904756
This paper describes an experiment-based model for simulating a 3D infrared target against a sea background. The geometric model of the dynamic sea waves is based on gravity-wave theory, while the sky model uses SkyDome technology, which treats the sky as a dome covering the scene. To create the infrared images of the sea waves and sky, the radiance at the detector is calculated. To obtain the target radiance, a spectrometer is set up to measure the radiation emitted by the targets of interest; the sky radiance is also measured to provide reference data. Furthermore, the spectrometer is used to measure the atmospheric transmittance, which is compared with the values calculated by MODTRAN. The optical system is simulated based on OTF theory, and the detector noise is modeled as equivalent Gaussian white noise. Finally, the simulated images are compared with real images, showing that this experiment-based model produces highly realistic infrared images.
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 820005 (2011) https://doi.org/10.1117/12.904669
This paper introduces a novel method to adaptively diminish the effects of disturbance in traffic video shot from an airborne camera. Based on the motion vector of the tracked vehicle, a search area in the next frame is predicted, which serves as the area of interest (AOI) for the mean-shift method. Background color estimation is performed from the previous tracking result and is used to judge whether possible disturbance exists in the predicted search area of the next frame. Without disturbance, the difference image between vehicle and background is used as the input feature to the mean-shift algorithm; with disturbance, the color histogram of the predicted area is calculated to find the most and second-most disturbing colors. Experiments show that this method diminishes or eliminates the effects of same-color (homochromous) disturbance and leads to more precise and more robust tracking.
Guangting Liu, Dayuan Yan, Xiaoming Hu, Hao Zhang, Lei Zhu
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 820006 (2011) https://doi.org/10.1117/12.904656
Using thermal infrared imaging to obtain the temperature distribution of train wheel sets is a new and promising temperature-measurement method. However, the accuracy and precision of the method suffer from vignetting. In this paper, the vignetting coefficient function is derived, and an imaging-system calibration is introduced to compensate for the measurement error caused by the misalignment of the optical center and the image center. The vignetting correction is implemented on an FPGA-based hardware platform. Experimental results demonstrate that the vignetting correction enhances the accuracy of the measurement system.
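The paper derives its own vignetting coefficient function; as a generic illustration only, a common simple model is a cos⁴(θ) fall-off about an optical center that may be offset from the image center. A hedged Python sketch with hypothetical parameters:

```python
import numpy as np

def cos4_vignetting_gain(shape, optic_center, focal_px):
    """Per-pixel correction gain under an assumed cos^4(theta) vignetting model,
    with the optical center possibly offset from the image center."""
    h, w = shape
    cy, cx = optic_center
    y, x = np.mgrid[0:h, 0:w]
    r2 = (x - cx) ** 2 + (y - cy) ** 2
    cos_theta = focal_px / np.sqrt(focal_px**2 + r2)
    return 1.0 / cos_theta**4          # multiply the raw frame by this gain

# Hypothetical 288x384 IR frame, optical center shifted by (3, -5) pixels.
gain = cos4_vignetting_gain((288, 384), (144 + 3, 192 - 5), focal_px=400.0)
# corrected = raw_frame * gain
```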
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 820007 (2011) https://doi.org/10.1117/12.904671
Point target detection is an important issue in infrared search and track (IRST) systems. To detect weak point targets in infrared images, this paper introduces a new method based on a two-step gradient algorithm that eliminates the impact of a complicated background and extracts candidate targets. First, the characteristics of point targets are analyzed for 288×4 linear infrared FPAs. Second, point targets in complicated infrared backgrounds are classified into four groups; a detection function is derived for each group, and a comprehensive detection function is then derived to extract the point targets. Finally, a practical experiment is carried out on a 288×4 IRST system. The results show that the two-step gradient algorithm reduces the number of candidate targets, which is helpful for the post-processing stage.
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 820008 (2011) https://doi.org/10.1117/12.904860
This paper introduces a video laparoendoscopic system consisting of a camera, a white-LED illuminator, and a high-definition image-processing workstation. The camera adopts a CMOS image sensor to achieve high-resolution imaging. Four LEDs are mounted directly at the front end of the endoscope, so no optical-fiber coupling system is required. The image-processing workstation demosaics, enhances, and displays the image on an LCD monitor in real time, and then captures and stores it. The image resolution reaches 1280×1024. The system fits directly into a laparoscope casing 10 mm in diameter. It is compact, low-cost, and high-definition, and is suitable for single-port laparoscopic surgery.
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 820009 (2011) https://doi.org/10.1117/12.907080
Wavefront coding technology can extend the depth of field of an iris imaging system, but the iris image obtained through such a system is coded and blurred and cannot be used directly by the recognition algorithm. This paper presents a blurred-iris-image restoration method for the wavefront-coding system; after restoration, the images can be used for subsequent processing. First, the wavefront-coded imaging system is simulated and its optical parameters are analyzed; from the simulation we obtain the system's point spread function (PSF). Second, a blind restoration using the blurred iris image and the PSF estimates an appropriate refined PSF (PSFe). Finally, a regularization filter is applied to the blurred image based on the estimated PSFe. Experimental results show that the proposed method is simple and fast. Compared with the traditional Wiener and Lucy-Richardson restoration algorithms, the image recovered by regularization filtering is the most similar to the original iris image.
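The paper's blind PSF estimation and regularization filter are not reproduced here; as a rough stand-in, scikit-image's regularized Wiener deconvolution and Richardson-Lucy routines can illustrate the restore-and-compare step. A sketch on synthetic data:

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage import restoration

def restore_iris(blurred, psf, balance=0.05):
    """Regularized (Wiener-type) deconvolution with a known/estimated PSF;
    Richardson-Lucy is included only as a comparison method."""
    regularized = restoration.wiener(blurred, psf, balance)
    lucy = restoration.richardson_lucy(blurred, psf, 30)
    return regularized, lucy

# Hypothetical data: a random "iris" image blurred by a Gaussian-like PSF.
rng = np.random.default_rng(1)
img = rng.random((128, 128))
g = np.exp(-0.5 * (np.arange(-7, 8) / 2.0) ** 2)
psf = np.outer(g, g)
psf /= psf.sum()
blurred = fftconvolve(img, psf, mode="same")
reg, lucy = restore_iris(blurred, psf)
```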
Ning Zhang, Lijun Zhang, Xiaohua Liu, Weiliang Cao
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000A (2011) https://doi.org/10.1117/12.916689
Based on the spectrum-reconstruction technology of spatially modulated imaging spectrometers, a real-time data-acquisition and spectrum-reconstruction system for an all-reflective Fourier transform imaging spectrometer is built on an FPGA. It integrates interferogram sampling, spectrum reconstruction, data storage, VGA display, and data transmission on a single FPGA chip and performs them in real time. This paper presents the key technology and the spectral-calibration study of this system. Using a mercury lamp as the calibration source, calibration experiments were carried out on the FPGA-based system. After analyzing the experimental results, a pixel-number-to-wavenumber function was established and the wavenumber resolution was obtained, which is the same as the resolution obtained with a computer-based system.
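As an illustration of the calibration step only (placeholder numbers, not the paper's data), a pixel-number-to-wavenumber function can be fitted to a few known calibration lines with a simple polynomial:

```python
import numpy as np

# Hypothetical calibration pairs: pixel index of a reconstructed spectral peak
# versus the known wavenumber (cm^-1) of the corresponding calibration line.
pixels = np.array([112, 187, 243])
wavenumbers = np.array([18315.0, 22938.0, 24705.0])   # placeholder values

# Linear pixel-number-to-wavenumber model: sigma = a * pixel + b
a, b = np.polyfit(pixels, wavenumbers, 1)
print(f"sigma(pixel) = {a:.3f} * pixel + {b:.1f} cm^-1")

# Wavenumber resolution implied by the fitted dispersion (cm^-1 per pixel).
print("resolution per pixel ~", abs(a), "cm^-1")
```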
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000B (2011) https://doi.org/10.1117/12.916690
Underwater divers have become one of the main forces in underwater rescue and exploration. To extend the divers' search range, miniaturized underwater optoelectronic imaging systems have become the main direction of development at home and abroad. After introducing several optoelectronic imaging devices for underwater divers, this paper presents the design of a handheld underwater observation instrument for underwater rescue and exploration that uses range-gated imaging to efficiently suppress the degradation caused by water backscatter. The principles, system construction, and control subsystem are introduced, and the characteristics and prospects of the system are briefly analyzed.
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000C (2011) https://doi.org/10.1117/12.907324
Two imaging spectrometers, each based on a different prism-grating-prism (PGP) dispersing component, are designed and presented. One works in the visible-near-infrared (VNIR) band from 400 nm to 1000 nm with 1.7 nm/pixel spectral resolution and an 85 mm track length. The other covers the short-wavelength infrared (SWIR) band from 900 nm to 1700 nm with 3 nm/pixel spectral resolution and a 108 mm track length. Both imaging spectrometers offer the advantages of fast speed (F/2.0), wide spectral range, low distortion, low cost, even relative illumination, and compactness, making them ideally suited for hyperspectral imaging remote sensing. Both achieve the desired imaging quality.
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000D (2011) https://doi.org/10.1117/12.907347
Based on wave-aberration theory, a new optical design method for the plane-symmetric Offner imaging spectrometer is presented. The variation of astigmatism with the grating diffraction angle and the meridional and sagittal focusing characteristics are studied. The determination of the initial configurations and the optimization methods for two improved types of Offner imaging spectrometer are discussed in detail. A design example with a numerical aperture larger than 0.2 and a 30 mm entrance slit is given. Its spectral resolution is better than 2 nm and its MTF is above 0.7 at 20 lp/mm. The smile and keystone are less than 3% and 0.2% of a pixel, respectively.
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000E (2011) https://doi.org/10.1117/12.904862
Automatic peak detection is important for applications of Raman spectroscopy. However, noise and baseline disturbances greatly degrade the reliability and accuracy of peak detection. In this paper we propose a hybrid wavelet-transform-based algorithm to improve peak-detection performance. A continuous wavelet transform is used to robustly identify the spectral peaks and to minimize the influence of noise and baseline disturbances, and a localized curve-fitting method is then used to obtain accurate peak parameters such as location, width, and intensity. Simulations and experiments show that the method is robust against various disturbances and can not only detect peaks automatically but also obtain accurate parameters of the spectral peaks.
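A minimal sketch of the same two-stage idea, using SciPy's continuous-wavelet-transform peak finder followed by a localized Gaussian fit; the synthetic spectrum and all parameters are hypothetical:

```python
import numpy as np
from scipy.signal import find_peaks_cwt
from scipy.optimize import curve_fit

def gaussian(x, amp, center, width, offset):
    return amp * np.exp(-0.5 * ((x - center) / width) ** 2) + offset

def detect_and_fit_peaks(x, y, widths=np.arange(3, 30)):
    """CWT-based peak detection, then a localized Gaussian fit to refine
    each peak's location, width, and intensity."""
    peak_idx = find_peaks_cwt(y, widths)           # robust to noise/baseline
    params = []
    for idx in peak_idx:
        lo, hi = max(idx - 15, 0), min(idx + 15, len(x))
        p0 = [y[idx] - y[lo:hi].min(), x[idx], 3.0, y[lo:hi].min()]
        try:
            popt, _ = curve_fit(gaussian, x[lo:hi], y[lo:hi], p0=p0, maxfev=5000)
            params.append(popt)                    # amp, center, width, offset
        except RuntimeError:
            pass
    return params

# Hypothetical spectrum: two peaks on a sloped baseline with noise.
x = np.linspace(200, 1800, 1600)
y = (gaussian(x, 50, 800, 8, 0) + gaussian(x, 30, 1300, 12, 0)
     + 0.01 * x + np.random.default_rng(2).normal(0, 1.0, x.size))
print(detect_and_fit_peaks(x, y))
```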
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000F (2011) https://doi.org/10.1117/12.906556
Registration of two three-dimensional (3-D) point sets is a fundamental problem in the 3-D shape measurement and modeling pipeline. This paper investigates an automatic pair-wise method to register partially overlapping range images generated by a self-developed fringe pattern profilometry (FPP) system. The method is based on the classic iterative closest point (ICP) algorithm combined with several extensions that adapt it to the experimental data. First, the distance function for correspondence finding is modified to be a weighted linear combination of positions and Euclidean-invariant features, improving the probability of convergence. In addition, outliers are discarded through robust statistics and adaptive thresholding of the weighted distances between corresponding point pairs. Both artificial and real data are used to test the proposed method. Under ideal noise-free conditions, the experimental results show that it converges to the global minimum. The results also show that the proposed method increases the likelihood of global convergence when dealing with partially overlapping range images.
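The paper's feature-weighted distance and robust outlier rejection are not reproduced here; the sketch below shows only a minimal point-to-point ICP loop with simple distance-based weights, on synthetic data:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst, w):
    """Weighted least-squares rigid transform (R, t) mapping src onto dst."""
    w = w / w.sum()
    cs, cd = (w[:, None] * src).sum(0), (w[:, None] * dst).sum(0)
    H = (w[:, None] * (src - cs)).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Minimal point-to-point ICP; distance-based weights down-weight outliers."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        d, idx = tree.query(cur)
        w = 1.0 / (1.0 + d**2)                  # simple robust weighting
        R, t = best_rigid_transform(cur, dst[idx], w)
        cur = cur @ R.T + t
    return cur

# Hypothetical data: register a rotated/translated copy of a random surface patch.
rng = np.random.default_rng(3)
dst = rng.random((500, 3))
theta = np.deg2rad(10)
R0 = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
src = (dst - 0.5) @ R0.T + 0.5 + np.array([0.05, -0.02, 0.01])
aligned = icp(src, dst)
```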
Kankan Zhao, Jiangtao Xi, Yanguang Yu, Joe F. Chicharo
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000G (2011) https://doi.org/10.1117/12.906570
3D face recognition has gained increasing attention recently and is widely used in security, identification, and access-control systems. The core task in 3D face recognition is to find the corresponding points in different 3D face images. The classic partial iterative closest point (ICP) method iteratively aligns the two point sets by repeatedly taking the closest points as the corresponding points in each iteration; after several iterations, the corresponding points can be obtained accurately. However, if two 3D face images from the same person have different scales, the classic partial ICP does not work. In this paper we propose a modified partial ICP method that takes the scaling effect into account to achieve 3D face recognition. We introduce a 3×3 diagonal scale matrix into each iteration of the classic partial ICP; the probe face image, multiplied by the scale matrix, keeps a scale similar to that of the reference face image. Therefore, we can accurately determine the corresponding points even when the scales of the probe and reference images differ. The 3D face images in our experiments were acquired by a 3D data-acquisition system based on Digital Fringe Projection Profilometry (DFPP). The 3D database consists of 30 groups of images; each group contains three images of the same scale from the same person at different views, and the image scale may differ between groups. The experimental results show that the proposed method achieves 3D face recognition, particularly when the scales of the probe and reference images are different.
Zhaohui Wang, Zonghua Zhang, Tong Guo, Sixiang Zhang, Xiaotang Hu
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000H (2011) https://doi.org/10.1117/12.904843
This paper presents an absolute phase calculation method based on a single composite RGB fringe-pattern image, using a wavelet transform algorithm and optimum fringe-number selection. Three fringe patterns with optimum fringe numbers are projected simultaneously onto an object surface via the red, green, and blue channels of a DLP (Digital Light Processing) projector. From a different viewpoint, a CCD camera captures the fringe patterns deformed by the object shape as a composite RGB image. After compensating for the crosstalk and chromatic aberration between the color channels, the three fringe patterns are extracted from the composite color image. A wavelet transform algorithm is used to calculate the wrapped phase from each fringe pattern, so three wrapped phase maps are obtained from the three extracted fringe patterns. An absolute phase map is then calculated pixel by pixel by applying the optimum three-fringe-number selection method to the three wrapped phase maps. Simulated and experimental data demonstrate the validity of the algorithm for calculating absolute phase and shape information. The proposed method can measure the 3D shape of moving objects, since the system needs only one RGB fringe-pattern image.
Shujun Huang, Zonghua Zhang, Tong Guo, Sixiang Zhang, Xiaotang Hu
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000I (2011) https://doi.org/10.1117/12.904846
3D fringe-projection measurement techniques are increasingly important in production automation, quality control, reverse engineering, and biomedical engineering because of their non-contact operation, full-field acquisition, and automatic data processing. With the advent of DLP (Digital Light Processing) projectors, digital fringe-pattern projection techniques have been widely studied in academia and applied in industry. Experimental data from captured fringe patterns show that the obtained intensity fluctuates, which makes the calculated phase data inaccurate. This paper presents a software method to eliminate the fluctuation between fringe patterns. A four-step phase-shifting algorithm is used to calculate the wrapped phase data, so four fringe-pattern images with a π/2 phase shift between them must be captured. Because of the intensity fluctuation, the captured fringe patterns show an upward or downward intensity shift among the four images. By considering the histogram of each fringe pattern, we present a compensation method that eliminates this fluctuation. Simulated data are first tested by generating fringe patterns with fluctuation; experimental data from a 3D imaging system then demonstrate the validity of the method for calculating phase and shape information with high accuracy. The results show that the proposed method eliminates the fluctuation between fringe-pattern images and yields accurate shape data.
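For reference, the textbook four-step phase-shifting relation assumed above is φ = atan2(I4 − I2, I1 − I3) for frames shifted by π/2. The sketch below pairs it with a crude mean-equalization step as a stand-in for the paper's histogram-based compensation (synthetic frames, hypothetical gains):

```python
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    """Standard four-step phase-shifting formula for frames shifted by pi/2:
    I_k = A + B*cos(phi + (k-1)*pi/2)  =>  phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(I4 - I2, I1 - I3)

def equalize_mean(frames):
    """Crude brightness-fluctuation compensation (illustrative only):
    rescale each frame so all four share a common mean intensity."""
    frames = [f.astype(float) for f in frames]
    target = np.mean([f.mean() for f in frames])
    return [f * (target / f.mean()) for f in frames]

# Hypothetical fringe frames with a small frame-to-frame brightness drift.
x = np.linspace(0, 6 * np.pi, 512)
phi_true = np.tile(x, (512, 1))
gains = [1.00, 1.03, 0.98, 1.01]
frames = [g * (128 + 100 * np.cos(phi_true + k * np.pi / 2))
          for k, g in enumerate(gains)]
phi = wrapped_phase(*equalize_mean(frames))
```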
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000J (2011) https://doi.org/10.1117/12.903902
Compressed sensing, or compressive sampling (CS), is a framework for simultaneous data sampling and compression proposed by Candes, Donoho, and Tao several years ago. Since the advent of the single-pixel camera, one CS application, compressive imaging (CI, also referred to as feature-specific imaging), has attracted the interest of numerous researchers. However, choosing a simple and efficient measurement matrix for such a hardware system remains challenging, especially for large-scale images. In this paper, we propose a new measurement matrix whose rows are the odd rows of an order-N Hadamard matrix and discuss its validity theoretically. The advantages of the matrix are its universality and its easy implementation in the optical domain owing to its integer-valued elements. In addition, we demonstrate the validity of the matrix through the reconstruction of natural images using the Orthogonal Matching Pursuit (OMP) algorithm. Owing to the memory limitations of the hardware system and of the personal computer used to simulate the process, it is impossible to create a matrix large enough to handle large-scale images directly. To solve this problem, a block-wise scheme is introduced for processing large-scale images, and the experimental results confirm the validity of this approach.
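A generic compressed-sensing sketch of the measure-then-OMP pipeline: here a randomly chosen subset of Hadamard rows is used as the sensing matrix (the paper's specific odd-row construction and its analysis are given in the text), and a synthetic sparse signal stands in for an image block:

```python
import numpy as np
from scipy.linalg import hadamard
from sklearn.linear_model import OrthogonalMatchingPursuit

N, M, K = 256, 128, 10          # signal length, measurements, sparsity
rng = np.random.default_rng(4)

# Sensing matrix: M rows of the order-N Hadamard matrix (random subset here,
# as a stand-in for the paper's odd-row selection).
H = hadamard(N)
rows = rng.choice(N, size=M, replace=False)
Phi = H[rows, :] / np.sqrt(N)

# Hypothetical K-sparse signal and its compressive measurements.
x = np.zeros(N)
x[rng.choice(N, size=K, replace=False)] = rng.normal(0, 1, K)
y = Phi @ x

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=K, fit_intercept=False)
omp.fit(Phi, y)
print("max reconstruction error:", np.abs(omp.coef_ - x).max())
```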
Yuheng Chen, Jiankang Zhou, Xinhua Chen, Yiqun Ji, Weimin Shen
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000K (2011) https://doi.org/10.1117/12.903709
The developed spaceborne camera is the sole functional payload of a micro-satellite. The signal-to-noise ratio (SNR) reflects its radiance response and is the parameter most directly associated with the quality of its acquired images. The SNR determination task for the spaceborne camera consists mainly of two parts. As reported previously, the space environment is first simulated and the atmospheric transmission model is built with MODTRAN to calculate and predict the SNR of the spaceborne camera under flight working conditions. In this paper, an in-lab measurement experiment is carried out to measure the imaging performance of the camera before its flight use. An integrating sphere is used to supply uniform illumination, and a number of images are acquired by the spaceborne camera under different luminance conditions. The images are processed with a dedicated algorithm and a special filter to extract the noise. The SNRs corresponding to different illumination conditions are calculated so that the full-scale radiance-response characteristics of the camera can be obtained. The dynamic range is another parameter that characterizes the imaging capacity of a camera, and the relationship between the dynamic range and the SNR is also explored in this paper. Different dynamic-range configurations are set and their SNRs are tested, which experimentally reveals the influence of dynamic range on SNR.
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000L (2011) https://doi.org/10.1117/12.904882
Phase calculation-based 3D imaging systems have been widely studied because of their non-contact operation, full-field and fast acquisition, and automatic data processing. A vital step is calibration, which builds up the relationship between the phase map and the range image. Existing calibration methods are complicated because they use a precise translation stage or a 3D gauge block. Recently, we presented a simple method to convert phase into depth data by using a polynomial function and a plate with discrete markers at known spacing on its surface. However, the initial positions of all the markers had to be determined manually, and the X, Y coordinates were not calibrated. This paper presents a complete calibration method for phase calculation-based 3D imaging systems using a plate with discrete markers at known spacing on its surface. The absolute phase of each pixel is calculated by projecting fringe patterns onto the plate. Each marker position is determined by an automatic extraction algorithm, so the relative depth of each pixel with respect to a chosen reference plane can be obtained. The coefficient set of the polynomial function for each pixel is therefore determined from the obtained absolute phase and depth data. Meanwhile, the pixel positions and the X, Y coordinates are established from the parameters of the CCD camera. Experimental results and performance evaluation show that the proposed calibration method can easily build up the relationship between the absolute phase map and the range image in a simple way.
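A minimal sketch of the per-pixel polynomial phase-to-depth mapping described above (toy dimensions and a fabricated phase-depth law; a production version would vectorize the fits):

```python
import numpy as np

def fit_phase_to_depth(phases, depths, order=3):
    """Fit, for every pixel, a polynomial z = f(phi) from calibration data.
    phases, depths: arrays of shape (n_positions, H, W) from the calibration
    plate placed at n_positions known relative depths (n_positions > order)."""
    n, h, w = phases.shape
    coeffs = np.empty((h, w, order + 1))
    for i in range(h):
        for j in range(w):
            coeffs[i, j] = np.polyfit(phases[:, i, j], depths[:, i, j], order)
    return coeffs

def phase_to_depth(phase_map, coeffs):
    """Evaluate the per-pixel polynomial on a measured absolute phase map."""
    h, w, _ = coeffs.shape
    z = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            z[i, j] = np.polyval(coeffs[i, j], phase_map[i, j])
    return z

# Hypothetical calibration: 8 plate positions, 4x5-pixel toy image.
rng = np.random.default_rng(9)
depths = np.tile(np.arange(8, dtype=float)[:, None, None], (1, 4, 5))
phases = 0.8 * depths + 0.1 * rng.normal(size=depths.shape)   # fake phase-depth law
c = fit_phase_to_depth(phases, depths)
z = phase_to_depth(phases[3], c)     # should be close to 3.0 everywhere
```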
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000M (2011) https://doi.org/10.1117/12.906667
As a photogrammetric platform, unmanned aerial vehicles (UAVs) usually employ consumer-grade digital cameras, so the availability of photogrammetric data depends on the calibration of the integrated sensors and on the flight performance, which are major limitations of UAV systems used in aerial photogrammetry. In this work, a specific workflow for UAV system calibration in the survey field is introduced: first, a calibration field is built in the survey region using artificial targets as ground control points; then a specific flight plan is developed; finally, a fixed-wing UAV is used to carry out the survey. The bundle-adjustment residuals are less than 0.5 m, which is better than the result obtained by estimating the camera interior orientation parameters (IOP) in the lab. The study also identifies which error sources influence UAV photogrammetry and puts forward suggestions for improvement.
Chang Qi, Longling Feng, Yimin Feng, Benguo Wang, Bo Chen, Xiong Luo
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000N (2011) https://doi.org/10.1117/12.904831
In this paper, we describe a new FPGA-based design for a semi-active laser-guidance decoding system. The system receives the echo pulses through a four-quadrant laser sensor and, after analog-to-digital conversion, processes the digital laser pulses, which form a fixed-interval encoded signal reflected from the target. The decoding system improves the timing accuracy of the laser seeker, reaching 0.1 μs, which is in theory about 50 times better than current technology. A state machine is used to handle the complicated laser signal, including how to identify and lock onto the correct laser pulses, how to handle the anti-jamming state, and how to re-acquire the laser pulses after signal loss. A real-time gate signal is adopted to enhance the anti-interference capability [1]. The processed signal is used for monitoring and control: the result is transmitted to an electric actuator and subsequently sent to the laser seeker as a guidance signal, sent to a PC for simulation and monitoring, or stored in Flash memory for retrieval whenever needed. The overall performance of the system is evaluated in the laboratory, and the decoding model is simulated in detail with ModelSim. The test results show that the design can effectively reduce the decoding time and can correctly identify and track the echo pulses from the target, even in the presence of missed or interfering codes. The technique saves time for laser guidance and has theoretical and military value.
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000O (2011) https://doi.org/10.1117/12.904958
With the rapid development of electronic, multimedia, and mobile communication technology, video monitoring systems are moving in an embedded, digital, and wireless direction. In this paper, a wireless video monitoring solution based on WCDMA is proposed. The solution makes full use of the advantages of 3G, namely extensive network coverage and wide bandwidth. It captures the video stream from the chip's video port, encodes the image data in real time with a high-speed DSP, and has enough bandwidth to transmit the monitoring images over the WCDMA wireless network. Experiments demonstrate that the system offers high stability, good image quality, and good transmission performance; because it adopts wireless transmission it is not restricted by geographical position and has been widely used. It is therefore well suited to sparsely populated, harsh-environment scenarios.
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000P (2011) https://doi.org/10.1117/12.904910
A method to evaluate pork grade based on hyperspectral imaging was studied. Principal component analysis (PCA) was performed on the hyperspectral image data to extract the principal components, which were used as the inputs of the evaluation model. By comparing the discrimination rates in the calibration set and the validation set for different inputs, the choice of components can be optimized. Experimental results showed that the classification model was optimal when the number of spectral principal components (PCs) was 3, with discrimination rates of 89.1% in the calibration set and 84.9% in the validation set; it also performed well when the number of image PCs was 9, with discrimination rates of 97.2% in the calibration set and 91.1% in the validation set. An evaluation model based on both spectral and image information was then built, using the corresponding spectral and image PCs as inputs. This model performed very well in grade classification, with discrimination rates of 99.5% and 92.7% for the calibration and validation sets, respectively, which are better than those of the two models based on spectral or image information alone.
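A generic sketch of the PCA-then-classify pipeline with scikit-learn; the data, the 3-PC choice echoed from the abstract, and the LDA classifier placed on top are all placeholders:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical data: 40 pork samples x 120 spectral bands (mean spectrum per
# sample), with integer grade labels.
rng = np.random.default_rng(5)
spectra = rng.random((40, 120))
grades = rng.integers(0, 3, size=40)

# Keep the first 3 spectral principal components as model inputs,
# mirroring the "3 spectral PCs" configuration described above.
pcs = PCA(n_components=3).fit_transform(spectra)

# Any classifier can sit on top; LDA is used here purely as a placeholder.
clf = LinearDiscriminantAnalysis().fit(pcs, grades)
print("calibration-set discrimination rate:", clf.score(pcs, grades))
```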
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000Q (2011) https://doi.org/10.1117/12.902739
Stripe images from electronic speckle shearography pattern interferometry, in which the stripe distribution is correlated with the vertical micro-distortion or micro-vibration of objects, are severely disturbed by noise, so denoising these stripe images is necessary to extract useful stripe-distribution information. Denoising methods and a processing flow for such stripe images are analyzed in this paper to obtain the stripe distribution correlated with the vertical micro-distortion or micro-vibration of objects. The noise in the stripe images consists of speckle noise and other random noise induced by environmental disturbance and instrument performance, so it is difficult for common filters, such as mean filters, median filters, and adaptive filters, to remove all of the noise. A combined filter composed of a mean filter and a wavelet filter is therefore designed to denoise the stripe images. The mean filter removes the random noise induced by environmental disturbance and instrument performance, and the wavelet filter, which adopts the Meyer wavelet, then removes the speckle noise. The final stripe-distribution images after denoising and binarization are presented to demonstrate the denoising validity of the combined wavelet-transform-based filter.
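A simplified stand-in for the combined filter (not the authors' implementation): a mean filter followed by soft thresholding of discrete-Meyer ('dmey') wavelet detail coefficients, using a universal threshold estimated from the finest sub-band; all data are synthetic:

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def combined_denoise(img, mean_size=3, wavelet="dmey", level=2):
    """Mean filter for random noise, then soft-thresholding of discrete-Meyer
    wavelet detail coefficients for the speckle-like residual noise."""
    smoothed = uniform_filter(img.astype(float), size=mean_size)
    coeffs = pywt.wavedec2(smoothed, wavelet, level=level)
    # Universal threshold estimated from the finest diagonal sub-band.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(smoothed.size))
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

# Hypothetical noisy fringe image.
rng = np.random.default_rng(6)
x = np.linspace(0, 8 * np.pi, 256)
fringes = 0.5 + 0.5 * np.cos(np.tile(x, (256, 1)))
noisy = (fringes * rng.rayleigh(1.0, fringes.shape) * 0.5
         + rng.normal(0, 0.05, fringes.shape))
clean = combined_denoise(noisy)
```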
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000R (2011) https://doi.org/10.1117/12.903444
This paper proposes a novel algorithm for distinguishing scenery information from cloud noise in the low-level and high-level detail coefficients of a wavelet decomposition. It also shows that the approximation coefficients contain only scenery information, while the high-level detail coefficients mainly contain the cloud noise together with part of the scenery information. Clouds are usually brighter than the scene illumination, so an appropriate brightness threshold is set for processing the high-level detail coefficients to eliminate cloud noise. At the same time, to remove the residual cloud in the low-frequency component and improve the clarity of the scenery image, the detail coefficients are further decomposed by frequency; for example, the low-level detail coefficients are decomposed once or twice more by wavelet packets. In this way the remaining low-frequency cloud can be removed effectively, and by assigning appropriate weights to the detail coefficients, the scenery information is enhanced and the image clarity improved. Considering the influence of parameter changes on algorithm performance, entropy is used as the criterion for choosing the optimal parameters step by step, and we demonstrate that this entropy-based algorithm is feasible. The experimental results are superior to homomorphic filtering and the Retinex algorithm in many respects.
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000S (2011) https://doi.org/10.1117/12.903779
To improve the image contrast and strengthen the details of high-dynamic-range (HDR) infrared (IR) images, a detail-enhancement method based on histogram statistical stretching (HSS) and gradient filtering (GF) is proposed. First, the outliers in the HDR image are clipped by the proposed histogram statistical strategy, and the clipped histogram is then stretched to a new grayscale range to obtain better viewing contrast. The details in the HDR image are extracted using the GF, and the result is adjusted using the HSS to enhance the perception of low-contrast details. Finally, the GF result is superposed with the HSS result in a suitable way to generate the final detail-enhanced image. The contribution is threefold. First, a new technique for the visualization of HDR images, tailored especially to IR images, is proposed. Second, its effectiveness and convenience are shown by analyzing experimental images that represent typical and common IR scenes. Last, its performance is quantitatively assessed and compared with other well-established methods. The simulation and experimental results confirm its low cost, low complexity, and promising outlook for real-time processing.
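The HSS and GF operators themselves are not reproduced here; the sketch below only illustrates the general base-plus-boosted-detail idea (percentile clipping and stretching for the base, a high-pass layer as a crude gradient stand-in), with hypothetical parameters:

```python
import numpy as np
from scipy import ndimage

def enhance_hdr_ir(raw16, clip=(0.5, 99.5), detail_gain=2.0):
    """Simplified detail enhancement for a high-dynamic-range IR frame:
    clip histogram outliers, stretch to 8 bits, and boost a high-pass
    detail layer (a stand-in for the paper's HSS + gradient filtering)."""
    lo, hi = np.percentile(raw16, clip)
    base = np.clip((raw16.astype(float) - lo) / (hi - lo), 0, 1)

    # Detail layer: difference between the base and its smoothed version.
    detail = base - ndimage.gaussian_filter(base, sigma=2.0)

    out = np.clip(base + detail_gain * detail, 0, 1)
    return (255 * out).astype(np.uint8)

# Hypothetical 14-bit IR frame with a hot spot on a slowly varying background.
rng = np.random.default_rng(7)
frame = (2000 + 50 * rng.random((240, 320))).astype(np.uint16)
frame[100:110, 150:160] += 8000
display = enhance_hdr_ir(frame)
```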
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000T (2011) https://doi.org/10.1117/12.903780
The large number of stars in a drift-scanning star image interferes with the detection of small targets. This paper proposes an adaptive linear filtering method that achieves small-target detection by suppressing the stars. First, the characteristics of the three representative objects in the star image (stars, targets of interest, and noise) are analyzed, and a standardized linear filter is constructed to suppress the stars. To reduce the region influenced by star filtering uniformly, a gradient linear filter is constructed to modify the star-suppression method based on the standardized linear filter, and a method for selecting the filter parameters is given. Finally, a multi-frame target-tracking experiment on real drift-scanning data is carried out to verify the validity of the proposed method. Comparison of the processing results of different methods shows that the proposed method suppresses stars of different lengths and tilt angles with better effect, higher robustness, and easier application than the others.
Xiaoming Chen, Shusheng Yu, Yujue Li, Chao Di, Yi Cao
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000U (2011) https://doi.org/10.1117/12.903843
Contrast enhancement of infrared images is useful and important for infrared imaging systems. Current local-enhancement techniques suffer from either over-enhancement or high complexity. In this paper, we propose a novel contrast-enhancement algorithm that combines a histogram-equalization-based method (HEBM) and an improved unsharp-masking-based method (UMBM). The proposed algorithm uses the HEBM to achieve global contrast enhancement and the UMBM to achieve local contrast enhancement. Elaborate strategies are applied to avoid over-enhancement and noise amplification while the contrast is enhanced. The article is organized as follows. First, we review the contrast-enhancement techniques developed in the literature; we then introduce the new algorithm in detail. The performance of the proposed method is studied on experimental IR data and compared with that of two well-established algorithms. The developed algorithm performs well in both global and local contrast enhancement while suppressing noise and artifacts.
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000V (2011) https://doi.org/10.1117/12.904078
We present a method of target detection against strong light based on gated viewing. In this method, a nanosecond-scale gated shutter controls the exposure time of the CCD to reduce the collection of obtrusive light, and a nanosecond-scale pulsed laser illuminates the targets to increase the signal energy. By matching the two, the ratio of signal to obtrusive light is significantly improved, so that targets can be detected against light disturbance. We analyze the method theoretically and demonstrate it experimentally. In addition, a stroboscopic time sequence is used, and the setting of the temporal parameters is also discussed.
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000W (2011) https://doi.org/10.1117/12.904345
Based on the theory of coherent laser detection, a signal model of synthetic aperture lidar (SAL) is set up. Because the laser wavelength of a SAL is much shorter than the microwave wavelength of a SAR, enough terms of the Taylor series must be retained, and the influence of vibration error on the phase of the SAL echo signal is computed and analyzed. To validate the signal model and the theoretical computation, a numerical simulation in strip-map mode is carried out. With reference to the national military environmental standard, for frequencies below 500 MHz, the influence of vibration parameters such as amplitude, frequency, and initial phase on SAL imaging is simulated, and the image of an ideal point target under vibration is given. The simulation results show that the influence of vibration on the azimuth resolution is severe, while the influence on the range resolution is negligible. This is because the vibration period differs from the laser-pulse period: within one pulse period the vibration can be considered stationary, so there is no influence on the range compression and the range resolution remains stable. In addition, the case of the same amplitude and frequency but different initial phases is simulated; the influence of the initial phase on SAL imaging is found to be significant, and because the initial phase is stochastic, the influence of vibration on SAL imaging is erratic.
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000X (2011) https://doi.org/10.1117/12.904352
How to model the decay pattern is crucial for lifetime inversion when using intensity images acquired at increasing delays in the time-gated fluorescence lifetime imaging microscopy (FLIM) method. A relatively novel understanding of fluorescence decay-pattern theory and the simulation algorithms of the time-gated FLIM method are comprehensively analyzed in this paper. The main lifetime-computation algorithms can be classified as exponential-pattern retrieval and polynomial-fitting procedures. In particular, a novel lifetime-computation method based on a bi-exponential decay is discussed. In experiments, we validated the proposed algorithms using synthetic images, and the performance of the above algorithms, such as computational precision and speed, was also compared.
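For orientation, the simplest time-gated lifetime estimator is the two-gate rapid lifetime determination for a mono-exponential decay, τ = Δt / ln(I1/I2); the paper's bi-exponential method is more involved. A minimal sketch on synthetic gated images:

```python
import numpy as np

def rld_lifetime(I1, I2, gate_delay):
    """Rapid lifetime determination from two time-gated intensity images,
    assuming a mono-exponential decay I(t) = A * exp(-t / tau):
    tau = delta_t / ln(I1 / I2)."""
    I1 = np.asarray(I1, dtype=float)
    I2 = np.asarray(I2, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        tau = gate_delay / np.log(I1 / I2)
    return np.where((I1 > 0) & (I2 > 0) & (I1 > I2), tau, np.nan)

# Hypothetical gated images of a fluorophore with tau = 2.5 ns, gates 2 ns apart.
tau_true, dt = 2.5, 2.0
I1 = 1000 * np.exp(-0.0 / tau_true) * np.ones((64, 64))
I2 = 1000 * np.exp(-dt / tau_true) * np.ones((64, 64))
print(rld_lifetime(I1, I2, dt).mean())   # ~2.5 ns
```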
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000Y (2011) https://doi.org/10.1117/12.904623
High-speed railways have developed rapidly in China in recent years. Wheel sets are major running components of a train, and online measurement of wheel-set wear parameters is important for train safety. A method for online measurement of wheel-set wear based on structured-light imaging and image analysis is proposed. A new region-growing image-segmentation algorithm for wheel sets is put forward based on the characteristics of wheel-set images. According to the characteristics of wheel-set images acquired in different circumstances, a proper seed and an appropriate growth criterion are determined. After region growing, the complete tread-profile images are extracted. By processing a number of images, the proposed algorithm effectively eliminates the interference present in image acquisition and extracts the complete wheel-set profile curve from varied backgrounds.
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82000Z (2011) https://doi.org/10.1117/12.904626
Eye-safety analysis is very important in civil applications of infrared laser imaging. To meet the requirements of human eye-safety protection, a model for eye-safety analysis is established based on the ANSI Z136.1 standard. Given the maximum permissible exposure, the model can be used to estimate the laser power and beam divergence angle for a desired eye-safety distance. We simulated the model in MATLAB and obtained the laser-power curve at different distances. Based on this curve, the eye-safety parameters of laser imaging systems can be easily designed. The proposed eye-safety analysis model can therefore help to standardize infrared laser imaging products.
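As a hedged illustration only (the paper's model may differ), the simple expanding-beam relation often quoted alongside ANSI Z136.1 links power, aperture, divergence, and range through the beam diameter D(R) = a + φR; the MPE value below is a placeholder:

```python
import numpy as np

def max_safe_power(mpe_w_cm2, aperture_cm, divergence_rad, range_cm):
    """Maximum laser power (W) such that the irradiance at range_cm stays
    below the MPE, under the simple expanding-beam model D(R) = a + phi*R."""
    beam_diam = aperture_cm + divergence_rad * range_cm
    return mpe_w_cm2 * np.pi * beam_diam**2 / 4.0

def nohd(power_w, mpe_w_cm2, aperture_cm, divergence_rad):
    """Nominal ocular hazard distance (cm) for the same beam model."""
    return (np.sqrt(4.0 * power_w / (np.pi * mpe_w_cm2)) - aperture_cm) / divergence_rad

# Hypothetical illuminator: placeholder MPE of 2 mW/cm^2, 2 cm exit aperture,
# 10 mrad divergence, evaluated at 20 m.
print(max_safe_power(2e-3, 2.0, 10e-3, 2000.0))
print(nohd(0.5, 2e-3, 2.0, 10e-3))
```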
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 820010 (2011) https://doi.org/10.1117/12.904647
In recent years, MEMS-based optical-readout infrared imaging has become a research hotspot. Studies show that MEMS-based
optical-readout infrared imagers feature a high frame rate. Given the high data throughput and the computational
complexity of denoising algorithms, it is difficult to guarantee real-time image processing. To improve processing speed
and achieve real-time operation, we studied a denoising algorithm based on parallel computing on an FPGA (Field
Programmable Gate Array). In this paper, we analyze the imaging characteristics of the MEMS-based optical-readout infrared
imager and design parallel computing methods for real-time denoising in a hardware description language. Experiments show
that the parallel denoising implementation improves infrared image processing speed enough to meet the real-time
requirement.
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 820011 (2011) https://doi.org/10.1117/12.904678
Laser images often contain mixed noise, mainly salt-and-pepper noise and Gaussian noise, which not only degrades image
quality but also swamps important image details. A new mixed-noise filtering method for laser images is proposed in this
paper to filter out mixed noise effectively while keeping the image details clear and complete. First, the salt-and-pepper
noise is filtered out in the spatial domain: a Novel Adaptive Switching Median (NASM) filter is introduced that adapts the
filter window of every salt-and-pepper noise point according to the local salt-and-pepper noise density. Second, the image
is transformed into the wavelet domain to filter out the Gaussian noise. Local Adaptive BayesShrink Threshold (LABT)
wavelet denoising based on a Gaussian Mixture Model (GMM) is used in this step: the wavelet coefficients are modeled by
the GMM, and LABT adjusts the threshold adaptively by exploiting the local statistics of the sub-band wavelet
coefficients. Experimental results show that the new method removes mixed noise effectively while preserving the details
of the laser image, achieving better filtering performance than other filters for laser images.
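As a hedged sketch of the two-stage idea (spatial median filtering for salt-and-pepper noise, then wavelet-domain BayesShrink for Gaussian noise), the code below uses a plain 3x3 median on detected impulse pixels and the standard BayesShrink threshold sigma_n^2 / sigma_x per sub-band; the density-adaptive NASM window and the GMM-based LABT of the paper are not reproduced.

    # Two-stage mixed-noise filtering sketch (simplified; not the paper's NASM or
    # LABT-GMM): stage 1 replaces detected impulse pixels with a local median,
    # stage 2 soft-thresholds wavelet detail coefficients with BayesShrink.
    import numpy as np
    import pywt
    from scipy.ndimage import median_filter

    def remove_salt_pepper(img):
        noisy = (img == 0) | (img == 255)            # treat extremes as impulses
        med = median_filter(img, size=3)
        out = img.copy()
        out[noisy] = med[noisy]
        return out

    def bayes_shrink(img, wavelet="db4", levels=2):
        coeffs = pywt.wavedec2(img.astype(float), wavelet, level=levels)
        # Noise sigma from the finest diagonal sub-band (robust median estimator)
        sigma_n = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        out = [coeffs[0]]
        for level in coeffs[1:]:
            bands = []
            for band in level:
                sigma_x = np.sqrt(max(band.var() - sigma_n ** 2, 1e-12))
                thr = sigma_n ** 2 / sigma_x          # BayesShrink threshold
                bands.append(pywt.threshold(band, thr, mode="soft"))
            out.append(tuple(bands))
        return pywt.waverec2(out, wavelet)

    rng = np.random.default_rng(1)
    clean = np.tile(np.linspace(50, 200, 128), (128, 1))
    noisy = clean + 15 * rng.standard_normal(clean.shape)         # Gaussian noise
    impulses = rng.random(clean.shape) < 0.05                     # 5% impulse noise
    noisy[impulses] = rng.choice([0, 255], size=int(impulses.sum()))
    noisy = np.clip(noisy, 0, 255).astype(np.uint8)

    # Crop in case the wavelet reconstruction is one sample larger than the input
    restored = bayes_shrink(remove_salt_pepper(noisy))[:128, :128]
    print("RMSE after filtering:", round(float(np.sqrt(np.mean((restored - clean) ** 2))), 2))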
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 820012 (2011) https://doi.org/10.1117/12.904785
Research on restoring images blurred by combined defocus and motion is still scarce, although such images are common in
practical work. Building on previous separate studies of defocus blur and motion blur, the characteristics of the combined
PSF that produces the blur were investigated thoroughly. The defocus radius can be calculated from the autocorrelation of
the derivative image; the motion angle can be obtained by the Radon transform, and the motion-blur extent can be computed
from the correlation along that angle. The restoration is then carried out with the MPMAP super-resolution algorithm using
these three parameters. The validity of the method has been verified on both simulated mixed-blur images and real ones.
After further optimization, the method can be applied in practical work.
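As a hedged illustration of the parameter-identification step only (the defocus-radius estimate and the MPMAP restoration are not reproduced), the sketch below blurs a synthetic image with a linear motion PSF and estimates the blur orientation from the Radon transform of the log-magnitude spectrum; the PSF length, angle, and test image are invented.

    # Motion-blur orientation from the Radon transform of the log-spectrum
    # (illustrative sketch; blur length, angle and test image are assumptions).
    import numpy as np
    from scipy.ndimage import convolve, rotate
    from skimage.transform import radon

    def motion_psf(length, angle_deg):
        psf = np.zeros((length, length))
        psf[length // 2, :] = 1.0                        # horizontal line of motion
        psf = rotate(psf, angle_deg, reshape=False, order=1)
        return psf / psf.sum()

    rng = np.random.default_rng(2)
    img = rng.random((128, 128))
    blurred = convolve(img, motion_psf(15, 30.0), mode="wrap")

    # The spectrum of a motion-blurred image shows parallel dark stripes; the Radon
    # projection variance peaks when integrating along those stripes, and the motion
    # direction follows from that angle up to the 90-degree convention of radon().
    spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(blurred))))
    angles = np.arange(0.0, 180.0, 1.0)
    sinogram = radon(spectrum, theta=angles, circle=False)
    best = angles[np.argmax(sinogram.var(axis=0))]
    print("Radon angle with maximum projection variance (deg):", best)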
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 820013 (2011) https://doi.org/10.1117/12.904786
Motion-blurred images can be restored by a super-resolution algorithm that requires, as parameters, the blur angle
calculated from the Radon transform of the spectrum of the original image and the blur extent calculated from its
autocorrelation, even when the blur is severe. Unfortunately, the noise in the blurred image is amplified while the useful
information is recovered, which seriously hinders the observation of the restored image. An enhancement algorithm is
proposed in this paper to improve the quality of the low signal-to-noise-ratio images obtained from the restoration
algorithm. Its main purpose is to eliminate unwanted noise while retaining the desired signal. The algorithm is based on
the least-square-error principle, fitting discrete pixels with continuous piecewise curves. The interval of each row and
column is subdivided into several subintervals to simplify the fitting, and a curve is fitted to the pixels within each
subinterval. A weighting technique with a linear weighting factor is proposed to merge two adjacent lines. A series of
experiments was carried out to study the effect of the algorithm, and the resulting signal-to-noise ratios show that the
proposed algorithm can produce high-quality enhanced images.
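A minimal sketch of the piecewise least-squares idea, assuming quadratic fits over fixed-length subintervals of each row and a simple linear weight for blending adjacent rows; the subinterval length, polynomial order, and weighting factor are placeholders rather than the paper's choices.

    # Row-wise piecewise least-squares smoothing with linear blending of adjacent
    # rows (subinterval length, polynomial order and weight are assumptions).
    import numpy as np

    def fit_row(row, seg=16, order=2):
        out = np.empty_like(row)
        x = np.arange(len(row))
        for start in range(0, len(row), seg):
            sl = slice(start, min(start + seg, len(row)))
            coeff = np.polyfit(x[sl], row[sl], order)    # least-squares fit of the piece
            out[sl] = np.polyval(coeff, x[sl])
        return out

    def smooth(img, seg=16, order=2, w=0.5):
        fitted = np.array([fit_row(r, seg, order) for r in img.astype(float)])
        merged = fitted.copy()
        merged[1:] = w * fitted[1:] + (1 - w) * fitted[:-1]   # blend adjacent lines
        return merged

    rng = np.random.default_rng(3)
    clean = np.tile(np.sin(np.linspace(0, 4 * np.pi, 256)), (64, 1))
    noisy = clean + 0.4 * rng.standard_normal(clean.shape)
    print("noise std before / after:",
          round(float(np.std(noisy - clean)), 3),
          round(float(np.std(smooth(noisy) - clean)), 3))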
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 820014 (2011) https://doi.org/10.1117/12.904805
This paper presents a novel method for 3D shape measurement of specular surfaces based on the fringe reflection technique
and Fourier transform profilometry. A simple measurement system is set up in which an LCD screen displays orthogonal
composite fringe patterns. The fringe patterns reflected by a standard plane mirror and by the measured specular surface
are captured by a CCD camera, and the phase distributions of the fringe patterns are extracted using Fourier transform
profilometry. Relations between the surface gradients and the phase changes in the horizontal and vertical directions are
established. The shape of the measured surface is then reconstructed from the gradient vector field by a full-field
least-squares method. An experiment on a concave mirror with a 100-mm diameter proves the validity of this method.
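As a hedged one-dimensional sketch of the Fourier-transform-profilometry step (the carrier frequency and filter width are assumptions, and the gradient-to-shape integration of the paper is not shown), the code below isolates the carrier lobe of a fringe signal in the frequency domain and recovers the wrapped phase.

    # 1-D Fourier transform profilometry sketch: band-pass the positive carrier lobe
    # of a fringe signal and recover its phase (carrier and window are assumptions).
    import numpy as np

    N = 512
    x = np.arange(N)
    f0 = 16 / N                                        # carrier frequency of the fringes
    phase_true = 1.5 * np.sin(2 * np.pi * x / N)       # phase change caused by the surface
    fringe = 128 + 100 * np.cos(2 * np.pi * f0 * x + phase_true)

    F = np.fft.fft(fringe)
    freqs = np.fft.fftfreq(N)
    window = np.abs(freqs - f0) < f0 / 2               # keep only the positive carrier lobe
    analytic = np.fft.ifft(F * window)

    wrapped = np.angle(analytic * np.exp(-2j * np.pi * f0 * x))   # remove the carrier
    recovered = np.unwrap(wrapped)
    print("max phase error (rad):", round(float(np.max(np.abs(recovered - phase_true))), 4))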
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 820015 (2011) https://doi.org/10.1117/12.904809
Lidar reflective tomography has been shown to be a technology capable of obtaining high-resolution images of remote
objects and has great application value in space-object surveillance and identification. In this paper, the fundamentals
of lidar reflective tomography imaging are given, and some key issues of ground-based lidar reflective tomography of space
objects are analyzed. Long-range detection and high range resolution are obtained by using chirped transmit signals, and
the usefulness of the resulting range data for reflective tomographic reconstruction of space-object images is discussed.
The effect of atmospheric turbulence on reflective tomography is analyzed, showing that 10.6 μm lidar reflective
tomography is not sensitive to atmospheric turbulence for certain receiving apertures. To address the problem of
incomplete detection angles in space-target reflective tomography, a regularization method is applied, with a homotopy
parameter adopted to set the weighting coefficient efficiently, and the accuracy of image reconstruction is thereby
improved. Finally, simulation results on a satellite model validate the feasibility of this technical scheme.
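The sketch below is not the paper's homotopy-regularized method; it only illustrates the incomplete-angle reconstruction problem on a toy reflectivity map, comparing filtered back-projection with a few iterations of an algebraic (SART) reconstruction over a limited angular range. The object, angle set, and iteration count are assumptions.

    # Limited-angle reflective-tomography toy example (illustrative only; the
    # chirp processing and homotopy-regularized reconstruction are not reproduced).
    import numpy as np
    from skimage.transform import radon, iradon, iradon_sart

    obj = np.zeros((128, 128))
    obj[40:90, 60:68] = 1.0                      # toy satellite body
    obj[60:64, 20:108] = 0.6                     # toy solar panels

    theta = np.linspace(0.0, 120.0, 60, endpoint=False)   # incomplete angular coverage
    sino = radon(obj, theta=theta, circle=True)           # one range profile per view

    fbp = iradon(sino, theta=theta, circle=True)          # filtered back-projection
    sart = iradon_sart(sino, theta=theta)
    for _ in range(3):                                    # a few algebraic iterations
        sart = iradon_sart(sino, theta=theta, image=sart)

    def rmse(recon):
        return float(np.sqrt(np.mean((recon - obj) ** 2)))

    print("RMSE  FBP:", round(rmse(fbp), 4), "  SART:", round(rmse(sart), 4))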
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 820016 (2011) https://doi.org/10.1117/12.904812
Optical fiber coils are the heart of fiber optic gyroscopes (FOGs). To detect the unavoidable errors that occur during the
winding of optical fibers, such as gaps, climbs, and partial rises between fibers, while fiber-optic winding machines are
operated, and to enable fully automated winding, we designed a vision-based error detection system for optical fiber
winding on the basis of digital image acquisition and processing [1]. When a fiber-optic winding machine is operated,
backlighting is used as the illumination system to strengthen the contrast between the fibers and the background. A
microscope and a CCD camera, serving as the imaging and image-acquisition system, then capture analog images of the
fibers. The analog images are converted into digital images, which can be processed and analyzed by computer. Canny edge
detection and a contour-tracing algorithm are used as the main image-processing methods. The distances between the fiber
peaks are then measured and compared with the desired values; if these values fall outside a predetermined tolerance zone,
an error is detected and classified as either a gap, a climb, or a rise. We used OpenCV and MATLAB as the basic function
libraries and VC++ 6.0 as the platform to display the results. The test results showed that the system is useful and that
the edge-detection and contour-tracing algorithms are effective, giving a high accuracy rate and correct error-detection
results.
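For illustration, the sketch below runs the same kind of pipeline on a synthetic backlit image: Canny edge detection, contour extraction, and a check of the spacing between fiber centres against a nominal pitch and tolerance. The pitch, tolerance, and classification here are simplified assumptions; the paper's contour-tracing and gap/climb/rise rules are not reproduced.

    # Simplified sketch of the edge-detection and peak-spacing check using OpenCV
    # (the nominal pitch, tolerance and error labels are assumptions).
    import cv2
    import numpy as np

    # Synthetic backlit image: bright background, three dark fiber cross-sections,
    # deliberately unevenly spaced.
    img = np.full((120, 300), 220, dtype=np.uint8)
    for cx in (60, 140, 245):
        cv2.circle(img, (cx, 60), 30, 40, -1)

    edges = cv2.Canny(img, 50, 150)                               # Canny edge detection
    contours = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]      # works for OpenCV 3.x/4.x

    centers = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 50:                                         # ignore spurious fragments
            centers.append(int(m["m10"] / m["m00"]))
    centers.sort()

    pitch, tol = 80, 10                                           # assumed nominal spacing
    for a, b in zip(centers, centers[1:]):
        gap = b - a
        status = "OK" if abs(gap - pitch) <= tol else "ERROR (possible gap or climb)"
        print(f"spacing {gap:3d} px -> {status}")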
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 820017 (2011) https://doi.org/10.1117/12.904837
We describe a high-speed, high-resolution, real-time scanning measurement system consisting of a line laser, a smart
camera, a PC, and the corresponding software. The smart camera, with its high-speed processing capability, processes the
image of the laser-illuminated object to obtain the cross-section profile of the object in real time. Only the measured
data, rather than the huge volume of original images, need to be transferred to the PC for archiving or other
applications. Through relative motion between the system and the object, a series of profile data covering the whole
object is obtained and can be reconstructed on the PC by the corresponding application software. The system was designed
to be installed on vehicles; as the vehicle moves, the shape of the road can be obtained.
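A minimal sketch of the per-column stripe extraction that such a smart camera performs: find the brightness peak in every column with sub-pixel centroid refinement and convert it to a profile height. The stripe model, noise level, and the millimetre-per-pixel factor standing in for the calibrated triangulation geometry are all assumptions.

    # Per-column laser-stripe extraction with sub-pixel centroid refinement
    # (synthetic frame; the mm-per-pixel factor is a placeholder for calibration).
    import numpy as np

    rng = np.random.default_rng(4)
    H, W = 240, 320
    frame = 10 + 5 * rng.standard_normal((H, W))                  # dark background
    true_row = 120 + 20 * np.sin(np.linspace(0, np.pi, W))        # stripe position
    for x in range(W):
        r = int(true_row[x])
        frame[r - 1:r + 2, x] += 200                              # 3-pixel-wide bright stripe

    rows = np.arange(H)[:, None]
    weights = np.clip(frame - frame.mean(axis=0), 0, None)        # suppress background
    peak = (rows * weights).sum(axis=0) / weights.sum(axis=0)     # sub-pixel stripe centre

    mm_per_pixel = 0.5                                            # placeholder calibration
    profile = (peak - peak.min()) * mm_per_pixel
    print("profile height range (mm):", round(float(profile.max() - profile.min()), 2))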
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 820018 (2011) https://doi.org/10.1117/12.904877
A color image segmentation algorithm that integrates the watershed transform with automatic seeded region growing and
merging is proposed in this paper. First, the image is transformed from the RGB color space to the HSI space. Next, the
watershed algorithm is applied to obtain an initial segmentation. Then, based on the watershed result, some regions of the
image are automatically selected as seed regions for the region-growing algorithm by using color differences and the
relative Euclidean distance. Finally, a region-merging step is executed to avoid over-segmentation. The proposed method
combines the advantages of the watershed and region-growing approaches and accords with the human-vision segmentation
strategy. The algorithm was applied to segment several images, and the experimental results confirm its effectiveness and
efficiency.
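A hedged sketch of the pipeline on a synthetic color image: RGB to HSV conversion (standing in for HSI), marker-based watershed for an initial over-segmentation, and merging of regions whose mean colors are closer than a threshold. The marker selection, the color-distance threshold, and the merging rule are simplifications of the paper's seed selection and relative-Euclidean-distance criterion.

    # Watershed initial segmentation followed by simple color-based region merging
    # (markers, thresholds and the merging rule are simplified assumptions).
    import numpy as np
    from skimage.color import rgb2hsv
    from skimage.filters import sobel
    from skimage.measure import label
    from skimage.segmentation import watershed

    # Synthetic color image: two similarly colored squares on a gray background
    img = np.full((100, 100, 3), 0.5)
    img[10:45, 10:45] = (0.90, 0.20, 0.20)
    img[55:90, 55:90] = (0.85, 0.25, 0.25)

    hsv = rgb2hsv(img)
    gradient = sobel(hsv[..., 2])                       # edges from the intensity channel
    markers = label(gradient < 0.05)                    # flat zones become markers
    labels = watershed(gradient, markers=markers, mask=np.ones(img.shape[:2], bool))

    # Merge regions whose mean HSV colors are close (simplified criterion)
    means = {r: hsv[labels == r].mean(axis=0) for r in np.unique(labels)}
    merged = labels.copy()
    ids = list(means)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            if np.linalg.norm(means[a] - means[b]) < 0.15:
                merged[merged == b] = a

    print("regions before merging:", len(np.unique(labels)),
          " after merging:", len(np.unique(merged)))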
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82001A (2011) https://doi.org/10.1117/12.905013
Using high-speed vision equipment is an effective way to locate moving targets. Under high-sensitivity, high-frame-rate
(500 Hz) conditions, atmospheric instability, in addition to Gaussian noise, has an important impact on image quality. To
solve this problem, a method based on the image power spectrum is proposed to analyze and evaluate the Gaussian and
atmospheric noise, combined with wavelet denoising to remove the noise, for images acquired by DALSA's 8192 × 32
high-sensitivity camera. First, image databases are established from outdoor working conditions, including normal images,
Gaussian-noise images with different simulated characteristics, and atmospheric-noise images at different simulated
frequencies. The power-spectrum ratio of every image in the databases is calculated, and a critical power-spectrum value
is determined. The image noise is then evaluated and classified according to the databases and this threshold, and wavelet
denoising is subsequently applied to remove the noise. Finally, the power spectra of the untreated and treated images are
compared to evaluate the effect of the method. Experimental results show that the method can evaluate and remove image
noise effectively for high-speed vision images.
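As a hedged sketch of the evaluation-then-denoise idea, the code below computes the ratio of high-frequency power to total power of a frame, classifies the frame as noisy if the ratio exceeds an assumed critical value, and then applies a plain wavelet shrinkage. The cutoff radius, decision threshold, and test image are assumptions; the paper's database-derived threshold is not reproduced.

    # Power-spectrum-ratio noise evaluation followed by wavelet denoising
    # (cutoff radius, decision threshold and test frame are assumptions).
    import numpy as np
    import pywt

    def high_freq_ratio(img, cutoff=0.25):
        img = img - img.mean()                           # ignore the DC component
        power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        h, w = img.shape
        yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
        radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
        return float(power[radius > cutoff].sum() / power.sum())

    def wavelet_denoise(img, wavelet="db4"):
        coeffs = pywt.wavedec2(img, wavelet, level=2)
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        thr = sigma * np.sqrt(2 * np.log(img.size))      # universal threshold
        den = [coeffs[0]] + [tuple(pywt.threshold(b, thr, mode="soft") for b in lvl)
                             for lvl in coeffs[1:]]
        return pywt.waverec2(den, wavelet)

    rng = np.random.default_rng(5)
    clean = np.tile(np.linspace(0.0, 1.0, 256), (256, 1))
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)

    critical = 0.05                                      # assumed critical ratio
    ratio = high_freq_ratio(noisy)
    print("high-frequency power ratio:", round(ratio, 3))
    if ratio > critical:                                 # frame classified as noisy
        cleaned = wavelet_denoise(noisy)
        print("ratio after denoising:", round(high_freq_ratio(cleaned), 3))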
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82001B (2011) https://doi.org/10.1117/12.905885
The Sun is used as a light source for spectral analysis of atmospheric constituents along the light path through the
atmosphere. The stronger the sunlight entering the detector, the higher the accuracy that can be achieved. However,
because of the inhomogeneity of the atmosphere, the gray image of the Sun is irregular, and interference from clouds may
even divide the Sun into separate parts. A novel image-recognition-based method for determining the gray centroid of
sunlight is presented. Aiming to accurately locate the strongest sunlight transmitted through the atmosphere for the
detector, the method uses the gray image of the Sun to calculate the actual gray centroid of the sunlight.
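A minimal sketch of the gray-centroid computation on a synthetic solar image: the intensity-weighted centroid of the bright pixels points to the strongest transmitted sunlight. The background-suppression threshold and the synthetic disc are assumptions, and the cloud-recognition step of the paper is omitted.

    # Intensity-weighted gray centroid of a synthetic solar image
    # (threshold and scene are illustrative assumptions).
    import numpy as np

    rng = np.random.default_rng(6)
    yy, xx = np.mgrid[0:200, 0:200]
    img = rng.random((200, 200)) * 10                                  # sky background
    img += 200 * np.exp(-((yy - 80) ** 2 + (xx - 130) ** 2) / (2 * 15 ** 2))  # solar disc

    weights = np.clip(img - 50, 0, None)          # keep only the bright solar region
    cy = float((yy * weights).sum() / weights.sum())
    cx = float((xx * weights).sum() / weights.sum())
    print("gray centroid (row, col):", round(cy, 1), round(cx, 1))     # near (80, 130)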
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82001C (2011) https://doi.org/10.1117/12.906224
The BRDF (Bidirectional Reflectance Distribution Function) is widely used in many fields, such as physics, heat transfer,
remote sensing, and computer graphics. Traditional methods of measuring the BRDF are too expensive for most users, so
image-based approaches have become a novel direction. Until now, such image-based systems have required at least a video
camera and a still camera, and the measurements are not easily carried out under convenient conditions. In this paper, a
method using only one still camera is proposed, with the help of a light source, a cylindrical support, and a sphere. The
material to be measured is painted on the sphere, which is placed on the cylindrical support painted with a material of
known BRDF. A simple network of control points is distributed around the cylindrical support. During the measurement, the
light source and the support are fixed; the operator walks around the sphere to take pictures at different view angles,
and the remaining work is carried out automatically by a set of programs. The pictures are first processed by a
photogrammetric program to recover the geometry of the scene, including the positions, directions, and shapes of the light
source, the support, the sphere, and the cameras. BRDF samples are then calculated from the image intensities and the
recovered geometric relations and are approximated by a multivariable spline to obtain a full BRDF description. Three
different materials were tested with the method.
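The toy sketch below only illustrates how a single BRDF sample might be ratioed against a reference patch of known BRDF illuminated by the same source; the intensities, angles, and Lambertian reference are invented, and the photogrammetric geometry recovery and spline approximation of the paper are not shown.

    # Toy BRDF-sample ratio against a known reference patch (all numbers invented).
    # With the same source, pixel intensity is proportional to f * E * cos(theta_i),
    # so the unknown BRDF value follows from the intensity ratio.
    import numpy as np

    def brdf_sample(I_sample, cos_i_sample, f_ref, I_ref, cos_i_ref):
        return f_ref * (I_sample / I_ref) * (cos_i_ref / cos_i_sample)

    f_ref = 1.0 / np.pi            # assumed Lambertian reference with unit albedo
    value = brdf_sample(I_sample=90.0, cos_i_sample=0.8,
                        f_ref=f_ref, I_ref=120.0, cos_i_ref=0.9)
    print("estimated BRDF sample (1/sr):", round(value, 4))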
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82001D (2011) https://doi.org/10.1117/12.906528
Fused-taper optical fiber devices are among the most typical and basic passive optical devices. The electro-thermal fiber
welding and tapering technique is one of the new optical fiber fusion technologies, and the heater's shape and temperature
distribution have an important influence on the quality of the fiber fusion. With the help of finite element analysis
software, the temperature field distribution of the specially shaped heater and the surrounding temperature field are
analyzed. For different kinds of microstructured optical fiber, the electrical parameters are adjusted in time so that
better fiber-fusion quality can be obtained. This paper proposes a vision-based measurement of the specially shaped
heater's surface parameters based on image-processing technology, with a CCD chosen as the image sensor. First, a 3x3
median filter is applied in the detection system for image pre-processing. Second, the Canny operator, which has strong
noise immunity, is chosen as the edge-detection operator. Third, subarea curve fitting and straight-line fitting of the
specially shaped component are performed with the least-squares method. The experimental results show that the
morphological parameters of the specially shaped heater can be calculated accurately, and these parameters provide
important information for studying the heater's temperature field distribution and the performance of the optical fiber
fusion welding technology.
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82001E (2011) https://doi.org/10.1117/12.906691
This paper studies a machine-vision-based method for measuring the geometric parameters of an elliptical twin-core fiber.
First, the test system computes the gray histogram of the fiber end-face image and binarizes the image to extract the
cladding and the air holes, then obtains the radii and center coordinates of the cladding and air holes through the Hough
transform. Second, an image of the two elliptical cores is obtained by raising the threshold and binarizing the fiber
end-face again; the contours of the two elliptical cores are extracted with an edge chain-code method, and the semi-minor
axes, semi-major axes, and centers of the two elliptical cores are calculated by least-squares fitting. The measurement
accuracy of the method reaches 1 μm, which meets the requirements of performance testing and analysis of this
microstructured fiber.
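For illustration, the sketch below reproduces the two measurement stages on a synthetic end-face image: the cladding circle is found with the Hough circle transform, and the core contours are fitted with ellipses by least squares (cv2.fitEllipse). The thresholds and Hough parameters are assumptions, and the edge-chain-code extraction of the paper is replaced here by OpenCV contour extraction.

    # Hough circle detection of the cladding plus least-squares ellipse fitting of
    # the cores on a synthetic end-face image (thresholds are assumptions).
    import cv2
    import numpy as np

    img = np.zeros((400, 400), dtype=np.uint8)
    cv2.circle(img, (200, 200), 150, 90, -1)                          # cladding
    cv2.ellipse(img, (160, 200), (30, 18), 20, 0, 360, 200, -1)       # elliptical core 1
    cv2.ellipse(img, (240, 200), (30, 18), -20, 0, 360, 200, -1)      # elliptical core 2

    # Stage 1: cladding centre and radius via the Hough circle transform
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=80, param2=30, minRadius=100, maxRadius=200)
    if circles is not None:
        print("cladding (x, y, r):", np.round(circles[0, 0]))

    # Stage 2: raise the threshold so only the cores remain, then fit ellipses
    _, cores = cv2.threshold(img, 150, 255, cv2.THRESH_BINARY)
    contours = cv2.findContours(cores, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_NONE)[-2]
    for c in contours:
        (cx, cy), axes, angle = cv2.fitEllipse(c)                     # least-squares fit
        print(f"core centre=({cx:.1f}, {cy:.1f})  axes=({axes[0]:.1f}, {axes[1]:.1f})  angle={angle:.1f} deg")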
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82001F (2011) https://doi.org/10.1117/12.906699
Compressing images at a high ratio while retaining their details would greatly facilitate image transfer, storage, and processing. In view of this, a new pre-processing technique, the wavelet coefficients partition (WCP) algorithm, is presented in this paper; it subdivides the high-frequency coefficients of the wavelet decomposition into high-frequency and low-frequency information by thresholding and iterative calculation. The low-frequency information is compressed with the lifting-based 9/7 wavelet, while the high-frequency detail information is processed by the second-generation Bandelet transform. A specific optimization algorithm is also given, by which the computation is reduced by three quarters. Compared with the original image, the PSNR of the decompressed image at the same bit rate is improved by up to 10 dB, and the SSIM value approaches 1. The experimental results show that the compression method is easy to implement and that the visual quality of the restored image is better than that of images compressed by the wavelet transform, JPEG2000, Bandelets, or other conventional methods.
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82001G (2011) https://doi.org/10.1117/12.907036
In this paper, a structure-tensor-based approach is proposed for multi-focus image fusion within the wavelet framework.
The structure tensor is employed to extract local features in the detail sub-bands. A nonlinear flow based on the trace of
the structure-tensor matrix is applied to the matrix elements before the eigenvalues are calculated. The source data with
the larger eigenvalue contain more geometric features. An adaptive weight function is constructed to yield the new detail
coefficients of the fused image. Experimental results show that the proposed scheme improves performance compared with
some related wavelet approaches.
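As a hedged sketch of the fusion rule, the code below computes a smoothed structure-tensor response (its larger eigenvalue) for each detail sub-band and keeps the coefficient from the source with the larger response; the nonlinear flow on the tensor trace and the adaptive weight function of the paper are replaced by this simple choose-max rule, and the wavelet, level, and test images are assumptions.

    # Multi-focus fusion sketch: choose detail coefficients by the larger
    # structure-tensor eigenvalue (simplified; not the paper's adaptive weighting).
    import numpy as np
    import pywt
    from scipy.ndimage import gaussian_filter, sobel

    def tensor_response(band, sigma=1.0):
        gx, gy = sobel(band, axis=1), sobel(band, axis=0)
        jxx = gaussian_filter(gx * gx, sigma)            # smoothed tensor entries
        jyy = gaussian_filter(gy * gy, sigma)
        jxy = gaussian_filter(gx * gy, sigma)
        disc = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2)
        return 0.5 * (jxx + jyy + disc)                  # larger eigenvalue

    def fuse(img_a, img_b, wavelet="db2", level=2):
        ca, cb = (pywt.wavedec2(i.astype(float), wavelet, level=level)
                  for i in (img_a, img_b))
        fused = [0.5 * (ca[0] + cb[0])]                  # average approximation band
        for la, lb in zip(ca[1:], cb[1:]):
            bands = []
            for ba, bb in zip(la, lb):
                pick_a = tensor_response(ba) >= tensor_response(bb)
                bands.append(np.where(pick_a, ba, bb))
            fused.append(tuple(bands))
        return pywt.waverec2(fused, wavelet)

    rng = np.random.default_rng(7)
    sharp = rng.random((128, 128))
    left_blurred = sharp.copy()
    left_blurred[:, :64] = gaussian_filter(sharp[:, :64], 3)
    right_blurred = sharp.copy()
    right_blurred[:, 64:] = gaussian_filter(sharp[:, 64:], 3)
    fused = fuse(left_blurred, right_blurred)[:128, :128]
    print("RMSE vs. the all-sharp image:", round(float(np.sqrt(np.mean((fused - sharp) ** 2))), 4))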
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82001H (2011) https://doi.org/10.1117/12.907052
Multi-target tracking in infrared images is at the core of multi-sensor information fusion. Considering the features of
infrared imaging and the difficulties of multi-target tracking, a multi-target tracking method for infrared images based
on the joint probabilistic data association (JPDA) algorithm is proposed in this paper. A detection algorithm is applied
to the infrared images to obtain the initial information and the observations of the targets. Tracking gates are designed
and track initiation is established. The probabilities that the measurements within a track gate are associated with each
target track are calculated; these probabilities are used to form a weighted innovation of the measurements, which is fed
to the target-state estimation filter to complete the track prediction and filter update. Theoretical analysis and
experimental results show that the algorithm can solve the assignment problem between multiple observations and multiple
target tracks. Even two targets that cross or overlap can be tracked stably and effectively, and clutter can be removed
from the detections.
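The toy sketch below shows only the central JPDA idea for a single track: innovations of the validated measurements are combined with their association probabilities before a Kalman-style state update. Gating, track initiation, and the full joint-event enumeration of the paper are omitted, and all numbers are invented.

    # Toy JPDA-style update for one track: probability-weighted combined innovation
    # (gating, joint events and track initiation omitted; numbers are invented).
    import numpy as np

    H = np.eye(2)                                   # position-only measurement model
    P = np.diag([4.0, 4.0])                         # predicted covariance (position part)
    R = np.diag([1.0, 1.0])                         # measurement noise covariance
    x = np.array([10.0, 5.0])                       # predicted target position

    measurements = np.array([[10.8, 5.4], [12.5, 3.9]])   # two validated returns
    S = H @ P @ H.T + R                             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                  # Kalman gain

    def likelihood(z):
        v = z - H @ x
        return float(np.exp(-0.5 * v @ np.linalg.inv(S) @ v)
                     / (2 * np.pi * np.sqrt(np.linalg.det(S))))

    L = np.array([likelihood(z) for z in measurements])
    beta0 = 0.1                                     # assumed probability that none is correct
    beta = (1 - beta0) * L / L.sum()                # association probabilities

    combined = sum(b * (z - H @ x) for b, z in zip(beta, measurements))
    x_updated = x + K @ combined
    print("updated position estimate:", np.round(x_updated, 2))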
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82001I (2011) https://doi.org/10.1117/12.907217
Binocular vision is one of the key technologies for three-dimensional scene reconstruction in 3D machine vision, since
important three-dimensional information can be acquired from it: two or more pictures are first taken by cameras, and the
three-dimensional information contained in these pictures is then recovered through geometric and other relationships. In
order to improve the measurement accuracy of the image acquisition system, this article studies a binocular-vision image
acquisition system for three-dimensional scene reconstruction. Based on the parallax principle and the binocular imaging
of the human eye, an image acquisition scheme with a double optical path and double CCDs is proposed. Experiments
determine the best angle between the optical axes of the two optical paths and the best operating distance of the double
optical path. From this best axis angle and operating distance, the center distance between the two CCDs can be
determined. Two images of the same scene from different viewpoints are captured by the double CCDs; these two images lay a
good foundation for the three-dimensional reconstruction in later image processing. The experimental data show the
rationality of this method.
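A minimal parallax sketch: for two parallel cameras with focal length f and baseline B, a disparity d between the two views gives depth Z = f B / d. The numbers below are placeholders, not the calibrated parameters of the described system.

    # Depth from disparity for a parallel binocular setup (placeholder numbers).
    f_pixels = 1200.0      # focal length expressed in pixels
    baseline_m = 0.12      # distance between the two CCD optical centres, metres
    disparity_px = 36.0    # disparity of the same scene point in the two images

    Z = f_pixels * baseline_m / disparity_px
    print(f"estimated depth: {Z:.2f} m")            # 1200 * 0.12 / 36 = 4.00 m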
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82001J (2011) https://doi.org/10.1117/12.907235
This paper presents a novel region-merging method based on interactive information from the user. An image is first
partitioned into homogeneous regions by an initial segmentation, and the regions are labeled through an interactive scheme
in which the user only roughly specifies the position and main features of the object and background; every region then
becomes either a non-labeled region or a labeled region, i.e., object or background. A similarity rule guides the merging
process with the help of the user's markers, and the object of interest is then extracted from the image. Experimental
results show that the proposed method efficiently extracts the object of interest from a complex background.
Yao Yi, Liangcai Cao, Wei Guo, Yaping Luo, Qingsheng He, Guofan Jin
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82001K (2011) https://doi.org/10.1117/12.907279
Sweat pores and other level 3 features have been proven to provide additional discriminatory information about fingerprint
characteristics, which is useful for personal identification, especially in law enforcement applications. With the advent
of high-resolution (≥1000 ppi) fingerprint scanning equipment, sweat pores are attracting increasing attention in
automatic fingerprint identification systems (AFIS), where the extraction of pores is a critical step. This paper presents
a scale-parameter estimation method for a filtering-based pore extraction procedure. Pores are manually extracted from
1000 ppi grey-level fingerprint images, and the size and orientation of each detected pore are recorded together with the
local ridge width and orientation. The quantitative relation between the pore parameters (size and orientation) and the
local image parameters (ridge width and orientation) is obtained statistically. Pores are then extracted by filtering the
fingerprint image with the new pore model, whose parameters are determined by the local image parameters and the
statistically established relation. Experiments conducted on high-resolution fingerprints indicate that the new pore model
gives good pore-extraction performance.
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82001L (2011) https://doi.org/10.1117/12.910623
During optical remote sensing imaging, the relative motion between the sensor and the target may seriously corrupt image
quality. A precondition for restoring the degraded image is to estimate the point spread function (PSF) of the imaging
system as precisely as possible. Because of the complexity of the degradation process, the transfer function of the
degraded system is often completely or partly unknown, which makes it quite difficult to identify an analytic model of the
PSF precisely. Inspired by the similarity between quantum processes and the imaging process in terms of probability and
statistics, a reformed multilayer quantum neural network (QNN) is proposed to estimate the PSF of the degraded imaging
system. Unlike a conventional artificial neural network (ANN), an improved quantum neuron model is used in the hidden
layer, which introduces a two-bit controlled-NOT quantum gate to control the output and takes four texture and edge
features as the input vector. The supervised back-propagation learning rule is adopted to train the network on training
sets drawn from historical images. Test results show that this method offers high precision, fast convergence, and strong
generalization ability.
Xu-fen Xie, Wei Zhang, Ming Zhao, Xi-yang Zhi, Fu-gang Wang
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82001M (2011) https://doi.org/10.1117/12.919796
An algorithm based on the sequence arrangement of the wavelet transform (SAWT) is proposed for nonuniformity correction
(NUC) of infrared focal-plane arrays (IRFPA), aimed at remote sensing images. First, the distribution characteristics of
the pixel sequences of a remote sensing image sequence are analyzed in wavelet space and scale space. Second, a
reconstruction algorithm for NUC that uses the mean of the approximation sequence arrangement of the wavelet transform is
proposed. Finally, the nonuniformity of a real infrared image sequence is corrected by SAWT, and the correction effect is
compared with that of the Kalman-filter and Wiener-filter algorithms. The results show that the visual effect of SAWT is
better than that of the other two algorithms while requiring a smaller amount of image-sequence data; the residual
nonuniformity of the summed image of the sequence is 7-9 orders of magnitude lower than with the other two algorithms, and
the roughness of uniform image areas is lower by 0.0158-0.0544. SAWT is thus effective for NUC of IRFPAs with less data.
Proceedings Volume 2011 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 82001N (2011) https://doi.org/10.1117/12.920629
Based on the anomalous spectra produced by oil micro-leakage in China's Gobi and sparsely vegetated regions, six types of
spectral data were established as reference spectra for an oil and gas exploration database. The database contains the
USGS and JPL spectral data, spectra of alteration minerals in gas fields, carbonate and clay mineral spectra, and
hyperspectral data. The spectral characteristic information was extracted and integrated into the database. A series of
interfaces is provided so that users can add their own spectral features of oil and gas areas, which enhances the
scalability of the feature database. The typical altered-mineral spectra produced by oil micro-leakage in China's Gobi and
sparsely vegetated regions are comprehensively covered in the database, which will enrich China's spectral library and
provide guidance for oil and gas exploration by spaceborne and airborne hyperspectral remote sensing.