This PDF file contains the front matter associated with SPIE Proceedings Volume 7850, including the Title Page, Copyright Information, Table of Contents, Introduction, and the Conference Committee listing.
Unexpected attitude variation or random shaking of the camera carrier makes image sequences blurred and unstable. High-performance stabilization platforms built around precise gyroscopes are usually expensive and complex. Electronic image stabilization is cheap and dissipates little power, but its algorithms become substantially more intricate once intentional camera motion and moving objects in the video are taken into account, making it difficult to satisfy precision and real-time requirements simultaneously. We design an inexpensive stabilization platform using a small MEMS IMU, which limits the range of camera attitude variation and avoids instantaneous severe changes of the observed scene. The perspective collineation from three-dimensional space to the two-dimensional image plane is analyzed, and a pixel coordinate conversion model related to the camera attitude variation is derived and simplified. The IMU signal is then used to compensate the frame rotation and produce a stable video output.
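The rotation-compensation model can be illustrated with the pinhole-camera homography H = K R K⁻¹, which maps pixels of a purely rotated view back to the reference view. This is a minimal sketch; the intrinsics and the 1-degree roll below are hypothetical, not parameters from the paper.

```python
import numpy as np

def rotation_homography(K, R):
    """Pixel mapping induced by a pure camera rotation (pinhole model):
    a point at pixel p in the reference frame appears at H @ p after
    the rotation, with H = K R K^-1."""
    return K @ R @ np.linalg.inv(K)

def warp_point(H, u, v):
    """Apply the homography to one pixel coordinate (homogeneous divide)."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Hypothetical intrinsics: 800 px focal length, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# A 1-degree roll about the optical axis, as an IMU might report.
theta = np.deg2rad(1.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

H = rotation_homography(K, R)
u, v = warp_point(H, 320.0, 240.0)   # the principal point is fixed by a roll
```

Compensation then consists of warping each incoming frame by the inverse of the homography built from the IMU-reported rotation.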
The paper presents a low-power, inexpensive, and portable endoscopic imaging system. A 1.3-megapixel CMOS sensor is used for image capture. The sensor and lens system are designed to reduce the cannula diameter of the endoscope and therefore minimize the incision size needed for insertion. LVDS is used for image data transmission between the sensor and the CPU, giving a long-distance, high-speed, low-noise link. An ARM920T-based microcontroller serves as the control core for the image transmission, display, and other modules. The camera interface and LCD controller integrated in the microcontroller each have dedicated DMA support to transfer image data over the AHB to and from a frame buffer in system memory without CPU intervention. The image is displayed on an 8-inch LCD screen with 800 × 600 resolution and 16-bit color depth. With a maximum capture and display rate of 15 fps, the system provides images clear enough for laparoscopy or industrial applications. With its integrated camera, light source, and video display, it can also serve as a portable, miniature, and inexpensive endoscope.
Photoacoustic imaging is a promising technique in practical medicine for imaging the function of biological tissue and diagnosing internal organs. This paper presents a new photoacoustic imaging modality for internal organs. A wavelength-tunable laser was coupled into a multimode optical fiber, and the fiber was inserted into the inner tract of the samples to deliver light and excite photoacoustic signals. The outgoing photoacoustic signal was detected by a focused ultrasound transducer with a long focal length placed on the surface of the samples. By scanning the transducer, we obtained a 2D cross-sectional photoacoustic image. Finally, we evaluate the system's performance and demonstrate its capabilities by imaging phantoms with complex structure.
In weak-spectrum measurement, the dark current of the CCD can be very large and hinder practical application, so the dark current and other noise must be suppressed to obtain high-quality spectral data. In this paper we propose a modulation and correlation method to eliminate dark current in weak-spectrum measurement. In this design, only a compact camera shutter needs to be mounted on the spectrometer, which allows the CCD integration time and the light passing/blocking state to be controlled independently. To measure a weak spectrum, the shutter is modulated by a pseudorandom binary sequence at a fixed CCD integration time, and correlation processing is performed on the captured spectra to produce a high-quality spectrum in which the dark current is completely eliminated and random noise is suppressed. The system setup and a brief analysis of the method are introduced, and computer simulations and preliminary weak-spectrum measurement results are also provided.
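The correlation step can be sketched numerically: correlating the captured frames with a zero-mean version of the pseudorandom shutter sequence cancels any constant dark level exactly, because the modulation sequence sums to zero, while random noise averages down. All numbers below are synthetic, not measurement data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_pixels = 256, 64
true_spectrum = np.linspace(1.0, 5.0, n_pixels)    # weak signal (arb. units)
dark = 50.0                                        # large constant dark level

# Pseudorandom binary shutter sequence: 1 = open, 0 = blocked.
prbs = rng.integers(0, 2, n_frames)

# Each captured frame: signal only when the shutter is open, dark current
# always, plus random readout noise.
frames = (prbs[:, None] * true_spectrum + dark
          + rng.normal(0.0, 0.5, (n_frames, n_pixels)))

# Correlate with the zero-mean modulation sequence: the dark term multiplies
# sum(c) = 0 and vanishes; the random noise is averaged down.
c = prbs - prbs.mean()
est = (c[:, None] * frames).sum(axis=0) / (c * prbs).sum()
```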
A fiber optic image inverter is a special type of fiber optic faceplate that rotates an image through 180 degrees. Development of fiber optic image inverters has traditionally concentrated on heat-twisting experiments based on an existing glass system for fiber optic faceplate production. However, conventional high-expansion faceplate billets can hardly withstand a fast heat-twisting operation without breaking during the twist, because their thermal shock resistance is poor. Most manufacturers of high-expansion image inverters therefore adopt a long heat-twisting cycle, which results in low yield and other shortcomings. In response, a program for the design and fabrication of fiber optic image inverters was initiated to develop a new high-expansion fiber optic glass system with improved thermal shock resistance. This program has yielded a new high-numerical-aperture fiber optic glass system capable of withstanding a fast heat-twisting process, and it has been demonstrated to produce high-numerical-aperture fiber optic image inverters with higher transmission and improved contrast transfer characteristics. In this paper we review the fundamentals and design principles, the fabrication experiments, and the properties and performance of the fiber optic image inverters from a pilot run.
This paper presents a configurable distributed high-performance computing (HPC) framework for TDI-CCD imaging simulation. It uses the strategy pattern to accommodate multiple algorithms, helping to decrease simulation time at low expense.
Imaging simulation for a satellite-mounted TDI-CCD contains four processes: 1) degradation caused by the atmosphere, 2) degradation caused by the optical system, 3) degradation caused by the TDI-CCD electronics plus re-sampling, and 4) data integration. Processes 1) to 3) use diverse data-intensive algorithms such as the FFT, convolution, and Lagrange interpolation, which demand powerful CPUs: even on an Intel Xeon X5550 processor, conventional serial processing takes more than 30 hours for a simulation whose result image is 1500 × 1462 pixels. A literature survey found no mature distributed HPC framework in this field. We therefore developed a distributed computing framework for TDI-CCD imaging simulation based on WCF [1]; it uses a client/server (C/S) architecture and harnesses free CPU resources on the LAN. The server pushes the tasks of processes 1) to 3) to this free computing capacity, achieving HPC at low cost.
In an experiment with four symmetric nodes and one server, the framework reduced simulation time by about 74%, and adding more asymmetric nodes to the computing network decreased the time correspondingly. In conclusion, the framework can scale its computing capacity as long as the network and the task-management server remain affordable, offering a new HPC solution for TDI-CCD imaging simulation and similar applications.
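The push-tasks-to-free-nodes idea reduces to a work queue that idle workers drain. The sketch below uses Python threads as stand-ins for the paper's WCF LAN clients; it is illustrative only, not the framework's actual implementation.

```python
import queue
import threading

def run_tasks(tasks, num_workers):
    """Master/worker sketch: queued simulation stages are pulled by
    whichever worker is free, mimicking a server pushing work to
    idle LAN nodes."""
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                task = q.get_nowait()
            except queue.Empty:
                return                      # no work left for this node
            r = task()                      # run one simulation stage
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return results

# Eight independent dummy stages distributed over four "nodes".
out = run_tasks([lambda i=i: i * i for i in range(8)], num_workers=4)
```

Because the stages for different image blocks are independent, the speedup is limited mainly by the slowest node and the network transfer cost.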
Time complexity is one of the biggest problems of fractal image compression, an algorithm that can achieve high compression ratios. However, the algorithm is inherently data-parallel, so a parallel computation scheme is a natural way to address it. This paper uses an "equal division load" balancing algorithm to design a parallel fractal coding algorithm and implement fractal image compression. The "equal division load" algorithm distributes computation tasks across all processors: the load is divided into smaller tasks in proportion to the computing power of the nodes on the network, and these smaller tasks are then sent to the corresponding nodes to balance the load among them. Analysis shows that the algorithm greatly reduces the execution time of the component tasks.
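The "equal division load" idea, splitting the range-block comparisons in proportion to each node's computing power, can be sketched as follows; the node powers in the example are hypothetical.

```python
def divide_load(num_tasks, node_powers):
    """Split num_tasks among nodes in proportion to their relative
    computing power, handing leftover tasks to the fastest nodes."""
    total = sum(node_powers)
    shares = [num_tasks * p // total for p in node_powers]
    remainder = num_tasks - sum(shares)
    fastest_first = sorted(range(len(node_powers)),
                           key=lambda i: -node_powers[i])
    for i in range(remainder):
        shares[fastest_first[i % len(fastest_first)]] += 1
    return shares

# 1000 range blocks over three nodes of relative power 1 : 2 : 1.
shares = divide_load(1000, [1, 2, 1])   # -> [250, 500, 250]
```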
The Gaussian distribution model is often used to characterize the statistical behavior of images and other multimedia signals and to fit the probability density function of a signal. In practice, however, the probability density of a data source may be inherently non-Gaussian. Because the Johnson distribution family covers most common distribution types and provides frequency curves of broadly general shape, this paper uses the Johnson family to estimate the unknown parameters and approximate the empirical distribution. The method uses moments to initialize the parameters of the distribution family and then refines them with the EM algorithm. Experimental results show that the fitted model depicts both Gaussian and non-Gaussian probability density functions of image intensity quite successfully, and that the method has comparatively low computational complexity.
This paper presents a windowed phase correlation algorithm for subpixel motion estimation. Motion estimation methods are crucial for video coding, image stabilization, image deblurring, micro-mechanical motion compensation, and similar tasks. Conventional phase-based correlation algorithms usually suffer reduced precision or bias error due to aliasing and edge effects in real sampled imaging systems. A window function is applied to the images in the spatial domain before the Fourier transform to suppress frequency leakage. Furthermore, unreliable frequencies affected by aliasing errors are masked out in the frequency domain. Experiments show that the proposed approach yields improved accuracy and superior precision for motion estimation in the presence of aliasing and edge effects in real imaging systems compared to conventional phase-based correlation algorithms.
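A minimal sketch of the windowed approach, assuming integer shifts and a Hann window (the paper's frequency-masking and subpixel steps are omitted):

```python
import numpy as np

def windowed_phase_correlation(a, b):
    """Integer-shift estimate between frames a and b: window to suppress
    edge leakage, take the normalized cross-power spectrum, and find the
    correlation peak."""
    win = np.outer(np.hanning(a.shape[0]), np.hanning(a.shape[1]))
    A, B = np.fft.fft2(a * win), np.fft.fft2(b * win)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peak indices to signed shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(1)
ref = rng.random((128, 128))
moved = np.roll(ref, (3, -5), axis=(0, 1))   # shift by +3 rows, -5 columns
dy, dx = windowed_phase_correlation(moved, ref)
```

Subpixel refinement would interpolate around the correlation peak; the paper additionally masks out aliased frequencies before the inverse transform.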
A method for measuring the thickness of a transparent oil film on water based on laser triangulation is introduced: the film thickness is obtained from the displacement of the imaging spot and the configuration parameters of the imaging system. Calibration is therefore needed to determine the geometric parameters of the imaging system. A simple experimental calibration is performed, in which a series of corresponding thickness and displacement data are obtained and a calibration curve is fitted, yielding the system parameters: the object distance of the imaging system and the incident angle of the laser beam. Experiments were conducted with gauge blocks, diesel oil, and lubricant oil. The results verify the feasibility of the method, which is applicable to dynamic on-line measurement of the film thickness of oil spills on the sea surface.
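The calibration step amounts to fitting a curve through measured (displacement, thickness) pairs and inverting it at run time. The data below are hypothetical, and a linear model is assumed purely for illustration:

```python
import numpy as np

# Hypothetical calibration pairs: imaging-spot displacement (pixels)
# versus known film thickness (micrometres) set with gauge blocks.
displacement = np.array([0.0, 12.5, 25.1, 37.4, 50.2])
thickness = np.array([0.0, 50.0, 100.0, 150.0, 200.0])

# Fit a linear calibration curve: thickness = k * displacement + b.
k, b = np.polyfit(displacement, thickness, 1)

def thickness_from_spot(dx_pixels):
    """Convert a measured spot displacement to a film thickness."""
    return k * dx_pixels + b
```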
An algorithm quantifying the edge difference between a model seal (MS) and a sample seal (SS) is proposed for verifying Chinese seal imprints on bank checks. Differences between an MS and a deliberately faked SS may be slight, while differences also exist between an MS and a genuine SS owing to varying imprinting conditions. To evaluate the similarity between MS and SS, the edge difference is quantified by two parameters: the distance between non-overlapping corresponding edges and the length of each piece of non-overlapping seal edge. According to these two parameters, an SS is verified as true, false, or doubtful. 2000 seal imprints (1000 genuine and 1000 fake) were verified in experiments. All the fake seal imprints were identified accurately, even when their differences from the MS were minute; 27 genuine seal imprints were misclassified as doubtful owing to serious distortions. The false-acceptance rate was 0%, the false-rejection rate was 2.7%, and the correct recognition rate was 98.65%.
Based on the Abbe-Porter spatial filtering experiment, this paper extracts and studies sweat marks on the surfaces of a transparent chip and a CD, and compares the results with traditional optical methods. The experiments show that the optical filtering method yields the desired results. Compared with chemical-reagent development methods, the Abbe-Porter spatial filtering method is a nondestructive test.
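Abbe-Porter filtering places a mask in the Fourier (back focal) plane of a 4f system; numerically this is an FFT, a frequency-plane mask, and an inverse FFT. The toy scene below removes a vertical line grating by passing only the zero-horizontal-frequency column:

```python
import numpy as np

def spatial_filter(img, keep_mask):
    """4f-style Abbe-Porter filtering: Fourier-transform the input field,
    apply a binary mask in the (centered) frequency plane, transform back."""
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F * keep_mask)))

n = 64
cols = np.arange(n)
img = np.tile(0.5 + 0.5 * np.cos(2 * np.pi * cols / 8), (n, 1))  # line grating

mask = np.zeros((n, n))
mask[:, n // 2] = 1.0            # pass only horizontal frequency fx = 0

out = spatial_filter(img, mask)  # the grating vanishes; a uniform field remains
```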
Several composite camera systems have been built for wide coverage using three or four oblique cameras, with a virtual projection center and image used for geometric correction and mosaicking of the different projection angles and spatial resolutions produced by the oblique cameras. Here, an imaging method based on axis-shift theory is proposed to acquire wide-coverage images with several upright cameras. Four upright camera lenses share the same wide angle of view; the optical axis of each lens is offset from the center of its CCD, so that each CCD covers only one part of the whole focal plane. The oblique deformation caused by oblique cameras is avoided by this axis-shift imaging method. The principle and parameters are given and discussed. A prototype camera system was constructed from common DSLR (digital single-lens reflex) cameras. The angle of view can exceed 80 degrees along the flight direction at a focal length of 24 mm, and the base-to-height ratio can exceed 0.7 at 60% longitudinal overlap. Original and mosaicked images captured by this prototype system in ground and airborne experiments are presented. The test results show that the upright imaging method effectively avoids oblique deformation and meets the geometric precision required for image mosaicking.
The huge data volume of hyperspectral images challenges their transmission and storage, so an effective compression method is necessary. After analysis and comparison of current algorithms, a mixed compression algorithm based on prediction, the integer wavelet transform, and embedded zero-tree wavelet (EZW) coding is proposed in this paper. We implement the proposed algorithm on a high-performance TMS320DM642 digital signal processor (DSP). By modifying the mixed algorithm and optimizing its code, the processing efficiency of the program was significantly improved compared with the non-optimized version. Our experiments show that the mixed algorithm runs much faster on the DSP than on a personal computer, and the proposed method achieves nearly real-time compression with excellent image quality and compression performance.
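The integer wavelet stage can be illustrated with the simplest reversible member of the family, the S-transform (integer Haar): integer differences and floor-averaged sums, exactly invertible and therefore suitable for lossless coding. This is a one-level 1-D sketch only, not the paper's full pipeline.

```python
import numpy as np

def s_transform(x):
    """One level of the reversible integer (S) wavelet transform."""
    x = np.asarray(x, dtype=np.int64)
    d = x[1::2] - x[0::2]            # detail (high-pass) coefficients
    s = x[0::2] + (d >> 1)           # approximation (low-pass), floor average
    return s, d

def s_inverse(s, d):
    """Exact integer reconstruction from (s, d)."""
    even = s - (d >> 1)
    odd = even + d
    out = np.empty(2 * s.size, dtype=np.int64)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([5, 7, 3, 100, -2, 0, 8, 9])
s, d = s_transform(x)
restored = s_inverse(s, d)           # bit-exact copy of x
```

Because every operation is an integer add, subtract, or shift, the transform maps naturally onto fixed-point DSP arithmetic.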
Because of the non-uniformity of the sensitive materials, infrared focal plane arrays (IRFPAs) suffer from serious non-uniformity. In practical applications, two-point (or multi-point) correction is widely used, but residual non-uniformity caused by temporal drift still greatly degrades infrared imaging. Scene-based correction currently attracts researchers' interest, but it requires moving scenes, which narrows the range of applications and can produce misconvergence. This paper proposes a new non-uniformity correction algorithm that combines two-point correction with scene information; it eliminates the non-uniformity of the IRFPA and its temporal drift, and also suppresses salt-and-pepper noise.
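The two-point part of the algorithm computes a per-pixel gain and offset from two uniform reference sources; the simulated detector below is hypothetical, used only to show the calibration recovering a uniform scene.

```python
import numpy as np

def two_point_nuc(raw_low, raw_high, t_low, t_high):
    """Per-pixel gain/offset from two uniform reference frames so that
    corrected = gain * raw + offset reproduces the reference levels."""
    gain = (t_high - t_low) / (raw_high - raw_low)
    offset = t_low - gain * raw_low
    return gain, offset

rng = np.random.default_rng(2)
true_gain = rng.uniform(0.8, 1.2, (4, 4))      # simulated pixel responsivity
true_off = rng.uniform(-5.0, 5.0, (4, 4))      # simulated pixel offset

def detector(scene_level):
    """Hypothetical non-uniform detector: raw = (scene - off) / gain."""
    return (scene_level - true_off) / true_gain

gain, offset = two_point_nuc(detector(20.0), detector(35.0), 20.0, 35.0)
corrected = gain * detector(27.0) + offset     # a uniform 27-degree scene
```

The paper's contribution is to keep this gain/offset pair updated from scene information, so that temporal drift does not invalidate the calibration.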
Region labeling of binary images is an important part of image processing. For the special case of labeling small, multiple objects, a new region labeling algorithm based on boundary tracking is proposed in this paper. Experiments show that the algorithm is feasible and efficient, and even faster than some existing algorithms.
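For reference, the standard baseline that boundary-tracking methods are compared against is flood-fill connected-component labeling; the sketch below is that baseline, not the paper's boundary-tracking algorithm.

```python
import numpy as np
from collections import deque

def label_regions(binary):
    """4-connected component labeling of a binary image by BFS flood fill."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and labels[i, j] == 0:
                count += 1
                labels[i, j] = count
                q = deque([(i, j)])
                while q:
                    y, x = q.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x),
                                   (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = count
                            q.append((ny, nx))
    return labels, count

img = np.zeros((8, 8), dtype=bool)
img[1:3, 1:3] = True      # one 2x2 object
img[5, 5] = True          # one single-pixel object
labels, n = label_regions(img)
```

A boundary-tracking labeler visits only contour pixels of each region, which is why it can win for many small objects.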
An embedded three-dimensional (3-D) profilometry system based on a combination of gray-code and phase-shifting (GCPS) methods is proposed. The system consists of a digital-micromirror-device (DMD) based video projector, a high-speed CCD camera, and a DSP-based embedded signal processing system. In this technique, seven gray-code patterns and three sinusoidal fringe patterns with 120-deg phase shifts are packed into the red, green, and blue channels to form four color fringe patterns. When these four color patterns are sent to the DMD-based projector with its color filter removed, the gray-code patterns and the three sinusoidal fringe patterns are repeatedly projected onto the object surface in gray scale, sequentially. The fringe patterns deformed by the object surface are captured by a high-speed CCD camera synchronized with the projector. An embedded hardware system is developed to synchronize the camera and the projector and to take full advantage of the DSP's parallel processing capability for real-time phase retrieval and 3-D reconstruction. Since the number of projected images in GCPS is reduced from 11 to 4, the measurement speed is enhanced dramatically. Experimental results demonstrate the feasibility of the proposed technique for high-speed 3-D shape measurement.
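The phase-retrieval core of a three-step, 120-degree algorithm is a single arctangent per pixel, which is what makes it well suited to DSP implementation. A minimal sketch with a synthetic pixel:

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Wrapped phase from three fringe images with -120/0/+120 degree
    shifts, I_k = a + b*cos(phi + delta_k):
    phi = atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3)."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthetic single-pixel check: background a, modulation b, phase 0.7 rad.
a, b, phi = 2.0, 1.0, 0.7
i1, i2, i3 = (a + b * np.cos(phi + d)
              for d in (-2 * np.pi / 3, 0.0, 2 * np.pi / 3))
est = three_step_phase(i1, i2, i3)
```

The gray-code images then resolve the 2π ambiguity of this wrapped phase.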
After briefly presenting the detection principle and geometric model of airborne scanning lidar, this paper analyzes the error sources in the detection process and establishes a correction equation for the position error. Simplified models of the attitude changes and of the aircraft oscillation frequency are developed, and the outcomes of computer simulations of these models are analyzed. Finally, some elementary conclusions are drawn and suggestions are offered for improving detection accuracy and for error compensation of airborne scanning lidar data. The results are of reference value for research on improving the detection accuracy of airborne scanning lidar.
A stereo visual odometer for a vision-based navigation system is proposed in this paper. The stereo visual odometer obtains motion data to estimate the position and attitude of an ALV (Autonomous Land Vehicle). Two key technologies of the stereo visual odometer are discussed. The first is using SIFT (Scale-Invariant Feature Transform) to extract suitable features, match point pairs between the stereo images, and track features of the same object point across consecutive frames. The second is using the matched and tracked features to obtain the 3-D coordinates of object points at different times and to compute the motion parameters by motion estimation. Experiments were conducted in an unknown outdoor environment. The results show that the stereo visual odometer is accurate and that its measurement error does not grow as the travel distance increases; it can serve as an important supplement to conventional odometry.
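Given the 3-D feature coordinates before and after the vehicle moves, the motion parameters follow from the least-squares rigid alignment of the two point sets. The sketch below uses the standard Kabsch/Umeyama construction with synthetic data; the paper does not specify its exact estimator.

```python
import numpy as np

def rigid_motion(P, Q):
    """Least-squares rotation R and translation t with Q ~ R @ P + t,
    for matched 3-D points stored as columns of P and Q."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# Synthetic check: rotate by 0.3 rad about z and translate by (1, 2, 3).
rng = np.random.default_rng(3)
P = rng.random((3, 10))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0,         0.0,        1.0]])
t_true = np.array([[1.0], [2.0], [3.0]])
R, t = rigid_motion(P, R_true @ P + t_true)
```

In practice the estimate is wrapped in RANSAC to reject mismatched feature pairs.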
The segmentation of moving objects in video sequences is attracting more and more attention because of its important role in various camera video applications, such as video surveillance, traffic monitoring, and people tracking. Conventional segmentation algorithms fall into two classes. One class is based on spatial homogeneity, which yields promising output, but its computation is too complex and heavy for real-time applications. The other class uses change detection as the segmentation criterion to extract the moving object; typical approaches include frame difference, background subtraction, and optical flow. A novel algorithm based on adaptive symmetrical difference and background subtraction is proposed. First, the moving object mask is detected through the adaptive symmetrical difference, and the contour of the mask is extracted. Then, adaptive background subtraction is carried out in the acquired region to extract the accurate moving object, and morphological operations and shadow cancellation are adopted to refine the result. Experimental results show that the algorithm is robust and effective in improving segmentation accuracy.
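The symmetrical (double-sided) difference step can be sketched directly: a pixel is flagged as moving only when it differs from both the previous and the next frame, which removes the "ghost" that an ordinary frame difference leaves at the object's old position. The threshold here is fixed rather than adaptive, and the synthetic frames are illustrative only.

```python
import numpy as np

def symmetric_difference_mask(prev, curr, nxt, thresh):
    """Moving-object mask by symmetrical frame difference: a pixel belongs
    to the moving object only if it differs from BOTH neighbors in time."""
    d1 = np.abs(curr.astype(float) - prev) > thresh
    d2 = np.abs(nxt.astype(float) - curr) > thresh
    return d1 & d2

# A bright 3x3 square moving left to right across three frames.
prev, curr, nxt = (np.zeros((10, 10)) for _ in range(3))
prev[3:6, 0:3] = 255
curr[3:6, 3:6] = 255
nxt[3:6, 6:9] = 255
mask = symmetric_difference_mask(prev, curr, nxt, thresh=50)
```

The mask fires only at the object's current position; background subtraction is then confined to that region.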
Interactive projection systems based on CCD/CMOS sensors have developed greatly in recent years. They locate and trace the movement of a pen equipped with an infrared LED, and display the user's handwriting or react to the user's operation in real time. A major shortcoming, however, is that the location device and the projector are independent of each other, in both the optical system and the control system. This requires building two optical systems, calibrating the differences between the projector view and the camera view, and synchronizing the two control systems.
In this paper, we introduce a two-dimensional location method based on a digital micromirror device (DMD), in which the DMD serves as the display device and the position detector in turn. By serially flipping the micromirrors on the DMD according to a specially designed scheme and monitoring the reflected light energy, the image spot of the infrared LED can be located quickly. With this method, the same optical system, and the DMD itself, can be multiplexed for projection and location, reducing the complexity and cost of the whole system. The method also achieves high positioning accuracy and sampling rates. The results of location experiments are given.
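One simple realization of such a mirror-flipping scheme (our assumption, not necessarily the paper's exact scheme) is a binary search: reflect half of the candidate columns toward the sensor, test whether the LED spot's energy arrives, and halve the interval accordingly, locating the spot in O(log n) flips per axis. The measurement function below is an idealized stand-in for the photodetector.

```python
def locate_1d(energy_detected, n):
    """Binary-search localization over n mirror columns.
    energy_detected(lo, hi) reports whether the spot lies in [lo, hi)
    when only those columns reflect light to the detector."""
    lo, hi = 0, n
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if energy_detected(lo, mid):
            hi = mid                 # spot is in the reflecting half
        else:
            lo = mid                 # spot is in the other half
    return lo

# Idealized detector: the spot sits on column 37 of 64.
spot = 37
found = locate_1d(lambda a, b: a <= spot < b, 64)
```

Running the same search over rows gives the second coordinate.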
In a streak tube imaging lidar (STIL) system, the streak image captured by the CCD camera from the phosphor screen contains not only range information but also intensity information reflecting the material properties, orientation, and other attributes of the target. It is generally assumed that the image brightness on the screen reflects the laser intensity returned from the target. However, the brightness depends not only on the density of the electron beam but also on the accelerating voltage, so the relative intensity of the streak image is distorted: returns arriving at different times experience different accelerating voltages. A correction method is proposed in which the intensity information is weighted according to the range information; the reconstruction of the intensity image was studied, and good results were achieved.
Sniper tactics are widely used in modern warfare, creating an urgent requirement for counter-sniper detection devices. This paper proposes an anti-sniper detection system based on dual thermal imagers. By combining the infrared signatures of the muzzle flash and the bullet trajectory in the binocular infrared images obtained by the dual infrared imaging system, the exact location of the sniper is analyzed and calculated. The paper focuses on the system design method, including the structure and parameter selection; it also analyzes the location calculation based on binocular stereo vision and image analysis, and gives the fused result as the sniper's position.
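Once the flash or trajectory point is matched in both infrared images, the range follows from the standard binocular relation Z = fB/d. The values below are hypothetical, not system parameters from the paper.

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Pinhole binocular range: Z = f * B / d, with the focal length f in
    pixels, the baseline B in metres, and the disparity d in pixels."""
    return focal_px * baseline_m / disparity_px

# Hypothetical example: 1000 px focal length, 0.5 m baseline, 2 px disparity.
z = stereo_depth(1000.0, 0.5, 2.0)   # 250 m range
```

Because range error grows quadratically with range at fixed disparity noise, a long baseline and accurate subpixel matching are critical at sniper distances.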
In this study, the potential of visible and near-infrared spectral imaging as a technique for document inspection was examined. Questioned documents often arise in economic cases, where distinguishing original from added strokes and recovering illegible characters are very useful for judgment. Burned, covered, and rinsed documents whose characters cannot be identified with the naked eye were studied experimentally with a visible spectral imaging technique. In addition, inks of the same color were examined with both visible and near-infrared imaging spectrometers. Classification of the spectral images was carried out in the specialist spectral imaging software package Misystem, provided by the Institute of Forensic Science. The technique significantly improved the legibility of many documents, especially those that might be considered of poor quality or to contain borderline characters. Visible spectral imaging succeeded in detecting burnt Chinese characters written in pencil, and by imaging at characteristic frequencies it was possible to form spectral images showing strokes even when they were covered by Chinese ink. Because inks have spectra very different from those of fabrics, the contrast of rinsed lines and illegible seal characters on clothing was clearly enhanced. By examining the spectral images of the inks, it was possible to determine whether writing of the same color came from different pens. The results also show that the near-infrared spectrometer distinguishes identical-looking inks better than the visible one. In blind testing, spectral imaging achieved an average success rate of 85.1%. These results reveal the wide applicability of spectral imaging to document evidence analysis; the potential of this technique in forensic science will become more apparent with further study.
To calibrate an imaging spectrometer, the integrating sphere source is normally made traceable to a standard lamp, and the standard lamp must first be calibrated against a blackbody. In this paper, we make the integrating sphere source traceable to the blackbody directly, using the spectrometer's own spectral response. In the experiments, two spectral responses of the spectrometer (200 nm to 1100 nm) within 700 nm to 900 nm are obtained separately by measuring the spectral radiance of the blackbody at 1000 °C and of the integrating sphere source at some color temperature. Using the theoretical consistency of the two spectral responses, we obtain the equivalent color temperature of the integrating sphere source. We can then compute the spectral radiance at the exit of the integrating sphere from Planck's formula. In this way, we not only complete the calibration of the integrating sphere source but also make it traceable to the blackbody in a single step. Analysis of the results indicates that this method is sufficiently accurate for calibrating the spectral radiance of the integrating sphere source.
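The Planck-formula step can be sketched directly. The block below evaluates blackbody spectral radiance at the paper's 1000 °C (1273.15 K) blackbody temperature; the 800 nm sample wavelength is an illustrative point inside the 700-900 nm band:

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_spectral_radiance(wavelength_m: float, temp_k: float) -> float:
    """Blackbody spectral radiance L(lambda, T) in W / (m^2 * sr * m)."""
    num = 2.0 * H * C ** 2 / wavelength_m ** 5
    expo = H * C / (wavelength_m * KB * temp_k)
    return num / (math.exp(expo) - 1.0)

# Radiance of the 1000 degC (1273.15 K) blackbody at an 800 nm sample point:
L = planck_spectral_radiance(800e-9, 1273.15)
```

With the equivalent color temperature in hand, the same function evaluated across the band gives the spectral radiance at the sphere's exit port.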
This paper shows how a web camera can be used to realize a low-cost multichannel fiber-optic spectrometer suitable for educational purposes as well as for quality control in small and medium enterprises. Our key idea is to arrange N input optical fibers in a line and use an external dispersive element to separate the incoming optical beams into their associated spectral components in a two-dimensional (2-D) space. As a web camera comes with a plastic lens, each set of spectral components is imaged onto the 2-D image sensor of the web camera. For our demonstration, we build a 5-channel web-camera-based fiber-optic spectrometer and calibrate it simply by using eight light sources with known peak wavelengths. In this way, it functions as a 5-channel wavelength meter over a 380-700 nm wavelength range with a calculated wavelength resolution of 0.67 nm/pixel. Experimental results show that the peak operating wavelengths of a light-emitting diode (λp = 525 nm) and a laser pointer (λp = 655 nm) can be measured with ±2.5 nm wavelength accuracy. The total cost of our 5-channel fiber-optic spectrometer is ~USD 92.50.
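A calibration of this kind is typically a least-squares fit of the known peak wavelengths against their pixel positions on the sensor. A sketch under that assumption (the six hypothetical calibration points below are chosen to give roughly the paper's 0.67 nm/pixel dispersion; the paper itself used eight sources):

```python
def fit_linear(pixels, wavelengths_nm):
    """Least-squares line: wavelength = a * pixel + b."""
    n = len(pixels)
    sx, sy = sum(pixels), sum(wavelengths_nm)
    sxx = sum(p * p for p in pixels)
    sxy = sum(p * w for p, w in zip(pixels, wavelengths_nm))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b   # a is the dispersion in nm/pixel, b the wavelength offset

# Hypothetical peak positions of six known sources on the sensor:
pixels = [30, 105, 180, 255, 330, 405]
waves = [400.0, 450.0, 500.0, 550.0, 600.0, 650.0]
a, b = fit_linear(pixels, waves)
wavelength_of = lambda p: a * p + b   # pixel index -> wavelength (nm)
```

Once fitted, any peak located on the sensor maps to a wavelength through `wavelength_of`, which is how the device serves as a wavelength meter.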
The theory of the thermal infrared imaging Fourier transform spectrometer is discussed, showing that the interference efficiency is an important factor in the SNR of such an instrument. The interference efficiency depends on the transverse-shear beam splitting. After studying this kind of beam splitting, formulas for the thermal infrared imaging Fourier transform spectrometer are derived and simulation models are built. Finally, the interference efficiency of the imaging Fourier transform spectrometer is calculated, and the relationship between interference efficiency and SNR is briefly given.
An airport-runway centerline location method is proposed for extracting the runway in images from a one-off aerial imaging system. Such a system captures images at an altitude of about one kilometer or below, so detailed features of the scene are clearly revealed; the proposed method relies on this precondition to detect and locate the runway centerline. The method has four steps: edge detection, dominant line-orientation extraction, distance-histogram building, and centerline location. A salient-edge detection method is developed with a Sobel detector, which can detect the edges of runway strips despite the disturbance of edge features from surrounding objects. A traditional Hough transform is then performed to build a Hough map, from which the dominant line orientation is extracted. Given the dominant orientation, a reference straight line is chosen for building a one-dimensional distance histogram of the distances from all edge pixels in the edge map to the reference line. The runway produces a three-peak pattern in this histogram, and the center peak corresponds to the runway centerline. Experiments with simulated images show that the method can locate the airport runway centerline effectively.
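The distance-histogram step can be sketched as follows, assuming the reference line is given in the Hough normal form x·cosθ + y·sinθ = ρ (the toy edge map of three parallel strips below is illustrative):

```python
import math
from collections import Counter

def distance_histogram(edge_pixels, theta_rad, rho_ref, bin_width=1.0):
    """1-D histogram of signed distances from edge pixels to the reference
    line x*cos(theta) + y*sin(theta) = rho_ref."""
    hist = Counter()
    c, s = math.cos(theta_rad), math.sin(theta_rad)
    for x, y in edge_pixels:
        d = x * c + y * s - rho_ref
        hist[round(d / bin_width)] += 1
    return hist

# Three vertical strips (theta = 0): two runway edges and the centerline.
edges = [(x, y) for x in (0, 20, 40) for y in range(50)]
hist = distance_histogram(edges, theta_rad=0.0, rho_ref=20.0)
# Peaks appear at bins -20, 0, +20; the center peak marks the centerline.
```

Searching the histogram for the characteristic three-peak pattern and taking the middle peak recovers the centerline's offset from the reference line.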
During the past decades, a signal processing architecture based on an FPGA, a conventional DSP processor, and a host computer has been popular for infrared and other electro-optical systems. With increasing processing requirements, this architecture is beginning to show its limitations in several respects. This paper elaborates an FPGA-based solution for a panoramic imaging system as our first step in upgrading the processing module to a System-on-Chip (SoC) solution. First, we compare the new architecture with the traditional one to show its superiority, mainly in video processing ability, reduced development workload, and miniaturization of the system. The paper then provides an in-depth description of the imaging system, including its architecture and functions, and addresses several related issues along with future development. FPGAs have developed rapidly in recent years, not only in silicon devices but also in design flows and tools. Finally, we briefly present our planned system development and introduce new design tools that make up for the limitations of the traditional FPGA design methodology. The advanced design flow through Simulink® and the Xilinx® System Generator (Sysgen) is elaborated; it enables engineers to develop sophisticated DSP algorithms and implement them in FPGAs more efficiently. We believe this design approach can shorten the system design cycle by allowing rapid prototyping and iterative refinement of the design.
The Earth's digital elevation, which affects space-camera imaging, is prepared and its influence on imaging is analyzed. Based on the image-motion velocity-matching error required by the TDI CCD integration stages, a statistical experimental method (the Monte Carlo method) is used to calculate the distribution histogram of the Earth's elevation in an image-motion compensation model that includes satellite attitude changes, orbital angular-rate changes, latitude, longitude, and orbital-inclination changes. Elevation information for the Earth's surface is then read from SRTM data, and an Earth elevation map produced for aerospace electronic cameras is compressed and spliced, so that elevation data can be fetched from flash memory according to the latitude and longitude of the imaging point. When the required elevation falls between two stored samples, it is obtained by linear interpolation, which adequately follows the variations of rugged mountains and hills. Finally, a deviation framework and the camera controller are used to test the behavior of deviation-angle errors, and a TDI CCD camera simulation system, built on a model mapping object points to image points, is used to analyze the imaging MTF and a cross-correlation similarity measure; the simulation accumulates the horizontal and vertical offsets by which the TDI CCD imaging exceeds the corresponding pixel, so as to simulate camera imaging as the stability of the satellite attitude changes. The process is practical: it effectively controls the camera memory space while meeting the TDI CCD camera's precision requirements for image-motion velocity matching and imaging.
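The interpolation between stored elevation samples might look like the following bilinear sketch, the 2-D form of the linear interpolation described above (the coarse DEM values are illustrative, not SRTM data):

```python
def bilinear(grid, x, y):
    """Bilinearly interpolate a 2-D elevation grid at fractional indices (x, y)."""
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(grid[0]) - 1)
    y1 = min(y0 + 1, len(grid) - 1)
    fx, fy = x - x0, y - y0
    # Interpolate along x on both rows, then along y between the rows.
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
    bot = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
    return top * (1 - fy) + bot * fy

# Elevations (m) on a coarse grid, queried between samples:
dem = [[100.0, 200.0],
       [300.0, 400.0]]
h = bilinear(dem, 0.5, 0.5)   # 250.0
```

A lookup keyed by latitude and longitude would first convert the shooting point to fractional grid indices, then call `bilinear`.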
As is well known, image motion degrades the image quality of a satellite-borne TDI CCD camera. Although many theories of image motion have been proposed to cope with this problem, few ground simulations have been done to verify them. In this paper, a ground-based physical simulation system for TDI CCD imaging is therefore developed and specified. It consists of a physical simulation subsystem for precise satellite attitude control based on a 3-axis air-bearing table, and an imaging simulation subsystem that uses an area-array CCD to emulate a TDI CCD. The system realizes not only a precise simulation of satellite attitude control, with pointing accuracy better than 0.1° and stability better than 0.01°/s, but also an imaging simulation of a 16-stage TDI CCD with an integration time of 0.1 s. The paper also gives a mathematical model of the image motion of this system analogous to that of a satellite-borne TDI CCD, together with a detailed description of the principle of using an area-array CCD to emulate a TDI CCD. Experimental results agree with the mathematical simulation and show that image quality deteriorates severely when the correspondence between the image velocity and the charge-transfer velocity is broken, which confirms both the validity of the system design and the proposed image-motion theory of the TDI CCD.
The coupling gain coefficient g is redefined and derived from coupling theory, and its variation with ΓL and r is analyzed. A new optical system is proposed for image edge enhancement: it recycles the back-reflected signal to amplify the edge signal, which gives high throughput efficiency and brightness. The optical system is designed and built, and an edge-enhanced image of a hand bone is captured electronically with a CCD camera. The principle of optical correlation is demonstrated; the 3-D correlation distributions of the letter H with and without edge enhancement are simulated, and the discrimination capability Iac and the full width at half maximum (FWHM) are compared for the two kinds of correlators. The analysis shows that edge-enhancement preprocessing can effectively improve correlator performance.
Even images obtained by an aberration-free system are defocus-blurred by motion in depth and/or zooming. The precondition for restoring a degraded image is to estimate the point spread function (PSF) of the imaging system as precisely as possible, but the analytic model of the PSF is difficult to identify because of the complexity of the degradation process. Inspired by the similarity between quantum processes and the imaging process in the fields of probability and statistics, a reformed multilayer quantum neural network (QNN) is proposed to estimate the PSF of a defocus-blurred image. Unlike a conventional artificial neural network (ANN), an improved quantum neuron model is used in the hidden layer; it introduces a 2-bit controlled-NOT quantum gate to control the output and takes two texture and edge features as input vectors. The supervised back-propagation learning rule is used to train the network on training sets drawn from historical images. Test results show that the method achieves high precision and strong generalization ability.
When capturing a dark scene containing a very bright object, a monitoring camera saturates in some regions, and details are lost in and near these saturated regions because of glare. This work develops a real-time night monitoring system that reduces the influence of glare and recovers more detail from an ordinary camera when exposing a high-contrast scene, such as a car with its headlights on at night. The system consists of a spatial light modulator (liquid crystal on silicon: LCoS), an image sensor (CCD), an imaging lens, and a DSP. The LCoS device, a reflective liquid crystal, can digitally modulate the intensity of the reflected light at every pixel. Through this modulation, the CCD is exposed region by region: under DSP control, the light intensity is reduced to a minimum in the glare regions and is regulated by negative feedback based on PID theory in the other regions. More detail of the object is thus imaged on the CCD, and glare protection of the monitoring system is achieved. In the experiments, the feedback is controlled by an embedded system based on a TI DM642. Experiments show that this feedback modulation not only reduces glare and improves image quality but also extends the dynamic range of the image. High-quality, high-dynamic-range images are captured in real time at 30 Hz. The modulation depth of the LCoS determines how strong a glare can be suppressed.
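The per-region negative-feedback regulation could be sketched as a discrete PID loop. Everything below is an illustrative assumption rather than the paper's actual controller: the gains, the linear sensor model (brightness proportional to transmittance), and the incremental update of the LCoS transmittance.

```python
class PID:
    """Discrete PID controller for per-region exposure feedback."""
    def __init__(self, kp, ki, kd, out_min=0.0, out_max=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_err = None

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err
        deriv = 0.0 if self.prev_err is None else err - self.prev_err
        self.prev_err = err
        out = self.kp * err + self.ki * self.integral + self.kd * deriv
        return max(self.out_min, min(self.out_max, out))

# Drive one region's mean grey level toward 128 by incrementally adjusting
# the LCoS transmittance t (toy linear sensor model: brightness = 255 * t).
pid = PID(kp=0.002, ki=0.0001, kd=0.0, out_min=-0.05, out_max=0.05)
t = 0.5
for _ in range(300):
    mean_grey = 255.0 * t
    t += pid.step(128.0, mean_grey)
```

In the glare regions the controller would simply be bypassed and the transmittance clamped to its minimum, matching the sub-region exposure scheme described above.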
This paper presents a novel fast block-matching algorithm based on a high-accuracy gyro for stabilizing shaking images. It first acquires the motion vector from the gyro, then determines the initial search position and classifies the image motion into small, medium, and large modes using that vector. Finally, a fast block-matching algorithm is designed by improving four types of search templates (square, diamond, hexagon, octagon). Experimental results show that the algorithm is about 50% faster than common methods (such as NTSS, FSS, and DS) while maintaining the same accuracy.
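A gyro-guided block match can be sketched as a SAD search centered on the gyro-predicted offset. This sketch uses a plain full search within a small radius rather than the paper's improved templates, and the frames and parameters are toy values:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b) for a, b in zip(ra, rb))

def get_block(img, x, y, size):
    return [row[x:x + size] for row in img[y:y + size]]

def match_block(ref, cur, bx, by, size, pred_dx, pred_dy, radius):
    """SAD search for the block's motion vector around the gyro prediction."""
    best_mv, best_cost = None, float("inf")
    target = get_block(ref, bx, by, size)
    for dy in range(pred_dy - radius, pred_dy + radius + 1):
        for dx in range(pred_dx - radius, pred_dx + radius + 1):
            x, y = bx + dx, by + dy
            if 0 <= x <= len(cur[0]) - size and 0 <= y <= len(cur) - size:
                cost = sad(target, get_block(cur, x, y, size))
                if cost < best_cost:
                    best_mv, best_cost = (dx, dy), cost
    return best_mv

# Toy frames: the current frame is the reference shifted right by 3 pixels,
# and the "gyro" predicts an offset of (2, 0).
W, H = 16, 8
ref = [[(7 * x + 13 * y) % 251 for x in range(W)] for y in range(H)]
cur = [[ref[y][(x - 3) % W] for x in range(W)] for y in range(H)]
mv = match_block(ref, cur, bx=2, by=2, size=4, pred_dx=2, pred_dy=0, radius=2)   # (3, 0)
```

The gyro's role is to shrink the search radius: a good prediction lets a small window (or a small template) find the true vector, which is the source of the reported speed-up.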
The opto-electronic conversion function (OECF) is defined as the relationship between the input luminance and the digital output levels of an opto-electronic digital image capture system. It is a fundamental parameter for evaluating the performance of a digital still-picture camera. An experimental device was set up to measure OECFs using test charts with twelve neutral patches stepped in different visual-density increments and an integrating-sphere uniform illuminator. To determine a camera's OECF, images of the test charts were captured under controlled conditions and processed by computer. For each trial, the mean digital output level was determined from a 64 × 64 pixel area located at the same relative position in each image. Several digital still-picture cameras were selected as test samples, and their OECFs differed over a wide range of illumination. In addition, the dynamic range, incremental gain, and SNR were calculated from the OECF test-chart image data.
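The per-patch measurement can be sketched as follows; the 12-patch luminances and output levels below are hypothetical, for illustration only:

```python
def patch_mean(img, x0, y0, size=64):
    """Mean digital output level over a size x size patch (one OECF sample)."""
    vals = [img[y][x] for y in range(y0, y0 + size) for x in range(x0, x0 + size)]
    return sum(vals) / len(vals)

def oecf(patch_luminances, patch_means):
    """Tabulated OECF: (luminance, mean output level) pairs, sorted by luminance."""
    return sorted(zip(patch_luminances, patch_means))

flat = [[7] * 70 for _ in range(70)]   # a uniform 70 x 70 toy patch image
m = patch_mean(flat, 3, 3)             # 7.0

# Hypothetical 12-patch chart: luminances (arbitrary units) and mean levels.
lums = [2 ** i for i in range(12)]
means = [20 + 18 * i for i in range(12)]
curve = oecf(lums, means)
```

Dynamic range and incremental gain then fall out of the tabulated curve: the former from the luminance span between the lowest and highest usable levels, the latter from the slope between adjacent points.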
An LED-array-based range imaging system is proposed for three-dimensional (3-D) shape measurement. The range image is obtained by time-division electronic scanning of the LED time-of-flight (TOF) range finders in the array, so no complex mechanical scanning is needed. Combined with a low-cost CCD/CMOS sensor for capturing the two-dimensional (2-D) image, the proposed range imaging system can accomplish high-quality 3-D imaging. A co-lens optical path is designed to ensure natural registration between the range image and the 2-D image. Experimental tests evaluating the imaging system's performance are described. The 3-D images were acquired at a rate of 10 frames per second with a depth resolution better than 5 mm over the range 50-1000 mm, which is sufficient for many practical applications, including obstacle detection in robotics, machine automation, 3-D vision, virtual-reality games, and 3-D video.
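Each LED TOF range finder converts a measured round-trip time into range as r = c·t/2, which can be sketched as:

```python
C = 2.99792458e8  # speed of light in vacuum, m/s

def tof_range(round_trip_s: float) -> float:
    """Range from a time-of-flight measurement: r = c * t / 2."""
    return C * round_trip_s / 2.0

# A ~6.67 ns round trip corresponds to roughly 1 m of range:
r = tof_range(6.671e-9)
```

The millimeter-level depth resolution quoted above implies resolving round-trip time differences on the order of tens of picoseconds, which is why the range finders, not the 2-D sensor, set the depth accuracy.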
One of the key procedures in color image compression is to extract regions of interest (ROIs) and assign them different compression ratios. This paper proposes a new, efficient non-uniform color image compression algorithm that uses a biologically motivated selective-attention model for the effective extraction of ROIs in natural images. Once the ROIs have been extracted and labeled in the image, the ROIs and the remaining regions are encoded with different compression ratios via the popular JPEG algorithm. Quantitative and qualitative experimental analysis shows excellent performance compared with traditional color image compression approaches.
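The non-uniform encoding idea can be illustrated with a toy stand-in: per-pixel quantization that is fine inside the ROI and coarse outside it (a real implementation would instead set different JPEG quality factors per region; steps and images below are illustrative):

```python
def quantize(value, step):
    """Coarse mid-tread quantization; a larger step discards more detail."""
    return (value // step) * step + step // 2

def compress_nonuniform(img, roi_mask, roi_step=2, bg_step=32):
    """Quantize finely inside the ROI and coarsely outside it."""
    return [[quantize(p, roi_step if m else bg_step)
             for p, m in zip(prow, mrow)]
            for prow, mrow in zip(img, roi_mask)]

# Left half of the toy image is the ROI; right half is background.
img = [[10 * i for i in range(8)] for _ in range(4)]
mask = [[1 if i < 4 else 0 for i in range(8)] for _ in range(4)]
out = compress_nonuniform(img, mask)
```

The ROI pixels deviate from the original by at most step/2 = 1 level, while the background may deviate by up to 16 levels, mirroring how the attention-selected regions keep high fidelity at a lower overall bit rate.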
A mobile robot is an automatic machine capable of movement in a given environment, and many automatic control techniques have been proposed for such robots. A line tracer, one of the most popular robots, follows a white line on the floor. The authors have developed a mobile robot that moves to an indicated point automatically: all the user has to do is indicate the goal point. In this paper, we propose an automatic mobile robot system controlled by an invisible marker and remote indication using augmented reality technology.
The authors have researched multimedia and support systems for nursing studies and practice in reminiscence therapy and life review therapy. The concept of the life review was presented by Butler in 1963: the process of thinking back on one's life and communicating about it to another person is called life review. There is a famous episode concerning memory known as the Proustian effect, described in Proust's novel, in which the storyteller recalls an old memory upon dipping a madeleine in tea. Many scientists have since investigated why smells trigger memory. Although the mechanism is not yet clear, the authors focus on the relation between smell and memory and have added an olfactory display to the multimedia system so that smells can become a trigger for recalling buried memories. An olfactory display is a device that delivers smells to the nose; it provides special effects such as emitting a scent as if one were present in a scene, or cueing the recall of a memory. The authors have developed a tabletop display system connected to the olfactory display. To deliver a scent to the user's nose, the system must recognize and measure the positions of the user's face and nose. In this paper, the authors describe an olfactory display that detects the nose position for effective delivery.
The outstanding feature of the laser range profile (LRP) is that it can obtain the 3-D shape and range information of a target from a single pulse, without an optical scanning system. In this paper, laser range profile theory and simulation are investigated. Based on the theory of beam scattering by rough objects, pulse-wave scattering theory, and the radar equation, a formula for the laser range profile is derived. This equation depends in part on the target's scattering strength, quantified by its radar cross section. As an example, laser range profiles are simulated for a sphere, and the influence of pulse width, beam parameters, transmit-receive angle, and target roughness on the simulation results is analyzed. The peak positions of the laser range profile curve correspond to the radial dimension, and the peak values contain information about the geometric shape; that is, for the sphere, the extent of the profile corresponds to the radius, and the shape of the range profile curve follows the lateral surface profile. This work offers a theoretical basis and a simulation method for extracting and identifying target features in the laser waveband.
In a conventional imaging laser radar, the resolution of the target is diffraction-limited, being set by the beamwidth of the laser in the target plane and the telescope's aperture. Synthetic aperture imaging ladar (SAIL) is an imaging technique that applies aperture synthesis to coherent laser radar: the resolution is determined by the total frequency spread of the source and is independent of range, so fine resolution can be achieved at long range. Here, ray tracing is used to obtain the two-dimensional scattering properties of an actual target from its three-dimensional geometric model, and the range-Doppler algorithm is used for the synthetic aperture processing in the laser image simulation. The results show that SAIL supports better resolution.
In this paper, a hiding method based on PCA is proposed for storing vast quantities of image information. A sequence of eigenimages of the objects is obtained by the PCA method, and the wavelet coefficients of the eigenimages are embedded into the wavelet domain of the carrier image. When the hidden information is extracted, the decomposition coefficients can be used to reconstruct the objects. The proposed method does not store the object images directly, but rather the eigenimages that contain the objects' information. Experimental results show that the object features are effectively embedded into the carrier image and that the proposed algorithm has a high capacity.
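The eigenimage step is ordinary PCA on flattened images. A minimal power-iteration sketch for the first eigenimage, using toy two-pixel "images" (a real implementation would compute the full eigenbasis):

```python
def top_eigenimage(images, iters=200):
    """Power iteration for the first PCA eigenimage of flattened images."""
    n = len(images)
    mean = [sum(col) / n for col in zip(*images)]
    centered = [[v - m for v, m in zip(img, mean)] for img in images]
    v = [1.0] * len(mean)
    for _ in range(iters):
        # Apply the covariance-like operator X^T X without forming it.
        proj = [sum(c * vi for c, vi in zip(row, v)) for row in centered]
        w = [sum(p * row[j] for p, row in zip(proj, centered)) for j in range(len(v))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return mean, v

# Toy two-pixel "images" whose variation lies along the (1, 1) direction:
imgs = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]]
mean, e1 = top_eigenimage(imgs)
```

Projecting each object image onto the leading eigenimages gives the compact coefficients that are actually hidden; the objects are later reconstructed as mean plus the coefficient-weighted eigenimages.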
When digital images are mosaicked, brightness differences between the images to be mosaicked produce mosaic artifacts in the final image, owing to non-uniformity caused by optical-system vignetting as well as gain changes triggered automatically by scene changes. A brightness-adaptive algorithm for seamless image-mosaic fusion is studied in this paper. The method proceeds as follows: 1) estimate the visibility of the stitching traces from the brightness differences between the images to be mosaicked; 2) adjust the brightness of the images to reduce the difference until the mosaic artifacts cannot be perceived by the human visual system; 3) blend the images using a multi-scale analysis method. Experiments indicate that the method adaptively adjusts brightness for seamless multi-scale blending, and the quality of the mosaicked image meets the requirements of human vision.
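The brightness-adjustment step (2) can be sketched as a gain match over the overlap region. The single multiplicative gain model below is an illustrative assumption, not the paper's exact algorithm:

```python
def match_brightness(img_a, img_b, overlap_cols):
    """Scale img_b by one global gain so its mean over its left overlap
    columns matches img_a's mean over its right overlap columns."""
    a_vals = [row[c] for row in img_a for c in range(-overlap_cols, 0)]
    b_vals = [row[c] for row in img_b for c in range(overlap_cols)]
    gain = (sum(a_vals) / len(a_vals)) / (sum(b_vals) / len(b_vals))
    return [[min(255.0, p * gain) for p in row] for row in img_b]

# img_b is the same scene captured at half the gain of img_a:
img_a = [[100.0, 100.0, 100.0, 100.0] for _ in range(3)]
img_b = [[50.0, 50.0, 50.0, 50.0] for _ in range(3)]
adj = match_brightness(img_a, img_b, overlap_cols=2)   # all pixels become 100.0
```

After the gains are equalized, the residual seam is handled by the multi-scale blending of step (3), which feathers low frequencies over wide bands and high frequencies over narrow ones.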
Assessment of visual image quality is of fundamental importance to numerous image and video processing applications. Visual information fidelity (VIF) is a criterion based on modeling of natural scene statistics, image distortion, and human visual distortion. Traditionally, image quality assessment (QA) algorithms interpret image quality as fidelity or similarity to a "reference" or "perfect" image. We apply the VIF method to image enhancement by taking the distorted image, instead of a "perfect" image, as the reference, and assessing the quality of the enhanced image against it. This offers clear advantages over traditional approaches because the VIF index incorporates HVS features under certain conditions; in particular, it can be measured using only the original and enhanced images. We validate the performance of our method with an extensive subjective study, which shows that it outperforms current methods in our testing.
The signal processing flow for the MTF test bench that is based on the Fourier analysis method is
presented.
The signal processing flow mainly consists of three parts: Fourier analysis, background
correction, and system attenuation elimination. The center of the pinhole area is recognized
automatically, and the line spread functions (LSF) of both the sagittal and tangential directions are
calculated. A second-order fast Fourier transform is executed so that a primary two-direction MTF result
is gained. Either automatic Fourier-domain background correction or manual time-domain background
correction is executed. Finally, the attenuation of the tested MTF result due to the influence of the detector and
pinhole is eliminated.
A commercially available 50-mm plano-convex lens is tested as the sample to validate the
accuracy of the signal processing flow of the MTF test bench. The test error is below 0.01 under
200 lp/mm.
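The core Fourier-analysis step, turning a measured LSF into an MTF curve as the normalized magnitude spectrum, can be sketched as follows. The sampling interval and the Gaussian test LSF are illustrative; background correction and detector/pinhole attenuation removal are omitted:

```python
import numpy as np

def mtf_from_lsf(lsf, dx):
    """MTF as the normalized magnitude of the Fourier transform of a
    line spread function (a minimal sketch of the Fourier step above)."""
    spectrum = np.abs(np.fft.rfft(lsf))
    mtf = spectrum / spectrum[0]              # normalize to unity at zero frequency
    freqs = np.fft.rfftfreq(len(lsf), d=dx)   # spatial frequencies (cycles/mm)
    return freqs, mtf

# A centered Gaussian LSF (1 um sampling, widths in mm) gives a Gaussian MTF
x = np.arange(-64, 64) * 0.001
lsf = np.exp(-x**2 / (2 * 0.005**2))
freqs, mtf = mtf_from_lsf(lsf, 0.001)
```

Since the LSF is non-negative, the DC bin dominates and the curve is bounded by 1, matching the usual MTF normalization.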
The accuracy of the Shack-Hartmann wavefront sensor (SHWS) for measuring a distorted wavefront depends mainly
on the measurement accuracy of the centroid of the focal spot. Many methods have been presented to
improve the accuracy of the wavefront centroid measurement by weakening the influence of various noises, such as
photon noise, read-out noise, background noise, and unevenness and instability of the light source. In general,
these methods use the first-moment centroid algorithm to calculate the centroid over the whole sub-aperture. In this
paper, we present an improved centroid measurement approach that calculates the centroid of the focal spot more
precisely using a higher-moment centroid method in an optimized detection window. With the improved method,
the effects of various noises outside the optimized detection window are almost eliminated; furthermore, the noise
influence inside the optimized detection window is also weakened, because the focal spot intensity contributes more heavily there.
The experimental results demonstrate that the precision and repeatability of the focal spot centroid are better than
the results obtained via other commonly used centroid methods.
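The windowed higher-moment centroid can be sketched as below. The window is passed in explicitly; the paper's window-optimization step, and the function and parameter names, are assumptions:

```python
import numpy as np

def windowed_centroid(img, window, order=2):
    """Centroid of a focal spot using intensity**order weighting inside a
    detection window (row0, row1, col0, col1). Raising the intensity to a
    higher moment emphasizes the bright spot core over residual noise."""
    r0, r1, c0, c1 = window
    sub = img[r0:r1, c0:c1].astype(float) ** order
    rows, cols = np.mgrid[r0:r1, c0:c1]
    total = sub.sum()
    return (rows * sub).sum() / total, (cols * sub).sum() / total
```

Pixels outside the window contribute nothing, which is how the out-of-window noise terms are eliminated.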
Realistic image rendition aims to reproduce the human perception of natural scenes. Retinex is a classical algorithm that
simultaneously provides dynamic range compression, contrast enhancement, and color constancy of an image. In this paper, we
discuss the design of a digital signal processor (DSP) implementation of the single-scale monochromatic Retinex algorithm.
The target processor is the Texas Instruments TMS320DM642, a 32-bit fixed-point DSP clocked at 600 MHz. This
DSP hardware platform offers low power consumption and strong video image processing capability. We give an
overview of the DSP hardware and software, and discuss some feasible optimizations to achieve a real-time version of
the Retinex algorithm. Finally, the performance of the algorithm executing on the DSP platform is shown.
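The single-scale monochromatic Retinex can be sketched in floating point as the log of the image minus the log of a Gaussian-blurred surround; the paper's DSP version uses fixed-point arithmetic and platform-specific optimizations instead, and the sigma here is illustrative:

```python
import numpy as np

def single_scale_retinex(img, sigma=3.0):
    """Single-scale Retinex: log(image) - log(Gaussian surround).
    A minimal NumPy sketch using separable 1-D convolutions."""
    img = img.astype(float) + 1.0          # avoid log(0)
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()                 # normalized Gaussian kernel
    blurred = np.apply_along_axis(lambda m: np.convolve(m, kernel, 'same'), 0, img)
    blurred = np.apply_along_axis(lambda m: np.convolve(m, kernel, 'same'), 1, blurred)
    return np.log(img) - np.log(blurred + 1e-9)
```

The separable blur is the part a real-time DSP port would optimize most aggressively, since it dominates the per-pixel cost.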
Thanks to significant advantages such as high spatial resolution and the acquisition of three-dimensional imagery (including
both intensity and range images), an imaging laser radar used as a sensor in a target recognition system helps to improve the
correct recognition ratio. A chirped amplitude modulation imaging ladar is based on the frequency
modulation/continuous wave (FM/cw) technique. The target range is calculated by measuring the frequency difference
between the projected and returned laser signals. The design of a signal processing system for an FM/cw imaging ladar is
introduced in this paper; it includes an acquisition block, a memory block, a communication block, and an FFT
processor. The performance of this system is analyzed in detail.
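The FM/cw range computation can be sketched as follows: the FFT processor finds the dominant beat frequency, which is proportional to the round-trip delay. The signal parameters below are illustrative, and windowing plus the ladar's acquisition chain are omitted:

```python
import numpy as np

def fmcw_range(beat_signal, fs, bandwidth, sweep_time, c=3e8):
    """Estimate target range from an FM/cw beat signal via its dominant
    beat frequency: R = c * f_beat / (2 * slope), slope = B / T."""
    spectrum = np.abs(np.fft.rfft(beat_signal))
    spectrum[0] = 0.0                                  # ignore the DC bin
    f_beat = np.fft.rfftfreq(len(beat_signal), 1 / fs)[np.argmax(spectrum)]
    slope = bandwidth / sweep_time                     # chirp rate in Hz/s
    return c * f_beat / (2 * slope)
```

With a 100 MHz sweep over 1 ms, a 150 m target produces a 100 kHz beat tone, well within the reach of a modest ADC and FFT block.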
Spatial heterodyne spectrometers have been used in multiple scientific studies since their invention and early
development. Broadband spatial heterodyne spectrometers retain the advantages of large etendue, high spectral
resolving power, and high data collection rates of the traditional spatial heterodyne spectrometer. Basic theory, design and
performance parameters, and a breadboard experiment for a broadband, high-resolution spatial heterodyne spectrometer are
reported. The experimental spatial heterodyne spectrometer achieves a design resolution of 0.39 cm-1. Firstly, it is
demonstrated that the broadband spatial heterodyne spectrometer offers wide spectral coverage and high
spectral resolving power simultaneously; secondly, the effects of optical defects on the system are discussed; thirdly,
two-dimensional interference data processing is also described.
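The heart of the interference data processing can be sketched in one dimension: the fringe spatial frequency is proportional to the wavenumber offset from the Littrow wavenumber, so an FFT of the bias-removed fringe pattern yields the spectrum on an offset axis. The paper handles two-dimensional interferograms and defect correction on top of this:

```python
import numpy as np

def shs_spectrum(interferogram):
    """Recover the (unscaled) spectrum from a 1-D spatial heterodyne
    interferogram: remove the DC bias, then FFT the fringes."""
    fringes = interferogram - interferogram.mean()
    return np.abs(np.fft.rfft(fringes))
```

A monochromatic input produces a single fringe frequency, so the recovered spectrum is a single peak at the corresponding bin.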
The advantages and disadvantages of traditional evaluation methods for the laser jamming effect are analyzed, and the
process of fuzzy evaluation is presented. Combined with different parameters of CCD imaging performance, a fuzzy synthetic
evaluation method is applied. In the method, several performance evaluation parameters are calculated. Then single-factor
evaluation results are obtained and the evaluation parameters are normalized respectively. Through a fuzzy relation matrix,
the single-factor evaluation results are synthesized and analyzed. Under different jamming conditions, the fuzzy synthetic
evaluation results are obtained. Experimental results show that this method takes different parameters of laser-dazzled
images into account and can effectively reflect the effect of laser jamming on CCD imaging performance.
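The synthesis step can be sketched generically: normalized single-factor membership vectors are stacked into a fuzzy relation matrix and combined with factor weights. The weighted-average operator and the matrix layout below are assumptions; the paper's specific membership functions for the CCD parameters are not given:

```python
import numpy as np

def fuzzy_synthetic_evaluation(R, weights):
    """Fuzzy synthetic evaluation with a weighted-average operator.
    R[i, j] = membership of performance factor i in jamming level j."""
    w = np.asarray(weights, float)
    w = w / w.sum()                         # normalize factor weights
    b = w @ np.asarray(R, float)            # synthesize through the relation matrix
    return b / b.sum()                      # normalized level memberships
```

The jamming level with the largest resulting membership is taken as the overall evaluation.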
The state of research on laser jamming of imaging detectors is analyzed, and the saturation effect of a CCD under laser
illumination is introduced. Combined with the characteristics of the CCD, the diffraction-limited point spread function (PSF) is
applied to analyze the laser-dazzled image, and the simulation process is shown. The simulated dazzled image is then obtained
and compared with the actual image. For different laser powers, imaging performance parameters of the simulated images
are analyzed, such as peak signal-to-noise ratio (PSNR), gray variation, definition, and uniformity. Experimental
results show the feasibility and validity of the laser jamming simulation. Moreover, the changes of the imaging performance
parameters can provide useful references for further evaluation of the laser jamming effect.
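PSNR, the first of the performance parameters listed, follows the standard definition below; the paper's other parameters (gray variation, definition, uniformity) would be computed per image in the same fashion:

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio between a reference and a test image,
    in dB; identical images yield infinity."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse) if mse > 0 else float('inf')
```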
This paper proposed an interactive image segmentation algorithm that can tolerate slightly incorrect user constraints.
Interactive image segmentation was formulated as a constrained spectral graph partitioning problem. Furthermore, it was
proven to be equivalent to a supervised classification problem, where the feature space was formed by the rows of the eigenvector
matrix computed by spectral graph analysis. ν-SVM (support vector machine) was preferred as the classifier.
Some incorrect labels in the user constraints were tolerated by being identified as margin errors in ν-SVM. A comparison with
other algorithms on real color images was reported.
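The feature-space construction can be sketched as follows: each pixel's feature vector is the corresponding row of the leading eigenvectors of the normalized graph Laplacian. Only this spectral step is shown; training the ν-SVM classifier on these rows is left out, and the toy affinity matrix is an assumption:

```python
import numpy as np

def spectral_features(W, k=3):
    """Rows of the k smallest eigenvectors of the normalized graph
    Laplacian L = I - D^{-1/2} W D^{-1/2}, used as per-pixel features."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(W)) - D_inv_sqrt @ W @ D_inv_sqrt
    vals, vecs = np.linalg.eigh(L)          # eigh sorts eigenvalues ascending
    return vecs[:, :k]
```

On a graph with two weakly coupled clusters, the second eigenvector (the Fiedler vector) already separates the clusters by sign, which is why a simple classifier in this space can segment well.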
Far-range photogrammetry is widely used for location determination in dangerous situations. In this
paper we discuss the camera calibration problem for outdoor use. Location determination based on
stereo vision sensors requires knowledge of the camera parameters, such as camera position, orientation, lens
distortion, and focal length, with high precision. Most existing camera calibration methods place many landmarks
whose positions are known accurately, but due to the large distances and other practical problems we cannot place the
landmarks with high precision. This paper shows that even if the positions of the landmarks are unknown, the
extrinsic camera parameters can still be obtained from the essential matrix. The difference between the real and the computed
camera parameters gives rise to a geometric error. We develop and present a theoretical analysis of this geometric
error and of how to obtain the extrinsic camera parameters with high precision in large-scale measurement. Experimental
results from a project measuring the drop point of a high-speed object confirm the high precision of the proposed
method compared with traditional methods.
A multilinear CCD sensor is often used on space cameras to obtain multispectral images, with each line representing a
different band channel. However, images of different band channels obtained at the same time do not coincide, because there
are spaces between the lines. The number of pixels to be adjusted between images of different channels varies when the space
camera works by swaying forward and backward or adjusts its row transfer period to compensate for image motion. Based on
an analysis of this phenomenon, an automatic multispectral image synthesis algorithm for space cameras is put forward.
In this algorithm, a new evaluation function is used to determine the number of pixels to be adjusted and the
image regions of each band channel to be clipped. In this way, images of different band channels can be synthesized
automatically to obtain an accurate color image. The algorithm can process a large amount of images from the
space camera directly, without any manual intervention, so that efficiency is improved remarkably. In validation
experiments, the automatic multispectral image synthesis algorithm was applied to synthesize images obtained from an
outdoor scene experiment of a multispectral space camera. The results proved that the algorithm can realize
accurate multispectral image synthesis for space cameras and that the efficiency can be improved markedly.
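The pixel-adjustment search can be sketched as maximizing an evaluation function over candidate row shifts between a reference band and another channel. Normalized cross-correlation stands in here for the paper's evaluation function, whose exact form is not given:

```python
import numpy as np

def best_row_shift(ref, channel, max_shift=20):
    """Find the integer row shift that best aligns `channel` to `ref`
    by maximizing normalized cross-correlation over candidate shifts."""
    best, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(channel, s, axis=0)
        a = ref - ref.mean()
        b = shifted - shifted.mean()
        score = (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12)
        if score > best_score:
            best, best_score = s, score
    return best
```

The winning shift per channel determines how many rows to clip before the bands are composited into a color image.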
The monocular stereo vision system, consisting of a single camera with controllable focal length, can be used in 3D
reconstruction. Applying the system to 3D reconstruction, one must consider the effects caused by the digital camera. There
are two possible ways to build the monocular stereo vision system. In the first, the distance between the target object and the
camera image plane is constant and the lens moves. The second method assumes that the lens position is constant and the
image plane moves with respect to the target. In this paper, mathematical modeling of the two approaches is presented. We
focus on iso-disparity surfaces to define the discretization effect on the reconstructed space. These models are
implemented and simulated in Matlab. The analysis is used to define application constraints and limitations of these
methods. The results can also be used to enhance the accuracy of depth measurement.
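The iso-disparity discretization can be illustrated with the standard two-view geometry, where depth is Z = fB/d and consecutive integer disparities bound the reconstructable depth cells; the paper derives the analogous relations for its two single-camera, variable-focal-length configurations:

```python
def depth_from_disparity(f, baseline, disparity):
    """Stereo depth Z = f * B / d (f in pixels, B in metres).
    Adjacent integer disparities define iso-disparity surfaces that
    quantize the reconstructed space; the depth step between them
    grows roughly quadratically with depth."""
    return f * baseline / disparity
```

This is why depth accuracy degrades quickly for distant points: the same one-pixel disparity change spans a much larger depth interval far from the camera.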
An eight-channel imaging spectrometer based on narrowband multispectral imaging technology is presented. After
acquiring eight images in real time, the spectrometer processes the images, and finally the color image of the object is
composited. The focus is on the methods of image registration and spectral construction. The experiment indicates that
point mapping and cubic spline interpolation are effective, and the color composite image is close to the real one. The
system has the advantages of high spatial resolution and strong real-time performance, so it can be widely used in the field
of moving target recognition.
In order to improve image encryption strength, an image encryption method based on a parasitic audio watermark is
proposed in this paper, which relies on double messages, in the image domain and the speech domain, for image
encryption protection. The method utilizes a Chinese phonetics synthesis algorithm to synthesize audio from
the embedded text, then separates the sentence information into prosodic phrases and obtains the complete element set of
initial consonants and compound vowels that reflects the audio features of the statement. By sampling and scrambling the
initial consonant and compound vowel elements, synthesizing them with the image watermark, and embedding the compound
into the image to be encrypted in the frequency domain, the processed image contains the image watermark information and
parasitizes the audio feature information. After watermark extraction, the audio information is synthesized using the same
phonetics synthesis algorithm and compared with the original. Experiments show that no decryption method in the image
domain or speech domain alone can break the encryption protection, and the image gains a higher encryption strength and
security level through the double encryption.
A simple and affordable 3D scanner is designed in this paper. Three-dimensional digital models are playing an
increasingly important role in many fields, such as computer animation, industrial design, artistic design, and heritage
conservation. For many complex shapes, optical measurement systems are indispensable for acquiring 3D information.
In the field of computer animation, such optical measurement devices are too expensive to be widely adopted, while on the
other hand precision is not as critical a factor in that situation. In this paper, a new inexpensive 3D measurement system
is implemented based on modified weak structured light, using only a video camera, a light source, and a straight stick
rotating on a fixed axis. An ordinary weak structured light configuration requires one or two reference planes,
and the shadows on these planes must be tracked during scanning, which destroys the convenience of the method.
In the modified system, reference planes are unnecessary, and the size range of the scanned objects is expanded widely. A
new calibration procedure is also realized for the proposed method, and a point cloud is obtained by analyzing the shadow
strips on the object. A two-stage ICP algorithm is used to merge the point clouds from different viewpoints into a full
description of the object, and after a series of operations a NURBS surface model is generated. A complex toy
bear is used to verify the efficiency of the method; errors range from 0.7783 mm to 1.4326 mm compared with the
ground truth measurement.
This article studies L-system theory to build a visualization system, implemented in the Delphi language,
that can express plant growth and blossoming. Based on the topological evolution and fractal geometry of
plants during growth, the system extracts plant growth rules to establish blossom models. The simulation
aims at modeling dynamic procedures, producing lifelike plant images and demonstrating animations of the
growth processes. The new model emphasizes the spatial and temporal relationships between the various
parts of a plant. The mathematical models use biological rules to produce correct images of plant blossoms
as they develop over time, provide a lifelike continuous growth sequence, and imitate and control plant
blossoming and plant diseases according to natural principles.
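The rewriting core of an L-system can be sketched in a few lines; the paper's Delphi implementation additionally interprets the resulting string graphically to draw growth and blossom stages, which is omitted here:

```python
def lsystem(axiom, rules, iterations):
    """Iteratively rewrite an L-system string: each symbol is replaced
    by its production rule (symbols without a rule are copied as-is)."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Lindenmayer's original algae system: A -> AB, B -> A
```

Successive iterations model successive growth stages, which is what makes L-systems natural for growth animation.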
Because of increasingly in-depth scientific research, remote sensing images often contain huge amounts of information;
they typically have multi-dimensional detail and huge size. In order to obtain
the ground information from the images more accurately, remote sensing image processing involves several steps
aimed at better image restoration and refinement of the image information.
Processing of this type of image frequently faces difficulties such as slow calculation or huge resource
consumption. For this reason, parallel computing is essential in remote sensing image processing. The parallel
computing method presented in this paper does not require rewriting the original algorithm.
Under a distributed framework, the method allocates the original algorithm efficiently to the multiple computing cores of
the processing computer. Because this method fully uses the computing resources, the calculation time is
reduced linearly with the number of computing threads. Moreover, the method truly guarantees the integrity
of the remote sensing image data.
To validate the feasibility of the method, we applied the parallel computing method to a radiation simulation
for remote sensing image processing, conducted several experiments, and gathered statistical results. We integrated the
parallel computing into the core of the original algorithm - the huge wide convolution. The experimental results showed
that the computing efficiency improved linearly: the number of computing cores was proportionally related to the reduction
in computing time. At the same time, the computing results were identical to the original results.
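The rewrite-free parallelization idea can be sketched by partitioning the image rows across worker threads and running the unchanged per-row algorithm on each part; the results are bitwise identical to the serial version. The thread pool and row-wise convolution are illustrative, not the paper's distributed framework:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_row_convolve(img, kernel1d, workers=4):
    """Row-wise 1-D convolution of an image, with the rows split into
    `workers` chunks processed concurrently. The per-row algorithm is
    unmodified, so the output matches the serial computation exactly."""
    def work(rows):
        return [np.convolve(img[r], kernel1d, mode='same') for r in rows]
    chunks = np.array_split(np.arange(img.shape[0]), workers)
    with ThreadPoolExecutor(workers) as pool:
        parts = pool.map(work, chunks)
    return np.vstack([np.vstack(p) for p in parts])
```

Because each row is independent, no synchronization is needed and the data integrity the paper emphasizes comes for free.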
Some classical image quality metrics are often used as the system performance evaluation function, i.e., the objective
optimized by the control algorithm, when an adaptive optics (AO) system without a wavefront sensor is used to correct
extended-object imaging. However, those metrics do not consider the existence of imaging noise. In practice, the
observed object images are degraded not only by atmospheric turbulence but also by imaging system noise. The noise in the
image affects the value of the image quality metric and further affects the correction capability of the AO system. An AO
system with the Stochastic Parallel Gradient Descent (SPGD) algorithm and a 61-element deformable mirror is simulated to
restore the image of a turbulence-degraded extended object, and a metric based on the frequency spectrum entropy serves
as the objective optimized by the control algorithm. Based on the simulation model, the correction capability of the AO
system is investigated for wavefront aberrations under different turbulence strengths with different noise levels. Numerical
simulation results verify that the metric based on the Frequency Spectrum Entropy (FSE) is effective when the noise of the
imaging system is considered, and the correction capability of the AO system is improved obviously.
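One plausible form of such a metric is the Shannon entropy of the normalized power spectrum; turbulence blur concentrates spectral energy at low frequencies and lowers this entropy relative to a sharp image. This is a sketch of the general idea only; the paper's exact FSE definition, and how the SPGD loop signs the objective, are not specified here:

```python
import numpy as np

def frequency_spectrum_entropy(img):
    """Shannon entropy of the image's normalized 2-D power spectrum.
    Blur concentrates energy at low frequencies, reducing the entropy."""
    power = np.abs(np.fft.fft2(img)) ** 2
    p = power / power.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()
```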
Using a conventional camera to capture natural scenes with high dynamic range generally results in saturation as well as
underexposure, because of its limited dynamic range. Moreover, the image from a conventional camera with an
RGB color filter lacks color accuracy. We present a promising solution: a high dynamic range multispectral camera that
places a Liquid Crystal Tunable Filter (LCTF) between the lens and a gray-level imaging sensor. For each band, gray-level
images with different exposures are acquired separately and are afterwards combined into a multispectral high dynamic range
image. The high dynamic range multispectral image has higher color accuracy and greater dynamic range than the
images of the traditional RGB camera.
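The per-band exposure merge can be sketched as a weighted average of radiance estimates, trusting mid-range pixels most. This assumes a linear sensor response; the camera's actual response curve and the per-band LCTF acquisition loop are assumptions:

```python
import numpy as np

def combine_exposures(images, exposure_times):
    """Merge differently exposed gray-level images (0..255) of one band
    into a single HDR radiance map, weighting mid-range pixels most."""
    hdr = np.zeros_like(images[0], dtype=float)
    wsum = np.zeros_like(hdr)
    for img, t in zip(images, exposure_times):
        img = img.astype(float)
        w = 1.0 - np.abs(img / 255.0 - 0.5) * 2.0   # hat weight: 0 at 0/255, 1 at 128
        hdr += w * img / t                           # per-pixel radiance estimate
        wsum += w
    return hdr / np.maximum(wsum, 1e-9)
```

Saturated and underexposed pixels get near-zero weight, so each pixel's radiance comes mainly from whichever exposure recorded it well.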
The droplet shape during droplet dropping embodies fundamental intrinsic properties of the droplet, which
can be used to identify the characteristics of liquid types. A new method is proposed that uses a moment-character
Fourier descriptor to analyze the information in droplet dropping images. Data on the changing droplet profile are collected
through a CCD camera. A sequence of moment characters is calculated to represent the shape; the sequence is then
Fourier transformed and normalized. The resulting feature descriptor carries the liquid image features and information
unique to the liquid.
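The descriptor stage can be illustrated with a standard Fourier descriptor of a closed contour given as complex points x + iy; the paper first builds a moment-character sequence from the droplet profile, which is assumed here to reduce to such a contour:

```python
import numpy as np

def fourier_descriptor(contour, n_coeffs=8):
    """Scale-invariant Fourier descriptor of a closed complex contour:
    magnitudes of the first harmonics, normalized by the first one."""
    coeffs = np.fft.fft(np.asarray(contour, complex))
    mags = np.abs(coeffs)[1:n_coeffs + 1]
    return mags / (np.abs(coeffs[1]) + 1e-12)
```

Using magnitudes drops the starting-point and rotation dependence, and normalizing by the first harmonic drops the scale, so the descriptor characterizes shape alone.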
Texture blending is an important technique for generating a photorealistic appearance of a physical model or scene. In
this paper, we present an efficient texture blending algorithm that can be utilized to register and merge multiple
texture-mapped range images of physical objects acquired from different viewpoints, resulting in a 3D photorealistic
model. The technical details of the proposed algorithm are described and verified by experimental results.
Short-throw interactive projection systems have been widely used in education, training, commerce, and entertainment in
recent years. Different interactive techniques have been developed, and among them the infrared location technique is
one of the most attractive, because of advantages such as independence from a whiteboard and low cost.
However, its main defect is that the infrared pen point is easily blocked by the user's hand.
In this paper, we introduce our recent progress on indirect measurement of the pen point used in a short-throw
interactive projection system. Two infrared LEDs are fixed along the pen's body near the tail end. By separately
measuring the positions of the two LEDs, the location of the pen point at the front end can be calculated. Such placement
effectively prevents the LEDs from being blocked by the user's hand. The mathematical model of this measurement scheme is
given separately for the cases of infinite and short camera focal length. Errors are analyzed by both
analytical and numerical methods. We used our position sensitive detector (PSD) based location system to test the effect
of this method.
Interactive projection systems have been widely used in everyday life. Currently the major type is based on the
interactive whiteboard (IWB). In recent years, a new type based on CCD/CMOS sensors has been greatly developed. Compared
to the IWB, CCD/CMOS sensing is non-contact, so any surface can be used as the projection screen. This makes
such systems more flexible in many applications. However, their main defect is that the location accuracy and tracking speed
are limited by the resolution and frame rate of the CCD/CMOS.
In this paper, we introduce our recent progress on constructing a new type of non-contact interactive projection
system using a two-dimensional position sensitive detector (PSD). The PSD is an analog optoelectronic position
sensor utilizing photodiode surface resistance, which provides continuous position measurement and features high position
resolution (better than 1.5 μm) and high-speed response (less than 1 μs). By using the PSD, both high positioning
resolution and high tracking speed can be easily achieved. A specially designed pen equipped with infrared LEDs is used
as a cooperative target. A high-precision signal processing system is designed and optimized. The nonlinearity of the
PSD as well as the aberration of the camera lens is carefully measured and calibrated. Several anti-interference methods
and algorithms are studied. Experimental results show that the positioning error is about 2 mm over a 1200 mm × 1000 mm
projection screen, and the sampling rate is at least 100 Hz.
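The basic PSD position readout along one axis follows the textbook relation between the two electrode photocurrents; the paper's system layers nonlinearity and lens-aberration calibration on top of this:

```python
def psd_position(i1, i2, length):
    """One axis of a duolateral PSD: the light-spot position follows from
    the electrode photocurrents as x = (L/2) * (I2 - I1) / (I1 + I2),
    measured from the sensor center over active length L."""
    return 0.5 * length * (i2 - i1) / (i1 + i2)
```

Because the position depends only on the current ratio, the readout is continuous and largely independent of total light level, which is what enables the sub-microsecond response quoted above.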
Testing the oil contamination level is important for oil use and maintenance and is the basis of oil contamination
control; as device systems develop, stricter contamination control is required, so testing methods urgently need to be
studied to improve the processing and maintenance quality of contaminated oil. To classify the level of particle
contamination in lubricant, CCD imaging technology is used to capture microscopic digital images of the oil particle
sample. The digital image is processed and segmented so that the computer can recognize and understand the particle
targets, using image testing algorithms to measure the sizes, numbers, and distributions of the particles. This
economical and convenient method can measure the oil contamination level effectively when there are few air bubbles and
beads leading to false particle targets. To reduce the influence of false particle targets, one method captures a series
of dynamic image samples of the contaminated oil over multiple periods and states and uses them to test the particle
targets; a further method employs a fuzzy measurement using a Gaussian membership function, which describes the
distribution of the standard evidence and the distribution of the testing data. The testing probabilities of the
evidence are weighted by the matching degree of the two distributions, which is used to classify the oil particle
contamination level. Tests show that the reliability of oil particle contamination level diagnosis is improved and the
diagnosis uncertainty is reduced. Combining this method with other testing methods through multi-information fusion
will be studied further.
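The fuzzy classification step above can be sketched as follows. This is a minimal illustration, not the paper's method: the contamination levels, their (mean, sigma) evidence distributions over a particle-count feature, and the level names are all hypothetical values chosen for the example.

```python
import math

# Hypothetical Gaussian membership functions: each contamination level is
# modelled by a (mean, sigma) pair over a particle-count feature. The real
# paper's calibrated evidence distributions are not published here.
LEVELS = {
    "level_1": (100.0, 30.0),
    "level_2": (300.0, 60.0),
    "level_3": (900.0, 150.0),
}

def membership(x, mean, sigma):
    """Gaussian (bell-shaped) membership degree in (0, 1]."""
    return math.exp(-((x - mean) ** 2) / (2.0 * sigma ** 2))

def classify(particle_count):
    """Return the level whose membership function best matches the count."""
    return max(LEVELS, key=lambda lv: membership(particle_count, *LEVELS[lv]))
```

For example, `classify(320)` matches the `level_2` distribution most closely. A real system would also weight each vote by the matching degree across the dynamic image samples, as the abstract describes.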
Based on the Kirchhoff approximation, analytical expressions for the pulse scattering mutual coherence function (MCF)
and the Double Frequency Scattering Section (DFSS) from a moving random rough surface are presented. From these
expressions, we find that the MCF and DFSS are related to the coherence bandwidth, the frequency difference, and the
speed of the rough surface. Some important scattering characteristics calculated from the analytical solutions will be
discussed further in detail.
Auto white balance (AWB) is an important technique for digital cameras. The human vision system can recognize the
original color of an object in a scene illuminated by a light source whose color temperature differs from D65, the
standard daylight. However, recorded images or video clips can only record the information incident on the sensor, so
the recordings will appear different from the real scene observed by a human. Auto white balance is a technique to
solve this problem. Traditional methods such as the gray world assumption and white point estimation may fail for
scenes with large color patches. In this paper, an AWB method based on color temperature estimation and clustering is
presented and discussed. First, the method defines a list of lighting conditions that are common in daily life,
represented by their color temperatures, with a threshold for each color temperature that determines whether a light
source is that kind of illumination. Second, the image to be white balanced is divided into N blocks (N is determined
empirically); for each block, the gray world assumption method is used to calculate the color cast, from which the
color temperature of that block is estimated. Third, each calculated color temperature is compared with the color
temperatures in the given illumination list; if the color temperature of a block is not within any of the thresholds
in the list, that block is discarded. Fourth, a majority selection is taken over the remaining blocks, and the color
temperature with the most blocks is considered the color temperature of the light source. Experimental results show
that the proposed method works well for most commonly used light sources; the color casts are removed and the final
images look natural.
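The four steps above can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the illuminant list, its (R/G, B/G) chromaticity values, and the acceptance threshold are assumed values, and each block is represented directly by its list of RGB pixel tuples.

```python
# Hypothetical illuminant list: colour temperature (K) -> assumed (R/G, B/G)
# chromaticity under that light. Real systems use calibrated sensor data.
ILLUMINANTS = {
    2800: (1.45, 0.55),  # incandescent
    4000: (1.15, 0.80),  # fluorescent
    6500: (1.00, 1.00),  # D65 daylight
}
THRESH = 0.15  # max chromaticity distance for a block to count as a vote

def block_ratios(block):
    """Gray-world estimate for one block: mean R/G and B/G ratios."""
    r = sum(p[0] for p in block) / len(block)
    g = sum(p[1] for p in block) / len(block)
    b = sum(p[2] for p in block) / len(block)
    return r / g, b / g

def estimate_cct(blocks):
    """Vote each block for its nearest listed illuminant, discard blocks
    outside the threshold, and return the majority colour temperature."""
    votes = {}
    for block in blocks:
        rg, bg = block_ratios(block)
        cct, (ir, ib) = min(
            ILLUMINANTS.items(),
            key=lambda kv: abs(kv[1][0] - rg) + abs(kv[1][1] - bg))
        if abs(ir - rg) + abs(ib - bg) <= THRESH:
            votes[cct] = votes.get(cct, 0) + 1
    return max(votes, key=votes.get) if votes else None
```

A saturated colored block (e.g. a large red patch) falls outside every threshold and is discarded, which is exactly how the method avoids the large-color-patch failure of the plain gray world assumption.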
In this paper we focus on image registration and fusion and review in detail the existing registration and fusion
algorithms based on the lifting wavelet. Based on the characteristics of infrared and visible images, the paper
presents a registration approach that uses the lifting wavelet transform to extract edge feature points, and improves
the fusion algorithm based on features of the human vision system (HVS). The methods draw on related knowledge such as
the lifting scheme, edge detection, affine transformation, the HVS, and fusion rules. A fast multi-resolution image
fusion method based on visual features for infrared and visible images is proposed. The source images are each
decomposed using the CDF 9/7 lifting wavelet transform; the visual features of each sub-image are then calculated and
a fusion rule is chosen according to those features; finally, the fused image is reconstructed using the inverse
lifting wavelet transform. Experimental results demonstrate that the proposed method has a clear advantage in
information preservation and resolution even when the source images have a low signal-to-noise ratio (SNR), and the
algorithm is also computationally efficient.
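The coefficient-selection step of such a scheme can be sketched as follows. This is only an illustration under stated assumptions: the CDF 9/7 lifting decomposition is assumed to have been applied already, the sub-bands are plain nested lists, and a 3x3 local energy is used as a simple stand-in for the paper's HVS-based visual features.

```python
# Illustrative "choose the visually salient coefficient" rule applied to two
# matching high-frequency sub-bands (one from the infrared image, one from
# the visible image). The lifting transform itself is not shown.
def local_energy(band, i, j):
    """3x3 local energy around (i, j), clipped at the band borders."""
    h, w = len(band), len(band[0])
    return sum(band[y][x] ** 2
               for y in range(max(0, i - 1), min(h, i + 2))
               for x in range(max(0, j - 1), min(w, j + 2)))

def fuse_subbands(band_a, band_b):
    """Per coefficient, keep the source whose neighbourhood is more active."""
    return [[band_a[i][j]
             if local_energy(band_a, i, j) >= local_energy(band_b, i, j)
             else band_b[i][j]
             for j in range(len(band_a[0]))]
            for i in range(len(band_a))]
```

The design intuition is that high local energy in a detail sub-band marks an edge or texture the HVS is sensitive to, so those coefficients should survive into the fused image.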
The joint transform optical correlator (JTOC) is an effective motion detection tool, and the quality of the
spectrogram has a great influence on the detection accuracy. In this paper, we constructed simulation software for a
JTOC and used two images with known displacement as the experimental objects; we gradually increased the noise in the
spectrogram and compared the detection data under noise with the true data to quantify how much the noise degrades the
detection accuracy. The test results show that when the noise variance is small, the influence of noise is very
slight; when the noise variance exceeds 0.8, the influence of noise increases gradually; and when the noise variance
exceeds 1.29, the noise directly causes the joint transform optical correlator to fail.
To make a laser beam converge as it propagates through the atmosphere, an outdoor experiment focusing the laser at
distances of 450 m and 300 m was carried out. The procedure was as follows. First, the laser was collimated by a beam
expander, and the near-parallel beam was then transmitted through a Galilean telescope system; the distance between
the concave lens and the convex lens can be tuned with a precise displacement platform, so the focus of the system
changes with a tiny displacement of the concave lens. Second, the average power of the laser spot was measured with a
power meter: the power is 47.67 mW with a standard deviation of 0.67 mW at a focal length of 450 m. Third, the energy
distribution was obtained with a laser beam analyzer. The spot images were saved by the analyzer and processed
afterwards in Matlab: the EDGE function with the Sobel operator was used in pre-processing of the saved image, a
median filter was used for image de-noising, and a 53H filter was adopted in the signal analysis. The spot diameter
obtained by this method is 5.56 mm with a standard deviation of 0.24 mm, and the spot center excursion is 0.56 mm,
which is 10.43% of the total diameter of the laser spot. Finally, the key factors of energy dissipation in the
focusing system are: the diffraction limit, attenuation in the atmosphere, and the geometrical aberration of the
optical system. The diffraction limit and the geometrical aberration are the most significant of the three, so their
impact can be reduced during the design of the optical system. This preliminary experimental research provides
reliable reference data for the system design.
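The spot-size and centre-excursion measurements above can be sketched in a few lines. This is a generic illustration, not the paper's Matlab/53H pipeline: the intensity image is a nested list, the 50% threshold fraction and the pixel pitch are assumed values, and the diameter is taken as the equivalent-circle diameter of the thresholded area.

```python
import math

def spot_diameter_mm(image, pixel_pitch_mm, thresh_frac=0.5):
    """Threshold the saved intensity image at a fraction of its peak, count
    the spot pixels, and convert the equivalent-circle diameter to mm."""
    peak = max(max(row) for row in image)
    area_px = sum(1 for row in image for v in row if v >= thresh_frac * peak)
    # area = pi * (d/2)^2  =>  d = 2 * sqrt(area / pi)
    return 2.0 * math.sqrt(area_px / math.pi) * pixel_pitch_mm

def spot_centroid(image):
    """Intensity-weighted centroid, usable for the centre-excursion figure."""
    total = sum(v for row in image for v in row)
    cy = sum(i * v for i, row in enumerate(image) for v in row) / total
    cx = sum(j * v for row in image for j, v in enumerate(row)) / total
    return cx, cy
```

Comparing the centroid of successive saved frames against the optical axis gives the centre excursion, which the experiment reports as a fraction of the spot diameter.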
Image processing is necessary to recover three-dimensional information in a stereovision measurement system, and it is
often the bottleneck for real-time applications. To increase the system's computational power, the design of an SOPC
system that can perform image processing tasks in parallel is discussed. As part of a high-speed stereovision
measurement system, the application-specific SOPC is designed as an embedded PCI board card of the host PC. This paper
focuses on three aspects. First, the principles of SOPC system design and SOPC feature selection are analyzed with the
measuring requirements in mind. Then the realization of the SOPC system is described in detail: the embedded
processor, special IPs (intellectual property cores), and several custom logic modules are included in a single FPGA,
and all units are seamlessly integrated into the overall system using the system builder interface; the parallel
processing is illustrated by examples. Finally, simulation and debugging results of the SOPC system are presented, the
factors that influence running time are analyzed, and the final results are given. Experiment and test results show
that all required functions were realized with much higher efficiency and processing speed in our SOPC system than in
conventional software.
Accurately measuring micro displacement is important in industry, especially for numerically controlled machines, but
traditional methods encounter difficulties in high-precision measurement. A new approach based on digital image
processing (DIP) of moiré fringes is proposed in this paper. A carefully designed experiment captures moiré fringes
from two identical gratings, so complicated equipment is unnecessary, which is an obvious advantage. A CCD is used to
acquire digital images, which are then processed by digital image processing, including filtering, gray-scale
transformation, and fringe identification. A simple way to calibrate the distance represented by each pixel using the
DIP technique is given in this paper. The shift of a given fringe between two images then yields the micro
displacement of the object. The result of this approach is compared with a more accurate micro displacement
measurement, and their agreement verifies the correctness of the method. We expect the result to be even more
satisfactory if more accurate equipment is applied in the inspection.
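The calibration and displacement steps above reduce to two small computations. This is a hedged sketch, not the paper's procedure: it assumes a reference feature of known physical length spanning a known number of pixels for calibration, and a tracked fringe position (in pixels) in each of the two frames; any magnification factor of the moiré pattern itself is left out.

```python
def mm_per_pixel(length_mm, length_px):
    """Calibrate the scale from a feature of known physical length that
    spans length_px pixels in the image."""
    return length_mm / length_px

def displacement_mm(fringe_x_before, fringe_x_after, scale_mm_per_px):
    """Micro displacement implied by the tracked fringe's pixel shift
    between the two images."""
    return (fringe_x_after - fringe_x_before) * scale_mm_per_px
```

For instance, with a 10 mm reference spanning 200 pixels, a 12-pixel fringe shift corresponds to 0.6 mm; sub-pixel fringe localization would refine this further.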
In this work we present a novel vision-based pipeline for automated skeleton detection and centreline extraction of
neuronal dendrites from optical microscopy image stacks. The proposed pipeline is an integrated solution that merges
image stack pre-processing, seed point detection, a ridge traversal procedure, minimum spanning tree optimization, and
tree trimming into a unified framework to address this challenging problem. In image stack pre-processing, we first
apply a curvelet transform based shrinkage and cycle spinning technique to remove noise. This is followed by an
adaptive threshold method to segment the neuronal object, and a 3D distance transformation is performed to obtain the
distance map. The skeleton seed points are detected according to the eigenvalues and eigenvectors of the Hessian
matrix. Starting from the seed points, the initial centrelines are obtained using the ridge traversal procedure. After
that, we use a minimum spanning tree to organize the geometrical structure of the skeleton points, and graph trimming
post-processing to compute the final centreline. Experimental results on different datasets demonstrate that our
approach has high reliability, good robustness, and requires little user interaction.
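The minimum-spanning-tree step of the pipeline can be sketched as follows. This is a minimal illustration, not the paper's implementation: skeleton points are plain 3-D tuples, Euclidean distance is assumed as the edge cost, and Prim's algorithm is used in its simple O(n^2) form; the ridge traversal and trimming stages are omitted.

```python
import math

def mst_edges(points):
    """Connect 3-D skeleton points into a tree with Prim's algorithm,
    returning the MST edges as (i, j) index pairs."""
    n = len(points)
    in_tree = {0}          # grow the tree from the first point
    edges = []
    while len(in_tree) < n:
        best = None        # (distance, tree_node, outside_node)
        for i in in_tree:
            for j in range(n):
                if j in in_tree:
                    continue
                d = math.dist(points[i], points[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges
```

In the actual pipeline, a trimming pass would then prune short spurious branches of this tree to leave the dendrite centreline.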
Laser polarimetric imaging has great potential for classifying targets that cannot be distinguished by intensity
imaging. A polarimetric imaging system is built to acquire two kinds of images simultaneously: intensity images and
polarization-degree coded images. By fusing the intensity and polarization-degree images with a pseudo-color encoding
technique, we achieve classification of different kinds of targets with similar characteristics. Preliminary results
show that images classified with the polarimetric technique have higher contrast and better resolution after speckle
reduction. Coherent speckle noise can be reduced with a lowpass filter by treating it as high-frequency noise, and the
lowpass filter outperforms the commonly used median filter in speckle reduction.
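A lowpass filter of the kind mentioned above can be illustrated with a simple 3x3 box (moving-average) filter. This is only an assumed example of a spatial lowpass filter, not the filter the authors used; at the image borders it averages over the valid neighbourhood only, with no padding.

```python
def box_filter(image):
    """3x3 moving-average lowpass filter over a nested-list image; treats
    speckle as high-frequency content to be smoothed away."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [image[y][x]
                    for y in range(max(0, i - 1), min(h, i + 2))
                    for x in range(max(0, j - 1), min(w, j + 2))]
            out[i][j] = sum(vals) / len(vals)
    return out
```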