Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 962201 (2015) https://doi.org/10.1117/12.2208364
This PDF file contains the front matter associated with SPIE Proceedings Volume 9622 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 962202 (2015) https://doi.org/10.1117/12.2184677
Three-dimensional super-resolution range-gated imaging has been developed for high-resolution 3D remote sensing using two range-intensity correlation algorithms under specific shapes of range-intensity profiles (RIPs). However, pulsed lasers have a minimum pulse width, which limits further improvement of the range resolution. Here, a spatial difference shaping method is proposed to break this resolution limit. The method establishes a shaping filter: the pre-reshaped gate images are reshaped by spatial difference, yielding new gate images in which the laser pulse width is equivalently halved, thereby improving the range resolution. Furthermore, the boundary blurring caused by non-rectangular laser pulses is also eliminated.
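As a hedged illustration of the range-intensity correlation idea behind such systems (not the paper's exact shaping filter; the function names and the triangular-profile assumption are ours), depth in a range-gated pair can be recovered from the intensity ratio of two overlapping gate images, and subtracting a delayed gate image narrows the effective profile:

```python
import numpy as np

def range_from_ratio(i_near, i_far, z_min, z_max):
    # Triangular-RIP range-intensity correlation (assumed profile):
    # within the overlap zone, depth varies linearly with the ratio
    # i_far / (i_near + i_far).
    eps = 1e-12
    ratio = i_far / (i_near + i_far + eps)
    return z_min + ratio * (z_max - z_min)

def difference_reshape(gate_a, gate_b):
    # Spatial-difference reshaping sketch: subtract a gate image whose
    # delay differs by half the pulse width and clip negatives, so the
    # effective range-intensity profile is narrowed.
    out = gate_a.astype(float) - gate_b.astype(float)
    return np.clip(out, 0.0, None)
```

Equal intensities in both gates place a pixel at the midpoint of the overlap zone; the difference step then trades signal level for a narrower profile.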
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 962203 (2015) https://doi.org/10.1117/12.2193264
Retinex is a luminance-perception algorithm based on color constancy, and it performs well for color enhancement. In some cases, however, the traditional Retinex algorithms, both Single-Scale Retinex (SSR) and Multi-Scale Retinex (MSR) in RGB color space, do not work well and cause color deviation. To solve this problem, we present improved SSR and MSR algorithms. Unlike other Retinex algorithms, we implement Retinex in the HSI (Hue, Saturation, Intensity) color space and use a parameter α to improve image quality. The algorithms presented in this paper also perform well for image defogging. In contrast with traditional Retinex algorithms, we use the intensity channel to obtain the reflection information of an image. The intensity channel is processed with a Gaussian center-surround filter to estimate the illumination, which should be removed from the intensity channel. We then subtract the illumination from the intensity channel to obtain the reflection image, which contains only the attributes of the objects in the scene. Using the reflection image and the parameter α, an arbitrary scale factor set manually, we improve the intensity channel and complete the color enhancement. Our experiments show that this approach compares favorably with existing color enhancement methods. Besides better handling of color deviation and image defogging, a visible improvement in image quality for human contrast perception is also observed.
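The core SSR step on the intensity channel can be sketched as follows (a minimal numpy version under our own assumptions: a separable Gaussian surround, log-domain subtraction, and a final stretch to [0, 1]; the paper's HSI conversion and MSR weighting are omitted):

```python
import numpy as np

def _gauss_kernel(sigma):
    # 1-D Gaussian kernel truncated at 3 sigma, normalized to sum 1
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def ssr_intensity(intensity, sigma=5.0, alpha=1.0):
    # Gaussian center-surround estimate of the illumination (separable
    # blur), subtracted from the intensity channel in the log domain;
    # alpha scales the reflectance before stretching back to [0, 1].
    k = _gauss_kernel(sigma)
    blur = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, intensity)
    blur = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blur)
    eps = 1e-6
    r = alpha * (np.log(intensity + eps) - np.log(blur + eps))
    return (r - r.min()) / (r.max() - r.min() + eps)
```

In an HSI pipeline this function would be applied to the I channel only, leaving hue and saturation untouched, which is what avoids the RGB color-deviation problem the abstract describes.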
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 962204 (2015) https://doi.org/10.1117/12.2193471
Dynamic light scattering (DLS) is used for measuring the particle size distribution of nano-particles under Brownian motion. The signal is detected by a photomultiplier, processed by correlation analysis, and finally inverted to obtain the size distribution. A CCD-camera-based method can record the motion process, but it has several weaknesses, such as the camera's low refresh rate and noise, and it depends on particle size and detection angle. A simulation of nano-particles under Brownian motion is proposed to record dynamic images and to study the contrast of the dynamic images, which represents the diffusion speed, under different conditions. The results show that the diffusion coefficient can be obtained from the contrast of the dynamic images and is independent of the density of the scattering volume.
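The underlying physics can be sketched with a toy simulation (our own minimal version, not the paper's speckle-image contrast model: it only checks that the diffusion coefficient is recoverable from the mean squared displacement of simulated Brownian tracks):

```python
import numpy as np

def simulate_brownian(n_particles, n_steps, D, dt, rng):
    # 2-D Brownian motion: each per-axis displacement ~ N(0, 2*D*dt)
    steps = rng.normal(0.0, np.sqrt(2 * D * dt), size=(n_steps, n_particles, 2))
    return np.cumsum(steps, axis=0)

def estimate_D(traj, dt):
    # Mean squared displacement over one time step: <r^2> = 4*D*dt in 2-D
    disp = traj[1:] - traj[:-1]
    msd = (disp ** 2).sum(axis=-1).mean()
    return msd / (4 * dt)
```

In the paper's imaging scheme the same D is inferred indirectly, from how fast the recorded speckle pattern decorrelates (its contrast), rather than from explicit particle tracks.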
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 962205 (2015) https://doi.org/10.1117/12.2193567
A hybrid phase retrieval (HPR) method combining linear phase retrieval (LPR) and iterative phase retrieval (IPR) is proposed for real-time wavefront sensing. HPR requires only the intensity information of a single defocused image. Low-order aberrations are estimated by the classical LPR algorithm, but with a "segmented" detector to provide design flexibility and better sensing accuracy. High-order aberrations are estimated by a modified Gerchberg-Saxton (MGS) algorithm that uses the LPR result as prior knowledge to significantly speed up convergence. The performance of HPR is tested by simulation under various seeing conditions. For atmospheric aberrations with D/r0 = 3, HPR with LPR and ten IPR iterations achieves an average Strehl ratio of 0.88.
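The IPR stage is a Gerchberg-Saxton-type loop; a minimal classic version is sketched below (our own simplification, assuming a single focal-plane amplitude constraint; the paper's MGS variant with a defocused image and the LPR seeding are represented only by the `phase0` argument):

```python
import numpy as np

def gerchberg_saxton(pupil_amp, focal_amp, phase0, n_iter=10):
    # Alternate between pupil and focal planes, enforcing the known
    # amplitude in each plane while keeping the current phase estimate.
    phase = phase0
    for _ in range(n_iter):
        field = pupil_amp * np.exp(1j * phase)
        focal = np.fft.fft2(field)
        focal = focal_amp * np.exp(1j * np.angle(focal))  # focal-plane constraint
        back = np.fft.ifft2(focal)
        phase = np.angle(back)                            # pupil-plane update
    return phase
```

Seeding `phase0` with a low-order estimate (as HPR does with LPR) is what lets a ten-iteration loop suffice; from a flat start the same loop typically needs far more iterations.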
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 962206 (2015) https://doi.org/10.1117/12.2193251
Although current imaging sensors can achieve 12-bit or higher precision, current display devices and the commonly used digital image formats are still only 8-bit. This mismatch wastes much of the sensor precision and loses information when images are stored and displayed. To make better use of the precision budget, tone mapping operators must be used to map the high-precision data into low-precision digital images adaptively. In this paper, the classic histogram-equalization tone mapping operator is re-examined from an optimization perspective. We point out that the traditional histogram-equalization technique and its variants are fundamentally limited by local-optimum problems. To overcome this drawback, we remodel the histogram-equalization tone mapping task using graph theory, which achieves globally optimal solutions. Another advantage of the graph-based model is that tone continuity is imposed as a vital constraint, which suppresses the annoying boundary artifacts of the traditional approaches. In addition, we propose a novel dynamic programming technique to solve the histogram-equalization problem in real time. Experimental results show that the proposed tone-preserving, globally optimal histogram-equalization technique outperforms the traditional approaches, exhibiting more subtle details in the foreground while preserving the smoothness of the background.
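For reference, the traditional histogram-equalization operator that the paper takes as its starting point can be sketched in a few lines (a hedged baseline, not the authors' graph-based global optimizer; 12-bit input and an 8-bit lookup table are assumed):

```python
import numpy as np

def he_tonemap(img12, out_levels=256):
    # Classic histogram equalization from 12-bit data to 8-bit output:
    # map each input level through the normalized cumulative histogram.
    hist = np.bincount(img12.ravel(), minlength=4096)
    cdf = np.cumsum(hist).astype(float)
    cmin = cdf[cdf > 0].min()
    norm = (cdf - cmin) / (cdf[-1] - cmin + 1e-12)
    lut = np.clip(np.round(norm * (out_levels - 1)), 0, out_levels - 1).astype(np.uint8)
    return lut[img12]
```

Because the lookup table is built from the cumulative histogram it is monotone, so pixel ordering is preserved; the paper's contribution is choosing the quantization boundaries globally instead of greedily, with an added tone-continuity constraint.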
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 962207 (2015) https://doi.org/10.1117/12.2190522
Spectrometers capture large amounts of raw, three-dimensional (3D) spatial-spectral scene information with two-dimensional (2D) focal plane arrays (FPAs). In many applications, including imaging systems and video cameras, the Nyquist rate is so high that too many samples result, making compression a precondition for storage or transmission. Compressive sensing theory employs non-adaptive linear projections that preserve the structure of the signal; the signal is then reconstructed from these projections by an optimization process. This article reviews the fundamental spectral imagers based on compressive sensing, the coded aperture snapshot spectral imager (CASSI) and high-resolution imagers via moving random exposure. In addition, the article proposes a new method to implement spectral imagers with linear-detector imaging systems based on spectral compression. The article describes the system and the coding process, and illustrates the results with real data and imagery. Simulations show the performance improvement attained by the new model, and the complexity of the imaging system is greatly reduced by using a linear detector.
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 962208 (2015) https://doi.org/10.1117/12.2190775
We designed and fabricated a dual-gate photosensitive TFT with an active amorphous silicon thickness of 240 nm and a W/L ratio of 250 μm/20 μm, using a conventional six-mask photolithography microfabrication process. A single-pixel sensor was tested under different light conditions to mimic the real situation of X-ray exposure via a scintillator. The results demonstrate the capability of the dual-gate photosensitive TFT to acquire an X-ray image indirectly.
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 962209 (2015) https://doi.org/10.1117/12.2191841
Terahertz (THz) imaging is a hot topic in current imaging technology. THz radiation penetrates most non-metallic and non-polar materials, allowing the detection of concealed objects, while being harmless to biological organisms. Continuous-wave THz imaging offers safe and noninvasive imaging of the investigated objects. In this paper, a THz real-time polarization imaging system is demonstrated, based on a SIFIR-50 THz laser as the radiation source and an NEC Terahertz Imager as the array detector. The experimental system employs two wire-grid polarizers to acquire intensity images in four different directions. The polarization information of the measured object is obtained via the Stokes-Mueller formalism. Imaging experiments on a banknote with a watermark and on a hollowed-out metal ring have been performed, and their polarization images acquired and analyzed. The results show that the extracted polarization images contain valuable information that can effectively detect and recognize different kinds of objects.
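Computing linear Stokes parameters from four polarizer orientations is standard; a hedged numpy sketch (function name ours, assuming measurements at 0°, 45°, 90°, 135°) is:

```python
import numpy as np

def stokes_from_four(i0, i45, i90, i135):
    # Linear Stokes parameters from intensity images behind a polarizer
    # at 0, 45, 90 and 135 degrees.
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + 1e-12)  # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)                # angle of polarization
    return s0, s1, s2, dolp, aop
```

The degree-of-linear-polarization and angle-of-polarization maps are the images that reveal material and edge contrast invisible in plain intensity.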
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220A (2015) https://doi.org/10.1117/12.2192948
In ultrasound imaging systems, wave emission and data acquisition are time consuming. This can be addressed by adopting a plane wave as the transmitted signal and using compressed sensing (CS) theory for data acquisition and image reconstruction. To overcome the very high computational complexity introduced by CS, we propose an improvement of the fast iterative shrinkage-thresholding algorithm (FISTA) for fast reconstruction of the ultrasound image, in which the step-size parameter is modified at each iteration. Further, a GPU strategy is designed for the proposed algorithm to guarantee real-time imaging. Simulation results show that the GPU-based reconstruction algorithm achieves fast ultrasound imaging without damaging image quality.
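A plain FISTA iteration for the sparse recovery problem can be sketched as follows (a generic fixed-step version; the paper's contribution is precisely a per-iteration step-size modification plus a GPU implementation, neither of which is reproduced here):

```python
import numpy as np

def fista(A, y, lam, step, n_iter=100):
    # FISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1
    x = np.zeros(A.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - y)
        u = z - step * grad
        x_new = np.sign(u) * np.maximum(np.abs(u) - step * lam, 0.0)  # soft-threshold
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```

With `step` set to the reciprocal of the largest eigenvalue of AᵀA, this converges at the accelerated O(1/k²) rate; every operation is a matrix-vector product or an elementwise map, which is why the algorithm parallelizes so well on a GPU.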
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220B (2015) https://doi.org/10.1117/12.2185000
To meet the requirement of fine vegetation classification in hyperspectral remote sensing applications, an improved method based on a C5.0 decision tree combining multiple classifiers is proposed. It consists of two stages: rough classification and fine classification. In the first stage, an experimental model is used to estimate vegetation biochemistry parameters. Then three supervised classifiers, Spectral Angle Mapping, Minimum Distance, and Maximum Likelihood, combined by a C5.0 decision tree, realize the final fine classification. Experiments show that, compared with traditional single-classifier algorithms, the proposed method reduces the classification error effectively and is more suitable for vegetation investigation in hyperspectral remote sensing applications.
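Of the three base classifiers, the Spectral Angle Mapper is the simplest to sketch (a hedged numpy version; combining the three classifiers' votes through a trained C5.0 tree, as the paper does, is not shown):

```python
import numpy as np

def spectral_angle(spectrum, references):
    # Spectral Angle Mapper: angle between a pixel spectrum and each
    # reference spectrum; smaller angle means a closer spectral match.
    s = spectrum / np.linalg.norm(spectrum)
    r = references / np.linalg.norm(references, axis=1, keepdims=True)
    cos = np.clip(r @ s, -1.0, 1.0)
    return np.arccos(cos)

def sam_classify(spectrum, references):
    # Assign the class whose reference spectrum has the smallest angle.
    return int(np.argmin(spectral_angle(spectrum, references)))
```

Because SAM compares spectral shape rather than magnitude, it is insensitive to illumination differences, which complements the magnitude-sensitive Minimum Distance and Maximum Likelihood classifiers in a combined scheme.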
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220C (2015) https://doi.org/10.1117/12.2191108
Underwater imaging poses significant challenges due to random dynamic distortions caused by reflection and refraction of light through the water waves. Moving object detection in a turbulent medium further imposes complexity in the imaging. In this paper, a new approach is proposed for turbulence compensation of a distorted underwater video while keeping the real motions unharmed. First, a geometrically stable frame is created from the distorted video that contains no moving objects. Then, a robust non-rigid image registration technique is used to estimate the motion vector fields of the distorted frames against the stable frame. The difference images of the distorted frames with respect to the stable frame, and the estimated motion vector fields are used to detect the real motion regions and to generate a mask for each frame to extract those regions. This proposed method is compared with an earlier method through both qualitative and quantitative analysis. Simulation experiments show that the proposed method provides better corrections to the effects of underwater turbulence whilst accurately preserving the moving objects.
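The difference-image cue used to seed the motion mask can be sketched as follows (names and threshold are ours; the paper additionally fuses motion-vector fields from non-rigid registration before extracting the moving regions):

```python
import numpy as np

def motion_mask(frame, stable, thresh=0.1):
    # Absolute difference against the geometrically stable frame,
    # thresholded to flag candidate moving-object pixels.
    diff = np.abs(frame.astype(float) - stable.astype(float))
    return diff > thresh
```

On its own this cue also fires on residual turbulence; the registration-based motion vectors are what let the full method separate real object motion from water-induced distortion.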
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220D (2015) https://doi.org/10.1117/12.2191635
High resolution is important for Earth remote sensors, while vibration of the sensor platform is a major factor limiting high-resolution imaging. Image-motion prediction and real-time compensation are the key technologies to solve this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes using soft-sensor technology for image-motion prediction and focuses on algorithm optimization for imaging image-motion prediction. Simulation results indicate that an improved lucky image-motion stabilization algorithm combining a Back Propagation neural network (BP NN) and a support vector machine (SVM) is the most suitable for simple-scene image stabilization. The relative error of the image-motion prediction based on soft-sensor technology is below 5%, and the training speed of the mathematical prediction model is fast enough for real-time image stabilization in aerial photography.
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220E (2015) https://doi.org/10.1117/12.2193140
Roll-to-Roll (R2R) film processing of barrier coatings can often result in PV barrier films being manufactured with significant quantities of defects, which lowers efficiency and shortens lifespan. To improve process yield and product efficiency, it is desirable to develop an inspection system that can detect transparent barrier-film defects in the production line during film processing. Off-line detection of defects in transparent PV barrier films is difficult and time consuming. Consequently, implementing an accurate in-situ defect inspection system in the production environment is even more challenging, since the requirements on positioning, fast measurement, long-term stability and robustness against environmental disturbance are demanding. This paper reports on the development and deployment of two in-situ PV barrier-film defect detection systems, one based on wavelength scanning interferometry (WSI) and the other on white light channeled spectral interferometry (WLCSI), and their integration into an R2R film processing line at the Centre for Process Innovation (CPI). The paper outlines the environmental vibration strategy for both systems and the auto-focusing methodology developed for WSI. The systems have been tested and characterised, and initial results are presented alongside comparisons with laboratory-based instrumentation.
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220F (2015) https://doi.org/10.1117/12.2193303
Recognizing a target object across non-overlapping distributed cameras is known in the computer vision community as the person re-identification problem. In this paper, a multi-patch matching method for person re-identification is presented. It starts from the assumption that the appearance (clothes) of a person does not change while passing through the fields of view of different cameras, meaning that regions with the same color in the target image remain identical across cameras. First, distinctive features are extracted in the training procedure: each target image is divided into small patches, and SIFT features and LAB color histograms are computed for each patch. We then use a KNN approach to detect groups of patches with high similarity in the target image, and a bi-directional weighted group matching mechanism for the re-identification. Experiments on the challenging VIPeR dataset show that the proposed method outperforms several baselines and state-of-the-art approaches.
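The per-patch descriptor stage can be sketched with plain per-channel color histograms (a simplified stand-in for the paper's SIFT + LAB histograms; the patch size, bin count, and [0, 1] value range are our assumptions):

```python
import numpy as np

def patch_histograms(img, patch=8, bins=8):
    # Split the image into patch x patch blocks and histogram each
    # color channel; concatenated, normalized histograms serve as
    # per-patch appearance descriptors.
    h, w, c = img.shape
    feats = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = img[y:y + patch, x:x + patch]
            hist = [np.histogram(block[..., ch], bins=bins, range=(0.0, 1.0))[0]
                    for ch in range(c)]
            feats.append(np.concatenate(hist) / float(patch * patch))
    return np.array(feats)
```

Descriptors like these feed the KNN grouping step: patches of the same garment produce near-identical histograms, so they cluster into the groups that the bi-directional matching then compares across cameras.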
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220G (2015) https://doi.org/10.1117/12.2190602
Transmissive diffractive membrane optics can be used in space telescopes to reduce the size and mass of the imaging system. Building on international research results on transmissive diffractive membranes, a 4-level diffractive substrate with a 100 mm aperture was designed, and a transmissive diffractive membrane was fabricated by spin coating. A high-precision support structure for the diffractive membrane, with a surface precision of 0.12λ RMS (λ = 632.8 nm), is introduced, which meets the diffractive imaging requirements. The diffraction efficiency of the supported membrane was tested and found to be >50%. Step-figure measurements showed an etch-depth precision of less than 10 nm, and the imaging wavefront test demonstrated a wavefront error of about 38 nm RMS. Transmissive diffractive membrane optics can thus be very useful for realizing low-mass, low-cost large-aperture imaging systems.
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220H (2015) https://doi.org/10.1117/12.2190642
The design of a novel depth camera is presented, targeting close-range (20-60 cm) natural human-computer interaction, especially for mobile terminals. To achieve high precision throughout the working range, a two-step method is employed to map the near-infrared intensity image to absolute depth in real time. First, structured light generated by an 808 nm laser diode and a Dammann grating is used to coarsely quantize the output space of depth values into discrete bins. Then, a learning-based classification forest algorithm predicts the depth distribution over these bins for each pixel in the image. Quantitative experiments show that this depth camera has 1% precision over the 20-60 cm range, indicating that it suits resource-limited, low-cost applications.
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220I (2015) https://doi.org/10.1117/12.2190815
The purposes of a spacecraft vacuum thermal test are to characterize the thermal control systems of the spacecraft and its components in the cruise configuration and to allow early retirement of risks associated with mission-specific and novel thermal designs. The orbital heat flux is simulated by infrared lamps, an infrared cage, or electric heaters. Since infrared cages and electric heaters emit no visible light, and infrared lamps emit only limited visible light, an ordinary camera cannot operate in the test because of the low luminous density. Moreover, some special instruments, such as satellite-borne infrared sensors, are sensitive to visible light, so no compensating illumination can be used during the test. To improve the ability to finely monitor the spacecraft and document test progress under ultra-low luminous density, a night vision imaging system was designed and integrated by BISEE. The system consists of a high-gain image-intensified ICCD camera, an assistant luminance system, a glare protection system, a thermal control system, and a computer control system. Multi-frame accumulation target detection is adopted for high-quality image recognition in the captive test. The optical, mechanical, and electrical systems are designed and integrated to be highly adaptable to the vacuum environment, and a molybdenum/polyimide thin-film electrical heater controls the temperature of the ICCD camera. Performance validation tests showed that the system can operate in the space environment simulator under a vacuum of 1.33×10⁻³ Pa with a 100 K shroud temperature, with its working temperature maintained at 5 °C during a two-day test. The night vision imaging system achieves a video resolving power of 60 lp/mm.
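The multi-frame accumulation step rests on a standard fact: averaging N frames of a static scene reduces zero-mean noise by about √N. A hedged sketch (function name ours):

```python
import numpy as np

def accumulate_frames(frames):
    # Average N registered frames of a static scene; additive zero-mean
    # noise shrinks roughly by sqrt(N) while the scene is preserved.
    return np.mean(np.asarray(frames, dtype=float), axis=0)
```

This is why accumulation recovers usable imagery at luminous densities where a single intensified frame is dominated by noise, at the cost of temporal resolution.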
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220J (2015) https://doi.org/10.1117/12.2192576
Simultaneous imaging polarimetry can realize real-time polarization imaging of dynamic scenes and has wide application prospects. This paper first briefly describes the design of a double-separate Wollaston prism simultaneous imaging polarimeter, and then focuses on the polarization information processing methods and software system designed for it. The polarization information processing consists of adaptive image segmentation, high-accuracy image registration, and instrument matrix calibration. Morphological processing (image dilation) is used for image segmentation; image registration based on spatial- and frequency-domain cross-correlation reaches an accuracy of 0.1 pixel; and instrument matrix calibration adopts a four-point calibration method. The software system was implemented under Windows in C++, realizing synchronous polarization image acquisition and storage, image processing, and polarization information extraction and display. Polarization data obtained with the designed polarimeter show that the processing methods and software effectively realize real-time measurement of the four Stokes parameters of a scene, and that the processing methods effectively improve the polarization detection accuracy.
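The frequency-domain cross-correlation registration can be sketched at integer-pixel accuracy (a hedged numpy version; the 0.1-pixel accuracy reported in the paper requires a subpixel peak refinement not shown here):

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    # Phase correlation: normalized cross-power spectrum peaks at the
    # translation between the two images. Returns the (row, col) shift
    # such that np.roll(img_b, shift, axis=(0, 1)) aligns with img_a.
    f = np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-12)).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    shifts = list(idx)
    for k in range(2):                    # wrap to signed shifts
        if shifts[k] > img_a.shape[k] // 2:
            shifts[k] -= img_a.shape[k]
    return tuple(int(s) for s in shifts)
```

A subpixel estimate is typically obtained afterwards by fitting a parabola (or performing a local upsampled DFT) around the correlation peak.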
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220K (2015) https://doi.org/10.1117/12.2193349
Smartphones are now in widespread use, and most are equipped with a camera and a variety of sensors such as a gyroscope, an accelerometer and a magnetometer, which makes smartphone-based indoor navigation both practical and valuable. Exploiting these features, a new indoor integrated navigation method is proposed that uses the smartphone's MEMS (Micro-Electro-Mechanical Systems) IMU (Inertial Measurement Unit), camera and magnetometer. The method mainly involves data acquisition, camera calibration, image measurement, IMU calibration, initial alignment, strapdown integration, zero-velocity update and integrated navigation. Synchronous acquisition of data from the sensors (gyroscope, accelerometer and magnetometer) and the camera is the basis of indoor navigation on a smartphone. A camera data acquisition method is introduced that uses the Android camera class to record images and their timestamps. Two sensor data acquisition methods are introduced and compared. The first records sensor data and timestamps with the Android SensorManager. The second implements the open, close, data-receiving and saving functions in C and calls them from Java through the JNI interface. A data acquisition application was developed with the JDK (Java Development Kit), the Android ADT (Android Development Tools) and the NDK (Native Development Kit); it records camera data, sensor data and timestamps simultaneously. Data acquisition experiments were carried out with this software on a Samsung Note 2 smartphone. The results show that the first method is convenient but occasionally loses sensor samples, whereas the second offers much better real-time performance and far less data loss. A checkerboard image was recorded, and the corner points of the checkerboard were detected with the Harris method.
The gyroscope, accelerometer and magnetometer data were recorded for about 30 minutes, and the bias stability and noise characteristics of the sensors were analyzed. Beyond indoor use, the integrated navigation and synchronous data acquisition methods can also be applied to outdoor navigation.
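The Harris corner step mentioned above can be sketched in a few lines of NumPy (an illustrative re-implementation, not the authors' Android code; a 3×3 box window stands in for the usual Gaussian weighting):

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response map for a grayscale float image."""
    # Image gradients via central differences.
    Iy, Ix = np.gradient(img)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    # 3x3 box window instead of the usual Gaussian, for brevity.
    def box(a):
        out = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2

# Synthetic 2x2 checkerboard: four squares meet near the image centre.
img = np.zeros((20, 20))
img[:10, :10] = 1.0
img[10:, 10:] = 1.0
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)  # strongest corner
```

The response is large only where gradients in both directions coexist, so `peak` lands at the interior corner rather than on the edges.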
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220L (2015) https://doi.org/10.1117/12.2193435
In recent years, the morbidity and mortality of cardiovascular and cerebrovascular diseases, which greatly threaten human health, have increased year by year. Heart rate is an important index of these diseases. To address this, the paper puts forward a non-contact heart-rate measurement method with a simple structure and easy operation, suitable for daily monitoring of large populations. Imaging equipment records video of sensitive skin regions, where changes in blood volume modulate the reflected light intensity and appear in the average image grayscale. We record video of a face containing the sensitive regions of interest (ROI) and use a high-speed processing circuit to save the video in AVI format to memory. After processing a period of video, we plot a curve for each color channel against frame number and derive the heart rate from the curves. Independent component analysis (ICA) is used to suppress motion interference, enabling accurate extraction of the heart-rate signal while the subject is moving. We design an algorithm, based on the high-speed processing circuit, for face recognition and tracking that automatically locates the face region. Grayscale averaging of the recognized region yields three RGB curves, from which ICA extracts a cleaner pulse-wave curve and hence the heart rate under motion. Finally, comparison of our system with a fingertip pulse oximeter shows that it achieves accurate measurement, with an error of less than 3 beats per minute.
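The core idea — reading the pulse from the per-frame mean of a colour channel — can be illustrated without ICA on synthetic data; the frame rate, signal amplitudes and band limits below are assumptions, not values from the paper:

```python
import numpy as np

fps = 30.0                      # assumed camera frame rate
t = np.arange(0, 20, 1 / fps)   # 20 s of video
true_bpm = 72.0

# Synthetic "green channel mean": weak pulse + slow drift + noise.
rng = np.random.default_rng(0)
signal = (0.05 * np.sin(2 * np.pi * (true_bpm / 60.0) * t)
          + 0.01 * t
          + 0.02 * rng.standard_normal(t.size))

# Remove the linear drift, then locate the spectral peak in a
# physiologically plausible band (0.7-4 Hz, i.e. 42-240 bpm).
x = signal - np.polyval(np.polyfit(t, signal, 1), t)
spec = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(x.size, d=1 / fps)
band = (freqs > 0.7) & (freqs < 4.0)
est_bpm = 60.0 * freqs[band][np.argmax(spec[band])]
```

The paper's ICA step plays the role that the detrend plays here, separating the pulse from motion-induced components before the rate is read off.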
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220M (2015) https://doi.org/10.1117/12.2185535
In this paper, a new multispectral imager video electronics system is introduced. The system images in the visible (VIS), near-infrared (NIR), short-wave infrared (SWIR), medium-wave infrared (MWIR) and long-wave infrared (LWIR) spectra. It comprises three video processors and an information processor. The three video processors are the VIS-NIR processor, the SWIR-MWIR processor and the LWIR processor. The VIS-NIR processor uses time delay and integration charge-coupled devices (TDI CCD) as the detector, samples and quantizes the CCD signal in correlated double sampling (CDS) mode, and corrects the image data with a large-scale field-programmable gate array (FPGA). The SWIR-MWIR and LWIR processors follow a similar approach. The information processor is the most important part of the video electronics system: it receives remote-control commands from other equipment, transmits telemetry data, keeps the three video processors working synchronously, and encodes and transmits the image data from the video processors. Besides describing the system's functions and composition, this paper details the implementation of some important components. Experimental results show that all the main technical indexes meet the design requirements.
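The correlated double sampling mode used by the VIS-NIR processor rests on a simple identity: each pixel is read once at its reset level and again after integration, and the difference cancels the reset (kTC) noise and fixed offsets common to both reads. A numeric sketch with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
photo_signal = 500.0                          # true photo-charge level (DN)
offset = 120.0                                # fixed per-readout offset
reset_noise = 30.0 * rng.standard_normal(n)   # kTC noise, common to both reads

reset_level = offset + reset_noise                  # first read (reset)
video_level = offset + reset_noise + photo_signal   # second read (signal)
cds = video_level - reset_level                     # reset noise cancels
```

In this idealised model the cancellation is exact; a real readout adds independent noise to each sample, which CDS does not remove.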
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220N (2015) https://doi.org/10.1117/12.2193241
The liquid lens has broad application prospects in zoom and imaging systems owing to its fast response, small volume and low power consumption. However, its focal-length range is small, because the refractive index of the conductive liquid differs only slightly from that of the non-conductive liquid. In order to increase the zoom range, a composite lens made up of a liquid lens and a solid lens is presented in this paper. The focal-length variation of the composite lens is investigated, and the corresponding lens parameters are optimized. The zoom behavior of composite lenses with solid interfaces of different slopes is also studied.
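A minimal model of such a composite lens is the thin-lens combination formula 1/f = 1/f1 + 1/f2 − d/(f1·f2); the focal lengths and spacing below are illustrative, not values from the paper:

```python
def combined_focal_length(f1, f2, d=0.0):
    """Combined focal length of two thin lenses separated by distance d."""
    power = 1.0 / f1 + 1.0 / f2 - d / (f1 * f2)
    return 1.0 / power

# A liquid lens tunable over 80..120 mm paired with a fixed 50 mm solid
# lens at 5 mm separation (assumed numbers for illustration):
f_min = combined_focal_length(80.0, 50.0, d=5.0)
f_max = combined_focal_length(120.0, 50.0, d=5.0)
```

Sweeping f1 over the liquid lens's range maps the small index change onto a new, shifted focal-length interval; the choice of the solid element's power and the spacing d reshapes that interval, which is the degree of freedom the composite design exploits.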
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220O (2015) https://doi.org/10.1117/12.2193356
In this paper, an algorithm is designed for shooting games under strong background light. Six LEDs are uniformly distributed along the edge of the game machine screen, located at the four corners and at the middle of the top and bottom edges. Three LEDs are lit in the odd frames and the other three in the even frames. The simulator carries a single camera, which isolates the image of the LEDs by applying an inter-frame difference between even and odd frames. In the resulting images, the six LEDs appear as six bright spots. To obtain the LEDs' coordinates rapidly, we propose a method based on the area of the bright spots. After calibrating the camera with a pinhole model, four equations can be formed from the relationship between the image coordinate system and the world coordinate system under perspective transformation. The center point of the LED image is taken as the virtual shooting point, and applying the perspective transformation matrix to its coordinates yields the virtual shooting point's position in the world coordinate system. When a player shoots a target about two meters away, the coordinate error of the method is less than 10 mm, and 65 coordinate results are obtained per second, which meets the requirements of a real-time system. This shows the algorithm is reliable and effective.
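The inter-frame difference and centroid steps can be sketched as follows (illustrative frames and thresholds; the real system handles six LEDs and full camera calibration on top of this):

```python
import numpy as np

# Two consecutive frames: different LEDs are lit in odd and even frames,
# so their absolute difference keeps only the LEDs, not the background.
odd = np.zeros((48, 64))
even = np.zeros((48, 64))
odd[10:13, 20:23] = 200.0    # an LED lit in the odd frame
even[30:33, 40:43] = 200.0   # a different LED lit in the even frame

diff = np.abs(odd - even)
mask = diff > 100.0          # threshold away residual background

def centroid(mask, weights):
    """Intensity-weighted centroid (row, col) of a bright spot."""
    ys, xs = np.nonzero(mask)
    w = weights[ys, xs]
    return (np.sum(ys * w) / np.sum(w), np.sum(xs * w) / np.sum(w))

# Crudely separate the two spots left/right of the image midline.
cols = np.arange(64)[None, :]
c_left = centroid(mask & (cols < 32), diff)
c_right = centroid(mask & (cols >= 32), diff)
```

The paper's area-based method serves the same purpose as the weighted centroid here: turning each bright blob into a single sub-pixel coordinate quickly.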
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220P (2015) https://doi.org/10.1117/12.2192967
In this paper, we introduce a visual-pattern-degradation based full-reference (FR) image quality assessment (IQA) method. Research on visual recognition indicates that the human visual system (HVS) is highly adaptive at extracting visual structures for scene understanding. Existing structure-degradation based IQA methods mainly take local luminance contrast to represent structure and measure quality as the degradation of luminance contrast. In this paper, we suggest that structure includes not only luminance contrast but also orientation information, and we therefore analyze the orientation characteristic for structure description. Inspired by the orientation-selectivity mechanism in the primary visual cortex, we introduce a novel visual pattern to represent the structure of a local region. Quality is then measured as the degradation of both luminance contrast and the visual pattern. Experimental results on five benchmark databases demonstrate that the proposed visual pattern effectively represents visual structure and that the proposed IQA method outperforms existing IQA metrics.
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220Q (2015) https://doi.org/10.1117/12.2182313
An AEC and AGC (automatic exposure control and automatic gain control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. Conventional AEC and AGC algorithms are not suitable for an aerial camera, which must take high-resolution photographs while moving at high speed. The system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. Automatic gamma correction is applied before the image is output, making the image easier for human viewing and analysis. The system avoids underexposure, overexposure, and image blurring caused by fast motion or environmental vibration. A series of tests proved that the system meets the camera's requirements, with fast adjustment, high adaptability and high reliability in severe and complex environments.
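The gamma-correction step amounts to a power-law look-up table, out = 255·(in/255)^(1/γ); the γ value below is illustrative, not the system's:

```python
import numpy as np

def gamma_correct(img, gamma):
    """Apply gamma correction to an 8-bit image via a 256-entry LUT."""
    lut = (255.0 * (np.arange(256) / 255.0) ** (1.0 / gamma)).astype(np.uint8)
    return lut[img]

# A gamma > 1 lifts mid-tones of an under-exposed frame.
dark = np.full((4, 4), 64, dtype=np.uint8)
bright = gamma_correct(dark, 2.2)
```

Building the LUT once and indexing into it is the usual real-time formulation, since the per-pixel power computation is then a single table look-up.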
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220R (2015) https://doi.org/10.1117/12.2182659
This paper introduces the basic composition and main advantages of a remote weapon station and analyzes the practical significance of an image-based automatic target recognition system for such stations. The key technologies discussed include photoelectric stabilization, multi-sensor image fusion, integrated control, target image enhancement, target behavior risk analysis, intelligent image-feature-based automatic target recognition algorithms, and micro-sensor technology, together with the development demands these place on the field.
Hai ying Liu, Peng Wang, Hai bin Zhu, Yan Li, Shao jun Zhang
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220S (2015) https://doi.org/10.1117/12.2184669
For a multi-route framing CCD aerial camera, with its very large data volume and high accuracy requirements, it has long been difficult to stitch multi-route images into a panoramic image in real time. An automatic aerial image mosaic system based on a GPU development platform is described in this paper. SIFT feature extraction and matching for motion-model parameter estimation are parallelized on the platform with CUDA, which makes it possible to stitch multiple CCD images in real time. Aerial tests proved that the mosaic system meets the user's requirements, achieving 99% accuracy with a 30- to 50-fold speedup over a conventional mosaic system.
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220T (2015) https://doi.org/10.1117/12.2189743
The paper presents a depth-map super-resolution method whose core is a novel edge enhancement algorithm. An auto-regressive algorithm generates an initial upsampled depth map before the edge enhancement. In addition to the low-resolution depth map, an intensity image derived from the high-resolution color image is used to extract accurate depth edges, which are finally rectified by combining color, depth and intensity information. Experimental results show that our approach recovers high-resolution (HR) depth maps with high quality; moreover, compared with previous state-of-the-art algorithms, it generally achieves better results.
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220U (2015) https://doi.org/10.1117/12.2189993
Aiming at the nonlinear and non-Gaussian features of real infrared scenes, an optimal nonlinear filtering based algorithm for infrared dim-target track-before-detect is proposed. It constructs the state and observation models using nonlinear theory and solves the stochastic differential equation of the constructed models with a spectral-separation Wiener chaos expansion method. To improve computational efficiency, the most time-consuming operations, which are independent of the observation data, are processed ahead of the observation stage; the remaining observation-dependent computations are fast and performed afterwards. Simulation results show that the algorithm has excellent detection performance and is well suited to real-time processing.
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220V (2015) https://doi.org/10.1117/12.2190330
To achieve real-time feature-point extraction in a target detection and tracking system, we propose an improved SIFT feature extraction algorithm on an FPGA hardware platform, focusing on the pyramid structure and Gaussian convolution. Through careful selection of algorithm parameters and fixed-point word lengths, the precision of the algorithm is preserved, and coordination between the various modules is achieved by multiplexing the SRAM. Experimental results indicate that the improved SIFT algorithm on the FPGA platform offers high stability, low complexity and high accuracy, and shows good real-time performance.
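One fixed-point choice such an FPGA design must make is quantising the Gaussian kernel so the weights sum to an exact power of two, turning the normalisation into a bit shift. A sketch of that trade-off (σ and bit width are assumptions, not the paper's values):

```python
import numpy as np

sigma, radius, frac_bits = 1.6, 3, 8

# Floating-point 1-D Gaussian kernel, normalised to unit sum.
x = np.arange(-radius, radius + 1)
g = np.exp(-x**2 / (2.0 * sigma**2))
g /= g.sum()

# Quantise to 8 fractional bits and force the weights to sum to exactly
# 2**8, so dividing by the weight sum becomes a right shift in hardware.
q = np.round(g * (1 << frac_bits)).astype(int)
q[radius] += (1 << frac_bits) - q.sum()
err = np.max(np.abs(q / (1 << frac_bits) - g))
```

The maximum coefficient error stays on the order of half an LSB, which is the kind of precision analysis behind the paper's fixed-point parameter selection.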
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220W (2015) https://doi.org/10.1117/12.2190531
To suppress interference when detecting targets in turbid media, a polarization detection technique based on the Curvelet transform is applied. The method first rotates a polarizer to capture intensity images at 0°, 60° and 120°, then derives the Stokes vector, degree of polarization (DOP) and polarization angle (PA) images according to the Mueller matrix. Finally the DOP and intensity images are decomposed by the Curvelet transform, the high- and low-frequency coefficients are fused respectively, and the processed coefficients are reconstructed, yielding a target that is easier to detect. To validate the method, many targets in turbid media were detected with this polarization scheme, and their DOP and intensity images were fused with the Curvelet transform. As an example, screws in liquids of moderate and high concentration are presented; the unpolarized targets become less visible as the concentration increases. When the DOP and intensity images are fused by the Curvelet transform, the targets emerge clearly from the turbid medium, and the quality-evaluation values for clarity, degree of contrast and spatial frequency are prominently enhanced compared with the unpolarized images, demonstrating the feasibility of the method.
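The Stokes computation from the three polarizer angles follows from I(θ) = (S0 + S1·cos 2θ + S2·sin 2θ)/2; solving the three measurements at 0°, 60° and 120° gives the closed forms below (a standard result, sketched here for reference):

```python
import numpy as np

def stokes_from_three(i0, i60, i120):
    """Linear Stokes parameters, DOP and polarization angle from
    intensities behind a linear polarizer at 0, 60 and 120 degrees."""
    s0 = 2.0 / 3.0 * (i0 + i60 + i120)
    s1 = 2.0 / 3.0 * (2.0 * i0 - i60 - i120)
    s2 = 2.0 / np.sqrt(3.0) * (i60 - i120)
    dop = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aop = 0.5 * np.arctan2(s2, s1)   # polarization angle
    return s0, s1, s2, dop, aop

# Sanity check on a fully polarized beam with S = (1, 1, 0):
theta = np.deg2rad(np.array([0.0, 60.0, 120.0]))
I = 0.5 * (1.0 + np.cos(2 * theta))
s0, s1, s2, dop, aop = stokes_from_three(I[0], I[1], I[2])
```

The same formulas apply per pixel to the three intensity images, which is how the DOP and PA maps in the paper are obtained before the Curvelet fusion stage.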
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220X (2015) https://doi.org/10.1117/12.2190592
Binocular stereo matching has made great theoretical progress, but most algorithms perform well only on standard image pairs. In an actual binocular system, the two images differ in color, intensity and sharpness because of inconsistent illumination, optical defocus, the color response of the image sensors and so on, which significantly reduces stereo matching accuracy or even produces entirely wrong matches. To acquire a good disparity map, a new stereo matching method for actual binocular systems is proposed in this paper. After image acquisition, an automatic Log-space-based method rectifies the left and right images so that they are consistent in color, intensity and sharpness; the disparity map is then obtained from the rectified images with the Census algorithm. Experimental results show that the proposed method is more robust to illumination changes and largely improves the disparity map.
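The Census matching core is easy to sketch: each pixel is encoded by comparing it with its neighbours, and the matching cost is the Hamming distance between codes, which is what makes the method robust to the radiometric differences described above (illustrative 3×3 version, not the paper's implementation):

```python
import numpy as np

def census3(img):
    """3x3 Census transform: one bit per neighbour, set when the
    neighbour is darker than the centre pixel (edges wrap for brevity)."""
    out = np.zeros(img.shape, dtype=np.uint16)
    bit = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, -dy, axis=0), -dx, axis=1)
            out |= (shifted < img).astype(np.uint16) << bit
            bit += 1
    return out

def hamming(a, b):
    return bin(int(a) ^ int(b)).count("1")

left = np.array([[1, 2, 3, 4, 5],
                 [5, 4, 3, 2, 1],
                 [2, 4, 6, 8, 9],
                 [9, 7, 5, 3, 1],
                 [1, 3, 5, 7, 9]], dtype=float)
# Right image: left shifted one pixel and radiometrically rescaled.
# Census depends only on intensity ORDER, so the cost at the true
# disparity is still zero despite the gain and offset change.
right = np.roll(left, 1, axis=1) * 2.0 + 10.0
cl, cr = census3(left), census3(right)
cost_true = hamming(cl[2, 2], cr[2, 3])   # correct disparity of 1
```

Invariance to monotonic intensity changes is exactly why Census pairs well with the radiometric rectification step: whatever residual gain/offset mismatch survives rectification does not perturb the matching cost.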
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220Y (2015) https://doi.org/10.1117/12.2190714
Non-scanning imaging lidar, as a sensor, is applied in a target tracking system to acquire distance, intensity and amplitude images, making information fusion of the target possible. The system uses ARM as its hardware development platform, which makes it easy to carry and enables system miniaturization. Target features are extracted by a method combining a codebook model with connected-domain denoising to improve extraction accuracy. The graphical user interface, built on the Qt/Embedded development platform, has a good architecture and programming model, improving man-machine interaction and control efficiency. The results show that the designed system achieves high tracking accuracy, excellent man-machine interaction and complete interface functions.
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96220Z (2015) https://doi.org/10.1117/12.2191823
Tracking and registration is a key issue for an augmented reality (AR) system. For a marker-based AR system, research focuses on detecting the real-time position and orientation of the camera. In this paper, we describe a tracking and registration method based on vector operations. The method is shown to be stable and accurate, with good real-time performance.
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 962210 (2015) https://doi.org/10.1117/12.2191836
In this paper, we propose a novel scene-based non-uniformity correction algorithm for infrared image processing: a temporal high-pass non-uniformity correction algorithm based on grayscale mapping (THP and GM). The main sources of non-uniformity are (1) detector fabrication inaccuracies, (2) non-linearity and variations in the read-out electronics and (3) optical-path effects. Non-uniformity is reduced by non-uniformity correction (NUC) algorithms, which are usually divided into calibration-based (CBNUC) and scene-based (SBNUC) algorithms. Because non-uniformity drifts over time, CBNUC algorithms must be repeated by inserting a uniform radiation source into the view, which SBNUC algorithms do not need, so SBNUC algorithms have become an essential part of infrared imaging systems. However, the poor robustness of SBNUC algorithms often leads to two defects, artifacts and over-correction; meanwhile, their complicated calculations and large storage consumption make hardware implementation difficult, especially on a Field Programmable Gate Array (FPGA) platform. The THP and GM algorithm proposed in this paper eliminates non-uniformity without causing such defects. Its FPGA-only hardware implementation has two advantages: (1) low resource consumption and (2) small hardware delay, less than 20 lines. It can be transplanted to a variety of infrared detectors equipped with an FPGA image processing module, and it reduces both stripe and ripple non-uniformity.
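The temporal high-pass part of such an algorithm can be sketched per pixel as a running low-pass estimate subtracted from the input; fixed-pattern offsets are absorbed into the estimate and removed (a generic THP sketch — the paper's grayscale-mapping step, which combats defects such as ghosting, is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
h, w, frames = 8, 8, 200
offset = 20.0 * rng.standard_normal((h, w))  # fixed-pattern non-uniformity
alpha = 0.05                                 # low-pass update rate
f = np.zeros((h, w))                         # per-pixel low-pass estimate

for n in range(frames):
    scene = 100.0 + 10.0 * np.sin(0.3 * n)   # global, time-varying scene level
    x = scene + offset                       # raw frame with non-uniformity
    f = (1.0 - alpha) * f + alpha * x        # recursive temporal low-pass
    y = x - f                                # high-pass (corrected) frame
# After convergence the per-pixel offsets sit entirely in f, so the
# corrected frame y is spatially uniform.
```

The recursion needs only one stored value per pixel and one multiply-accumulate per frame, which is why this family of algorithms maps so naturally onto an FPGA datapath.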
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 962211 (2015) https://doi.org/10.1117/12.2192523
Traditional three-dimensional (3D) calibration targets consist of two or three mutually orthogonal planes (each containing several control points formed by corners or circular points) that cannot be captured simultaneously by cameras in front view. Large perspective distortions therefore appear in images of such targets, resulting in inaccurate detection of the control points' image coordinates. Moreover, eliminating mismatches usually requires time-consuming manual intervention in recognizing the control points. A new 3D calibration target is presented for automatic and accurate camera calibration. The target employs two parallel planes instead of orthogonal ones to reduce perspective distortion, so that both planes can be captured simultaneously by cameras in front view. Its control points are carefully designed circular coded markers, which enable automatic recognition without manual intervention. Under perspective projection, the projections of the circular coded markers' centers deviate from the centers of their imaged ellipses; the collinearity of the control points is used to correct these perspective distortions. Experimental results show that the calibration target is recognized automatically and correctly under large illumination and viewpoint changes, with control-point extraction errors under 0.1 pixels. When applied to binocular camera calibration, the mean reprojection errors are less than 0.15 pixels, and the 3D measurement errors are less than 0.2 mm along the x and y axes and 0.5 mm along the z axis.
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 962212 (2015) https://doi.org/10.1117/12.2193077
With the Sanhu region of the Qaidam Basin as the test area and mineral compositions and hyperspectral remote sensing images as test data, this paper establishes quantitative relationships between the clay and carbonate contents of altered minerals caused by oil and gas microseepage and characteristic parameters derived from hyperspectral remote sensing imagery. After spectral-feature extraction from the Hyperion image, statistical regression is used to relate the characteristic parameters to the mineral contents. The results show that the clay and carbonate contents fit well with the depth of the spectral absorption peak, while the other characteristic parameters correlate only weakly with the contents. This conclusion provides a reference for using hyperspectral remote sensing to explore for oil and gas directly, reducing or even eliminating ground work, and provides a statistical basis for inverting surface mineral contents from hyperspectral imagery.
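The statistical-regression step is ordinary least squares of mineral content against absorption-peak depth; the data below are synthetic stand-ins, not Hyperion measurements:

```python
import numpy as np

# Synthetic absorption-peak depths and corresponding mineral contents
# (content generated as a linear trend plus small perturbations).
depth = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
content = 2.0 + 40.0 * depth + np.array([0.3, -0.2, 0.1, -0.1, 0.2, -0.3])

# Ordinary least-squares fit and coefficient of determination.
slope, intercept = np.polyfit(depth, content, 1)
pred = slope * depth + intercept
ss_res = np.sum((content - pred) ** 2)
ss_tot = np.sum((content - np.mean(content)) ** 2)
r2 = 1.0 - ss_res / ss_tot
```

A high R² against absorption-peak depth and low R² against the other parameters is the pattern the paper reports; once fitted, the regression is inverted to map depth back to content over the image.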
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 962213 (2015) https://doi.org/10.1117/12.2193108
A method for denoising two-channel exercise electrocardiogram (ECG) signals based on the wavelet transform and independent component analysis (ICA) is proposed in this paper. First, two channels of exercise ECG are acquired. Each channel is decomposed into eight wavelet layers and the useful wavelet coefficients are summed, yielding two ECG signals free of baseline drift and other low-frequency interference; electrode-motion noise, power-line interference and other disturbances still remain. Second, these two processed channels, together with one manually constructed channel, are further processed with ICA to separate the ECG signal, which removes the residual noise effectively. Finally, a comparative experiment against applying ICA directly to the same two channels shows that the proposed method increases the signal-to-noise ratio (SNR) by 21.916 and decreases the root mean square error (RMSE) by 2.522, proving its high reliability.
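The wavelet stage — decomposing, discarding the coarsest approximation that carries the baseline drift, and reconstructing — can be sketched with a Haar transform (an illustrative choice; the paper does not specify its wavelet, and the signal here is synthetic):

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail
    return a, d

def haar_idwt(a, d):
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

# Synthetic "ECG": an oscillatory component plus slow baseline wander.
n = 1024
t = np.arange(n) / 256.0
ecg = np.sin(2 * np.pi * 8 * t)   # stand-in for the cardiac content
drift = 0.5 * t                   # baseline drift
x = ecg + drift

# Eight-level decomposition; zero the coarsest approximation (which
# holds the drift) and reconstruct from the detail coefficients.
coeffs, a = [], x
for _ in range(8):
    a, d = haar_dwt(a)
    coeffs.append(d)
a = np.zeros_like(a)
for d in reversed(coeffs):
    a = haar_idwt(a, d)
cleaned = a
```

The reconstruction keeps the oscillatory content while the drift is largely removed; in the paper's pipeline this wavelet output is what then feeds the ICA stage.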
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 962214 (2015) https://doi.org/10.1117/12.2193157
In this paper, a new algorithm for detecting moving targets in smoke-screen image sequences is presented, which uses rough set theory to combine three pixel properties: gray level, fractal dimension and inter-pixel correlation. The first step locates and extracts regions that may contain objects using a local gray-level threshold. Second, the fractal dimensions of the pixels are calculated, and the smoke screen is distinguished by its different fractal dimension. Finally, singular points are filtered out according to temporal and spatial correlations between frames. Experimental results show that the algorithm effectively increases the detection probability and is robust.
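The fractal-dimension property can be estimated per region with box counting: count the occupied boxes N(s) at several box sizes s and fit log N(s) against log s (a generic sketch, not the paper's estimator):

```python
import numpy as np

def box_count_dim(mask, sizes=(1, 2, 4, 8, 16)):
    """Box-counting fractal dimension of a binary image region."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        hc, wc = h - h % s, w - w % s          # crop to a multiple of s
        boxed = mask[:hc, :wc].reshape(hc // s, s, wc // s, s)
        counts.append(np.any(boxed, axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope   # N(s) ~ s**(-D)

# A filled square should come out near dimension 2, a line near 1.
square = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
d_square = box_count_dim(square)
d_line = box_count_dim(line)
```

Natural phenomena such as smoke tend to give non-integer dimensions between these extremes, which is the discriminating property the detection algorithm relies on.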
Qingmin Ye, Kuo Chen, Huajun Feng, Zhihai Xu, Qi Li
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 962215 (2015) https://doi.org/10.1117/12.2193273
When a human face lies at the edge of the field of view of a camera with a wide field, the captured image is severely deformed. To correct this distortion, we present an approach based on a 3D model. First, we construct a 3D target face model from the shape and depth data of a standard human face, which is built as a piecewise three-dimensional Gaussian function. From the size of the face in the image and the camera parameters, we obtain the relative position and depth of the face. Then, by translating the virtual camera axis to the center of the face, we correct the distortion according to the theory of three-dimensional imaging. Finally, we carried out extensive experiments and studied the influence of the parameters of the 3D face model. The results indicate that the proposed method effectively corrects facial distortion at the edge of the field of view, and that better results are obtained when the model more closely approximates the real face.
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 962216 (2015) https://doi.org/10.1117/12.2193282
At present, most compressed sensing (CS) reconstruction algorithms converge slowly and are therefore difficult to run in real time on a PC. To address this issue, we implement a widely used CS reconstruction method, the linearized Bregman algorithm, on a parallel GPU. The linearized Bregman iteration, proposed by Osher and Cai, involves only matrix-vector multiplications and a thresholding operation, making it simpler and more efficient to program than other CS reconstruction algorithms. We use C as the development language and CUDA (Compute Unified Device Architecture) as the parallel computing architecture. In this paper, we compare the parallel Bregman implementation with a traditional CPU implementation, as well as with other CS reconstruction algorithms such as OMP and TwIST. The results show that the parallel Bregman algorithm requires less time than either, and is therefore better suited to real-time object reconstruction, which matters for the fast-growing demands of information technology.
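The iteration itself is short, which is the point of the GPU port. A NumPy sketch (not the authors' CUDA code) of the linearized Bregman iteration for min ||u||_1 subject to Au = f, with an illustrative sparse-recovery problem:

```python
import numpy as np

def linearized_bregman(A, f, mu=5.0, n_iter=3000):
    # Linearized Bregman iteration for min ||u||_1  s.t.  A u = f.
    # Each sweep needs only matrix-vector products and a soft threshold,
    # which is why the method maps well onto a GPU.
    delta = 1.0 / np.linalg.norm(A, 2) ** 2  # step size within the stable range
    v = np.zeros(A.shape[1])
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v += A.T @ (f - A @ u)                                    # gradient step
        u = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # shrinkage
    return u

# Recover a sparse vector from underdetermined measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 128)) / np.sqrt(60.0)
u_true = np.zeros(128)
u_true[[5, 40, 90]] = [1.0, -2.0, 1.5]
f = A @ u_true
u_hat = linearized_bregman(A, f)
```

The parameter mu trades sparsity against convergence speed; the values here are illustrative only.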
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 962217 (2015) https://doi.org/10.1117/12.2193287
Efficient reconnaissance of camouflaged targets has a great influence on modern warfare. Hyperspectral images provide a large spectral range and high spectral resolution, which are invaluable for discriminating camouflaged targets from backgrounds. Hyperspectral target detection and classification are used to achieve single-class and multi-class camouflaged-target reconnaissance, respectively. Constrained energy minimization (CEM), an algorithm widely used in hyperspectral target detection, is employed for single-class reconnaissance; support vector machine (SVM) classification is then applied for the multi-class case. Experiments have been conducted to demonstrate the efficiency of the proposed method.
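The CEM detector has a closed form: the filter w minimizes the average output energy subject to a unit response on the target signature d. A small synthetic sketch (the signature and scene are made up for illustration):

```python
import numpy as np

def cem_scores(X, d):
    # Constrained energy minimization: w minimizes w^T R w subject to
    # w^T d = 1, where R is the sample correlation matrix of the pixels
    # (columns of X). Target-like pixels then score near 1 while the
    # background is suppressed.
    R = X @ X.T / X.shape[1]          # sample correlation matrix
    rd = np.linalg.solve(R, d)
    w = rd / (d @ rd)                 # w = R^-1 d / (d^T R^-1 d)
    return w @ X                      # one detector score per pixel

rng = np.random.default_rng(1)
bands, pixels = 20, 500
X = rng.standard_normal((bands, pixels))  # synthetic background spectra
d = rng.uniform(0.5, 1.0, bands)          # assumed target signature
X[:, 0] = d                               # plant one target pixel
scores = cem_scores(X, d)
```

By construction the planted target pixel scores exactly 1, while background scores stay small.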
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 962218 (2015) https://doi.org/10.1117/12.2193290
In this paper, we propose a novel algorithm for the non-convex lp (0 ≤ p ≤ 1) semi-norm minimization model under the gradient descent framework. Since the proposed algorithm involves only matrix-vector products, fast implicit operators are easy to implement, which makes it practical to exploit lp semi-norm models in large-scale applications, a hard task for common lp solvers such as FOCUSS. Simulations of image compression and reconstruction show the superior performance of the proposed algorithm.
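One way to realize such a gradient scheme is to smooth the lp term; this sketch is our own illustration under that assumption, not the paper's exact algorithm:

```python
import numpy as np

def lp_gradient_descent(A, f, p=0.5, lam=0.05, eps=1e-4, n_iter=2000):
    # Gradient descent on a smoothed surrogate of the non-convex model
    #   F(u) = 0.5*||A u - f||^2 + lam * sum((u_i^2 + eps)^(p/2)),
    # using only matrix-vector products, as in the paper's setting.
    # The step size is bounded by the curvature of both terms.
    L = np.linalg.norm(A, 2) ** 2 + lam * p * eps ** (p / 2.0 - 1.0)
    step = 0.5 / L
    u = A.T @ f                       # simple initialization
    for _ in range(n_iter):
        grad = A.T @ (A @ u - f) + lam * p * u * (u * u + eps) ** (p / 2.0 - 1.0)
        u = u - step * grad
    return u

def objective(A, f, u, p=0.5, lam=0.05, eps=1e-4):
    return 0.5 * np.sum((A @ u - f) ** 2) + lam * np.sum((u * u + eps) ** (p / 2.0))

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 20)) / np.sqrt(40.0)
f = A @ rng.standard_normal(20)
u = lp_gradient_descent(A, f)
```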
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 962219 (2015) https://doi.org/10.1117/12.2193292
A perceptual preprocessing method for 3D-HEVC coding is proposed in this paper. First, we propose a new just-noticeable-difference (JND) model that accounts for luminance contrast masking, spatial masking, temporal masking, visual saliency, and depth information. The saliency map is obtained with the spectral residual approach and converted into a visual saliency factor. To distinguish the sensitivity of objects at different depths, each texture frame is segmented into foreground and background by an automatic threshold selection algorithm using the corresponding depth map, from which a depth weighting factor is built. A JND modulation factor, formed as a linear combination of the visual saliency factor and the depth weighting factor, adjusts the JND threshold. The proposed JND model is then applied in 3D-HEVC for residual filtering and distortion coefficient processing: a residual value is set to zero if it does not exceed the JND threshold, and otherwise the JND threshold is subtracted from it. Experimental results demonstrate that the proposed method achieves an average bit-rate reduction of 15.11% compared to the original HTM12.1 coding scheme, while maintaining the same subjective quality.
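The residual filtering rule described above is easy to state in code; subtracting the threshold with the residual's sign is our reading of "subtract the JND threshold":

```python
import numpy as np

def jnd_filter_residual(residual, jnd):
    # Zero residuals at or below the JND threshold (the distortion is
    # assumed invisible), otherwise shrink the residual magnitude by the
    # threshold, preserving its sign.
    residual = np.asarray(residual, dtype=float)
    return np.where(np.abs(residual) <= jnd, 0.0,
                    residual - np.sign(residual) * jnd)
```

For example, with a threshold of 1.0, residuals [2.0, -5.0, 0.5, -1.0] become [1.0, -4.0, 0.0, 0.0].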
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96221A (2015) https://doi.org/10.1117/12.2193363
Search for Terrestrial Exo-Planet (STEP) [1], originally proposed in 2013 by the National Space Science Center, Chinese Academy of Sciences, is currently in its background engineering study phase in China. STEP is a space astrometry telescope operating at visible wavelengths, aimed at detecting nearby terrestrial planets through micro-arcsecond-level astrometry. Determining the separation between star images on a detector with high precision is essential for astrometric exoplanet detection, which observes star wobbles induced by planets; the centroiding accuracy required for STEP is 1e-5 pixel. A centroiding experiment has been carried out on a metrology testbed in an open laboratory. In this paper, we present preliminary results on determining the separations between star images. Without calibration of pixel positions or intra-pixel response, we demonstrate that the standard deviation of the differential centroid is below 7.4e-3 pixel using the linear corrected photon weighted means (LCPWM) algorithm [2,3]. For comparison, photon weighted means (PWM) and Gaussian fitting are also used in the data reduction. These results pave the way for the geometric calibration and intra-pixel quantum efficiency (QE) calibration of detector arrays for micro-pixel-accuracy centroiding.
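The baseline photon weighted means (PWM) estimator is just an intensity-weighted coordinate average; the linear correction of LCPWM [2,3] is beyond this sketch. Synthetic star spots stand in for testbed data:

```python
import numpy as np

def pwm_centroid(img):
    # Photon weighted mean: intensity-weighted average of pixel
    # coordinates, the baseline centroiding estimator compared in the text.
    ys, xs = np.indices(img.shape)
    s = img.sum()
    return (ys * img).sum() / s, (xs * img).sum() / s

# Two synthetic star spots; their separation is the quantity of interest.
ys, xs = np.indices((32, 32))
star_a = np.exp(-((ys - 12.3) ** 2 + (xs - 15.7) ** 2) / (2 * 2.0 ** 2))
star_b = np.exp(-((ys - 20.1) ** 2 + (xs - 9.4) ** 2) / (2 * 2.0 ** 2))
cy_a, cx_a = pwm_centroid(star_a)
cy_b, cx_b = pwm_centroid(star_b)
```

On noiseless, well-sampled spots this recovers sub-pixel positions; the paper's sub-1e-2-pixel differential results additionally require the corrections and calibrations it describes.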
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96221B (2015) https://doi.org/10.1117/12.2193416
Automatic generation of a seamline along the skeleton of the overlap region is a key problem in mosaicking remote sensing (RS) images. As RS image resolution improves, rapid and accurate processing must be ensured under complex conditions. We therefore introduce an automated seamline detection method for RS image mosaicking based on image objects and overlap-region contour contraction, which ensures both generality and efficiency. Experiments show that the method selects seamlines quickly and accurately over arbitrary overlap regions, enabling rapid RS image mosaicking in surveying and mapping production.
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96221C (2015) https://doi.org/10.1117/12.2193512
To avoid the “stair-casing effect” in disparity maps over slanted planes, curved surfaces, and weakly textured regions, an improved fast dense stereo matching algorithm based on disparity plane estimation is proposed. First, a set of support points is extracted from the edges of the original matching images and the description images. Second, Delaunay-triangulated disparity planes are calculated from all the support points. Third, the sub-pixel disparity map is computed from the best support points and the parameters of those disparity planes. Experimental results show that the stair-casing effect caused by slanted planes, curved surfaces, and weak texture is eliminated by the presented method, which moreover takes less than 600 ms on a one-megapixel image on average.
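Each Delaunay triangle yields a disparity plane d = a·u + b·v + c through its three support points, which can then be sampled at sub-pixel positions. A minimal sketch of that piece (not the full matcher):

```python
import numpy as np

def fit_disparity_plane(p1, p2, p3):
    # Fit d = a*u + b*v + c through three support points (u, v, d),
    # i.e. one triangle of the Delaunay mesh mentioned above.
    M = np.array([[p1[0], p1[1], 1.0],
                  [p2[0], p2[1], 1.0],
                  [p3[0], p3[1], 1.0]])
    d = np.array([p1[2], p2[2], p3[2]])
    return np.linalg.solve(M, d)       # plane parameters (a, b, c)

def plane_disparity(params, u, v):
    # Evaluate the plane at a (possibly sub-pixel) image position.
    a, b, c = params
    return a * u + b * v + c

params = fit_disparity_plane((0, 0, 1.0), (10, 0, 2.0), (0, 10, 3.0))
```

Interpolating disparities from planes rather than integer levels is exactly what removes the stair-casing on slanted surfaces.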
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96221D (2015) https://doi.org/10.1117/12.2193528
To obtain an accurate integral point spread function (PSF) for an extended depth of field (EDOF) microscope based on a liquid tunable lens and the volumetric sampling (VS) method, a method based on statistics and inverse filtering, using a quantum-dot fluorescent nanosphere as a point source, is proposed in this paper. First, sets of raw quantum-dot images were captured with the focal length of the liquid lens held fixed and, separately, swept over the exposure time. Second, each set of raw images was summed and averaged to obtain two noise-free mean images. Third, the integral PSF was obtained as the inverse Fourier transform of the ratio of the two mean images' Fourier transforms, the fixed-focus transform divided by the focus-swept transform. Finally, experimental results show that images restored with the measured integral PSF have good quality and no artifacts.
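The Fourier-domain division of the third step can be sketched as regularized inverse filtering. This toy model treats the focus-swept mean image as the fixed-focus image blurred by the sought PSF; the direction of the quotient depends on that modeling choice, and the regularization constant is our own addition:

```python
import numpy as np

def estimate_psf(fixed_img, swept_img, reg=1e-9):
    # Regularized Fourier-domain division: recover the kernel H such
    # that swept = H (*) fixed (circular convolution). Swap the roles of
    # the two images for the opposite modeling choice.
    Ff = np.fft.fft2(fixed_img)
    Fs = np.fft.fft2(swept_img)
    H = Fs * np.conj(Ff) / (np.abs(Ff) ** 2 + reg)
    return np.real(np.fft.ifft2(H))

# Synthetic check: a known blur kernel should be recovered.
ys, xs = np.indices((32, 32))
fixed = np.exp(-((ys - 16) ** 2 + (xs - 16) ** 2) / (2 * 1.0 ** 2))
psf_true = np.exp(-((ys - 16) ** 2 + (xs - 16) ** 2) / (2 * 1.5 ** 2))
psf_true /= psf_true.sum()
swept = np.real(np.fft.ifft2(np.fft.fft2(fixed) * np.fft.fft2(psf_true)))
psf_est = estimate_psf(fixed, swept)
```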
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96221E (2015) https://doi.org/10.1117/12.2193561
Considering the influence of speckle noise and wavefront aberration on image quality in active imaging based on spatial heterodyne detection, a wavefront correction method based on metric optimization over multiple images is proposed, in which the multiple images are generated by an aperture-dividing technique. An experimental setup is established, and self-aberration correction experiments using the stochastic parallel gradient descent (SPGD) algorithm and an image sharpness function are performed. The results show that averaging multiple images effectively improves the signal-to-noise ratio of the target image, and that a higher-quality target image is achieved after correction by optimizing a sharpness metric computed from the averaged multi-image data.
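The SPGD loop used for the correction can be sketched on a toy metric; the gain, the perturbation size, and the quadratic "sharpness" stand-in are our assumptions:

```python
import numpy as np

def spgd_maximize(u0, metric, gain=1.0, perturb=0.1, n_iter=500, seed=0):
    # Stochastic parallel gradient descent: perturb every control channel
    # at once with random +/- steps, measure the metric change, and move
    # along the perturbation scaled by that change (ascent on the metric).
    rng = np.random.default_rng(seed)
    u = np.array(u0, dtype=float)
    for _ in range(n_iter):
        delta = perturb * rng.choice([-1.0, 1.0], size=u.size)
        dJ = metric(u + delta) - metric(u - delta)
        u += gain * dJ * delta
    return u

# Toy "sharpness" metric peaking at a known aberration setting.
target = np.array([0.5, -0.3, 0.8, 0.1, -0.6])
sharpness = lambda u: -np.sum((u - target) ** 2)
u_final = spgd_maximize(np.zeros(5), sharpness)
```

In the experiment the metric would be the sharpness of the averaged multi-image data rather than this synthetic quadratic.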
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96221F (2015) https://doi.org/10.1117/12.2195478
To estimate scaling parameters larger than 1.3, a robust remote sensing image registration algorithm is proposed. It decomposes corner feature windows into sequences of circular feature curves. Exploiting scaling invariance, corresponding points are obtained by computing the correlation between curves. Finally, linear least squares is used to estimate the transformation parameters. Experiments show that the corresponding-point error is below 5%, and the parameter errors are below 0.5 pixel, 0.1 degree, and 0.1% in scale. In particular, the algorithm performs well when the scale factor exceeds 1.3.
Proceedings Volume 2015 International Conference on Optical Instruments and Technology: Optoelectronic Imaging and Processing Technology, 96221G (2015) https://doi.org/10.1117/12.2195799
Adaptive optics, together with subsequent post-processing, markedly improves the resolution of turbulence-degraded images in ground-based detection and identification of space objects. For lack of a widely agreed-upon objective quality metric, frame selection and iteration stopping in post-processing have usually relied on subjective viewing of the images. Full-reference metrics are not applicable to field data, and no-reference metrics tend to show poor sensitivity for adaptive optics images. In the present work, based on the Laplacian of Gaussian (LoG) local contrast feature, a nonlinear normalization is applied to transform the input image into a normalized LoG domain; a quantitative index is then extracted in this domain to assess perceptual image quality. Experiments show that this no-reference quality index is highly consistent with subjective evaluation of input images across different blur degrees and iteration numbers.
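A simplified version of such a LoG-based index: smooth, apply a discrete Laplacian, and average the absolute response. The paper's nonlinear normalization is more elaborate; this sketch only shows the ordering sharp > blurred that any such index must respect.

```python
import numpy as np

def log_contrast(img, sigma=1.0):
    # LoG local contrast: Gaussian smoothing (separable 1-D convolutions)
    # followed by a discrete 4-neighbour Laplacian.
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    g /= g.sum()
    sm = np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'), 0, img)
    sm = np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'), 1, sm)
    return (np.roll(sm, 1, 0) + np.roll(sm, -1, 0) +
            np.roll(sm, 1, 1) + np.roll(sm, -1, 1) - 4.0 * sm)

def sharpness_index(img, sigma=1.0):
    # A simplified no-reference index: mean absolute LoG response.
    return np.abs(log_contrast(img, sigma)).mean()

# Sharper images should score higher than their blurred versions.
ys, xs = np.indices((64, 64))
sharp = ((ys // 4 + xs // 4) % 2).astype(float)   # checkerboard pattern
blur = sum(np.roll(np.roll(sharp, dy, 0), dx, 1)
           for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
```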