Smartphones are now in widespread use, and most are equipped with a camera and a variety of sensors, such as a gyroscope, an accelerometer and a magnetometer, which makes smartphone-based indoor navigation both practical and valuable. Exploiting these features, a new indoor integrated navigation method is proposed that uses the smartphone's MEMS (Micro-Electro-Mechanical Systems) IMU (Inertial Measurement Unit), camera and magnetometer. The proposed navigation method mainly involves data acquisition, camera calibration, image measurement, IMU calibration, initial alignment, strapdown integration, zero-velocity update and integrated navigation. Synchronous acquisition of the sensor data (gyroscope, accelerometer and magnetometer) and the camera data is the foundation of indoor navigation on a smartphone. A camera data acquisition method is introduced that uses the Android Camera class to record images and their timestamps. Two sensor data acquisition methods are introduced and compared. The first records sensor data and timestamps with the Android SensorManager. The second implements the open, close, data-receiving and saving functions in C and calls the sensor functions from Java through the JNI interface. A data acquisition application was developed with the JDK (Java Development Kit), Android ADT (Android Development Tools) and the NDK (Native Development Kit); it records camera data, sensor data and timestamps simultaneously. Data acquisition experiments were carried out with the developed software on a Samsung Note 2 smartphone. The experimental results show that the first sensor acquisition method is convenient but occasionally loses sensor data, whereas the second offers much better real-time performance and far less data loss. A checkerboard image was recorded, and the corner points of the checkerboard were detected with the Harris method.
About 30 minutes of gyroscope, accelerometer and magnetometer data were recorded, and the bias stability and noise characteristics of the sensors were analyzed. Besides indoor integrated navigation, the proposed integrated navigation and synchronous data acquisition methods can also be applied to outdoor navigation.
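The Harris corner detection step mentioned above can be sketched as follows. This is a minimal NumPy illustration of the Harris response on a synthetic checkerboard patch, not the smartphone implementation; the window size and the constant k = 0.04 are conventional choices, not values from the paper.

```python
import numpy as np

def harris_response(img, k=0.04, win=3):
    """Harris corner measure R = det(M) - k * trace(M)^2 per pixel."""
    # image gradients via finite differences
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # sum over a (2*win+1) x (2*win+1) neighborhood
        out = np.zeros_like(a)
        for dy in range(-win, win + 1):
            for dx in range(-win, win + 1):
                out += np.roll(np.roll(a, dy, 0), dx, 1)
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# synthetic 2x2 checkerboard with its interior corner near (15.5, 15.5)
img = np.zeros((32, 32))
img[:16, :16] = 1.0
img[16:, 16:] = 1.0
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)   # lands at the corner region
```

Along a pure edge one of the gradient sums vanishes, so R is negative; only where both gradient directions are present inside the window (the checkerboard corner) does R peak.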
Based on an analysis of the characteristics of infrared dim small targets under complex cloud backgrounds, a detection algorithm for such targets is proposed. Firstly, a median filter is employed to remove noise effectively. Secondly, according to the characteristics of the cloud background and the targets, a frequency-domain processing method based on the Fourier transform and second-order Butterworth high-pass filtering is proposed. The cutoff frequency of the filter is decided adaptively by establishing a relationship between the background complexity and the cutoff frequency. Lastly, target detection is achieved by threshold segmentation. Experiments show that the algorithm can effectively detect infrared dim small targets under complex cloud backgrounds.
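The frequency-domain step can be sketched numerically. The example below applies a second-order Butterworth high-pass filter in the Fourier domain to a synthetic frame with a smooth periodic background and one dim single-pixel target; the cutoff is fixed here, whereas the paper chooses it adaptively from the background complexity.

```python
import numpy as np

def butterworth_highpass(img, cutoff, order=2):
    """Apply a second-order Butterworth high-pass filter in the Fourier domain."""
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    y, x = np.ogrid[:h, :w]
    d = np.hypot(y - h / 2, x - w / 2)                   # distance from DC
    H = 1.0 / (1.0 + (cutoff / np.maximum(d, 1e-9)) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))

# synthetic frame: slowly varying (low-frequency) background
# plus one dim single-pixel target at (32, 40)
x = np.arange(64)
frame = 0.5 + 0.3 * np.cos(2 * np.pi * x / 64)[None, :] * np.ones((64, 1))
frame[32, 40] += 0.5
filtered = butterworth_highpass(frame, cutoff=8.0)

# threshold segmentation on the filtered frame
mask = filtered > 0.5 * filtered.max()
peak = np.unravel_index(np.argmax(filtered), filtered.shape)
```

The low-frequency background (including the DC level) is suppressed, while the target, whose energy is spread across all frequencies, survives the filter and is isolated by the threshold.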
We present an optical method to measure three-dimensional (3D) ship deformations. The method is based on optical collimation theory: with a crosshair image projected and captured in a collimation optical path, the 3D deformation angles, including the pitching, yawing and rolling angles, can be calculated from the image variation. To improve the measurement precision, a sub-pixel location algorithm is adopted in the image processing. In particular, since the rolling angle is the most difficult to measure in a collimation optical path, a numerical simulation is carried out to analyze the error characteristics of this angular measurement. Experimental results indicate that the 3D ship deformation measurement achieves a precision of several arcseconds at a distance of 25 m over a deformation range of -120″ to +120″.
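The conversion from crosshair image displacement to a deformation angle follows directly from the collimation geometry. The sketch below uses a hypothetical focal length and pixel pitch, not the actual system parameters:

```python
import math

# hypothetical system parameters (assumed, not the paper's values)
focal_length = 0.5       # collimating lens focal length, m
pixel_pitch = 5e-6       # detector pixel size, m

def shift_to_arcsec(shift_px):
    """Angular deformation implied by a (sub-pixel) crosshair image shift."""
    angle_rad = math.atan(shift_px * pixel_pitch / focal_length)
    return math.degrees(angle_rad) * 3600

# a 0.1-pixel shift corresponds to roughly 0.2 arcsec with these parameters,
# which is why sub-pixel location is needed for arcsecond-level precision
a = shift_to_arcsec(0.1)
```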
Star trackers are beyond dispute the most accurate absolute attitude determination sensors, and they are widely applied in spacecraft, satellites, rockets, etc. A high-precision autonomous star tracker has an accuracy better than one arcsecond, generally at the cost of a low update rate of less than 10 Hz. Typically, an autonomous star tracker consists of two physically independent components: the optical head and the associated processing electronics. High-accuracy attitude is obtained through their cooperation. The basic principles of star navigation and the components of a star tracker are introduced. Star trackers used to have a big body size, heavy mass, high power consumption and a complicated structure, yet low accuracy. State-of-the-art developments will decrease the power consumption and mass of autonomous star trackers significantly while increasing the update rate and improving the dynamic accuracy and system robustness. The advances of different generations of star trackers are reviewed here. The accuracy performance of a star tracker depends on the sensitivity of the image sensor to starlight, the star detection threshold, the field of view (FOV), the number of stars in the FOV, the accuracy of the star centroid, the dynamic maneuvering, the calibration, etc. Star centroiding is a key procedure and contributes greatly to the final performance of a star tracker. Accuracy degradation occurs when the carrier of the star tracker is in a state of high dynamic maneuvering. Hardware design and algorithmic remedies have to be adopted to reduce the degradation effects. A detailed discussion of accuracy performance is presented.
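Star centroiding, named above as a key procedure, is commonly an intensity-weighted mean over the star spot, which yields sub-pixel precision from a slightly defocused image. The sketch below shows the generic weighted-centroid method on a synthetic Gaussian spot; it is not any specific tracker's algorithm.

```python
import numpy as np

def star_centroid(window):
    """Intensity-weighted centroid of a star spot, sub-pixel (row, col)."""
    w = np.asarray(window, dtype=float)
    ys, xs = np.mgrid[:w.shape[0], :w.shape[1]]
    total = w.sum()
    return (ys * w).sum() / total, (xs * w).sum() / total

# synthetic defocused star: Gaussian spot centered at (4.3, 5.7)
ys, xs = np.mgrid[:9, :11]
spot = np.exp(-((ys - 4.3) ** 2 + (xs - 5.7) ** 2) / (2 * 1.2 ** 2))
cy, cx = star_centroid(spot)   # recovers the sub-pixel center
```

In practice the window is thresholded first so that background and readout noise do not bias the weighted mean.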
In order to enhance the imaging speed of 3D imaging lidar (light detection and ranging) and implement high-speed 3D imaging under static conditions, we propose a new 3D imaging lidar based on a laser diode and a high-speed 2D laser scanner. The proposed 3D imaging lidar is mainly composed of a transmitter, a laser scanner, a receiver and a processor. This paper first introduces the components and principle of the proposed 3D imaging lidar. Experiments have then been carried out to evaluate the performance of the 3D imaging lidar in terms of scanning field, measuring precision, scanning speed, image resolution, etc. The results show that the scanning field of the 3D imaging lidar is about 26°×12°, the measuring precision is better than 5 cm (at a distance of 4 m), the scanning speed is greater than 30 fps (frames per second) and the image resolution can reach 16×101. In addition, the 3D imaging lidar can obtain both a 3D image and an intensity image of the given target at the same time.
In order to enhance the imaging speed of 3D imaging lidar (light detection and ranging) and implement high-speed 3D imaging under static conditions, we propose a novel high-speed 2D laser scanner with an asymmetric 16-plane rotating mirror. Firstly, this paper analyzes the principles and characteristics of the laser scanners commonly used in 3D imaging lidars, mainly including the symmetric rotating mirror scanner, the vibrating mirror scanner, the oval-line scanner and the double optical wedge scanner. We then propose an asymmetric 16-plane rotating mirror with a novel structure, which can carry out faster 2D scanning with only one rotating mirror. The scanning principle and main structure of the rotating mirror are introduced in detail. Based on the proposed asymmetric rotating mirror, a new high-speed laser scanner for the 3D imaging lidar is implemented with several advantages: high scanning speed, large scanning field and high reflectivity. Finally, a laser scanning experiment has been carried out with the proposed scanner. The experimental results show that the scanning speed is above 30 frames per second, the scanning field is about 32°×12°, the vertical resolution of each frame is 16, and the laser reflectivity is above 0.9. The proposed laser scanner can be applied to ground-based, vehicle-mounted and airborne 3D imaging lidars.
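The basic geometry of such a rotating polygon scanner can be sketched with first-order relations, assuming one frame per mirror revolution and using the fact that reflection doubles the mirror rotation angle; the numbers are illustrative, not the paper's design values.

```python
facets = 16          # planes of the asymmetric rotating mirror
frame_rate = 30      # target frames per second

# one revolution sweeps all 16 facets, i.e. one full frame of 16 scan lines
motor_speed_rpm = frame_rate * 60        # 1800 rpm

# each facet subtends 360/16 = 22.5 deg of mirror rotation; reflection
# doubles the beam deflection, so the maximum optical scan per facet is 45 deg
facet_angle = 360.0 / facets
max_optical_scan = 2.0 * facet_angle
```

The usable horizontal field is somewhat smaller than this 45° geometric maximum because of beam-footprint and duty-cycle losses, which is consistent with the roughly 32° scanning field reported above.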
In order to validate the detection precision of a three-dimensional Optical Deformation Measurement System (3D-OMS), a theodolite-based method for calibrating the alignment between the auxiliary coordinate frame and the optical coordinate frame is proposed. Specifically, after the auxiliary mirrors are installed, the installation accuracy is measured, and the influence of the axis errors of the theodolite is analyzed under the practical conditions of our experiment. Furthermore, the influence on the validation precision of the 3D-OMS caused by misalignment between the auxiliary and optical coordinate frames is analyzed. According to our theoretical analysis and experimental results, the validation precision of the 3D-OMS can reach 1″ provided that the coordinate alignment accuracy is no worse than 10′ and the measuring range of the 3D-OMS is within ±3′. Therefore, the proposed method meets our high-accuracy requirement while remaining insensitive to the installation error of the auxiliary mirrors. The method is also applicable to other similar work.
In aerospace and guided-missile launching, an azimuth reference often needs to be transferred from one place to another. First, the principles of several typical optical azimuth transmission methods are presented, including the theodolite (and gyro-theodolite) collimation method, the camera series method, the optical apparatus for azimuth method and the polarization-modulated light transmission method. The essential theories of these typical azimuth transmission methods are elaborated, and then their devices, application fields and limitations are presented. The theodolite (and gyro-theodolite) collimation method is used in the ground assembly of spacecraft. The camera series method and the optical apparatus for azimuth method are used for azimuth transmission between different decks of a ship. The polarization-modulated light transmission method is used for azimuth transmission in rockets and guided missiles. Finally, further developments of these methods are discussed.
In order to enhance the measurement precision and detection range of the 3D imaging lidar (light detection and ranging), we propose a new broadband low-noise detection circuit for the pulsed laser, which mainly includes a high-speed APD (Avalanche Photodiode) detector and a broadband low-noise transimpedance amplifier. In the detection circuit, a high negative bias voltage is applied to the APD detector and, with a proper bias method, is used to set the static input current of the NE5210 amplifier to 200 μA. This bias method approximately doubles the allowable input current range of the NE5210. This paper introduces the main framework and performance of the detection circuit; the output noise voltage, output signal voltage and voltage SNR (signal-to-noise ratio) of the circuit are analyzed and calculated as well. Experiments carried out with the proposed detection circuit show that it can detect a narrow laser pulse of about 4 ns width. Based on our experiments and analyses, the pass band of the detection circuit ranges approximately from 0.56 MHz to 200 MHz, the allowable input current of the NE5210 varies from -460 μA to 0, and the effective output differential voltage ranges from -1.6 V to 1.4 V. The proposed detection circuit has been implemented and tested in a high-speed 3D imaging lidar. Besides 3D imaging lidars, the detection circuit can be applied to pulsed laser range finders and other pulsed laser detection systems.
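The signal chain of such a front end can be illustrated with a back-of-the-envelope calculation. All numbers below are hypothetical assumptions for the sketch (optical power, responsivity, avalanche gain and transimpedance), not the circuit's measured values.

```python
# hypothetical operating point (all values assumed)
optical_power = 1e-6      # received peak laser power, W
responsivity = 0.5        # APD unity-gain responsivity, A/W
avalanche_gain = 100      # APD multiplication factor
transimpedance = 28e3     # amplifier transimpedance, ohm

# photocurrent after avalanche multiplication
photocurrent = avalanche_gain * responsivity * optical_power   # 50 uA

# voltage at the transimpedance amplifier output
output_voltage = photocurrent * transimpedance                 # 1.4 V
```

The calculation shows why the allowable input current range matters: the avalanche gain multiplies even modest photocurrents up to the level where the amplifier's input range, not the detector, limits the strongest detectable echo.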
In order to avoid the shortcomings of the passive gain control method in the 3D imaging lidar (light detection and ranging), we propose a new gain control method, which adjusts the gain of the amplifying circuit according to the target distance. The method follows the principle that the laser echo amplitude is inversely proportional to the square of the target distance. In addition, to reduce the complexity of the gain control module, we propose a simple implementation based on the charging process of a capacitor. Firstly, the theoretical waveform of the proposed gain control method and the gain control error are analyzed and simulated. The results indicate that when the gain ranges from 1 to 100, the maximum gain error is less than 28% over the whole target distance range, and the gain error is less than 5% over most of that range. Based on this method, a new gain control and amplifying circuit has been developed, mainly composed of an amplifying module and a gain control module. The gain control module generates a gain control voltage and applies it to the gain control pin of the amplifying module. Finally, experiments have been carried out to verify the functions and performance of the entire circuit. The experimental results show that the output signal amplitude remains roughly constant as the target distance changes. The pass band of the circuit ranges from 0.33 MHz to 150 MHz, and the maximum gain is 316.
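The underlying principle, that a gain proportional to the square of the target distance holds the echo amplitude constant, can be sketched numerically; the actual circuit approximates this law with a capacitor charging curve, which is not reproduced here.

```python
import numpy as np

r = np.linspace(2.0, 20.0, 100)   # target distance, arbitrary units
echo = 1.0 / r ** 2               # echo amplitude ~ 1/r^2 for a fixed target

gain = r ** 2                     # ideal distance-dependent gain ~ r^2
gain *= 100.0 / gain.max()        # scale so the maximum gain is 100

out = echo * gain                 # amplified echo: constant over distance
```

Because the gain law is the exact inverse of the echo attenuation, the product is flat; the 28% and 5% error figures above quantify how closely the simple RC charging curve tracks this ideal r² law.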
Proc. SPIE 8538, Earth Resources and Environmental Remote Sensing/GIS Applications III
KEYWORDS: Clocks, LIDAR, Laser range finders, Field programmable gate arrays, Time metrology, Precision measurement, Picosecond phenomena, Analog electronics, Defense technologies, Temperature metrology
In order to reduce the negative influence caused by the temperature and voltage variations of the FPGA (Field Programmable Gate Array), we propose a new FPGA-based time-to-digital converter. The proposed converter adopts a high-stability TCXO (Temperature Compensated Crystal Oscillator), an FPGA and a new algorithm, which together significantly decrease the negative influence of the FPGA temperature and voltage variations. This paper introduces the measurement principle, main framework, delay-chain structure and delay variation compensation method of the proposed converter, and analyzes its measurement precision and maximum measurement frequency. The proposed converter has been successfully implemented with a Cyclone I FPGA chip and a TCXO, and the implementation method is discussed in detail. The measurement precision of the converter has also been validated by experiments. The results show that the mean measurement error is less than 260 ps, the standard deviation is less than 300 ps, and the maximum measurement frequency is above 10 million measurements per second. The measurement precision and frequency of the proposed converter are adequate for 3D imaging lidar (light detection and ranging). Besides 3D imaging lidar, the converter can be applied to pulsed laser range finders and other time interval measurement applications.
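The coarse-fine measurement principle, and the range precision a given timing precision implies for lidar, can be sketched as follows. The clock period and the interpolation convention here are assumptions for illustration, not the implemented design.

```python
C = 299792458.0   # speed of light, m/s

def measured_interval(n_clk, fine_start, fine_stop, t_clk=20e-9):
    """Coarse-fine time interval: whole reference clock periods counted in
    the FPGA, plus delay-line fractions at the start and stop edges
    (Nutt interpolation convention; t_clk is an assumed 50 MHz period)."""
    return n_clk * t_clk + fine_start - fine_stop

# a 300 ps timing deviation bounds the single-shot range deviation
# of a time-of-flight measurement at r = c * t / 2:
sigma_t = 300e-12
sigma_r = C * sigma_t / 2     # about 4.5 cm
```

This 4.5 cm figure shows why sub-nanosecond timing is the requirement behind centimeter-level lidar ranging.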
In order to enhance the time discrimination precision in the 3D imaging lidar, we propose a new time discrimination circuit, which improves both the delayer and the attenuator of the previous CFD (Constant Fraction Discriminator) circuit. The proposed circuit mainly includes a delayer, a low-pass filter and a comparator. The delayer is implemented with a series of inductors and capacitors, which offers several advantages: low signal distortion, small volume, easy adjustment, etc. The low-pass filter attenuates the signal amplitude and broadens the signal width; it also reduces the noise by decreasing the equivalent noise bandwidth and increases the signal slope at the discrimination time. Therefore, the time discrimination error is reduced significantly. This paper introduces the proposed circuit in detail, carries out a theoretical analysis of the noise and time discrimination error in the proposed circuit, and compares them with those of the previous CFD circuit. The comparison shows that the proposed circuit can reduce the time discrimination error by about 50% at the same noise level. In addition, experiments have been carried out to test the performance of the circuit. They show that the time delay of the circuit is about 14 ns, the time discrimination error is less than 150 ps when the voltage SNR ranges from 18.2 to 81.8, and less than 100 ps when the signal amplitude ranges from 0.2 V to 1.86 V. The measured time discrimination error agrees well with the theoretical calculation.
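The amplitude-independence of constant fraction discrimination can be demonstrated in simulation. The model below is the generic CFD (delayed signal minus attenuated signal, zero-crossing timing), not the proposed LC-delayer circuit itself; the pulse shape, delay and fraction are illustrative.

```python
import numpy as np

def cfd_crossing(t, s, delay, frac):
    """Zero-crossing time of the delayed-minus-attenuated CFD signal."""
    n = int(round(delay / (t[1] - t[0])))
    d = np.zeros_like(s)
    d[n:] = s[:-n]                    # delayed copy of the pulse
    b = d - frac * s                  # bipolar timing signal
    i = np.where((b[:-1] < 0) & (b[1:] >= 0))[0][0]
    # linear interpolation to sub-sample precision
    return t[i] + (t[i + 1] - t[i]) * (-b[i]) / (b[i + 1] - b[i])

t = np.arange(0.0, 40.0, 0.01)        # time axis, ns

def pulse(amplitude):
    return amplitude * np.exp(-((t - 15.0) / 2.0) ** 2)

# the amplitudes span the 0.2 V to 1.86 V range quoted above
t_small = cfd_crossing(t, pulse(0.2), delay=2.0, frac=0.5)
t_large = cfd_crossing(t, pulse(1.86), delay=2.0, frac=0.5)
```

Both pulses cross zero at the same instant, because scaling the amplitude scales the whole bipolar signal without moving its zero crossing; this is exactly the walk-error immunity a CFD provides over a fixed-threshold comparator.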
Lidar and CCD cameras have an excellent ability to capture the 3D and color information of objects, and they are widely used for 3D modeling. The effective fusion of 3D lidar images and CCD camera color images can give better results. The major problem in fusing lidar data and CCD camera data is the coordinate calibration between them. In consideration of the traits of the lidar and the CCD camera, a special 3D calibration object was designed and an improved coordinate calibration method was proposed, which fits a plane using principal components analysis and greatly improves the calibration precision. After the lidar and CCD camera have been calibrated, the data they capture are transferred to the fusion computer via USB and network. Data processing and display are performed by fusion software written in C++ and OpenGL. Experimental results show that our real-time image fusion system gives good results in the 3D reconstruction of objects, and the imaging rate of the system can reach 5 frames per second.
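Plane fitting with principal components analysis, as used in the calibration above, takes the plane normal as the direction of least variance, i.e. the eigenvector of the point covariance with the smallest eigenvalue (a total-least-squares fit). A minimal sketch on synthetic noisy points, not the authors' code:

```python
import numpy as np

def fit_plane_pca(points):
    """Fit a plane to 3-D points; returns (centroid, unit normal)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)           # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    normal = eigvecs[:, 0]                     # least-variance direction
    return centroid, normal

# noisy samples of the plane z = 0
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 200),
                       rng.uniform(-1, 1, 200),
                       rng.normal(0, 0.01, 200)])
c, n = fit_plane_pca(pts)   # n is close to (0, 0, +/-1)
```

Unlike a least-squares fit of z against x and y, this formulation treats all three coordinates symmetrically, which suits lidar noise that is not confined to one axis.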
The steam flow in a low-pressure turbine contains abundant water droplets, which decrease the working efficiency and pose a potential threat to operation safety, so the measurement of steam wetness has attracted great interest in the electricity generation industry. In this paper, a new measuring method using a CCD (Charge Coupled Device) imaging technique is proposed to determine the wetness in a steam turbine, based on forward small-angle light scattering theory. A simulated steam turbine facility was designed to generate wet steam, and light scattering experiments were carried out in this device under various working conditions. The steam wetness parameters and droplet size distribution were obtained by numerical inversion of the light intensity distribution based on Mie scattering theory. The results demonstrate that the data obtained from the present analysis are in good agreement with the theoretical analysis and previous studies, and the proposed method proves suitable for steam wetness measurement and monitoring.
Hyperspectral remote sensing is not only an important technique for observing global ecosystems and vegetation cover change, but also a main aspect of precision agriculture research. In order to monitor the crop nutrient supply and realize precision fertilization, the spectral red edge parameters of winter wheat were studied. Experiments were carried out over eight years, beginning in 1997, under four nitrogen supply levels (0, 100, 200 and 300 kg N ha-1) at Luancheng Station, Hebei Province. Canopy reflectance spectra were measured with an ASD HandHeld spectroradiometer (325-1075 nm) during 2002 and 2004. The dynamics of the red edge parameters over the physiological stages of the winter wheat canopy were calculated from the first-derivative curve. The analyses revealed that the red edge of the wheat canopy reflectance spectrum lies between 720 and 740 nm. All treatments showed a distinct red shift, but greater nitrogen stress corresponded to a shorter red edge wavelength, and the red edge position shifted back toward the blue after the booting stage. The red edge swing is the value of the first-derivative spectrum at the red edge position, and the double-peak shape of the red edge swing showed that the booting stage is the best stage for detecting nitrogen deficiency. The red edge swing correlates with the relative chlorophyll content and leaf nitrogen content. The area of the red edge peak is the accumulated value of the first-derivative spectrum between 680 and 750 nm. These parameters can be used to estimate LAI and nitrogen accumulation, and the results provide information needed for the development of variable-rate nitrogen application technology.
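The red edge parameters named above can be computed directly from a first-derivative spectrum. The sketch below uses a synthetic logistic-shaped reflectance curve; the shape and numbers are illustrative, not measured wheat data.

```python
import numpy as np

wl = np.arange(680.0, 751.0)   # wavelength, nm, 1 nm step
# synthetic canopy reflectance: rises from ~5% to ~45% across the red edge
refl = 0.05 + 0.40 / (1.0 + np.exp(-(wl - 730.0) / 5.0))

deriv = np.gradient(refl, wl)  # first-derivative spectrum

# red edge position: wavelength of maximum slope
red_edge_position = wl[np.argmax(deriv)]
# red edge swing: first-derivative value at the red edge position
red_edge_swing = deriv.max()
# red edge peak area: accumulated first derivative over 680-750 nm
red_edge_area = np.sum((deriv[:-1] + deriv[1:]) / 2.0)   # trapezoid rule
```

Note that the peak area of the derivative equals the total reflectance rise across the red edge, which is why it tracks canopy variables such as LAI that deepen the red edge step.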
A new extremely fast, high-power laser diode driver module is introduced, which is based on a discharging capacitor and a fast high-power MOSFET. The main factors that determine the performance of this kind of laser diode driver module are analyzed theoretically. The overall performance of the module is simulated in detail with SPICE models, and the test results of the fabricated module are described. The main methods for changing the output peak power and the current pulse width of the module are presented. The output peak power of the module is very high, reaching 50 W, and the output current pulse width is very short, less than 8 ns. The laser diode driver module can directly drive many kinds of commercially available pulsed laser diodes and can be used as a high-performance transmitter driver for time-of-flight ladar and laser range finders.
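First-order relations for a capacitor-discharge driver can be sketched as follows; the component values are hypothetical, not those of the module described above.

```python
# hypothetical component values (all assumed)
C_store = 10e-9     # storage capacitor, F
V_charge = 80.0     # capacitor charge voltage, V
R_loop = 1.0        # total discharge loop resistance, ohm (MOSFET + diode + wiring)

stored_energy = 0.5 * C_store * V_charge ** 2   # 32 uJ available per pulse
peak_current = V_charge / R_loop                # 80 A upper bound (ignores inductance)
time_constant = R_loop * C_store                # 10 ns discharge time scale
```

These relations show the two knobs the abstract mentions: the charge voltage sets the peak current (and hence peak power), while the capacitance and loop impedance set the pulse width; loop inductance, ignored in this sketch, further limits the rise time in a real layout.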