Polarization imaging plays an important role in many fields, especially skylight navigation and target identification, where the imaging system is usually required to offer high resolution, a broad spectral band, and a single-lens structure. This paper describes such an imaging system based on a light field 2.0 camera structure, which can calculate the polarization state and the depth (distance from a reference plane) of every object point within a single shot. The structure, consisting of a modified main lens, a multi-quadrant polarizer, a honeycomb-like microlens array, and a high-resolution CCD, is equivalent to an array of “eyes”, with three or more polarizing “glasses” in front of each “eye”. Depth can therefore be calculated by matching the relative offsets of corresponding patches in neighboring “eyes”, while the polarization state follows from their relative intensity differences, and the two resolutions are approximately equal to each other. An application to navigation under a clear sky shows that this method has high accuracy and strong robustness.
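The recovery of the polarization state from relative intensities behind differently oriented analyzers admits a simple closed form. The following minimal sketch (an illustration, not the authors' implementation) assumes four analyzer orientations of 0°, 45°, 90°, and 135° and computes the per-pixel linear Stokes parameters; with only three orientations, e.g. 0°, 60°, 120°, an analogous closed form exists.

import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Per-pixel linear Stokes parameters from four analyzer images.
    The four orientations (0/45/90/135 deg) are an assumption for this sketch."""
    i0, i45, i90, i135 = (np.asarray(a, dtype=np.float64)
                          for a in (i0, i45, i90, i135))
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # +45 deg vs. -45 deg component
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)  # degree of linear polarization
    aop = 0.5 * np.arctan2(s2, s1)       # angle of polarization (radians)
    return s0, s1, s2, dolp, aop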
Lucky imaging technology is widely applied in astronomical imaging systems because of its low cost and good performance. However, the probability of capturing an excellent lucky image is low, especially for a large-aperture telescope. This paper therefore proposes an adaptive image partition method that extracts the lucky parts of each image, increasing the probability of obtaining a lucky image. The system comprises a telescope and three cameras running synchronously at the image plane, the front defocus plane, and the back defocus plane; the two defocused cameras have the same defocus distance. Our algorithm selects the parts of the space-object picture that are least affected by atmospheric turbulence, based on the difference between the pictures obtained by the front and back defocus cameras. Image stitching is then used to assemble the entire sharp picture.
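To illustrate the selection criterion, here is a minimal sketch in which the adaptive partition is simplified to fixed tiles and the front/back defocus difference is used directly as the lucky-part score; the tile size and array layout are assumptions, not values from the paper.

import numpy as np

def lucky_mosaic(front_stack, back_stack, focal_stack, tile=64):
    """For each tile position, pick the frame where the front- and
    back-defocus images differ least (a proxy for a locally undistorted
    wavefront) and stitch the corresponding focal-plane tiles.
    Stacks are assumed to have shape (n_frames, H, W)."""
    n, h, w = focal_stack.shape
    mosaic = np.zeros((h, w), dtype=focal_stack.dtype)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            fr = front_stack[:, y:y + tile, x:x + tile].astype(np.float64)
            bk = back_stack[:, y:y + tile, x:x + tile].astype(np.float64)
            # per-frame defocus-difference score for this tile position
            score = np.mean(np.abs(fr - bk), axis=(1, 2))
            best = int(np.argmin(score))  # the "luckiest" frame here
            mosaic[y:y + tile, x:x + tile] = focal_stack[best, y:y + tile, x:x + tile]
    return mosaic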
In the 3D reconstruction of a target scene with a moving vehicle-borne single-line laser scanner, monocular visual positioning is used to determine the exact position of the moving scanner. The attitude sensor, camera, and laser scanner are mounted on the same platform, which is installed on the vehicle, so data can be collected synchronously. A top view is obtained from the camera image by inverse perspective mapping (IPM) according to the measured camera attitude, and a series of top views is acquired at successive moments. After extracting Speeded-Up Robust Features (SURF), matching the feature points, and applying the RANSAC algorithm, we calculate the transformation matrix between two adjacent top views, from which the position and heading of the scanner at each moment follow. On this basis, and relying on the sensors' synchronization control algorithm, the original point cloud data acquired by the laser scanner are registered with reference to the traveling track of the moving scanner, and the 3D reconstruction of the target scene is established. Experimental results show that the method is easy to operate, time-efficient, and low-cost, and demonstrate its accuracy in reconstructing the 3D scene.
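The feature-matching step between adjacent top views can be sketched with OpenCV; this is an illustration under assumed parameters, not the authors' code (SURF requires the opencv-contrib build with the nonfree modules enabled).

import cv2
import numpy as np

def relative_motion(top_view_a, top_view_b):
    """Estimate rotation and translation between two adjacent top views
    with SURF features and a RANSAC-fitted partial affine transform.
    The Hessian and reprojection thresholds are assumptions."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_a, des_a = surf.detectAndCompute(top_view_a, None)
    kp_b, des_b = surf.detectAndCompute(top_view_b, None)
    # match descriptors and keep the best correspondences
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    src = np.float32([kp_a[m.queryIdx].pt for m in matches])
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches])
    # rotation + translation (scale ~ 1 between top views at fixed height)
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC,
                                             ransacReprojThreshold=3.0)
    theta = np.arctan2(M[1, 0], M[0, 0])  # inter-frame rotation (radians)
    tx, ty = M[0, 2], M[1, 2]             # translation in top-view pixels
    return theta, tx, ty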
Monocular visual odometry simplifies both hardware and software compared with stereo visual odometry, but it also has a defect: when the vehicle is in motion, the camera attitude inevitably changes, which degrades the method's performance. To solve this problem, we propose a monocular visual odometry based on inverse perspective mapping (IPM). The camera attitude is monitored in real time by an attitude sensor while the vehicle is moving. The road-surface images captured by the camera are then transformed into top views using the IPM algorithm, after which image features are extracted with the Speeded-Up Robust Features (SURF) algorithm. Using the random sample consensus (RANSAC) algorithm, the translation and rotation between two adjacent images are estimated, from which the travel distance and heading of the vehicle are computed. To test the ranging accuracy of the method, both static and dynamic experiments were carried out. The static experiment showed that the average ranging accuracy of the method reached 1.6%; the dynamic experiment showed a ranging accuracy of 6% and a heading measurement error of less than 1.3°. The proposed method is therefore easy to operate, time-efficient, and low-cost, and its accuracy in ranging and heading measurement is demonstrated.
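The IPM step can be sketched by intersecting pixel rays with the ground plane; the rotation matrix (from the attitude sensor), camera height, output size, and scale below are assumed inputs for illustration, not values from the paper.

import numpy as np
import cv2

def ipm_top_view(image, K, R, cam_height, out_size=(400, 600), scale=50.0):
    """Warp a road image to a top view (IPM).  K is the camera intrinsic
    matrix, R the camera-to-world rotation from the attitude sensor,
    cam_height the camera centre height in metres, and scale the
    top-view pixels per metre (all assumptions for this sketch)."""
    h_img, w_img = image.shape[:2]
    Kinv = np.linalg.inv(K)
    # four reference pixels assumed to lie on the road (below the horizon)
    src = np.float32([[0, h_img - 1], [w_img - 1, h_img - 1],
                      [w_img - 1, h_img // 2], [0, h_img // 2]])
    cam_centre = np.array([0.0, 0.0, cam_height])  # world frame: z up
    ground = []
    for u, v in src:
        ray = R @ Kinv @ np.array([u, v, 1.0])  # pixel ray in world frame
        s = -cam_centre[2] / ray[2]             # intersect ground plane z = 0
        ground.append(cam_centre + s * ray)
    # map ground metres (x forward, y left) to top-view pixel coordinates
    w_out, h_out = out_size
    dst = np.float32([[w_out / 2 - g[1] * scale, h_out - g[0] * scale]
                      for g in ground])
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, H, out_size)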