The checkerboard is a frequently used pattern in camera calibration, an essential process for obtaining the intrinsic parameters that allow accurate metric information to be extracted from images. An automatic detection method that can locate multiple checkerboards in a single image is proposed. It combines a corner extraction approach based on self-correlation with a structure recovery solution that uses constraints on adjacent corners and checkerboard block edges. The method exploits the central symmetry of checkerboard crossings, the spatial relationship of neighboring checkerboard corners, and the grayscale distribution of their neighboring pixels. Five public datasets are used to evaluate the method. Results show high detection rates and a short average runtime. In addition, camera calibration accuracy confirms the effectiveness of the proposed detection method, with reprojection errors smaller than 0.5 pixels.
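The central symmetry exploited for corner extraction can be illustrated with a minimal sketch: a checkerboard crossing looks the same after a 180-degree rotation about its center, so correlating a patch with its own rotation gives a high score at crossings and a low score elsewhere. This is a simplified illustration of the idea, not the paper's exact self-correlation formulation.

```python
import numpy as np

def central_symmetry_score(patch):
    """Correlation in [-1, 1] of a patch with its 180-degree rotation.
    Checkerboard crossings are centrally symmetric and score near 1."""
    rotated = np.rot90(patch, 2)
    a = patch - patch.mean()
    b = rotated - rotated.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0:
        return 0.0
    return float((a * b).sum() / denom)

# Synthetic 9x9 neighborhoods: a crossing (opposite quadrants match)
# versus a plain step edge (not centrally symmetric).
y, x = np.mgrid[-4:5, -4:5]
crossing = ((x * y) > 0).astype(float)
edge = (x > 0).astype(float)

print(central_symmetry_score(crossing))  # near 1: crossing candidate
print(central_symmetry_score(edge))      # negative: rejected
```

Thresholding this score over a sliding window yields corner candidates, which the structure recovery stage can then organize into grids.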
An approach for depth profile measurement of an object with a plenoptic camera is proposed. A single plenoptic image consists of multiple lenslet images. First, these lenslet images are processed directly with a refocusing technique to obtain the depth map, which avoids aligning and decoding the plenoptic image. Then, a linear depth calibration based on the optical structure of the plenoptic camera is applied for depth profile reconstruction. One significant improvement of the proposed method concerns the resolution of the depth map: unlike the traditional method, the resolution is not limited by the number of microlenses inside the camera, and the depth map can be globally optimized. We validated the method with experiments on depth map reconstruction, depth calibration, and depth profile measurement; the results indicate that the proposed approach is both efficient and accurate.
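A linear depth calibration of the kind described can be sketched as a least-squares line fit mapping a refocus-derived metric to metric depth, using targets at known distances. The metric values, depths, and the `calibrate_linear_depth` helper below are hypothetical placeholders, not the paper's actual calibration data or model.

```python
import numpy as np

def calibrate_linear_depth(metric_values, true_depths):
    """Least-squares fit of depth = a * metric + b (hypothetical linear
    mapping from a refocus metric to metric depth)."""
    a, b = np.polyfit(metric_values, true_depths, 1)
    return a, b

# Hypothetical calibration targets: refocus metric vs. known depth (mm).
metric = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
depth_mm = np.array([100.0, 130.0, 160.0, 190.0, 220.0])  # linear here

a, b = calibrate_linear_depth(metric, depth_mm)
print(a, b)          # slope and offset of the calibration line
print(a * 0.50 + b)  # predicted depth for a new metric value
```

Once `a` and `b` are fixed, every pixel of the refocus-based depth map can be converted to metric depth with the same linear relation.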
A method is proposed for depth extraction in low-texture regions with a light field (LF) camera, which expands the range of scenes to which LF depth estimation can be applied. Based on an analysis of LF data, it is shown that depth information can be estimated from a single LF image. Furthermore, because lenslet LF data can be decoded into an array of subimages, and the relationship among subimages is shown to be an affine transformation, we use a geometric relationship, the partition ratios of triangle-grid areas, in place of the unreliable gray values of low-texture regions for stereo matching. In addition, to obtain accurate ratio values, preset points are projected with a projector to enrich the texture, which is convenient and reliable. Experiments validate that the proposed method clearly improves depth extraction accuracy in low-texture regions compared with a traditional state-of-the-art LF method.
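The reason area partition ratios are usable across subimages follows from a standard property: an affine map scales all triangle areas by the same factor (the determinant of its linear part), so ratios of areas are invariant. The points and map below are arbitrary values chosen to demonstrate this property, not data from the paper.

```python
import numpy as np

def tri_area(p, q, r):
    """Signed area of triangle pqr via the cross product."""
    return 0.5 * ((q[0] - p[0]) * (r[1] - p[1])
                  - (r[0] - p[0]) * (q[1] - p[1]))

# A triangle P, Q, R and a point X splitting it into sub-triangles.
P, Q, R = np.array([0., 0.]), np.array([4., 0.]), np.array([1., 3.])
X = np.array([1.5, 1.0])
ratio_before = tri_area(P, Q, X) / tri_area(P, Q, R)

# Apply an arbitrary invertible affine map x -> A @ x + t.
A = np.array([[2.0, 0.5], [-0.3, 1.5]])
t = np.array([10.0, -4.0])
P2, Q2, R2, X2 = (A @ p + t for p in (P, Q, R, X))
ratio_after = tri_area(P2, Q2, X2) / tri_area(P2, Q2, R2)

print(ratio_before, ratio_after)  # equal: the ratio is an affine invariant
```

Because both areas are multiplied by det(A), the quotient cancels the factor, which is why the partition ratio survives the subimage-to-subimage transformation while raw gray values in low-texture regions do not.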
An online fringe projection profilometry (OFPP) method based on the scale-invariant feature transform (SIFT) is proposed. Both rotary and linear models are discussed. First, the captured images are enhanced by retinex theory for better contrast, and an improved reprojection technique rectifies the pixel size while keeping the correct aspect ratio. Then the SIFT algorithm, combined with the random sample consensus (RANSAC) algorithm, is used to match feature points between frames; in this process, a quick response (QR) code is innovatively adopted both as a feature pattern and as object modulation. The characteristic parameters, the rotation angle in rotary OFPP and the rectilinear displacement in linear OFPP, are calculated by a vector-based solution, and a statistical filter is applied to obtain more accurate values. The equivalent aligned fringe patterns are then extracted from each frame. The equal step algorithm, the advanced iterative algorithm, and principal component analysis are selected for phase retrieval according to whether the object's moving direction coincides with the fringe direction. The three-dimensional profile of the moving object can finally be reconstructed. Numerical simulations and experimental results verify the validity and feasibility of the proposed method.
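The robust, vector-based recovery of the rotation angle from matched features can be sketched as follows: each pair of matched points defines a vector in both frames, the angle between corresponding vectors is a candidate rotation, and a RANSAC-style consensus rejects bad matches. This is a simplified stand-in for the paper's SIFT-plus-RANSAC pipeline; `estimate_rotation_angle` and its parameters are hypothetical.

```python
import numpy as np

def estimate_rotation_angle(pts1, pts2, trials=200, tol=0.01, seed=0):
    """RANSAC-style in-plane rotation estimate between matched point sets.
    Each point pair gives a candidate angle (difference of vector angles);
    the candidate agreeing with the most pairs within `tol` radians wins."""
    rng = np.random.default_rng(seed)
    n = len(pts1)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    def ang(pts, i, j):
        v = pts[j] - pts[i]
        return np.arctan2(v[1], v[0])
    deltas = np.array([ang(pts2, i, j) - ang(pts1, i, j) for i, j in pairs])
    deltas = np.arctan2(np.sin(deltas), np.cos(deltas))  # wrap to (-pi, pi]
    best_angle, best_inliers = 0.0, -1
    for _ in range(trials):
        cand = deltas[rng.integers(len(deltas))]
        resid = np.arctan2(np.sin(deltas - cand), np.cos(deltas - cand))
        inliers = int(np.sum(np.abs(resid) < tol))
        if inliers > best_inliers:
            best_inliers, best_angle = inliers, cand
    return float(best_angle)

# Matched features rotated by 0.3 rad, with one corrupted correspondence.
rng = np.random.default_rng(1)
pts1 = rng.uniform(-1, 1, size=(8, 2))
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
pts2 = pts1 @ R.T
pts2[0] = [5.0, -5.0]  # an outlier match, as SIFT can produce
print(estimate_rotation_angle(pts1, pts2))  # close to 0.3
```

A statistical filter over the inlier angle estimates, as the abstract describes, would further tighten the final value.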
Remote sensing with unmanned aerial vehicles (UAVs) is becoming a preferred method for Earth observation. Data and command communication between the airborne remote sensing payload and the ground terminal is crucial for real-time, smart remote sensing observation. However, current UAV remote sensing equipment cannot meet these requirements. To solve this problem, a real-time, smart remote control and data transmission system for UAVs is designed. The design and implementation of several key functions are presented, including multitask and multithreaded data transmission, transmission resumption, and priority-based task scheduling.
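Priority-based task scheduling across worker threads can be sketched with a shared priority queue: urgent command traffic is always dequeued ahead of bulk image data. The task names and priority values below are invented for illustration and do not come from the described system.

```python
import queue
import threading

tasks = queue.PriorityQueue()   # lower number = higher priority
done = []
lock = threading.Lock()

def worker():
    """Transmission worker: repeatedly take the highest-priority task."""
    while True:
        priority, name = tasks.get()
        if name is None:        # sentinel: shut the worker down
            tasks.task_done()
            return
        with lock:
            done.append((priority, name))  # stand-in for sending the data
        tasks.task_done()

# Hypothetical payload tasks: commands outrank telemetry and bulk imagery.
tasks.put((2, "image-block-17"))
tasks.put((0, "attitude-command"))
tasks.put((1, "telemetry-frame"))

t = threading.Thread(target=worker)
t.start()
tasks.join()                    # wait until all queued tasks are processed
tasks.put((99, None))           # stop the worker
t.join()
print(done)                     # tasks handled in priority order
```

In a real multithreaded transmitter, several such workers would share the queue, and transmission resumption would re-enqueue an interrupted task with its remaining byte range.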