Plenoptic cameras have attracted extensive attention for their unique information-acquisition and postcapture processing capabilities. In prior work, the decoding pipeline of plenoptic cameras has mainly consisted of calibration, aligning, slicing, and resampling, with slicing and resampling performed on a generated grid of projected centers. Such methods comprise a series of processing steps, and errors from the individual steps accumulate. We propose a simplified decoding pipeline for plenoptic cameras. We first propose a coarse-to-fine strategy that calibrates the microlens array accurately and automatically. Then, unlike prior work, we perform slicing and resampling on a nonregular grid of projected centers. This strategy avoids the aligning step and excludes the dark regions between microlenses from decoding. Experiments on published datasets and real-world scenes demonstrate the effectiveness of the proposed method.
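The coarse-to-fine calibration idea can be illustrated with a generic two-stage center detector: a coarse pass that finds local maxima in a white (flat-field) image, followed by a fine pass that refines each candidate to subpixel precision with an intensity-weighted centroid. This is a hedged sketch of the general technique, not the paper's actual algorithm; the function names and parameters (`local_maxima`, `refine_center`, the neighborhood `radius`) are illustrative assumptions.

```python
import numpy as np

def local_maxima(img, radius=3):
    # Coarse stage: pixels that are the maximum of their
    # (2*radius+1)^2 neighborhood and brighter than the image
    # mean -- candidate microlens centers in a white image.
    h, w = img.shape
    centers = []
    for i in range(radius, h - radius):
        for j in range(radius, w - radius):
            patch = img[i - radius:i + radius + 1, j - radius:j + radius + 1]
            if img[i, j] == patch.max() and img[i, j] > img.mean():
                centers.append((i, j))
    return centers

def refine_center(img, i, j, radius=3):
    # Fine stage: intensity-weighted centroid of the neighborhood,
    # giving a subpixel estimate of the projected center.
    patch = img[i - radius:i + radius + 1, j - radius:j + radius + 1]
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    weights = patch / patch.sum()
    return (i + (ys * weights).sum(), j + (xs * weights).sum())
```

In a real pipeline the coarse candidates would also be fitted against the known microlens grid geometry; here each candidate is refined independently.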
A light-field camera combines optics and computation to provide ranging capability. Many per-pixel depth estimation methods and algorithms have been proposed, but they are time-consuming and noise-sensitive. We present a Fourier-domain ranging method for light fields (LFs). Instead of estimating depth for every pixel, we detect the depths of the planes at which objects are located. The method is conceptually similar to measuring the energy in the focal stack, but it is carried out more efficiently. Our method is faster and more robust than traditional per-pixel depth estimation methods. In addition to the ranging algorithm, we also demonstrate the idea and application of a region-adaptive denoising filter whose depth parameters are tuned by the proposed method. We include results for synthetic LFs, the Stanford LF archives, and LFs captured with a Lytro camera, evaluating the algorithmic complexity, accuracy, depth resolution, and noise performance of our method. Our method requires less computation and is more robust than per-pixel depth estimation, making it suitable for many applications, such as autorefocusing and autodenoising.
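The plane-detection idea, locating the depths where objects come into focus rather than estimating per-pixel depth, can be sketched in its focal-stack form: each refocused slice is scored by its high-frequency energy, and energy peaks across the stack mark object planes. This is a simplified spatial-domain sketch under stated assumptions, not the paper's Fourier-domain method; `slice_energy`, `detect_planes`, and the `rel_thresh` parameter are illustrative.

```python
import numpy as np

def slice_energy(img):
    # High-frequency energy of one focal slice: sum of squared
    # discrete-Laplacian responses (in-focus slices score high).
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return float((lap ** 2).sum())

def detect_planes(focal_stack, rel_thresh=0.5):
    # Depth planes = indices of slices whose energy is a local peak
    # and exceeds rel_thresh times the stack's maximum energy.
    e = np.array([slice_energy(s) for s in focal_stack])
    peaks = []
    for i in range(len(e)):
        left = e[i - 1] if i > 0 else -np.inf
        right = e[i + 1] if i + 1 < len(e) else -np.inf
        if e[i] >= left and e[i] >= right and e[i] >= rel_thresh * e.max():
            peaks.append(i)
    return peaks
```

A Fourier-domain formulation would score depths from the light field's spectrum directly, avoiding the cost of synthesizing every refocused slice.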
The dark channel prior haze removal algorithm is simple and effective for single-image haze removal, but it suffers from color distortion in gray areas and from high computational cost. First, to address the color distortion, we propose a modified dark channel prior haze removal algorithm: we correct the transmission in gray areas by introducing a correction parameter, while leaving the transmission unchanged in areas that satisfy the dark channel prior. Second, to achieve real-time haze removal for video surveillance, we parallelize and optimize the new algorithm on the GPU using the Compute Unified Device Architecture (CUDA) platform released by NVIDIA. Experiments show that the modified algorithm works effectively in gray areas. After GPU acceleration, the processing speed for images with a resolution of 640 × 480 reaches 37 frames per second, achieving real-time haze removal.
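For reference, here is a minimal sketch of the standard dark channel prior pipeline (He et al.) that the modified algorithm builds on: compute the dark channel, estimate the atmospheric light, estimate the transmission, and recover the scene radiance. The paper's gray-area correction parameter and CUDA implementation are not reproduced; `omega`, `t0`, and the patch size are conventional defaults, and all names are illustrative.

```python
import numpy as np

def dark_channel(img, patch=7):
    # Minimum over color channels, then a min-filter over a local
    # patch: the "dark channel" of the image.
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode="edge")
    out = np.empty_like(mins)
    for i in range(mins.shape[0]):
        for j in range(mins.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(img, omega=0.95, t0=0.1):
    # Atmospheric light A: mean color of the brightest 0.1% of
    # pixels in the dark channel (at least one pixel).
    dc = dark_channel(img)
    n = max(1, int(dc.size * 0.001))
    idx = np.unravel_index(np.argsort(dc, axis=None)[-n:], dc.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate: t = 1 - omega * dark_channel(I / A).
    t = 1.0 - omega * dark_channel(img / A)
    # Scene radiance J = (I - A) / max(t, t0) + A, clipped to [0, 1].
    J = (img - A) / np.maximum(t, t0)[..., None] + A
    return np.clip(J, 0.0, 1.0)
```

In the modified algorithm described above, the transmission `t` would additionally be corrected in gray areas before the recovery step.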