Synthetic aperture particle image velocimetry (SAPIV) is a flow-field diagnostic technique that provides instantaneous velocity information non-intrusively. In SAPIV, particle scattering images are captured by multiple cameras in a camera-array configuration. To acquire refocused images, the captured images are remapped and accumulated on pre-designed remapping planes. In the refocused images, particles that lie on the remapped plane are aligned and appear sharp, whereas particles off this plane are blurred owing to the parallax between cameras. During the remapping process, the captured images are back-projected onto remapped planes at different depths z within the volume; the projected images from the different cameras, called remapped images, are merged to generate refocused images at each depth z. We developed a remapping method based on weight coefficients to improve the quality of the reconstructed velocity field. The images captured by the cameras are remapped onto the different remapped planes using homography matrices. The corresponding pixels of the remapped images in the same remapped plane are first added and averaged; they are then multiplied, and the resulting intensity values act as weight coefficients for the intensities in the additively refocused image stacks. The unfocused speckles are thereby restrained to a great degree, while the focused particles are retained in the refocused image stacks. A 16-camera array and a vortex-ring flow field at two adjacent frames were simulated to evaluate the performance of the proposed method; in the simulation, the vortex ring can be clearly seen. An experimental system consisting of 16 cameras was also used to demonstrate the capability of the improved remapping method. The results show that the proposed method effectively restrains unfocused speckles and reconstructs the velocity field of the flow.
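The weight-coefficient refocusing step described above can be sketched as follows. This is a minimal illustration assuming the camera images have already been back-projected onto one remapping plane (the homography warping itself is omitted), and the function name and the normalization of the product term are my own interpretation, not the authors' exact formulation.

```python
import numpy as np

def weighted_refocus(remapped, eps=1e-12):
    """Combine images already remapped to one depth plane.

    remapped : (N, H, W) array, one back-projected image per camera.
    """
    # Classic additive refocusing: average the corresponding pixels.
    additive = remapped.mean(axis=0)
    # Multiplicative term: a pixel stays large only if it is bright
    # in every camera, i.e. a particle in focus on this plane.
    product = np.prod(remapped, axis=0)
    # The normalized product acts as a per-pixel weight coefficient
    # that suppresses unfocused speckles in the additive stack.
    weight = product / (product.max() + eps)
    return additive * weight
```

A focused particle, bright in all cameras, keeps a weight near one, while a speckle visible in only some cameras is driven toward zero by the product.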
Flame chemiluminescence tomography is an important combustion diagnostic technique that provides instantaneous 3D information on flame structure and excited-species concentrations. However, in most previous work, the simplified calculation model of the weight coefficient based on lens-imaging theory causes information loss, which degrades subsequent reconstructions. In this work, an improved calculation model is presented that determines the weight coefficient from the intersection areas of the blur circle with the square pixels, which is more faithful to the practical imaging process. Numerical simulations quantitatively evaluate the performance of the improved calculation method. Furthermore, a flame chemiluminescence tomography system consisting of 12 cameras was established to reconstruct the 3D structure of an instantaneous non-axisymmetric propane flame. Both the numerical simulations and the experiments illustrate the feasibility of the improved calculation model for combustion diagnostics.
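The improved weight model, taking the weight as the intersection area between the blur circle and a square pixel, can be approximated numerically. The sketch below uses subpixel sampling rather than an analytic circle/square intersection formula, and the function name, coordinate convention, and pixel-pitch parameter are illustrative assumptions.

```python
import numpy as np

def blur_circle_weight(cx, cy, r, x0, y0, pitch=1.0, n=256):
    """Approximate the intersection area of a blur circle and one pixel.

    (cx, cy), r : center and radius of the blur circle on the sensor.
    (x0, y0)    : lower-left corner of the square pixel; pitch is its side.
    """
    # Sample the pixel on an n x n subgrid of cell centers.
    s = np.linspace(0.0, pitch, n, endpoint=False) + pitch / (2 * n)
    X, Y = np.meshgrid(x0 + s, y0 + s)
    inside = (X - cx) ** 2 + (Y - cy) ** 2 <= r ** 2
    # Covered fraction times pixel area gives the weight coefficient.
    return inside.mean() * pitch ** 2
```

A blur circle much larger than the pixel yields the full pixel area, a distant circle yields zero, and a circle fully inside the pixel recovers approximately pi * r**2, which is the sanity check for the sampling resolution.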
Image deblurring is a fundamental problem in image processing. Conventional methods often treat the degraded image as a whole, ignoring that an image contains two distinct components: cartoon and texture. Recently, total-variation (TV) based image decomposition methods have been introduced into the image deblurring problem; however, these methods often suffer from the well-known staircasing effect of TV. In this paper, a new cartoon-texture-based sparsity regularization method is proposed for non-blind image deblurring. Based on image decomposition, it regularizes the cartoon with a combined term comprising a framelet-domain sparse prior and a quadratic regularization, and the texture with a sparsity prior in the discrete cosine transform (DCT) domain. An adaptive alternating split Bregman iteration is then proposed to solve the new multi-term sparsity regularization model. Experimental results demonstrate that our method recovers both the cartoon and the texture of an image simultaneously, and therefore improves the visual quality, PSNR, and SSIM of the deblurred image more effectively than TV-based and undecomposed methods.
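The texture-side regularization, sparsity in the DCT domain enforced by soft-thresholding, can be sketched as below. This shows only a single shrinkage step of the kind applied inside a split Bregman loop, with an assumed threshold value; the framelet prior on the cartoon, the quadratic term, and the data-fidelity updates of the full multi-term model are omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn

def shrink(x, t):
    # Soft-thresholding (shrinkage) operator used in split Bregman iterations.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def regularize_texture(texture, t):
    """One DCT-domain sparsity step on the texture component.

    Transform to the DCT domain, shrink the coefficients toward zero,
    and transform back; small oscillatory coefficients are suppressed.
    """
    coeffs = dctn(texture, norm="ortho")
    return idctn(shrink(coeffs, t), norm="ortho")
```

The cartoon component would be handled analogously, with shrinkage applied to its framelet coefficients instead of DCT coefficients.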