A pure phase-modulated computer-generated hologram (CGH) method is presented to generate a full-parallax holographic stereogram. The holographic stereogram plane is divided into several two-dimensional holographic elements (hogels). The spectra of the hogels are rendered from multiview full-parallax images of three-dimensional (3-D) objects. The phase-modulated hogels are calculated by iterative Fourier transform algorithms to improve diffraction efficiency and eliminate conjugate images. A gray calibration technique is introduced to generate the accurate intensity modulation of pure-phase hogels. The proposed holographic stereogram is reconstructed by an optical system based on a phase-only spatial light modulator. The experimental results demonstrate that the proposed method can successfully reconstruct parallax images of 3-D objects.
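As a hedged illustration of the iterative Fourier transform step, the following sketch uses the classic Gerchberg-Saxton scheme to retrieve a phase-only pattern whose Fourier spectrum approximates a target amplitude; the square target, grid size, and iteration count are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def gerchberg_saxton(target_amplitude, n_iter=50, seed=0):
    """Iteratively retrieve a phase-only hologram whose Fourier
    transform approximates the target amplitude (hogel spectrum)."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
    for _ in range(n_iter):
        # Propagate hologram plane -> image plane (Fourier transform)
        image_field = np.fft.fft2(np.exp(1j * phase))
        # Impose the target amplitude, keep the propagated phase
        image_field = target_amplitude * np.exp(1j * np.angle(image_field))
        # Back-propagate and keep only the phase (phase-only constraint)
        phase = np.angle(np.fft.ifft2(image_field))
    return phase

# Illustrative target: a bright square in a dark field
target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0
phi = gerchberg_saxton(target)
recon = np.abs(np.fft.fft2(np.exp(1j * phi)))
```

Because the hologram is constrained to unit amplitude, only the phase carries information; the reconstructed amplitude is recovered up to an overall scale, which is why the paper pairs this step with a gray calibration of the intensity modulation.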
The vergence-accommodation conflict in holographic stereograms is investigated. The visual distortion and fatigue caused by the conflict are analyzed. A method for generating full-parallax holographic stereograms without vergence-accommodation conflicts is proposed. Two-dimensional spatial and spectral samplings are carried out on both the hologram and the reconstructed planes. The depth cues of three-dimensional object points are introduced in the iterative process of calculating subholograms with different spectral components. The stereogram is a combination of holographic elements (hogels), and each hogel is formed by performing a weighted summation of subholograms, where parallax images and depth information are used to select the constituent subholograms. A proof-of-principle experiment is carried out in an optical system based on a spatial light modulator. The results show that the improved full-parallax holographic stereogram can control the focusing depths of points and guarantee consistency between the vergence and accommodation distances. The influence of the size of the hogels on holographic imaging quality is also investigated.
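The weighted-summation step that forms each hogel can be sketched as follows; the subhologram phases and weights here are random placeholders, since the paper's depth-aware iterative computation of each spectral component is not reproduced.

```python
import numpy as np

# Sketch of hogel assembly as a weighted sum of subholograms, where
# each subhologram carries one spectral (view-direction) component and
# the weight is selected from the corresponding parallax image and
# depth information. All values below are illustrative placeholders.
rng = np.random.default_rng(0)
hogel_size = 32
n_views = 9

# Hypothetical phase-only subholograms, one per sampled spectral component
subholograms = np.exp(1j * rng.uniform(-np.pi, np.pi,
                                       (n_views, hogel_size, hogel_size)))
# Hypothetical weights: amplitudes sampled from the parallax images
weights = rng.uniform(0.0, 1.0, n_views)

# Weighted complex summation over the view index
hogel_field = np.tensordot(weights, subholograms, axes=1)
hogel_phase = np.angle(hogel_field)  # phase-only hogel for the SLM
```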
A phase-searching look-up table (PS-LUT) method is proposed to dramatically reduce the memory size of the novel look-up table (N-LUT) method while retaining its advantage of fast computation. A small number of samples are chosen as basic phase points (BPPs) in the principal fringe pattern (PFP), and the phases of the object points at the BPPs are precalculated and stored in the table. The phases of an object point over the whole PFP can then be obtained quickly through a phase-searching method, and with proper reference-beam phases the PFP can be rapidly generated. The experimental results reveal that the computational speed of the proposed method is about 24 times faster than that of the ray-tracing method, and the required memory size is 1731 times less than that of the N-LUT method. To eliminate the coherent noise of the N-LUT and PS-LUT, the random-PFP N-LUT (RN-LUT) and PS-LUT (RPS-LUT), in which the PFP is chosen randomly for every object point, are put forward. The images reconstructed with a spatial light modulator indicate that both the RN-LUT and RPS-LUT methods are extremely effective in improving the quality of the reconstructed images.
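A minimal sketch of the underlying look-up-table idea, assuming a paraxial point-source phase for the PFP and a shift-and-add accumulation per object point; the PS-LUT's phase-searching refinement is not reproduced, and wavelength, pixel pitch, and point coordinates are illustrative.

```python
import numpy as np

def principal_fringe_pattern(size, depth, wavelength=532e-9, pitch=8e-6):
    """Paraxial phase of a point source at lateral center and the given
    depth, sampled on the hologram grid (Fresnel zone-plate-like)."""
    n = np.arange(size) - size // 2
    x, y = np.meshgrid(n * pitch, n * pitch)
    return (np.pi / (wavelength * depth)) * (x**2 + y**2)

def lut_hologram(points, size=128):
    """Shift-and-add: each object point reuses the PFP precomputed for
    its depth plane, translated to the point's lateral pixel position."""
    table = {}  # the look-up table: one complex PFP per distinct depth
    field = np.zeros((size, size), dtype=complex)
    for ix, iy, z, amp in points:
        if z not in table:  # build the LUT entry once per depth plane
            table[z] = np.exp(1j * principal_fringe_pattern(size, z))
        # Translate the stored PFP to the point's lateral position
        field += amp * np.roll(table[z], shift=(iy, ix), axis=(0, 1))
    return np.angle(field)  # phase hologram

# Illustrative object points: (ix, iy, depth_m, amplitude)
points = [(10, -5, 0.10, 1.0), (0, 0, 0.15, 0.5)]
hologram = lut_hologram(points)
```

The memory saving in the PS-LUT comes from storing phases only at the BPPs and searching for the rest, rather than storing the full PFP per depth as above.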
Building on existing methods for reconstructing 2-D images with a spatial light modulator (SLM), methods to enhance the quality of reconstructed 3-D images are investigated in this paper. Based on diffraction theory, the effects of the SLM's lattice structure and limited fill factor on the reconstructed images are analyzed. By adding the phase of a convergent spherical wave, the focused planes of the reconstructed images and the multi-order beams caused by the lattice structure of the SLM can be separated spatially. A spatial filter can therefore be used to eliminate the influence of the higher-order diffraction beams and the zero-order light on the reconstructed images. A holographic optoelectronic display system based on a liquid crystal spatial light modulator (LC-SLM) is set up to demonstrate this method.
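The added convergent spherical wave can be sketched as a thin-lens quadratic phase factor superposed on an existing phase hologram; the wavelength, pixel pitch, focal length, and the random placeholder hologram below are illustrative assumptions.

```python
import numpy as np

def convergent_phase(size, focal_length, wavelength=532e-9, pitch=8e-6):
    """Quadratic phase of a convergent spherical wave (thin-lens factor).
    Superposing it on the hologram moves the reconstructed image to the
    focal plane, spatially separated from the defocused higher orders,
    so a spatial filter placed there can block them."""
    n = np.arange(size) - size // 2
    x, y = np.meshgrid(n * pitch, n * pitch)
    return -(np.pi / (wavelength * focal_length)) * (x**2 + y**2)

# Placeholder phase hologram; combine and wrap back to (-pi, pi]
hologram = np.random.default_rng(1).uniform(-np.pi, np.pi, (256, 256))
combined = np.angle(np.exp(1j * (hologram + convergent_phase(256, 0.3))))
```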
A dual-channel fusion system of visual and infrared images based on color transfer
The increasing availability and deployment of imaging sensors operating in multiple spectra has led to a large research effort in image fusion, resulting in a plethora of pixel-level image fusion algorithms. However, most of these algorithms produce gray or false-color fusion results that are not well adapted to human vision. Transferring color from a daytime reference image to obtain a natural-color fusion result is an effective way to solve this problem, but the computational cost of color transfer is high and cannot meet the requirements of real-time image processing. We developed a dual-channel infrared and visual image fusion system based on the TMS320DM642 digital signal processing chip. The system is divided into an image acquisition and registration unit, an image fusion processing unit, a system control unit, and an output unit for the fusion result. The registration of the dual-channel images is realized by combining hardware and software methods. A false-color fusion algorithm in RGB color space is used to obtain an R-G fused image; the system then chooses a reference image from which to transfer color to the fusion result. A color lookup table based on the statistical properties of the images is proposed to solve the computational complexity problem of color transfer. The mapping calculation between the standard lookup table and the improved color lookup table is simple and is performed only once for a fixed scene. Real-time fusion and natural colorization of infrared and visual images are realized by this system. The experimental results show that the color-transferred images have a natural color perception to human eyes and can highlight targets effectively with clear background details. Human observers using this system will be able to interpret the image better and faster, thereby improving situational awareness and reducing target detection time.
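A hedged sketch of statistics-based color transfer baked into a per-channel lookup table: Reinhard-style mean/standard-deviation matching is assumed as the statistical mapping, and the random images stand in for the fused and reference frames; the paper's specific LUT construction may differ.

```python
import numpy as np

def build_transfer_lut(src_channel, ref_channel):
    """Per-channel 256-entry lookup table that matches the source
    channel's mean/std to the reference channel's (assumed here to be
    Reinhard-style global statistics matching). Baking the affine
    mapping into a LUT reduces the per-frame cost to a table lookup,
    built only once for a fixed scene."""
    s_mean, s_std = src_channel.mean(), src_channel.std()
    r_mean, r_std = ref_channel.mean(), ref_channel.std()
    levels = np.arange(256, dtype=np.float64)
    mapped = (levels - s_mean) * (r_std / (s_std + 1e-9)) + r_mean
    return np.clip(mapped, 0, 255).astype(np.uint8)

# Placeholder fused (R-G false-color) image and daytime reference image
rng = np.random.default_rng(0)
fused = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
ref = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
out = np.stack([build_transfer_lut(fused[..., c], ref[..., c])[fused[..., c]]
                for c in range(3)], axis=-1)
```

Once the three tables are built for a scene, colorizing each incoming frame is a pure indexing operation, which is what makes the approach feasible on a DSP in real time.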