Three-dimensional (3D) display technology has made great progress over the last several decades, providing
a dramatic improvement in visual experience. The availability of 3D content, however, is a critical factor limiting the wide
application of 3D technology. An adaptive point tracking method based on the depth map is demonstrated, which is used
to generate depth maps automatically. The point tracking method used in previous investigations is template
matching, which cannot track points precisely. Here, an adaptive point tracking method is used, with an adaptive window and weights based
on the discontinuous edge information and texture complexity of the depth map. In the experiment, a method that
automatically generates depth maps from points tracked between adjacent images is realized. Theoretical analysis and
experimental results show that the presented method can track feature points precisely, and the depth maps of non-key
images are well generated.
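The adaptive method builds on template matching between adjacent frames. A minimal fixed-window version, using normalized cross-correlation, can be sketched as follows; the adaptive window sizing and depth-based weighting of the presented method are not reproduced here, and all parameter values are illustrative.

```python
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between two equally sized patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return (p * t).sum() / denom if denom > 0 else 0.0

def track_point(prev, curr, pt, win=7, search=10):
    """Track a feature point from frame `prev` to frame `curr` by
    exhaustive template matching in a local search range.

    Plain fixed-window matching; the paper's adaptive method would
    additionally vary `win` and apply per-pixel weights derived from
    depth-map edges and texture complexity (not shown)."""
    y, x = pt
    tmpl = prev[y - win:y + win + 1, x - win:x + win + 1]
    best, best_pt = -1.0, pt
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            cand = curr[yy - win:yy + win + 1, xx - win:xx + win + 1]
            if cand.shape != tmpl.shape:
                continue  # candidate window falls outside the image
            score = ncc(cand, tmpl)
            if score > best:
                best, best_pt = score, (yy, xx)
    return best_pt
```

Tracked points in non-key frames can then be used to propagate the key-frame depth values.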
Normally, a huge amount of spatial information is required to increase the number of views and to provide the smooth motion parallax of a natural, life-like three-dimensional (3D) display. To realize natural 3D video display without eyewear, a huge amount of 3D spatial information is normally required; however, the minimum 3D information needed by the eyes should be used, to reduce the requirements on display devices and processing time. For a 3D display with smooth motion parallax similar to a holographic stereogram, the size of the virtual viewing slit should be smaller than the pupil size of the eye at the largest viewing distance. To increase the resolution, two glasses-free 3D display systems, rear-projection and front-projection, are presented based on space multiplexing with a micro-projector array and specially designed 3D diffuser screens larger than 1.8 m × 1.2 m. The displayed clear depths are larger than 1.5 m. The flexibility of digital recording and reconstruction based on the 3D diffuser screen relieves the limitations of conventional 3D display technologies and can realize fully continuous, natural 3D display. In the display system, aberration is well suppressed and low crosstalk is achieved.
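The viewing-slit condition above implies a lower bound on the number of views: if the viewing zone is divided evenly among the views, each slit must be no wider than the pupil. A worked example (all numbers are assumptions for illustration, not figures from the paper):

```python
import math

def min_views(viewing_zone_mm: float, pupil_mm: float) -> int:
    """Minimum number of views so that each virtual viewing slit
    (viewing zone divided evenly among views) is no wider than the
    eye's pupil at the largest viewing distance."""
    return math.ceil(viewing_zone_mm / pupil_mm)

# Assumed example: a 600 mm viewing zone and a 5 mm pupil
# would require at least 120 views.
n = min_views(600.0, 5.0)
```

This illustrates why smooth motion parallax demands so much spatial information, and why a micro-projector array is used to supply it.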
High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as
architectural design of buildings and houses, industrial design, aeronautics, scientific
research, entertainment, media advertising, military use and so on. However, most technologies provide
3D display in front of screens that are parallel with the walls, which decreases the sense of immersion.
To obtain correct multi-view stereo ground images, the cameras' photosensitive surfaces should be parallel to the
common focus plane, and the cameras' optical axes should be offset toward the center of the common focus plane
in both the vertical and horizontal directions. It is common to use virtual cameras, ideal pinhole
cameras that display a 3D model in a computer system. We can use virtual cameras to simulate the shooting method
for multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based
high-immersion 3D display are presented. The position of the virtual camera is determined by the position of the
observer's eyes in the real world. When the observer stands within the circumcircle of the 3D ground display, offset perspective
projection virtual cameras are used. When the observer stands outside the circumcircle of the 3D ground display, offset
perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. In this paper,
we mainly discuss the parameter settings of the virtual cameras. The near clip plane setting is the
main point of the first method, while the rotation angle of the virtual cameras is the main point of the second
method. To validate the results, we use D3D and OpenGL to render scenes from different viewpoints
and generate a stereoscopic image. A realistic visualization system for 3D models is constructed and
demonstrated for horizontal viewing, which provides high-immersion 3D visualization. The displayed 3D
scenes are compared with the corresponding real objects in the real world.
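An offset perspective projection virtual camera corresponds to an asymmetric (off-axis) view frustum whose near-plane window is shifted according to the eye position. A minimal sketch of this construction, following the standard OpenGL glFrustum convention (the parameterisation of `stereo_frustum` is an assumption for illustration, not the paper's exact settings):

```python
import numpy as np

def offset_frustum(left, right, bottom, top, near, far):
    """Asymmetric (off-axis) perspective projection matrix in the
    OpenGL glFrustum convention; unequal left/right or bottom/top
    encode the offset of the camera axis."""
    m = np.zeros((4, 4))
    m[0, 0] = 2.0 * near / (right - left)
    m[1, 1] = 2.0 * near / (top - bottom)
    m[0, 2] = (right + left) / (right - left)    # horizontal offset term
    m[1, 2] = (top + bottom) / (top - bottom)    # vertical offset term
    m[2, 2] = -(far + near) / (far - near)
    m[2, 3] = -2.0 * far * near / (far - near)
    m[3, 2] = -1.0
    return m

def stereo_frustum(eye_x, half_w, half_h, dist, near, far):
    """Off-axis frustum for an eye displaced `eye_x` from the centre of
    a focus plane of half-size (half_w, half_h) at distance `dist`.
    Hypothetical helper: the window is scaled onto the near plane so
    the frustum converges on the shared (zero-parallax) plane."""
    s = near / dist
    return offset_frustum((-half_w - eye_x) * s, (half_w - eye_x) * s,
                          -half_h * s, half_h * s, near, far)
```

With `eye_x = 0` this reduces to the ordinary symmetric frustum; a nonzero eye offset skews the frustum without rotating the image plane, which is what keeps the photosensitive surface parallel to the common focus plane.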
An arbitrary view synthesis method from 2D-Plus-Depth image for real-time auto-stereoscopic display is presented.
Traditional methods use depth image based rendering (DIBR) technology, which is a process of synthesizing “virtual”
views of a scene from still or moving images and associated per-pixel depth information. All the virtual view images are
generated and then the ultimate stereo-image is synthesized. DIBR can greatly decrease the number of reference images
and is flexible and efficient as the depth images are used. However, it causes problems such as the appearance of
holes in the rendered image and depth discontinuities on object surfaces at the virtual image plane.
Here, reversed disparity shift pixel rendering is used to generate the stereo-image directly, so the target image
contains no holes. To avoid duplicated calculation and to be able to match any specific three-dimensional
display, a selecting table is designed to pick up appropriate virtual viewpoints for auto-stereoscopic display. According to
the selecting table, only sub-pixels of the appropriate virtual viewpoints are calculated, so calculation amount is
independent of the number of virtual viewpoints. In addition, 3D image warping is used to translate depth
information into parallax between virtual viewpoints, and the viewer can adjust the
zero-parallax-setting (ZPS) plane and change the parallax conveniently to suit his or her personal preferences. The proposed
method is implemented with OpenGL and demonstrated on a laptop computer with a 2.3 GHz Intel Core i5 CPU and
an NVIDIA GeForce GT 540M GPU. A frame rate of 30 frames per second is obtained with 4096×2340 video. High synthesis
efficiency and good stereoscopic sense can be obtained. The presented method can meet the requirements of
real-time ultra-HD super multi-view auto-stereoscopic display.
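The core idea of reversed (gather-style) disparity rendering is to map each *target* pixel back to a source pixel, so every target pixel receives a value and no holes can appear. A minimal sketch with a linear depth-to-disparity mapping; the mapping, the parameter names, and the nearest-pixel gather are illustrative assumptions, not the paper's exact formulas:

```python
import numpy as np

def depth_to_disparity(depth, max_disp=12.0, zps=128):
    """Map an 8-bit depth map to signed pixel disparity.
    `zps` is the depth value placed on the zero-parallax plane; moving
    it shifts the scene in front of or behind the screen. Linear
    mapping assumed for illustration."""
    return max_disp * (zps - depth.astype(np.float64)) / 255.0

def render_view(src, depth, view_offset, **kw):
    """Reversed disparity shift: for each target pixel, gather the
    source pixel displaced by the disparity scaled with the view
    offset. Because this is a gather (not a scatter), the target
    image has no holes."""
    h, w = src.shape[:2]
    disp = depth_to_disparity(depth, **kw) * view_offset
    xs = np.clip(np.round(np.arange(w) + disp).astype(int), 0, w - 1)
    return src[np.arange(h)[:, None], xs]
```

In a full renderer this gather would be evaluated only for the sub-pixels chosen by the selecting table, which is what makes the cost independent of the number of virtual viewpoints.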
Multiview video coding (MVC) is essential for applications of
auto-stereoscopic three-dimensional displays. However, the computational complexity
of MVC encoders is tremendously high, so fast algorithms are very desirable for
practical applications of MVC. Based on joint early termination, the selection of
inter-view prediction and the optimization of the Inter8×8 mode decision process by
comparison, a fast macroblock (MB) mode selection algorithm is presented.
Compared with the full mode decision in MVC, experimental results show that
the proposed algorithm reduces encoding time by 78.13% on average and up to 90.21%,
with only a slight increase in bit rate and a small loss in PSNR.
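The gain comes from skipping rate-distortion evaluation of most modes. A generic early-termination skeleton conveys the idea; the paper's joint criteria over inter-view prediction and the Inter8×8 sub-modes are not reproduced, and the mode names, cost function, and threshold here are illustrative assumptions:

```python
def select_mb_mode(cost_fn, modes, skip_threshold):
    """Early-termination macroblock mode decision sketch.

    `cost_fn(mode)` returns an RD cost. The SKIP mode is checked
    first; if it is cheap enough the remaining (expensive) modes are
    never evaluated, which is where the encoding-time saving comes
    from. Otherwise the full comparison proceeds."""
    best_mode, best_cost = 'SKIP', cost_fn('SKIP')
    if best_cost < skip_threshold:   # joint early termination
        return best_mode, best_cost
    for mode in modes:               # full comparison fallback
        c = cost_fn(mode)
        if c < best_cost:
            best_mode, best_cost = mode, c
    return best_mode, best_cost
```

In a real encoder, the threshold would be derived adaptively (e.g. from neighbouring macroblocks), and the candidate list would be pruned further by the inter-view prediction selection.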
It is important to acquire proper parallax images for the stereoscopic display system. By setting the proper distance
between the cameras and the location of the convergence point in the capturing configuration, the displayed 3D scene
with the appropriate stereo depth and the expected effect in front of and behind the display screen can be obtained
directly. The quantitative relationship between the parallax and the parameters of the capturing configuration with two
cameras is presented. The capturing system with multiple cameras for acquiring equal parallaxes between the adjacent
captured images for the autostereoscopic display system is also discussed. The proposed methods are demonstrated by
the experimental results. The images captured with the calculated parameters for the 3D display system show the
expected results, providing viewers better immersion and visual comfort without any extra processing.
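The kind of quantitative relationship referred to above can be illustrated with standard pinhole geometry for a parallel two-camera rig whose sensors are shifted so the optical axes converge at a chosen distance; this textbook relation is a sketch of the idea, not the paper's derivation:

```python
def parallax(f_mm, baseline_mm, z_conv_mm, z_mm):
    """Image-plane parallax (mm) of a point at depth `z_mm` for a
    parallel two-camera rig with focal length `f_mm`, camera
    separation `baseline_mm`, and convergence distance `z_conv_mm`.

    Points at the convergence distance have zero parallax (they
    appear on the screen plane); nearer points get negative (crossed)
    parallax and appear in front of the screen, farther points get
    positive parallax and appear behind it."""
    return f_mm * baseline_mm * (1.0 / z_conv_mm - 1.0 / z_mm)
```

For a multi-camera rig, equal baselines between adjacent cameras sharing one convergence distance yield the equal adjacent-view parallaxes needed by the autostereoscopic display.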
A simple optical path for fabricating photonic crystals is presented. It is convenient to change the number of beams and their angles, so photonic crystals with different lattices can be obtained. The recording
material is a home-made water-resistant photopolymer. The holographic fabrication of photonic crystals is simulated in MATLAB. By changing the number of beams and their angles, triangular, square
and circular photonic-crystal structures are obtained. When the polarization states of the beams are changed, photonic crystals with different refractive index modulations are obtained; the simulation shows that linearly polarized light yields the highest refractive index modulation. In the experiment, different exposures are tested, and the best exposure time is 20 s, shorter than previously reported. In this paper, photonic crystals are fabricated using four and five light beams. The simulated and experimental results are in substantial agreement.
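The lattice-from-beam-count behaviour can be reproduced with a scalar multi-beam interference simulation (Python stands in here for the MATLAB simulation in the text). The beams are assumed equal-amplitude plane waves arranged symmetrically about the surface normal; the polarization dependence discussed above is ignored in this scalar sketch, and all numeric parameters are illustrative:

```python
import numpy as np

def interference(n_beams, half_angle_deg, wavelength_um,
                 size_um=4.0, res=128):
    """Intensity pattern of `n_beams` equal-amplitude plane waves
    whose propagation directions lie on a cone of the given half
    angle, equally spaced in azimuth (scalar approximation).
    Four beams give a square lattice, three a triangular one."""
    k = 2.0 * np.pi / wavelength_um
    kt = k * np.sin(np.radians(half_angle_deg))  # transverse wave number
    xs = np.linspace(0.0, size_um, res)
    x, y = np.meshgrid(xs, xs)
    field = np.zeros((res, res), dtype=complex)
    for i in range(n_beams):
        phi = 2.0 * np.pi * i / n_beams          # azimuth of beam i
        field += np.exp(1j * kt * (x * np.cos(phi) + y * np.sin(phi)))
    return np.abs(field) ** 2
```

Thresholding the intensity against the photopolymer's exposure response would then predict which lattice is recorded.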
Glasses-free grating-based stereoscopic display is one of the chief directions of stereoscopic display development, but it has always been limited by the stereoscopic viewing range, the amount of stereoscopic information,
and the number of users. This research uses a combination of a Fresnel lens array and controllable point
light sources to deliver information to the two eyes of different users separately. Combined with
eye-tracking technology, it makes glasses-free grating-based stereoscopic display visible over a 3D orientation
range to multiple users with two-view image sources. It also enables 360°
stereoscopic overhead viewing by a single user with multi-view image sources.