This paper presents 4dVizMed, a framework for interactive analysis and autostereoscopic visualization of 3D
time-varying objects in volumetric image sequences. It combines a deformable surface model that automatically
tracks volumetric features, real-time multi-view stereo volume rendering, and interactive tools for
manipulation and quantification. Our method is based on a topological feature tracking process that uses a
flow-based paradigm and a deformable surface model; it tracks the evolution of the components of an
isosurface over time, as well as their interactions with other components. We focus on the difficulties of
visualizing 4D volume data, and we report the results of preliminary experiments designed to evaluate the
utility of autostereoscopic displays for this purpose.
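The abstract does not detail the flow-based tracker, so as a simplified stand-in one can illustrate the core idea of following isosurface components through time by labeling connected components at two consecutive time steps and matching them by spatial overlap (all names here are hypothetical; the paper's deformable-surface method is more sophisticated):

```python
from collections import deque

def label_components(grid, iso):
    """4-connected labeling of cells where grid >= iso (2-D slice for clarity)."""
    h, w = len(grid), len(grid[0])
    labels = [[0] * w for _ in range(h)]
    n = 0
    for y in range(h):
        for x in range(w):
            if grid[y][x] >= iso and labels[y][x] == 0:
                n += 1
                labels[y][x] = n
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                           and grid[ny][nx] >= iso and labels[ny][nx] == 0:
                            labels[ny][nx] = n
                            q.append((ny, nx))
    return labels, n

def track_components(frame0, frame1, iso):
    """Match each isosurface component at time t to the component at t+1
    it overlaps most; None means the component vanished."""
    lab0, n0 = label_components(frame0, iso)
    lab1, _ = label_components(frame1, iso)
    matches = {}
    for c in range(1, n0 + 1):
        counts = {}
        for y, row in enumerate(lab0):
            for x, v in enumerate(row):
                if v == c and lab1[y][x] > 0:
                    counts[lab1[y][x]] = counts.get(lab1[y][x], 0) + 1
        matches[c] = max(counts, key=counts.get) if counts else None
    return matches
```

Events such as merges or splits show up as several components sharing one successor, or one component overlapping several; the paper's topological process resolves these interactions explicitly.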
In recent years, several approaches to multiview compression have emerged. The H.264/MVC extension
generates fairly heavy bitstreams when applied to n-view autostereoscopic media and does not allow
inter-view reconstruction. Another solution relies on the MVD (MultiView+Depth) scheme, which keeps p views
(n > p > 1) together with their associated depth maps. This method is not well suited to multiview
compression, since it does not exploit the redundancy between the p views; moreover, occlusion areas cannot
be accurately filled. In this paper, we present a method based on the LDV (Layered Depth Video) approach,
which keeps one reference view with its associated depth map and the n-1 residual views required to fill
occluded areas. We first perform a global per-pixel matching step (providing good consistency between the
views) in order to generate one unified-color RGB texture (where a single color is assigned to all pixels
corresponding to the same 3D point, thus avoiding illumination artifacts) and a signed-integer disparity
texture. Next, we extract the non-redundant information and store it in two textures (a unified-color one and
a disparity one) containing the reference view and the n-1 residual views. The RGB texture is compressed with
a conventional DCT- or DWT-based algorithm, and the disparity texture with a lossless dictionary algorithm.
Finally, we discuss the signal distortions introduced by our approach.
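The residual-extraction step of an LDV pipeline can be sketched as follows: forward-warp the reference view into a secondary view using the signed integer disparities, and keep only the secondary-view pixels the warp leaves uncovered (occlusion areas). This is a 1-D, single-row simplification with hypothetical names, not the paper's full texture pipeline:

```python
def extract_residual(ref_row, disp_row, side_row):
    """Keep the non-redundant pixels of a side view: those not covered
    when the reference row is forward-warped by its signed integer
    disparities (1-D simplification of LDV residual extraction)."""
    width = len(ref_row)
    covered = [False] * width
    for x in range(width):
        tx = x + disp_row[x]          # signed disparity shifts the pixel
        if 0 <= tx < width:
            covered[tx] = True
    # residual = occluded pixels the reference view cannot reconstruct
    return {x: side_row[x] for x in range(width) if not covered[x]}
```

At decoding time the side view is rebuilt by warping the reference view and pasting these residual pixels back, which is why only one reference view plus n-1 sparse residuals need to be stored.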
Numerous autostereoscopic displays are now available, and characterizing them is essential: it allows their
performance to be optimized and enables meaningful comparisons between them. Standards are therefore needed,
which requires being able to quantify the quality of the viewer's perception. The purpose of the present
paper is twofold: we first present a new instrument for characterizing 3D perception on a given
autostereoscopic display; we then propose an experimental protocol that yields a full characterization. This
instrument will allow us to compare autostereoscopic displays efficiently, and also to validate in practice
the consistency between the shooting and rendering geometries. To this end, we match the perceived scene with
the virtual scene. It is hardly possible to determine directly the scene perceived by a viewer placed in
front of an autostereoscopic display: while this may be feasible for the pop-out effect, it is impossible for
the depth effect, because the virtual scene then lies behind the screen. We therefore use an optical illusion
based on the deflection of light by a mirror to determine the positions at which the viewer perceives points
of the virtual scene on an autostereoscopic display.
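The basic geometry behind such a mirror-based measurement is plane reflection: a real probe point seen through the mirror appears at its mirror image, which can be aligned with the perceived virtual point. A minimal sketch of that reflection (a textbook geometric primitive; the actual instrument and protocol are the paper's own contribution):

```python
def reflect_point(p, mirror_point, normal):
    """Reflect a 3-D point across a plane mirror given a point on the
    mirror and its unit normal: p' = p - 2 ((p - m) . n) n.
    Coordinates in millimetres; names are illustrative."""
    d = sum((pi - mi) * ni for pi, mi, ni in zip(p, mirror_point, normal))
    return tuple(pi - 2 * d * ni for pi, ni in zip(p, normal))
```

Because the mirror image can lie behind the screen plane while the probe itself stays in reach, this lets the depth effect be measured, not only the pop-out.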
Today, 3D viewing devices still lack high-quality content; to date, there is no 3D video shooting system
specifically designed to ensure a high-quality 3D experience on a pre-chosen 3D display. A fundamental
element of multiscopic image production is the geometrical analysis of shooting and viewing conditions, aimed
at obtaining a high-quality 3D perception experience. Many autostereoscopic camera systems have been
proposed, but none is designed with control of possible depth distortions in mind. This article introduces a
patented autostereoscopic camera design scheme based on such distortion control. Building on our scientific
know-how, our work exploits the link between the shooting and rendering geometries, which makes it possible
to control the distortion of the perceived depth. This design scheme thus yields camera systems producing
high-quality 3D content that complies with any pre-chosen distortion when rendered on a given
autostereoscopic display. Drawing on our technological expertise, we use this design scheme to produce
pre-industrial camera systems for live or pre-recorded 3D shooting. These systems are compact, lightweight,
easy to deploy, and readily adaptable to other conditions (3D displays, depth distortions). We also introduce
the associated software, which controls our 3D cameras and displays their output in real time on the
previously specified autostereoscopic display. According to numerous viewers, both naive and expert, the
resulting 3D perception is of high quality.
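As background on the viewing side of the shooting/rendering link, the classic stereoscopic relation gives the distance at which a point with on-screen parallax p is perceived, D = d*b / (b - p), for viewing distance d and eye base b. This is a textbook formula, not the patented design scheme; parameter values below are illustrative:

```python
def perceived_distance(parallax_mm, viewing_distance_mm=3000.0, eye_base_mm=65.0):
    """Distance from the viewer at which a point with on-screen parallax
    p is perceived: D = d*b / (b - p).  p > 0 (uncrossed) places the
    point behind the screen (depth effect); p < 0 (crossed) places it
    in front (pop-out).  Textbook viewing geometry, illustrative values."""
    if parallax_mm >= eye_base_mm:
        raise ValueError("parallax >= eye base: rays diverge, no fusion")
    return viewing_distance_mm * eye_base_mm / (eye_base_mm - parallax_mm)
```

Depth distortion arises because D is non-linear in p; a design scheme that ties the shooting geometry to this rendering geometry can make the resulting distortion follow a chosen profile instead of an accidental one.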