Tissue classifications of MRI brain images can be obtained either by segmenting the images directly or by propagating the segmentations of an atlas to the target image. This paper compares the classification results of the direct segmentation method, using FAST, with those of the segmentation propagation method, using nreg and the MNI Brainweb phantom images. Direct segmentation is carried out by extracting the brain and classifying the tissues with FAST. Segmentation propagation is carried out by registering the Brainweb atlas image to the target images by affine registration, followed by non-rigid registration at different control point spacings, and then transforming the partial volume effect (PVE) fuzzy membership images of cerebrospinal fluid (CSF), grey matter (GM) and white matter (WM) of the atlas image into the target space. We compared the running time, reproducibility, and global and local differences between the two methods. Direct segmentation is much faster. There is no significant difference in reproducibility between the two techniques, but there are significant global volume differences for some tissue types. Visual inspection was used to localize these differences. This study had no gold-standard segmentations against which to compare the automatic segmentation solutions, but the global and local volume differences suggest that the most appropriate algorithm is likely to be application dependent.
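The propagation step described above amounts to resampling the atlas's fuzzy membership images through the recovered transformation. As a minimal sketch (not the nreg implementation), assuming a single affine transform represented by a matrix and offset, the warp of the CSF/GM/WM memberships might look like the following; the function name and renormalisation step are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import affine_transform

def propagate_memberships(memberships, matrix, offset, output_shape):
    """Resample atlas fuzzy membership images (e.g. CSF, GM, WM) into the
    target space through one affine transform (hypothetical helper, not
    the nreg pipeline itself).

    Linear interpolation (order=1) keeps the memberships fuzzy; they are
    then renormalised so the classes still sum to 1 at every voxel."""
    warped = [affine_transform(m, matrix, offset=offset,
                               output_shape=output_shape, order=1)
              for m in memberships]
    total = np.maximum(sum(warped), 1e-6)  # avoid division by zero
    return [w / total for w in warped]
```

A non-rigid stage would replace the single matrix with a dense displacement field (e.g. via `scipy.ndimage.map_coordinates`), but the resample-then-renormalise pattern is the same.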
Conventional integral three-dimensional images, whether acquired by cameras or generated by computers, suffer from a narrow viewing range. Many methods to enlarge the viewing range of integral images have been suggested; however, so far they all involve modifications of the optical systems, which normally make the system more complex and may introduce other drawbacks in some designs. Based on observation and study of computer-generated integral images, this paper quantitatively analyzes the viewing properties of integral images in the conventional configuration and the problems they present. To improve the viewing properties, a new model, the maximum viewing width (MVW) configuration, is proposed. MVW-configured integral images achieve the maximum viewing width on the viewing line at the optimum viewing distance, and a greatly extended viewing width around the viewing line, without any modification of the original optical display systems. In normal applications, an MVW integral image also has better viewing-zone transition properties than conventional images. The considerations in the selection of optimal parameters are discussed, and new definitions related to the viewing properties of integral images are given. Finally, two potential application schemes of MVW integral images beyond computer generation are described.
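To make the "narrow viewing range" of the conventional configuration concrete, a standard textbook approximation treats each viewing zone as a cone whose half-angle is set by the lens pitch and the lens-to-display gap. The sketch below is a generic geometric illustration under that assumption, not the paper's MVW analysis; the function name and parameters are hypothetical:

```python
import math

def viewing_width(pitch_mm, gap_mm, distance_mm):
    """Illustrative viewing-zone geometry for a conventional integral
    image (assumed thin-lens model, not the MVW configuration):
    the half-angle of one viewing zone is arctan(p / 2g), so the width
    spanned on a viewing line at distance D is W = 2 D tan(theta/2)
    = D * p / g."""
    half_angle = math.atan(pitch_mm / (2.0 * gap_mm))
    return 2.0 * distance_mm * math.tan(half_angle)
```

For example, a 1 mm lens pitch with a 2 mm gap gives a viewing width of only about half the viewing distance, which is why enlarging this zone without touching the optics is attractive.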
For computer-generated integral images, a transition line can be observed when the viewer, shifting parallel to the lens sheet, reaches the edge of the current viewing zone. This is due to the transition from the current viewing zone to the next. Images generated using conventional algorithms suffer from a large transition zone, which degrades the replayed visual effect and greatly decreases the effective viewing width; the phenomenon is especially apparent for large images. Conventional computer-generation algorithms for integral images give the micro-images the same boundaries as the micro-lenses, which is straightforward and easy to implement but is the cause of the large transition zone and narrow viewing angle. This paper presents a novel micro-image configuration and algorithm to solve the problem. In the new algorithm, the boundaries of the micro-images are not confined to the physical lens boundaries and are normally larger. To achieve the maximum effective viewing width, each micro-image is arranged according to rules determined by several constraints. The considerations in the selection of optimal parameters are discussed, and new definitions related to this issue are given.
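One way to picture micro-images that are wider than their lenses is to project each lens aperture from a chosen viewing point back onto the display plane: the projected footprint is slightly wider than the lens pitch and shifts outward for off-centre lenses. The sketch below is a hypothetical one-dimensional illustration of that projection idea under a pinhole-lens assumption; it is not the paper's algorithm, and all names are invented for illustration:

```python
def microimage_layout(n_lenses, pitch, gap, view_dist):
    """Hypothetical 1-D layout showing micro-image boundaries that are
    larger than the lens boundaries: each micro-image is widened by the
    projection factor (D + g) / D and its centre is shifted so that all
    micro-images project through their lenses toward a common viewing
    zone centred in front of the array. Returns (left, right) edges."""
    scale = (view_dist + gap) / view_dist   # widening factor, > 1
    width = pitch * scale                   # micro-image width > pitch
    centre = (n_lenses - 1) / 2.0
    layout = []
    for i in range(n_lenses):
        lens_x = (i - centre) * pitch
        # micro-image centre = projection of the lens centre from the
        # central viewing point onto the display plane
        mi_centre = lens_x * scale
        layout.append((mi_centre - width / 2.0, mi_centre + width / 2.0))
    return layout
```

A convenient property of this projection is that the widened micro-images still tile the display plane exactly, since neighbouring centres are separated by precisely one micro-image width.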
The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television without adverse psychological effects. To create genuinely engaging three-dimensional television programmes, a virtual studio is required that can generate, edit and integrate 3D content involving both virtual and real scenes. This paper presents, for the first time, the procedures, factors and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described; in this model, each micro-lens that captures a different elemental image of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured integral 3D images. The method of calculating depth from disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD is proposed and verified, further improving the precision.
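The depth-extraction ingredients named above (depth from disparity, multiple baselines, colour SSD) can be sketched in a few lines. The following is a generic illustration of those standard relations under a pinhole-camera assumption, not the paper's implementation; the multiple-baseline averaging follows the usual observation that disparity divided by baseline is constant for a given depth, and all function names are assumptions:

```python
import numpy as np

def colour_ssd(patch_a, patch_b):
    """Colour SSD: sum of squared differences taken over all colour
    channels of two image patches, rather than over intensity alone."""
    diff = patch_a.astype(float) - patch_b.astype(float)
    return float(np.sum(diff * diff))

def depth_from_disparity(focal, baseline, disparity):
    """Classic pinhole relation between depth and disparity: Z = f B / d."""
    return focal * baseline / disparity

def multibaseline_depth(focal, baselines, disparities):
    """Multiple-baseline idea: since d_i / B_i = f / Z is the same for
    every baseline, averaging the per-baseline estimates of 1/Z before
    inverting reduces the error of a single noisy disparity."""
    inv_z = np.mean([d / (focal * b) for b, d in zip(baselines, disparities)])
    return 1.0 / inv_z
```

In an integral image, adjacent micro-lenses play the role of the multiple cameras, so many small baselines are available from a single exposure.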