Seamless integration of 3D acquisition and 3D display systems offers an enhanced experience in 3D visualization of real-world objects and scenes. A vivid representation of captured 3D objects on a glasses-free 3D display screen
can bring viewers a realistic viewing experience, as if they were observing the real-world scene. Although technologies for 3D acquisition and 3D display have advanced rapidly in recent years, little effort has been devoted to studying the
seamless integration of these two different aspects of 3D technologies. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system and an autostereoscopic multiview 3D display for real-time light field
capture and display. This paper covers both the architectural design and the implementation of the hardware and software of this integrated 3D system. A prototype of the integrated system has been built to demonstrate the real-time 3D acquisition and display capabilities of the proposed system.
Multiview three-dimensional (3D) display is able to provide horizontal parallax to viewers, with high-resolution, full-color images presented to each view. Most multiview 3D display systems are designed and implemented using multiple projectors, each generating images for one view. Although this multi-projector design strategy is conceptually straightforward, its implementation often leads to a very expensive system and complicated calibration procedures. Even for a multiview system with a moderate number of projectors (e.g., 32 or 64), the cost of a multi-projector 3D display system may become prohibitive due to the expense and complexity of integrating multiple projectors. In this article, we describe an optical design technique for a class of multiview 3D display systems that use only a single projector. In this single-projector multiview (SPM) design, multiple views for the 3D display are generated in a time-multiplexed fashion by a single high-speed projector with specially designed optical components: a scanning mirror and a reflective mirror array. Images of all views are generated sequentially and projected via this specially designed optical system from different viewing directions toward a 3D display screen. The single projector is therefore able to generate an equivalent number of multiview images from multiple viewing directions, fulfilling the tasks of multiple projectors. An obvious advantage of the proposed SPM technique is the significant reduction in cost, size, and complexity, especially when the number of views is high. The SPM strategy also avoids the time-consuming procedures for multi-projector calibration. The design method is flexible and scalable and can accommodate systems with different numbers of views.
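The time-multiplexed design implies a simple throughput constraint: the single projector's frame rate must equal the per-view refresh rate multiplied by the number of views. A minimal sketch of this budget, using illustrative numbers that are assumptions rather than parameters of the actual prototype:

```python
# Illustrative timing budget for a time-multiplexed single-projector
# multiview (SPM) display. The refresh rate and view counts below are
# assumptions for illustration, not measured prototype parameters.

def required_projector_rate(num_views: int, per_view_refresh_hz: float) -> float:
    """Frame rate the single projector must sustain so that every
    view is refreshed at the target per-view rate."""
    return num_views * per_view_refresh_hz

for views in (16, 32, 64):
    rate = required_projector_rate(views, 60.0)
    print(f"{views} views at 60 Hz per view -> {rate:.0f} fps projector")
```

This is why a high-speed projection engine is essential: the required frame rate grows linearly with the number of views.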
Dynamic volumetric medical imaging (4DMI) has reduced motion artifacts, increased early diagnosis of small mobile tumors, and improved target definition for treatment planning. High-speed cameras for video, X-ray, or other forms of sequential imaging allow live tracking of external or internal movement, which is useful for real-time image-guided radiation therapy (IGRT). However, no 4DMI technique can track organ motion in real time, and no camera has been correlated with 4DMI to show volumetric changes. After a brief review of various IGRT techniques, we propose a fast 3D camera for live-video stereovision; an automatic surface-motion identifier to classify body or respiratory motion; a mechanical model that synchronizes the external surface movement with the internal target displacement through combined use of real-time stereovision and pre-treatment 4DMI; and dynamic multi-leaf collimation for adaptively aiming at the moving target. Our preliminary results demonstrate that the technique is feasible and efficient in IGRT of mobile targets. A clinical trial has been initiated to validate its spatial and temporal accuracies and its dosimetric impact for intensity-modulated RT (IMRT), volumetric-modulated arc therapy (VMAT), and stereotactic body radiotherapy (SBRT) of mobile tumors. The technique can be extended to surface-guided stereotactic needle insertion in biopsy of small lung nodules.
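The mechanical model correlating external surface movement with internal target displacement is not detailed here; as a minimal stand-in, a least-squares linear mapping fitted from pre-treatment 4DMI phase data could look like the following sketch. All data and names are synthetic and hypothetical, and the actual model is more elaborate:

```python
import numpy as np

# Sketch: correlate external surface motion with internal target
# displacement via a least-squares linear fit over 4DMI breathing
# phases. Synthetic data; the paper's mechanical model is more complex.

# Per-phase training data from pre-treatment 4DMI: external surface
# amplitude (mm) and corresponding internal target displacement (mm).
external = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
internal = np.array([0.0, 1.8, 3.9, 6.1, 8.0, 10.1])

# Fit internal ~ a * external + b.
a, b = np.polyfit(external, internal, deg=1)

def predict_internal(ext_mm: float) -> float:
    """Predict internal target displacement from a live surface reading."""
    return a * ext_mm + b

print(predict_internal(2.5))  # roughly 5 mm for this synthetic data
```

At treatment time, the live stereovision reading of the surface would drive this mapping to estimate where the internal target currently is.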
Progress in 3D display systems and user interaction technologies enables more effective visualization of 3D information. Together they yield a realistic representation of 3D objects and simplify our understanding of their complexity and of the spatial relationships among them. In this paper, we describe an autostereoscopic multiview 3D display system with real-time user interaction capability. The design principle of this autostereoscopic multiview 3D display system is presented, together with the details of its hardware/software architecture. A prototype was built and tested based upon multiple projectors and a horizontally anisotropic optical display structure. Experimental results illustrate the effectiveness of this novel 3D display and user interaction system.
Rapid optical three-dimensional (O3D) imaging systems provide accurate digitized 3D surface data in real time, with no patient contact and no radiation. The accurate 3D surface images offer crucial information in image-guided radiation therapy (IGRT) for accurate patient repositioning and respiration management. However, application of O3D imaging techniques to image-guided radiotherapy has been clinically challenged by body deformation, pathological and anatomical variations among individual patients, the extremely high dimensionality of the 3D surface data, and irregular respiratory motion. In existing clinical radiation therapy (RT) procedures, target displacements are caused by (1) inter-fractional anatomy changes due to weight, swelling, and food/water intake; (2) intra-fractional variations from anatomy changes within a treatment session due to voluntary and involuntary physiologic processes (e.g., respiration, muscle relaxation); (3) patient setup misalignment in daily repositioning due to user errors; and (4) changes of markers or positioning devices. Presently, no viable solution exists for in-vivo tracking of target motion and anatomy changes during beam-on time without exposing the patient to additional ionizing radiation or a high magnetic field. Current O3D-guided radiotherapy systems rely on selected points or areas of the 3D surface to track surface motion. The configuration of these markers or areas may change over time, making quantification and interpretation of respiration patterns inconsistent. To meet the challenge of performing real-time respiration tracking with O3D imaging technology in IGRT, we propose a new approach to automatic respiration motion analysis based on a linear dimensionality reduction technique, principal component analysis (PCA). The optical 3D image sequence is decomposed with PCA into a limited number of independent (orthogonal) motion patterns, i.e., a low-dimensional eigenspace spanned by eigenvectors.
New images can be accurately represented as weighted sums of those eigenvectors, and the weights can be easily discriminated with a trained classifier. We developed the algorithms and software and integrated them with an O3D imaging system to perform respiration tracking automatically. The resulting respiration tracking system requires no human intervention during tracking. Experimental results show that our approach to respiration tracking is more accurate and robust than methods using manually selected markers, even in the presence of incomplete imaging data.
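A minimal sketch of this eigenspace decomposition, using synthetic data in place of real O3D surface sequences (NumPy assumed; the actual system's preprocessing and trained classifier are not shown):

```python
import numpy as np

# Sketch of PCA-based respiration analysis: an optical 3D image
# sequence is decomposed into a few orthogonal motion patterns
# (eigenvectors), and new frames are represented by their weights in
# that eigenspace. Data here are synthetic; the real system operates
# on dense 3D surface maps.

rng = np.random.default_rng(0)

# Synthetic "surface sequence": 200 frames, each flattened to 500
# values, driven by one sinusoidal breathing pattern plus noise.
t = np.linspace(0, 20 * np.pi, 200)
pattern = rng.standard_normal(500)            # one spatial motion pattern
frames = np.outer(np.sin(t), pattern) + 0.05 * rng.standard_normal((200, 500))

# PCA via SVD of the mean-centered sequence.
mean = frames.mean(axis=0)
U, S, Vt = np.linalg.svd(frames - mean, full_matrices=False)
eigvecs = Vt[:3]                              # keep the top 3 motion patterns

# A new frame is represented by its weights (projections) on the
# eigenvectors; these low-dimensional weights feed a trained classifier.
new_frame = np.sin(0.7) * pattern
weights = eigvecs @ (new_frame - mean)
print(weights.shape)  # (3,)
```

Because the sequence is dominated by a single breathing pattern, the first weight carries nearly all of the signal, which is exactly what makes the low-dimensional representation easy to classify.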
Recent developments in optical 3D surface imaging technologies provide better ways to digitize 3D surfaces and their motion in real time. This non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market, differing in dimensions, speed, and accuracy. For clinical applications, accuracy, reproducibility, and robustness across widely heterogeneous skin colors, tones, textures, shapes, and ambient lighting conditions are crucial. To date, however, no systematic approach exists for evaluating the performance of different 3D surface imaging systems. In this paper, we present a systematic performance assessment approach for 3D surface imaging systems in medical applications. We use this approach to examine a new real-time surface imaging system we developed, dubbed the "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments cover accuracy, field of view, coverage, repeatability, speed, and sensitivity to environment, texture, and color.
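As an illustration of how such assessments can be quantified, the sketch below computes two plausible metrics on synthetic data: accuracy as RMS deviation from a known reference surface, and repeatability as RMS spread across repeated scans. These metric definitions are our assumptions, not the exact protocol used for the Neo3D Camera:

```python
import numpy as np

# Sketch of two assessment metrics on synthetic data: accuracy as RMS
# deviation from a known reference surface, repeatability as RMS
# spread across repeated scans. Definitions are illustrative
# assumptions, not the paper's exact protocol.

rng = np.random.default_rng(1)
reference = np.zeros((100, 100))                              # known flat target (mm)
scans = reference + 0.1 * rng.standard_normal((5, 100, 100))  # 5 repeated scans

def rms_accuracy(scan, ref):
    """RMS point-to-point deviation from the reference surface (mm)."""
    return float(np.sqrt(np.mean((scan - ref) ** 2)))

def rms_repeatability(scans):
    """RMS spread of repeated scans about their per-point mean (mm)."""
    return float(np.sqrt(np.mean((scans - scans.mean(axis=0)) ** 2)))

print(rms_accuracy(scans[0], reference))   # ~0.1 mm for this noise level
print(rms_repeatability(scans))            # ~0.09 mm
```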
In this article, we describe a volumetric 3D display system based on a high-speed DLP™ (Digital Light
Processing) projection engine. Existing two-dimensional (2D) flat-screen displays often lead to ambiguity and confusion
when presenting high-dimensional data and graphics because they lack true depth cues. Even with the help of powerful 3D
rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial
relationship or depth information correctly and effectively. Essentially, 2D displays have to rely on the human
brain's ability to piece together a 3D representation from 2D images. Despite the impressive capability of the human visual
system, its perception is not reliable when certain depth cues are missing.
In contrast, volumetric 3D display technologies to be discussed in this article are capable of displaying 3D
volumetric images in true 3D space. Each "voxel" in a 3D image (analogous to a pixel in a 2D image) is physically located at
the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D
image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues, allowing the
human visual system to perceive 3D objects truthfully. It yields a realistic spatial representation of 3D objects and
simplifies our understanding of their complexity and of the spatial relationships among them.
In this paper, we provide a thorough overview of recent advances in 3D surface imaging technologies. We focus
particularly on non-contact 3D surface measurement techniques based on structured illumination. The high-speed and
high-resolution pattern projection capability offered by the digital light processing (DLP) technology, together with the
recent advances in imaging sensor technologies, may enable new generation systems for 3D surface measurement
applications that provide much better functionality and performance than existing ones, in terms of speed, accuracy,
resolution, size, cost, and ease of use. Performance indexes of 3D imaging systems in general are discussed and various
3D surface imaging schemes are categorized, illustrated, and compared. Calibration techniques are also discussed since
they play critical roles in achieving the required precision. Benefits and challenges of using DLP technology in 3D imaging
applications are discussed. Numerous applications of 3D technologies are discussed with several examples.
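As one concrete example of a structured-illumination scheme, three-step phase-shifting profilometry recovers the wrapped phase at each camera pixel in closed form from three sinusoidal fringe patterns shifted by 120 degrees. A minimal single-pixel sketch (NumPy assumed; a real system applies this per pixel and then unwraps the phase and triangulates):

```python
import numpy as np

# Three-step phase-shifting: intensities under shifts of -120, 0, and
# +120 degrees give the wrapped phase at a pixel in closed form.

def wrapped_phase(i1, i2, i3):
    """Wrapped phase from intensities under -120/0/+120 degree shifts."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Synthesize intensities for a known phase and check recovery.
A, B, phi = 0.5, 0.4, 1.2                  # ambient, modulation, true phase
shifts = np.array([-2.0 * np.pi / 3.0, 0.0, 2.0 * np.pi / 3.0])
i1, i2, i3 = A + B * np.cos(phi + shifts)

print(wrapped_phase(i1, i2, i3))  # recovers 1.2
```

Note that the ambient term A and the modulation depth B cancel out of the arctangent, which is one reason phase-shifting methods are robust to surface reflectivity variations.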
Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition. Due to the lack of low-cost, robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems of 2D approaches while also addressing the limitations of other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color-encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete coverage of the entire face. Images are captured within a fraction of a second, and full-frame 3D data is produced within a few seconds. The described method provides much better data coverage and accuracy in areas with sharp features or fine details (such as the nose and eyes).
Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we varied the lighting conditions and the angle of image acquisition in the "field." These tests show that matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transforming FR from a lab technology into a real-world biometric solution.
We present preliminary results of an automated fingerprint pattern classification approach based on a novel neural network structure, the fuzzy cerebellar model arithmetic computer (CMAC) neural network. The fingerprint images are first preprocessed to generate ridge flow; the Karhunen-Loève (K-L) transform is then used to extract features from the ridge-flow images. The resulting feature vector is sent to a fuzzy CMAC neural network for classification. Excellent results were obtained in our preliminary experiments on the two-class problem.
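For illustration, a plain (non-fuzzy) CMAC trained with the delta rule on a toy two-class problem can be sketched as follows; the fuzzy CMAC and the K-L features used in our experiments are more elaborate, and all names and parameters here are hypothetical:

```python
import numpy as np

# Minimal CMAC (tile-coding) sketch: each of several offset tilings
# maps an input to one cell, the output is the sum of the selected
# cells' weights, and training distributes the delta-rule error over
# those cells. Toy 2D data stand in for K-L feature vectors.

class CMAC:
    def __init__(self, n_tilings=8, tiles_per_dim=10, lr=0.2, seed=0):
        self.n_tilings = n_tilings
        self.tiles = tiles_per_dim
        self.lr = lr
        rng = np.random.default_rng(seed)
        # Fixed random offsets displace each tiling, creating overlap.
        self.offsets = rng.uniform(0, 1.0 / tiles_per_dim, (n_tilings, 2))
        self.w = np.zeros((n_tilings, tiles_per_dim, tiles_per_dim))

    def _cells(self, x):
        # Map an input in [0,1]^2 to one cell index per tiling.
        idx = np.floor((x + self.offsets) * self.tiles).astype(int)
        return np.clip(idx, 0, self.tiles - 1)

    def predict(self, x):
        cells = self._cells(np.asarray(x, float))
        return sum(self.w[t, i, j] for t, (i, j) in enumerate(cells))

    def train(self, x, target):
        cells = self._cells(np.asarray(x, float))
        err = target - self.predict(x)
        for t, (i, j) in enumerate(cells):
            self.w[t, i, j] += self.lr * err / self.n_tilings

# Toy two-class problem: class +1 if x0 > 0.5, else -1.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (400, 2))
y = np.where(X[:, 0] > 0.5, 1.0, -1.0)
net = CMAC()
for _ in range(20):
    for xi, yi in zip(X, y):
        net.train(xi, yi)

acc = np.mean([np.sign(net.predict(xi)) == yi for xi, yi in zip(X, y)])
print(f"training accuracy: {acc:.2f}")
```

The overlapping tilings are what give a CMAC its fast, local learning: each update touches only the handful of weights activated by that input.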
Successful vibration reduction of machine tools during the machining process can improve productivity, increase quality, and reduce tool wear. This paper presents our initial investigation into the application of smart-material technologies to machine tool vibration control, using magnetostrictive actuators and electrorheological elastomer dampers on an industrial Sheldon horizontal lathe. The dynamics of the machining process are first studied, revealing the complexity of the machine tool's vibration response and the challenge it poses to active control techniques. The active control experiments show encouraging results. The electrorheological elastomer damping device for active/passive vibration control provides significant vibration reduction in the high-frequency range and great improvement in workpiece surface finish. The research presented in this paper demonstrates that combining active and active/passive vibration control techniques is very promising for machine tool vibration control.
A linear actuator system for multi-dimensional structure control using the magnetostrictive material Terfenol-D has been designed, built, and tested by Intelligent Automation, Inc. (IAI). The actuator assembly incorporates an instrumented Terfenol-D rod, an excitation coil to provide the magnetic field, a permanent magnet assembly to provide a magnetic bias field, and a mechanical preload mechanism. The actuator prototype is 2.0 inches in diameter and 8 inches long, and provides a peak-to-peak stroke of 0.01 inches. A linear model was also established to characterize the behavior of the actuator for small motions. Based on this prototype, we have studied a six-degree-of-freedom active vibration isolation system using a Stewart platform in a new configuration. IAI's final system is intended for precision control of a wide range of space-based structures as well as earth-based systems.
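As a sanity check, the reported stroke and rod length imply a peak-to-peak strain that can be compared with typical Terfenol-D figures; the saturation range quoted in the comments is a textbook ballpark, not a measurement of this actuator:

```python
# Back-of-the-envelope check of the reported stroke against the linear
# small-motion model (strain = stroke / rod length). Rod length and
# stroke come from the text; the Terfenol-D saturation range in the
# comment below is an assumed typical figure, not a measured value.

ROD_LENGTH_IN = 8.0   # active rod length (inches), from the text
STROKE_IN = 0.01      # peak-to-peak stroke (inches), from the text

strain_ppm = (STROKE_IN / ROD_LENGTH_IN) * 1e6
print(f"peak-to-peak strain: {strain_ppm:.0f} ppm")  # 1250 ppm

# Terfenol-D typically saturates around 1000-2000 ppm, so the reported
# stroke corresponds to driving the rod over much of its usable range.
```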