We present a general framework for the modeling and optimization of scalable large-format 3-D displays using multiple projectors. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors (e.g., tiled, superimposed, or combinations of the two) without manual adjustment. The framework introduces, for the first time, a unified paradigm that is agnostic to any particular configuration of projectors yet
robustly optimizes the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high-resolution stereoscopic video at real-time interactive frame rates on commodity graphics hardware. Through complementary polarization, the framework creates high-quality multi-projector 3-D displays at low hardware and operational cost for a variety of applications including digital cinema and visualization.
This paper proposes a real-time 3-D user interface using multiple, possibly uncalibrated, cameras. It tracks the user's pointer in real time and solves point correspondences across all the cameras. These correspondences form spatio-temporal "traces" that serve as a medium for sketching in a true 3-D space. Alternatively, they may be interpreted as gestures or control information to trigger particular actions. Through view synthesis techniques, the system enables the user to change and seemingly manipulate the viewpoint of the virtual scene even in the absence of camera calibration. It also serves as a flexible, intuitive, and portable mixed-reality display system. The proposed system has numerous implications in interaction and design, especially as a general interface for creating and manipulating various forms of 3-D media.
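The spatio-temporal traces described above can be thought of as per-frame pointer detections keyed by camera, where detections sharing a timestamp are in correspondence. A minimal sketch of one possible representation follows; the `Trace` class, its fields, and the camera ids are hypothetical illustrations, not the system's actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Trace:
    # One spatio-temporal trace: a list of frames, where each frame maps
    # camera_id -> (x, y) pointer detection. Detections within the same
    # frame are assumed to correspond across cameras (hypothetical model).
    frames: list = field(default_factory=list)

    def add_frame(self, detections):
        self.frames.append(dict(detections))

    def correspondences(self, cam_a, cam_b):
        # Point correspondences between two cameras, recovered simply by
        # pairing detections that share a timestamp (i.e., a frame index).
        return [(f[cam_a], f[cam_b]) for f in self.frames
                if cam_a in f and cam_b in f]

trace = Trace()
trace.add_frame({0: (120.0, 80.0), 1: (95.0, 82.5)})
trace.add_frame({0: (122.5, 78.0)})                   # camera 1 missed this frame
trace.add_frame({0: (125.0, 76.5), 1: (101.0, 79.0)})
pairs = trace.correspondences(0, 1)                   # two corresponding point pairs
```

Frames where a camera misses the pointer simply contribute no pair, which keeps the correspondence set consistent without any optimization step.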
This paper presents a practical framework for creating and visualizing interactive 3-D media using an uncalibrated projector-camera system. The proposed solution uses light patterns that temporally encode the projector's coordinate system, solving the traditionally challenging multiframe correspondence problem by straightforward decoding instead of computationally expensive multiframe optimization. Two sets of coded light patterns (black/white stripes and colored 2×2 blocks, both of varying spatial resolutions) are presented and compared. The resulting correspondences are directly used as a compelling form of interactive 3-D media through techniques including three-frame view synthesis, multiframe view synthesis using multiple three-frame groupings, and even single-camera view interpolation. It is shown that adapting the rendering order of the correspondences with respect to the projector's coordinate system ensures correct visibility in the synthesized views. Experimental results demonstrate that the framework works well for various real-world scenes, including those with multiple objects and textured surfaces. The framework, along with the resulting correspondences, also has implications for many other computer vision and image processing applications, especially those that require multiframe correspondences.
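The temporal-encoding idea can be illustrated with plain binary stripe patterns: each successively finer pattern contributes one bit of the projector column index, so a camera pixel recovers its corresponding projector coordinate by concatenating the bits it observed. This is a minimal sketch under that assumption; the paper's actual stripe and colored-block codes may differ in detail.

```python
def stripe_patterns(width):
    # Binary black/white stripe patterns, coarsest to finest: pattern k
    # holds bit k (from the most significant end) of each column index.
    n = width.bit_length() - 1          # log2(width) patterns, width a power of 2
    return [[(col >> (n - 1 - k)) & 1 for col in range(width)]
            for k in range(n)]

def decode(bits):
    # A camera pixel observed one bit per projected pattern; concatenating
    # them reconstructs the projector column index directly -- no
    # multiframe optimization needed.
    col = 0
    for b in bits:
        col = (col << 1) | b
    return col

patterns = stripe_patterns(8)               # 3 patterns suffice for 8 columns
observed = [p[5] for p in patterns]         # what a pixel viewing column 5 sees
decode(observed)                            # recovers column 5
```

Because the code is positional, doubling the projector resolution costs only one extra pattern, which is why varying spatial resolutions appear in the pattern sets.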
Adaptive chain coding algorithms based on multiple templates are developed. One algorithm differentially encodes a curve using a directional template whose angle is dynamically scaled to accommodate a variety of curvature properties. A second algorithm differentially encodes using a template with a small angular range, with another template used occasionally for reorientation when abrupt changes occur in the properties of the underlying curve. By exploiting the piecewise regularity of most curves, our techniques provide substantially more accurate and efficient encodings than standard chain coding algorithms.
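The differential encoding that these algorithms build on can be sketched with standard 8-direction (Freeman) chain codes: absolute direction symbols are replaced by direction *changes*, which stay small along smooth curve segments. The adaptive template scaling itself is omitted here; this is only a sketch of the underlying differential step.

```python
# 8-connected Freeman directions: index i points at angle i * 45 degrees.
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def chain_code(points):
    # Absolute Freeman codes for a curve given as successive 8-connected pixels.
    steps = [(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(points, points[1:])]
    return [DIRS.index(s) for s in steps]

def differential(codes):
    # Differential chain code: first symbol absolute, then direction changes
    # mod 8. Piecewise-regular curves yield mostly small symbols, which an
    # adaptive template can encode with fewer bits.
    return [codes[0]] + [(b - a) % 8 for a, b in zip(codes, codes[1:])]

curve = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]   # an L-shaped pixel curve
codes = chain_code(curve)                          # [0, 0, 2, 2]
diffs = differential(codes)                        # [0, 0, 2, 0]
```

On the straight runs the differential symbols are zero, illustrating why restricting the template to a small angular range (with occasional reorientation) pays off on typical curves.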