We report on the development of a highly scalable head-tracking system capable of tracking many users.
Throughout the operating area, a series of high-speed (4 kHz) near-infrared LED-based Digital Light Processing (DLP) picoprojectors provide overlapping illumination of the volume. Each projector outputs a sequence of binary images which encode the position of each pixel within the projected image as well as an identifier sequence for the
projector. Overlapping projectors use differing temporal multiplexing to allow sensor discrimination and background rejection. Pixel positions from multiple projectors received by each sensor are triangulated to obtain
position and orientation.
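To make the decoding and triangulation stages concrete, here is a minimal sketch. The abstract does not specify the binary pattern code, so a Gray-code pixel encoding is assumed here for illustration; the closest-point midpoint method stands in for whatever triangulation the actual system uses.

```python
def gray_to_binary(g):
    """Decode a reflected-binary (Gray) code word to a plain integer.

    A sensor that accumulates the projector's bit sequence would call
    this to recover the pixel index it sits under.
    """
    mask = g >> 1
    while mask:
        g ^= mask
        mask >>= 1
    return g


def triangulate(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two rays p = o + t*d.

    Each decoded pixel, via projector calibration, defines a ray; rays
    from two overlapping projectors are intersected in a least-squares
    sense to locate the sensor.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    w0 = [p - q for p, q in zip(o1, o2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b              # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p1 = [o + t1 * v for o, v in zip(o1, d1)]
    p2 = [o + t2 * v for o, v in zip(o2, d2)]
    return [(x + y) / 2 for x, y in zip(p1, p2)]
```

With a calibrated ray per decoded pixel, two such rays suffice for position; orientation follows from multiple sensors rigidly mounted on the tracked head.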
Effective fusion of multi-parametric heterogeneous data is essential for better object identification, characterization
and discrimination. In this report we discuss a practical example of fusing the data provided by imaging and nonimaging
electro-optic sensors. The proposed approach allows the processing, integration and interpretation of such
data streams from the sensors. Practical examples of improved accuracy in discriminating similar but non-identical
objects are presented.
This paper discusses the design and development of an autonomous intelligent modular surveillance system (AIM2S).
The system represents a novel class of "smart" surveillance platforms that integrates multiple sensors on an open-bus
chassis. AIM2S modular architecture allows plug & play system operation, enabling its performance as a standalone unit
or in conjunction with other systems. The integration of multiple smart sensors facilitates the effective fusion of
heterogeneous data sources to obtain previously unavailable state information.
New generations of infrared transmitting optical domes are currently being developed to improve the drag, range, speed,
and payload capabilities of missiles. Traditionally, these domes have been hemispheres, which can be well characterized
with conventional optical interferometers. These interferometers, however, are not generally well-suited to the new
shapes, such as tangent ogives, because the transmitted and reflected wavefronts can differ by many wavelengths from
the planar or spherical wavefronts that are normally used as a reference. In this paper, we present an innovative
technique to characterize unconventional optical components such as aspheric domes, mirrors, and freeform optics. The
measurements are based on a novel instrument that combines an instantaneous digital phase-shifting infrared
interferometer with a dynamic spatial light modulator that extends the range of the interferometer. The goal of the
measurement is to determine the wavefront error, within a small fraction of a wavelength, caused by the deviation of the
optical component from a perfect geometrical shape of any type (i.e., not necessarily spherical). Experimental results are presented
from several infrared components.
The implementation of a time multiplexed display capable of eight simultaneously visible viewing zones will be
described. The system employs a high speed digital micromirror device (DMD) to allow for the high framerate essential
for flicker-free display of multiple viewing zones. A combination of custom graphics processing unit (GPU)
programming and a correspondingly optimized field programmable gate array (FPGA) DMD driver allows for real time
interactive rendering of scenes. The rendering engine is built entirely from off-the-shelf components, using a standard DVI-D
interface for data transfer to the DMD interface. A rapidly switched LED light engine is employed to overcome the
speed limitations of color wheel light sources, as well as providing a highly saturated color gamut. Selection of viewing
zones is achieved by the use of a high-speed shutter interfaced directly to the DMD driver for precise synchronization.
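The framerate requirement mentioned above can be budgeted with simple arithmetic. The figures below are illustrative assumptions, not values from the paper: each viewing zone needs full-color video at a flicker-free refresh rate, and the DMD delivers 1-bit frames.

```python
def required_binary_rate(zones, refresh_hz, colors=3, bitplanes=8):
    """Binary (1-bit) DMD frames per second needed so that every
    viewing zone sees full-color, full-bit-depth video at refresh_hz."""
    return zones * refresh_hz * colors * bitplanes


# Illustrative: 8 zones at 60 Hz with 24-bit RGB color
rate = required_binary_rate(8, 60)   # 11520 binary frames/s
```

This is why a rapidly switched LED light engine matters: a mechanical color wheel cannot cycle colors at anything like this per-zone rate.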
Progress in the performance of Spatial Light Modulators (SLM), Graphical Processing Units (GPU) and off-the-shelf
high-speed data busses has led to advances in the design of multiscopic 3D displays based on temporal multiplexing.
Having developed a proof of concept prototype capable of displaying four independent viewing zones, we report on
progress in the development of an improved system incorporating 8-12 viewing zones and a large format display. The
designs under development employ a high-speed LCD shutter operating synchronously with a high-speed Digital
Micromirror Device (DMD) based projector that forms multiple viewing zones via persistence of vision. We report on
progress in the development of the optical design and the corresponding hardware and software.
Recent advances in the framerate of digital micromirror devices (DMDs), coupled with a corresponding increase in the
rendering and data transfer capabilities of graphical processing units (GPUs), have greatly increased the viability
of temporal multiplexing approaches for 3D display technology. Employing these advances, an initial proof-of-concept
four-zone multiscopic display has been demonstrated, with an 8-16 zone large-format display in development. Both
designs employ a high-speed LCD shutter located at the pupil of the optical train, synchronized with a high-framerate
DMD projector, allowing for the formation of multiple viewing zones via persistence of vision. We present results and
ongoing progress in the design of the optical train, the high-speed projector, and the associated rendering system.
Two approaches to designing autostereoscopic displays capable of providing collaborative viewing of real-time 3D scenery will be presented and discussed. Both techniques provide multiscopic "look-around" capabilities and are applicable to situation rooms or mobile command centers. In particular, we discuss a prospective use of these displays for interactive visualization of detailed three-dimensional models of urban areas, and the specific demands associated with managing and rendering large volumes of highly detailed information. The latest advances in scanning, survey, and registration in urban areas have provided a wealth of detailed three-dimensional data and imagery. Recent events have shown a severe need for systems capable of high-level 3D visualization of the homeland security challenges posed by terrorist actions and natural disasters within urban areas, as well as for military operations in urban terrain (MOUT). The capacity to visualize sightlines, airflow, flooding, and traffic in real-time 3D within dense urban environments is increasingly critical for military and civilian authorities, as well as urban planners and city managers. Development of high-quality 3D imaging systems is also critical for such areas as medical imaging, the gaming industry, mechanical design, and rapid prototyping.
Implementation of an efficient 3D display with high image quality is beneficial for a variety of applications, including the entertainment industry, surveillance centers, advanced engineering design, etc. A number of 3D display systems are currently under development, such as autostereoscopic 3D displays (ASD), and spatially multiplexed, volumetric, and (electro-)holographic systems. Temporally multiplexed ASD approaches have certain advantages as compared to other methods, especially in retaining the full resolution of the display and in providing large headboxes. The confluence of high-framerate digital micromirror devices, graphical processing units (GPUs) capable of specialized rendering, high-bandwidth commodity-grade computer busses (particularly PCI Express), and rapidly switchable, high-brightness LEDs has made a high-quality temporally multiplexed ASD viable. We report on the incorporation of the previously noted technologies within an ASD with multiple viewing zones and a look-around capability. In addition, the same technologies allow for a practical realization of the aspect-in-point display (APD) concept, which couples a temporally multiplexed display with faceted holographic optical elements to form a 3D image. In essence, the APD consists of a multiplex hologram that is electronically updated at high speed, incorporating many of the advantages of the former.
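The claim that commodity busses suffice can be checked with a quick estimate. The resolution and frame rate below are assumptions for illustration, not figures from the paper; the point is that 1-bit DMD planes carry modest payloads.

```python
def bus_bandwidth_mb_s(width, height, binary_fps):
    """Raw data rate for streaming 1-bit DMD frames, in MB/s."""
    bytes_per_frame = width * height // 8    # one bit per pixel
    return bytes_per_frame * binary_fps / 1e6


# e.g. XGA bit planes at ~11.5 kHz: roughly 1.1 GB/s, which fits
# comfortably within a PCI Express x16 link (~4 GB/s for PCIe 1.x)
bw = bus_bandwidth_mb_s(1024, 768, 11520)
```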
Embedded training enhances and maintains the skill proficiency of fleet/armor personnel by taking advantage of the capabilities built into or added onto operational systems, subsystems, or equipment. Physical Optics Corporation (POC) is developing a new scene projector system (a collimating, out-the-window display system) for simulation applications, where it can be fully integrated into tanks, automobiles, submarines, and other vehicles. This concept integrates advanced holographic technology with highly parallel Beowulf computer-cluster microprocessors.
We propose a novel true 3-D display based on holographic optics, called the Holographic Autostereoscopic Display (HAD), or, in its latest generation, Holographic Inverse Look-around and Autostereoscopic Reality (HILAR). It does not require goggles, unlike state-of-the-art 3-D systems, which do not work without them, and it has a table-like 360° look-around capability. Novel 3-D image-rendering software, based on Beowulf PC-cluster hardware, is also discussed.
Proc. SPIE 4708, Sensors, and Command, Control, Communications, and Intelligence (C3I) Technologies for Homeland Defense and Law Enforcement
KEYWORDS: Unmanned aerial vehicles, Sensors, Image segmentation, Image processing, Geographic information systems, Data processing, Navigation systems, Commercial off the shelf technology, Global Positioning System, TIN
Catastrophe-theory-based Autonomous Terrain-Feature UAV Relative (CATFUR) navigation is geolocation without the Global Positioning System (GPS). As fully autonomous navigation based only on recognition of terrain features, it can be integrated with GPS or other state-of-the-art navigation systems, or can operate independently. CATFUR navigation is based on the integration/comparison and sensor fusion of DEM (digital elevation map) 3-D data, processed by commercial off-the-shelf geographic information system (COTS GIS) environments into a vectorial graph. CATFUR obtains data from the vertical takeoff unmanned air vehicle (VTUAV) COTS inertial and visual sensors, and from components of an azimuth-elevation local positioning system (LPS). Real-time data processing can be performed on highly parallel 2 in. x 3 in. application-specific hardware. Typical point or line catastrophic singularities on surfaces are edges, ridges, wrinkles, and surface cracks. Such singularities have a fixed location on the surface. In contrast, catastrophes have the unexpected property of not being fixed to a surface. Catastrophes can be the basis of GPS-independent relative navigation, based only on the existence of folded terrain, even without landmarks. Since mountains do not move, we can use mountain guidance, much as star guidance has been used for centuries to navigate the oceans.
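As a toy illustration of the kind of terrain-feature extraction the GIS stage performs (this is not the actual CATFUR algorithm, which builds a vectorial graph of catastrophic singularities), one can flag candidate ridge/fold cells in a DEM grid as strict local elevation maxima along a grid axis:

```python
def ridge_candidates(dem):
    """Mark interior DEM cells that are strict local elevation maxima
    along the row or column direction -- a crude ridge indicator.

    dem: rectangular list of lists of elevations.
    Returns a same-shaped boolean mask.
    """
    rows, cols = len(dem), len(dem[0])
    mask = [[False] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            h = dem[i][j]
            mask[i][j] = (h > dem[i][j - 1] and h > dem[i][j + 1]) or \
                         (h > dem[i - 1][j] and h > dem[i + 1][j])
    return mask
```

A real pipeline would thin and vectorize such masks into the ridge-line graph matched against onboard sensor data.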
In this paper, a novel teleparamedic robot concept, based on high practicality and economy, is presented. This new UGV (Unmanned Ground Vehicle) features haptic-feedback-based driving and teleparamedic robotic operation based on true 3-D visualization. The robotic operations include soldier evacuation and two basic FAM (First Aid Measure) modes.
The search for a dynamic recording medium that can be used in real time without the need for processing has become a critical issue in the development of practical neural network systems, correlators, all-optic switches, image and signal processors, and optical storage devices. A typical optical material responds to changes in the intensity, polarization, or wavelength of the illuminating light. The optical material developed and used for neural network applications responds to changes in the polarization of blue or green laser light. Implementing a neural network or performing optically-controlled acoustic beam steering requires a high-speed read/write/erase optical memory. POC's erasable dye polymer material offers a high read/write/erase data rate, nondestructive reading, fast data access, high storage density, overwrite capability, and long cycle life.
In this paper, the phase-space formulation is applied to the evaluation of nonimaging optics sub-systems. Brightness (luminance) efficiency is introduced as a Figure of Merit for system performance maximization procedures that can be applied, for example, to plasma diagnostics (by utilizing coherent fiber imaging).
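For orientation, the phase-space quantities underlying such a figure of merit are the étendue and the luminance in their standard textbook forms (the paper's specific definition of brightness efficiency may differ in detail):

```latex
G \;=\; n^{2}\!\iint \cos\theta \, dA \, d\Omega ,
\qquad
L \;=\; \frac{d^{2}\Phi}{\cos\theta \, dA \, d\Omega} .
```

Since étendue cannot decrease through a passive optical system, the brightness (luminance) efficiency $\eta_L = L_\mathrm{out}/L_\mathrm{in}$ is bounded by unity, which makes it a natural quantity to maximize in nonimaging sub-system design.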
The WKB (Wentzel-Kramers-Brillouin) method, well-known in quantum mechanics, is applied, in the second-order approximation, to non-uniform Bragg structures such as rugate dielectric thin films, sinusoidal gratings, and holograms. In this paper, an analytic WKB solution of the problem is presented, including numerical results.
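For reference, the first-order WKB ansatz for a wave in a medium with slowly varying local wavenumber $k(x)$ takes the standard textbook form (the second-order treatment adds the next correction to the phase):

```latex
\psi(x) \;\approx\; \frac{C_{\pm}}{\sqrt{k(x)}}\,
\exp\!\left( \pm i \int^{x} k(x')\,dx' \right),
\qquad
\left| \frac{dk/dx}{k^{2}} \right| \ll 1 ,
```

where the validity condition requires the structure to vary slowly on the scale of a local wavelength, which is what makes the approach suitable for gently chirped or apodized Bragg profiles.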
A practical, special purpose digital optical computer architecture is proposed which exploits the
energetic advantages of a Wave Particle Duality computer. A specific, practical algorithm is proposed
which contains operations that proceed at an energy cost per operation of less than kT.
Proc. SPIE 1296, Advances in Optical Information Processing IV
KEYWORDS: Digital signal processing, Optical signal processing, Digital photography, Quantum efficiency, Computing systems, Monte Carlo methods, Data processing, Neural networks, Analog electronics, Wavelet packet decomposition
We show that both from the computational viewpoint and from the photon efficiency viewpoint, digital processors require more information input than analog processors to achieve the same result. Thus, both can accomplish the same tasks (a version of Church's thesis) but only at a previously unnoticed price.