Speckle can seriously degrade image quality in laser-illuminated projection displays. Various solutions appropriate
to small microdisplay-based projectors are presented. Illumination and projection optical
system design is discussed in order to better understand the critical constraints of size, power consumption,
optical efficiency, and performance. DYOPTYKA's innovative solution, using a phase randomizing deformable
mirror, is described.
We present a projection optical system incorporating a dynamic optical element and we outline some of its
potential applications for digital projection displays. We describe experiments undertaken to validate one of
these applications, the correction of aberrations in a rear-projection television (RPTV) optical module where the
original fold mirror is replaced by a simple MEMS deformable mirror.
We present a projection optical system incorporating a dynamic optical element and we outline some of its potential applications for digital projection displays. We describe experiments undertaken to validate several of these applications, in particular: change of focus without mechanical motion of the focus group; correction of chromatic aberration; and correction of a variety of other aberrations. We conclude that a dynamic optical element can be used to improve the image quality achieved from a very simple digital projection optical system.
The use of correlation-based techniques to perform stereoscopic matching enables real-time frame rates to be achieved, although the disparity maps produced are not as accurate as those produced by more sophisticated methods. This paper presents three ideas aimed at decreasing the computation time of standard correlation-based techniques, pursued through two different approaches. Firstly, the number of comparisons is reduced by using MPEG-2 motion vectors to narrow the disparity search range to an optimised region, and by identifying areas of a scene which have remained static between frames so that the previous disparity can be reused. The second approach is a GPU implementation of stereo matching using the above ideas. The trade-off between error and frame rate can be controlled depending on the requirements of the application.
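A minimal sketch of the underlying correlation matching, using a sum-of-absolute-differences (SAD) cost over a narrowed disparity range; function and parameter names are our own illustrative choices, not the paper's:

```python
import numpy as np

def sad_disparity(left, right, y, x, d_min, d_max, w=3):
    """Best disparity for left-image pixel (y, x) by minimising the sum of
    absolute differences (SAD) over the narrowed range [d_min, d_max]."""
    # Cast to a signed type so subtraction of pixel values cannot wrap.
    patch_l = left[y - w:y + w + 1, x - w:x + w + 1].astype(np.int32)
    best_d, best_cost = d_min, np.inf
    for d in range(d_min, d_max + 1):
        # Candidate patch in the right image, shifted left by disparity d.
        patch_r = right[y - w:y + w + 1,
                        x - d - w:x - d + w + 1].astype(np.int32)
        cost = np.abs(patch_l - patch_r).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

Narrowing `[d_min, d_max]` (e.g. from motion vectors) reduces the inner loop count directly, which is where the computation-time saving comes from.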
The design, implementation, and preliminary evaluation of a volumetric stereoscopic 3D display are discussed. Pixels are rendered from different ranges of distance, or depth fields, in the 3D scene and displayed field-sequentially. An adaptive optics element is used to modulate wavefront curvature for each field such that its optical distance matches its depth in the 3D scene. This allows the observer to accommodate (focus) to various depths in the scene in the same
way as they do under natural viewing conditions. The enabling of appropriate accommodation is particularly useful in stereoscopic 3D displays, which are prone to accommodation-convergence conflict, hypothesised as a leading cause of visual discomfort. The
system has been implemented in a binocular design, i.e. fixed-viewpoint rather than autostereoscopic, using commercially-available liquid crystal microdisplays and deformable mirror adaptive optics components.
Many image processing systems have real-time performance
constraints. Systems implemented on general purpose processors
maximize performance by keeping the small fixed number of
available functional units, such as adders and multipliers, busy. In this
paper we investigate the use of programmable logic devices to
accelerate the execution of an application. Field Programmable Gate
Arrays (FPGAs) can be programmed to generate application specific
logic that alters the balance and type(s) of functional units to match application characteristics. In this paper we introduce a
geometric image distortion correction application. Real number support is a requirement in most image processing applications. We examine the suitability of fixed-point, floating-point, and logarithmic number systems for an FPGA implementation of this image processing application. Performance results are presented in terms of: (1) execution time, and (2) FPGA logic resource requirements.
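The fixed-point alternative can be illustrated with a short sketch. The Q16.16 word layout below is an assumed example for illustration, not the format evaluated in the paper:

```python
FRAC_BITS = 16  # assumed Q16.16 layout: 16 integer bits, 16 fractional bits

def to_fixed(x):
    """Quantise a real number to the fixed-point integer representation."""
    return int(round(x * (1 << FRAC_BITS)))

def fixed_mul(a, b):
    """Multiply two fixed-point values: full-width integer product,
    then shift back by the fractional width. This multiply-and-shift
    pattern maps directly onto FPGA multiplier and wiring resources."""
    return (a * b) >> FRAC_BITS

def to_float(a):
    """Convert a fixed-point value back to a real number."""
    return a / (1 << FRAC_BITS)
```

The appeal for FPGAs is that every operation above is plain integer arithmetic of known width, at the cost of the fixed dynamic range that floating-point and logarithmic systems avoid.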
We quantitatively evaluated a technique for combining multiple videokeratograph views of different areas of the cornea. To achieve this we first simulated target reflection from analytic descriptions of various shapes believed to mimic common corneal topographies. The splicing algorithm used the simulated reflections to achieve good-quality estimates of the shapes. Actual imagery was then acquired of manufactured models of the same shapes, and the splicing algorithm was found to achieve less accurate estimates. The cause was thought to be mainly image blur due to defocus. To investigate this, blur was introduced into the reflection simulation, and the results of the splicing algorithm were compared to those obtained from the actual imagery.
A method of satellite scanner resection which uses a coplanarity constraint, as opposed to the conventional collinearity constraint, is evaluated. It relies on a simplification of the resection problem achieved by adjusting ground control point coordinates and image-forming ray direction vectors using satellite motion data typically recorded during image acquisition. The problems caused by ground control points not being sufficiently dispersed throughout the scene are examined, and the coplanarity resection technique is found to be more reliable in this case. Both simulated and actual data are used to quantify the performance of the technique, and some insight into the problems associated with its usage is provided.
Satellite pushbroom scanners deviate from their predetermined positional and rotational trajectories causing geometric distortion in their scanned imagery. Attitude and orbit control systems usually supply sufficient data for the actual trajectory to be reconstructed through spline interpolation. Geometric correction of imagery requires that image pixels be retroprojected onto the scene surface from points along the reconstructed trajectory and the scene subsequently resampled in a regular tessellation. Since this retroprojection can be very computationally expensive, a trajectory model is used which facilitates an efficient iterative subsampling ray-tracing algorithm. Actual SPOT satellite trajectory data is used for demonstration purposes.
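One common choice for interpolating sampled trajectory data of this kind is a Catmull-Rom cubic segment; the sketch below is illustrative, not the spline used in the paper:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom cubic through samples p1..p2, parameterised by
    t in [0, 1]; one segment of a splined trajectory reconstruction.
    The curve passes exactly through p1 (t=0) and p2 (t=1)."""
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t ** 2
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t ** 3)
```

Applied componentwise to position (or attitude) samples, this yields a smooth trajectory that can be evaluated at any scanline time during retroprojection.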
A new geometric formulation is given for the problem of determining position and orientation of a satellite scanner from error-prone ground control point observations in linear pushbroom imagery. The pushbroom satellite resection problem is significantly more complicated than that of the conventional frame camera because of irregular platform motion throughout the image capture period. Enough ephemeris data are typically available to reconstruct satellite trajectory and hence the interior orientation of the pushbroom imagery. The new approach to resection relies on the use of reconstructed scanner interior orientation to determine the relative orientations of a bundle of image rays. The absolute position and orientation which allows this bundle to minimize its distance from a corresponding set of ground control points may then be found. The interior orientation is represented as a kinematic chain of screw motions, implemented as dual-number quaternions. The motor algebra is used in the analysis since it provides a means of line, point, and motion manipulation. Its moment operator provides a metric of distance between the image ray and the ground control point.
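The dual-number quaternion machinery can be sketched minimally as follows; the function names and the convention used (dual part 0.5·t·qr, i.e. rotate then translate) are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def dq_from_rt(qr, t):
    """Dual quaternion (real, dual) encoding a screw motion: rotation qr
    followed by translation t; dual part is 0.5 * (0, t) * qr."""
    qd = 0.5 * qmul(np.array([0.0, *t]), qr)
    return qr, qd

def dq_mul(a, b):
    """Compose two screw motions: (ar + e*ad)(br + e*bd), with e^2 = 0,
    so the dual part is ar*bd + ad*br — a link in a kinematic chain."""
    ar, ad = a
    br, bd = b
    return qmul(ar, br), qmul(ar, bd) + qmul(ad, br)
```

Chaining `dq_mul` over per-scanline screws gives the kinematic-chain representation of interior orientation described above.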
Satellites are free-moving rigid bodies subject to various external forces which make them deviate from their predetermined positional and rotational trajectories. Since many remote sensing imaging devices use the linear pushbroom scanning model, trajectory deviation during the image scanning period causes geometric distortion in the imagery. Unless actual satellite trajectory during imaging is modeled, accurate rectification of imagery is impossible. A means of recovering the trajectory from known satellite motion is presented here. Rotational motion is usually sensed by gyroscopes which measure angular velocity. Translational motion can be determined in several ways including telemetry analysis and linear accelerometers. In more recent satellites GPS receivers may be used to determine motion data. We show how to interpolate and subsequently integrate angular velocity to yield a rotational trajectory. The screw, implemented as a dual-number quaternion, is shown to be a suitable parameterization of motion to model the trajectory as a kinematic chain. This representation is useful for image geometry analysis and hence for correction of image distortion. Applications of this parameterization to scanned image resampling and rectification are mentioned.
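A minimal sketch of integrating sampled angular velocity into a quaternion rotational trajectory, assuming piecewise-constant rates between samples (an illustrative scheme, not the paper's method):

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def integrate_gyro(omega_samples, dt):
    """Integrate body-frame angular velocity samples (rad/s) into a
    quaternion orientation trajectory, applying one incremental
    axis-angle rotation per sample interval."""
    q = np.array([1.0, 0.0, 0.0, 0.0])  # start at the identity attitude
    trajectory = [q.copy()]
    for w in omega_samples:
        w = np.asarray(w, dtype=float)
        angle = np.linalg.norm(w) * dt
        if angle > 1e-12:
            axis = w / np.linalg.norm(w)
            dq = np.concatenate([[np.cos(angle / 2)],
                                 np.sin(angle / 2) * axis])
        else:
            dq = np.array([1.0, 0.0, 0.0, 0.0])  # no measurable rotation
        q = qmul(q, dq)
        q = q / np.linalg.norm(q)  # renormalise to suppress drift
        trajectory.append(q.copy())
    return trajectory
```

In practice the gyroscope samples would first be interpolated to a finer time grid, exactly as the abstract describes, before this stepwise integration.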
Issues relating to the fusion of dynamically distorted scanned imagery are discussed. In particular the problem of resampling imagery for simulation of orthographic projection is addressed. Using a suitable parameterization of the scanner trajectory enables a ray tracing supersampling technique to be used which is highly efficient compared to conventional rendering approaches. It is seen that the problem of sampling rays within a distorted pixel can be solved by sampling platform motion during the scanline integration period. Difficulties are encountered when using conventional parameterizations of motion. These are avoided by using the screw representation.
Satellite scanner imaging geometry is quite different from that of the frame camera. The differences play an important role when determining parameters of exterior orientation. This paper discusses current approaches to scanner resection and describes how the accuracy of these approaches may be enhanced by using reconstructed satellite motion information and alternative parameterizations of the unknowns. A new geometric formulation of the resection problem is introduced which uses a point-to-line distance metric instead of the conventional collinearity equations. This is proposed in order to reduce the amount of approximation required for scanner resection, and hence increase accuracy.
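The point-to-line distance metric can be sketched via the line's moment vector (Plücker form); this is an illustrative formulation rather than the paper's code:

```python
import numpy as np

def point_line_distance(q, p0, d):
    """Distance from point q to the line through p0 with unit direction d.
    The line's moment is m = p0 x d; for any point q the residual
    q x d - m = (q - p0) x d has norm equal to the perpendicular
    distance (for unit d)."""
    q, p0, d = np.asarray(q, float), np.asarray(p0, float), np.asarray(d, float)
    m = np.cross(p0, d)
    return np.linalg.norm(np.cross(q, d) - m)
```

Minimising this residual over the bundle of image rays against the ground control points replaces the collinearity equations in the formulation described above.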