Designers of advanced digital imaging systems must consider not only the optics and sensor but also the effects of image processing when selecting the architecture that best meets their system objectives. Leveraging the image-processing degree of freedom presents a considerable opportunity when system-level metrics are incorporated into the design and optimization process; it also significantly expands the set of solutions and enables different trades of performance, cost, size, weight, and power. Here, we demonstrate the opportunity available to the system designer by exploring the design of a wide-angle system intended to maximize a system-level human visual performance metric. The resulting solutions span a range of optical, optomechanical, and signal-processing complexity, with a correspondingly wide range of size and cost.
With reductions in microbolometer size and cost, long-wave infrared (LWIR) systems are increasingly being developed
for platforms with challenging size, weight, power, and cost (SWAP-C) constraints, such as helmet-mounted systems
and unmanned vehicles. Past optimization of imaging systems toward the simultaneous objectives of improved stand-off
detection and low size, weight, and power required an iterative, multi-disciplinary design process. Here we demonstrate
direct optimization of the full LWIR system model, spanning the optics, sensor, signal processing, and display
degrees of freedom, against system-level metrics such as SWAP-C and detection range. The end result is a system with
superior size and weight for a given detection range.
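The detection-range objective above can be illustrated with a back-of-the-envelope Johnson-criteria calculation. This is a detector-limited sketch; the function name and all parameter values are illustrative assumptions, not values from the paper.

```python
def detection_range_m(target_size_m, focal_length_m, pixel_pitch_m,
                      cycles_required=1.0):
    """Detector-limited Johnson-criteria range: the distance at which a
    target subtends the required number of resolvable cycles.

    One cycle spans two pixels, so a single cycle subtends
    2 * pixel_pitch / focal_length radians.
    """
    cycle_angle_rad = 2.0 * pixel_pitch_m / focal_length_m
    return target_size_m / (cycles_required * cycle_angle_rad)

# Illustrative LWIR case: 2.3 m target, 50 mm lens, 17 um pixels,
# one cycle on target (the classical detection criterion).
range_m = detection_range_m(2.3, 0.050, 17e-6)
```

Harder tasks (recognition, identification) require more cycles on target, which is why they trade directly against focal length and pixel pitch in the SWAP-C optimization.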
The DRS Tamarisk®320 camera, introduced in 2011, is a low-cost commercial camera based on 17 µm pixel pitch 320×240 VOx microbolometer technology. A higher resolution 17 µm pixel pitch 640×480 Tamarisk®640 has also been developed and is now in production serving the commercial markets. Recently, under the DARPA-sponsored Low Cost Thermal Imager-Manufacturing (LCTI-M) program and an internal project, DRS is leading a team of industrial experts from FiveFocal, RTI International and MEMSCAP to develop a small form factor uncooled infrared camera for the military and commercial markets. The objective of the DARPA LCTI-M program is to develop a low-SWaP camera (<3.5 cm³ in volume and <500 mW in power consumption) that costs less than US $500 at a 10,000 units per month production rate. To meet this challenge, DRS is developing several innovative technologies, including a small pixel pitch 640×512 VOx uncooled detector, an advanced digital ROIC, and low-power miniature camera electronics. In addition, DRS and its partners are developing innovative manufacturing processes to reduce production cycle time and costs, including wafer-scale optics and vacuum packaging manufacturing and a 3-dimensional integrated camera assembly. This paper provides an overview of the DRS Tamarisk® project and LCTI-M-related uncooled technology development activities. Highlights of recent progress and challenges will also be discussed. It should be noted that BAE Systems and Raytheon Vision Systems are also participants in the DARPA LCTI-M program.
A foveated imager providing a panoramic field of view with simultaneous region of interest optical zoom for use on a
micro unmanned aerial vehicle is described. The foveated imager reduces size, weight and power by imaging both wide
and telephoto fields onto a single detector. The balance of resolution between panoramic and zoom fields is optimized
against the goals of threat detection and identification with a small unmanned aerial system, resulting in a 3X reduction
in target identification time compared to conventional systems. A description of the design trades and the evaluation of a
prototype electro-optical system are provided.
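The balance of resolution between the panoramic and zoom fields comes down to per-pixel angular resolution on a shared detector. A minimal sketch, with hypothetical focal lengths and pixel pitch (not the prototype's actual values):

```python
def ifov_mrad(focal_length_mm, pixel_pitch_um):
    """Per-pixel instantaneous field of view, in milliradians."""
    pitch_mm = pixel_pitch_um * 1e-3
    return pitch_mm / focal_length_mm * 1e3

# Hypothetical shared-detector foveated design: both channels image
# onto the same 17 um pitch array, but with different focal lengths.
wide = ifov_mrad(8.0, 17.0)    # panoramic channel
tele = ifov_mrad(60.0, 17.0)   # zoom (foveal) channel
magnification_ratio = wide / tele   # the fovea samples 7.5x more finely
```

The trade is that every pixel devoted to the foveal field is taken from the panoramic field, which is what the detection-versus-identification optimization resolves.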
Many computational imaging technologies introduced in the last several years use optical, mechanical, sensor,
illumination and computational degrees of freedom to enable special system characteristics. The general computational
imaging framework will be discussed along with the value of some specific approaches. A more in-depth analysis of the
extended-depth-of-field class of solutions is presented, including a method for visualizing the efficiency and
comparing the SNR of the many approaches, the value of extended-depth-of-field solutions, and a simplified design
approach that requires no tools beyond those the designer already uses.
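One common way to compare the SNR of extended-depth-of-field approaches is the RMS gain of the digital restoration filter needed to return the coded MTF to an in-focus target. The MTF shapes below are illustrative stand-ins, not data from the paper:

```python
import numpy as np

f = np.linspace(0.0, 1.0, 256)               # normalized spatial frequency
mtf_target = np.clip(1.0 - f, 0.0, None)     # desired in-focus-like MTF
mtf_coded = np.full_like(f, 0.35)            # illustrative invariant coded MTF
mtf_coded[0] = 1.0                           # DC response is always unity

# Inverse filter that restores the coded MTF to the target response.
restore = mtf_target / mtf_coded

# RMS noise gain of the filter: values > 1 mean sensor noise is
# amplified, i.e. the depth-of-field extension is paid for in SNR.
noise_gain = float(np.sqrt(np.mean(restore**2)))
```

Plotting noise gain against the achieved depth of field gives a simple efficiency visualization for comparing candidate solutions.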
In a traditional optical system the imaging performance is maximized at a single point in the operational
space. The probability of detection is therefore maximized only when the object is on axis, at the
designed conjugate, at the designed operational temperature, and when the system components are manufactured
without error in form and alignment. Because many factors influence the system's image
quality, the probability of detection decreases away from this peak value.
An infrared imaging system is presented that statistically creates a higher probability of detection over the
complete operational space for the Hotelling observer. The system is enabled through the use of wavefront
coding, a computational imaging technology in which optics, mechanics, detection and signal processing are
combined to enable LWIR imaging systems to be realized with detection task performance that is difficult
or impossible to obtain in the optical domain alone. The basic principles of statistical decision theory
will be presented along with a specific example of how wavefront coding technology can enable improved
performance and reduced sensitivity to some of the fundamental constraints inherent in LWIR systems.
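The Hotelling observer referenced above has a closed-form detectability index from statistical decision theory: SNR² = Δs′ K⁻¹ Δs, where Δs is the mean signal difference and K the noise covariance. A minimal numerical sketch with synthetic data (not the paper's imagery or noise model):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 16                          # number of image pixels (tiny example)
delta_s = np.ones(n)            # mean signal difference, target minus background
A = rng.normal(size=(n, n))
K = A @ A.T / n + np.eye(n)     # synthetic positive-definite noise covariance

# Hotelling observer detectability: SNR_HO^2 = delta_s^T K^{-1} delta_s.
snr_ho = float(np.sqrt(delta_s @ np.linalg.solve(K, delta_s)))
```

Averaging this index over the operational space (field angle, conjugate, temperature, manufacturing error) is one way to express the "statistically higher probability of detection" objective as a single optimizable scalar.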
Proc. SPIE. 5784, Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XVI
KEYWORDS: Long wavelength infrared, Signal to noise ratio, Optical design, Imaging systems, Sensors, Interference (communication), Wavefronts, Signal processing, Modulation transfer functions, Systems modeling
In a long wave infrared (LWIR) system there is a need to capture the maximum amount of information about objects over a broad volume for identification and classification by a human or machine observer. In a traditional imaging system the optics limit the capture of this information to a narrow object volume. This limitation can hinder the observer's ability to navigate and/or identify friend or foe in combat or civilian operations; giving the observer a larger volume of clear imagery dramatically improves their ability to perform. The system presented allows the efficient capture of object information over a broad volume and is enabled by a technology called Wavefront Coding. A Wavefront Coded system employs the joint optimization of the optics, detection and signal processing. Through a specialized design of the system's optical phase, the system becomes invariant to the aberrations that traditionally limit the effective volume of clear imagery. In the process of becoming invariant, the specialized phase creates a uniform blur across the detected image. Signal processing is applied to remove the blur, resulting in a high quality image. A device-specific noise model is presented that was developed for the optimization and accurate simulation of the system. Additionally, still images taken from a video feed of the as-built system are shown, allowing a side-by-side comparison of a Wavefront Coded and a traditional imaging system.
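The defocus invariance described above can be reproduced with a one-dimensional pupil simulation. The cubic phase profile below is the classical published Wavefront Coding form, used here only as an illustration; the phase strength and defocus values are arbitrary, not the system's design values:

```python
import numpy as np

def mtf_1d(phase):
    """MTF of a 1-D pupil with the given phase profile (radians).
    Wiener-Khinchin route: PSF = |FFT(pupil)|^2, OTF = FFT(PSF)."""
    pupil = np.zeros(2 * phase.size, dtype=complex)  # zero-pad the aperture
    pupil[:phase.size] = np.exp(1j * phase)
    psf = np.abs(np.fft.fft(pupil)) ** 2
    otf = np.abs(np.fft.fft(psf))
    return otf[:phase.size] / otf[0]

x = np.linspace(-1.0, 1.0, 128)  # normalized pupil coordinate
alpha, psi = 25.0, 5.0           # cubic strength, defocus (radians, illustrative)

mtf_focus = mtf_1d(0.0 * x)                            # traditional, in focus
mtf_defocus = mtf_1d(psi * x**2)                       # traditional, defocused
mtf_coded_focus = mtf_1d(alpha * x**3)                 # coded, in focus
mtf_coded_defocus = mtf_1d(alpha * x**3 + psi * x**2)  # coded, defocused

# The coded MTF changes far less with defocus; its lower but non-zero
# response is the uniform blur that signal processing later removes.
shift_plain = float(np.max(np.abs(mtf_defocus - mtf_focus)))
shift_coded = float(np.max(np.abs(mtf_coded_defocus - mtf_coded_focus)))
```

Because the coded MTF is nearly the same at every defocus, a single restoration filter can sharpen the whole object volume, which is what produces the broad volume of clear imagery.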
Telescope performance is often limited by aberrations and/or fabrication and alignment errors. Additionally, image formation in large space-based systems is sensitive to changes in physical form parameters such as temperature-related deformations, mirror structure, piston position and detector alignment. Changes in these parameters significantly degrade image quality and often limit the performance of the system. A fundamentally new technology called Wavefront Coding has been successfully demonstrated via simulations for large space-based imaging systems and promises to surpass the performance attained by traditional optical designs. Wavefront Coding uses specialized aspheric optics and signal processing of the detected image to correct defocus-like aberrations, thereby enabling a new paradigm in aberration balancing for telescope systems. Wavefront Coding can provide dramatic new mission capabilities by allowing space-based imaging systems that are simpler, lighter, and cheaper, while also providing high quality imagery in dynamic environments that are difficult or impossible to image with traditional imaging systems. As an example, two systems are presented that allow the telescope to repoint the boresight through the actuation of the primary segments or through the use of a scan mirror. Traditional systems with the same repointing goal have historically not been feasible due either to the increased error space or to constraints on system cost and complexity.
A long wave infrared (LWIR) computational imaging system has been designed and fabricated that has a decreased hyperfocal distance compared to traditional optics. Through the combination of aspheric optics and signal processing, the near point with clear imagery has been reduced from 50 m to less than 10 m. Both systems deliver high quality imaging when the object is at infinity. The decrease in the hyperfocal distance was realized through the use of Wavefront Coding, a technology in which all system components (the optics, detector and signal processing) are jointly optimized. The optimization incorporates optical/digital constraints such as manufacturability, cost, signal processing architecture, FPA characteristics, etc. Through a special design of the system's optical phase, the system becomes invariant to the aberrations that traditionally limit the effective operational range. In the process of becoming invariant, the specialized phase creates a uniform blur across the detected image. Signal processing is applied to remove the blur, resulting in a high quality image. In this paper imagery from the Wavefront Coded system is described and compared to traditional imagery.
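The hyperfocal relationship behind the quoted near-point numbers is simple to sketch. The focal length, f-number, and blur criterion below are illustrative guesses chosen to roughly reproduce the 50 m traditional baseline, not the actual design values:

```python
def hyperfocal_m(focal_length_mm, f_number, blur_um):
    """Hyperfocal distance H = f^2 / (N * c) + f for a lens focused at
    infinity: objects beyond H stay within the blur criterion c."""
    f = focal_length_mm * 1e-3
    c = blur_um * 1e-6
    return f * f / (f_number * c) + f

# A hypothetical 50 mm f/1.0 LWIR lens with a 50 um blur criterion
# yields roughly the 50 m near point quoted for a traditional system.
near_point_m = hyperfocal_m(50.0, 1.0, 50.0)
```

For fixed optics, H scales as f²/(N·c), so without Wavefront Coding the only routes to a closer near point are a shorter focal length, a slower aperture, or a looser blur criterion, each costing range performance or sensitivity.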
Proc. SPIE. 5407, Infrared Imaging Systems: Design, Analysis, Modeling, and Testing XV
KEYWORDS: Long wavelength infrared, Signal to noise ratio, Point spread functions, Optical design, Imaging systems, Spatial frequencies, Sensors, Wavefronts, Signal processing, Modulation transfer functions
By jointly optimizing the design of the optics, mechanics, and electronics, systems with reduced size, weight, and cost can be realized. This joint optimization increases the system trade space compared to optimizing each component separately. Increasing the size of the system trade space allows highly customized system design. An example of joint optimization is given for a LWIR imaging system with a conformal first surface. This example demonstrates an approximately 50% reduction in size, weight, and cost compared to acceptable traditional system solutions.
A new methodology, called Wavefront Coding, allows the joint optimization of the optics, mechanics, detection and signal processing of computational imaging systems. This methodology gives the system designer access to a large design trade space, which can be exploited to design imaging systems that image with high quality while using fewer physical components, less weight, and lower cost than traditional optics. The methodology is described through an example conformal single-lens IR imaging system. The example system demonstrates a 50% reduction in physical components and an approximately 45% reduction in weight compared to a traditional two-lens system.
Wavefront Coded imaging systems are jointly optimized optical and digital imaging systems that can increase the performance and/or reduce the cost of modern imaging systems by reducing the effects of aberrations. Aberrations that can be controlled through Wavefront Coding include misfocus, astigmatism, field curvature, chromatic aberration, temperature related misfocus, and assembly related misfocus. The design and simulation of these systems are based on a model that describes all of the important aspects of the optics, detector, and digital processing being used. These models allow theoretical calculation of ideal MTFs and signal processing related parameters for new systems. These parameters are explored for extended depth of field, field curvature, and temperature related misfocus effects.
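Temperature-related misfocus, one of the aberrations listed above, can be sized quickly for a single-element LWIR lens. The germanium constants below are textbook approximations and the geometry is hypothetical; housing expansion is ignored:

```python
N_GE = 4.0       # refractive index of germanium in the LWIR (approx.)
DN_DT = 4.0e-4   # germanium dn/dT per kelvin (approx.)

def thermal_defocus_waves(focal_mm, f_number, wavelength_um, delta_T_K):
    """Peak defocus wavefront error, in waves, for a thin germanium
    lens whose focal shift comes from dn/dT alone."""
    # Thin-lens focal shift magnitude from the index change.
    df_mm = focal_mm * DN_DT / (N_GE - 1.0) * delta_T_K
    # Peak defocus wavefront error: W020 = dz / (8 N^2).
    w020_um = df_mm * 1e3 / (8.0 * f_number**2)
    return w020_um / wavelength_um

# A hypothetical 50 mm f/1.0 lens over a 40 K swing at 10 um:
# several waves of defocus, far beyond the quarter-wave criterion.
err_waves = thermal_defocus_waves(50.0, 1.0, 10.0, 40.0)
```

Numbers of this magnitude are why uncooled LWIR systems normally need active or passive athermalization; designing the combined MTF and restoration filter to tolerate this misfocus range is the alternative the model above enables.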