Low-light-level video cameras have benefited from rapid advances in digital technology during the past two decades. In legacy cameras, the video signal was processed using analog electronics, which made real-time, nonlinear processing of the video signal very difficult. In state-of-the-art cameras, the analog signal is digitized directly from the sensor and processed entirely in the digital domain, enabling the application of advanced processing techniques to the video signal in real time. Indeed, all aspects of modern low-light television cameras are controlled via digital technology, enabling enhancements that analog electronics could not provide.
In addition to video processing, large-scale digital integration in these low-light-level cameras enables precise control of the image intensifier and image sensor, facilitating large inter-scene dynamic range capability, extended intra-scene dynamic range, and blooming control. Digital video processing and digital camera control provide improved system-level performance, including nearly perfect pixel response uniformity, correction of blemishes, and electronic boresight. Compact digital electronics also enable comprehensive camera built-in-test (BIT) capability that covers the entire camera, from photons entering the sensor to the processed video signal leaving the connector.
Individuals involved in the procurement of present and future low-light-level cameras need to understand these advanced camera capabilities in order to write accurate specifications for their advanced video system requirements. This paper provides an overview of these modern video system capabilities along with example specification text.
A signal processing model is presented for acoustic sensors on ground and unmanned aerial vehicles
(UAVs). Such sensors normally experience more flow noise than stationary sensors because moving
platforms must vary their velocity to accomplish their missions. In the case of the UAV, this includes
sufficient speed to remain airborne. Unfortunately, high airflow speeds over the sensor cause turbulence
noise that tends to confound the acoustic detection of signals from sources of interest on the ground. This
model transforms the fluctuations in the magnitudes and the phase angles of signals and turbulence noise.
The temporal coherences of the signals are improved to the point where detections can be made
unambiguously, based on temporal coherence rather than on the signal-to-noise ratio, which is the
customary basis for detecting signals. Additionally, because the flow noise is temporally incoherent, it is easily
discriminated against. The model transforms phase and amplitude fluctuations in such a manner that the
temporal coherences of the signals are increased. This makes them more easily exploited to achieve signal
processing gains, such as increases in signal-to-noise ratio and automatic detection. The rationale for this
model is that both signal and noise possess magnitude, but only signals possess temporal coherence. Two
transformations are presented herein: one transforms the phase angles, and the other transforms the
spectral amplitudes. The transformations give the amplitudes and phase angles similar exploitable
coherence characteristics, while the corresponding noise incoherence is easily attenuated.
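The detection principle described above can be illustrated with a minimal sketch, assuming a fixed-frequency source, a frame-by-frame FFT, and a coherence statistic formed from frame-to-frame phase increments; the specific statistic, signal, and noise levels here are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Hypothetical sketch: detect a weak tone by temporal phase coherence rather
# than by signal-to-noise ratio. A coherent tone advances its FFT-bin phase by
# a fixed amount per frame; temporally incoherent flow noise does not.
rng = np.random.default_rng(0)
fs, n_fft, n_frames = 1024, 256, 64
t = np.arange(n_fft * n_frames) / fs
tone = 0.5 * np.sin(2 * np.pi * 100 * t)        # weak signal of interest
noise = 2.0 * rng.standard_normal(t.size)       # strong, incoherent noise
x = (tone + noise).reshape(n_frames, n_fft)

phases = np.angle(np.fft.rfft(x, axis=1))
dphi = np.diff(phases, axis=0)                  # frame-to-frame phase increments
# Coherence per bin: magnitude of the mean unit phasor of the increments.
coherence = np.abs(np.mean(np.exp(1j * dphi), axis=0))

bin_tone = round(100 * n_fft / fs)              # FFT bin containing the tone
print(coherence[bin_tone], np.median(coherence))
```

In this sketch the tone bin yields a coherence near 1 while noise-only bins cluster near zero, so a simple threshold on the coherence statistic separates signal from flow noise even though the noise carries far more power.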
An interferometric spectrometer is proposed for low-resource LWIR hyperspectral imaging. Scaling from an uncooled LWIR HSI system, we find that signal-to-noise ratios of 1000 or higher can be achieved with cooled detectors and uncooled optics. The signal flux is high enough to collect high-quality data at very high frame rates (500 Hz and above). Sensitivity is limited by the full well of the detector and by the spectral resolution for scenes at typical temperatures.
This paper presents an ultra-lightweight, 65-gram, dual-imaging infrared camera. The camera has been optimized
for use in small micro-air vehicles, as well as other weight-sensitive applications. There are three key features of the
system. First, it has no moving parts: the calibration shutter has been eliminated. The technological hurdle of
removing the shutter from conventional uncooled VOx imagers was overcome with innovative software correction
techniques. This shutterless operation makes the camera significantly more rugged and allows its use in environments
that such systems previously could not tolerate. Second, the assembly uses a one-piece rigid-flex board design,
which adds significantly to ruggedness and assembly simplicity. Finally, the two lens/sensor assemblies incorporate
sturdy yet extremely lightweight housings. Thermal sensitivity measurements on the prototype system achieved an
NETD of less than 50 mK with 10.5 mm EFL, f/0.86 optics.
The 3rd Generation Goodrich DB-110 system provides users with a three-field-of-view, high-performance airborne
reconnaissance capability that incorporates a dual-band day and nighttime imaging sensor, real-time recording, and
real-time data transmission to support long-range, medium-range, and short-range standoff and overflight
mission scenarios, all within a single pod. Goodrich developed its 3rd Generation Airborne Reconnaissance Pod for
operation on a range of aircraft types including the F-16, F-15, F-18, and Eurofighter, and older aircraft such as the F-4, F-111,
Mirage and Tornado. This system upgrades the existing, operationally proven, 2nd generation DB-110 design with
enhancements in sensor resolution, flight envelope, and other performance areas. Goodrich recently flight-tested
its 3rd Generation Reconnaissance System on a Block 52 F-16 aircraft, with a successful first flight and excellent results.
This paper presents key highlights of the system along with imaging results from the flight test.
The motion imagery community would benefit from the availability of standard measures for assessing image interpretability. The National Imagery Interpretability Rating Scale (NIIRS) has served as a community standard for still imagery, but no comparable scale exists for motion imagery. Previous studies have explored the factors affecting the perceived interpretability of motion imagery and the ability to perform various image exploitation tasks. More recently, a study demonstrated an approach for adapting the standard NIIRS development methodology to motion imagery. This paper presents the first step in implementing this methodology, namely the construction of the perceived interpretability continuum for motion imagery. We conducted an evaluation in which imagery analysts rated the interpretability of a large number of motion imagery clips. Analysis of these ratings indicates that analysts rate the imagery consistently, that perceived interpretability is unidimensional, and that interpretability varies linearly with log(GSD). This paper presents the design of the evaluation, the analysis and findings, and implications for scale development.
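The reported linear relationship between interpretability and log(GSD) amounts to a simple regression; the sketch below illustrates fitting such a relationship, with entirely hypothetical rating data standing in for the study's results.

```python
import numpy as np

# Illustrative only: fit rating = a * log(GSD) + b.
# The GSD values and analyst ratings below are made-up placeholders,
# not data from the evaluation described in the abstract.
gsd_cm = np.array([5.0, 10.0, 20.0, 40.0, 80.0])   # ground sample distance
rating = np.array([8.1, 7.0, 6.1, 4.9, 4.0])       # hypothetical mean ratings

slope, intercept = np.polyfit(np.log(gsd_cm), rating, 1)
predicted = slope * np.log(gsd_cm) + intercept
r = np.corrcoef(predicted, rating)[0, 1]
print(f"slope={slope:.2f}, intercept={intercept:.2f}, r={r:.3f}")
```

A negative slope captures the expected behavior that coarser GSD (larger ground sample distance) lowers perceived interpretability, and a correlation near 1 would indicate the log-linear model fits the ratings well.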
A fundamental problem in image processing is finding objective metrics that parallel human perception of image
quality. In this study, several metrics were examined to quantify compression algorithms in terms of perceived loss
of image quality. In addition, we sought to describe the relationship of image quality as a function of bit rate. The
compression schemes used were JPEG2000, MPEG2, and H.264. The frame size was fixed at 848x480, and the
encoding bit rate varied from 6000 kbps down to 200 kbps. The metrics examined were peak signal-to-noise ratio (PSNR),
structural similarity (SSIM), edge localization metrics, and a blur metric. To varying degrees, the metrics displayed
desirable properties: they were monotonic in the bit rate, the group-of-pictures (GOP) structure could be
inferred from them, and they tended to agree with human perception of quality degradation. Additional work is being
conducted to quantify the sensitivity of these measures with respect to our Motion Imagery Quality Scale.
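Two of the metrics named above can be sketched compactly. PSNR follows its standard definition; the SSIM shown here is the simplified global form computed from whole-image statistics, an assumption for brevity, rather than the windowed implementation normally used in practice.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit frames."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak**2 / mse)

def global_ssim(ref, test, peak=255.0):
    """Simplified SSIM using global image statistics (not the windowed variant)."""
    x, y = ref.astype(float), test.astype(float)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

# Synthetic example: a random frame degraded by additive Gaussian noise.
rng = np.random.default_rng(1)
frame = rng.integers(0, 256, size=(480, 848))
degraded = np.clip(frame + rng.normal(0, 10, frame.shape), 0, 255)
print(psnr(frame, degraded), global_ssim(frame, degraded))
```

For compression studies like the one above, these values would be computed per frame and averaged over a clip, so that drops at intra-coded frames reveal the GOP structure.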
Motion imagery will play a critical role in future intelligence and military missions. The ability to provide a real-time, dynamic view and persistent surveillance makes motion imagery a valuable source of information. The ability to collect, process, transmit, and exploit this rich source of information depends on the sensor capabilities, the available communications channels, and the availability of suitable exploitation tools. While sensor technology has progressed dramatically and various exploitation tools exist or are under development, the bandwidth required for transmitting motion imagery data remains a significant challenge. This paper presents a user-oriented evaluation of several methods for compression of motion imagery. We explore various codecs and bit rates for both inter- and intra-frame encoding. The analysis quantifies the effects of compression in terms of the interpretability of motion imagery, i.e., the ability of imagery analysts to perform common image exploitation tasks. The findings have implications for sensor system design, systems architecture, and mission planning.
The Small Unmanned Aircraft System (SUAS) is a rucksack-portable aerial observation vehicle designed to supplement the reconnaissance, surveillance, and target acquisition tasks of an infantry company. The Raven, an earlier version of the SUAS, is an Urgent Materiel Release (UMR) acquisition and has been used for the past two years by selected Army units in Operations Enduring Freedom and Iraqi Freedom (OEF/OIF). Surveys led by the Army Test and Evaluation Command were used to assess the capabilities and limitations of the Raven in OEF/OIF. Results and analyses of the surveys indicate that the Raven enhances the situational awareness of a small unit in urban areas and in selected close combat missions. Users of the Raven state that it is easy to use, although there are major issues with frequency deconfliction, airspace management, short endurance, and sensor performance.
The SUAS is a program of record and has completed developmental and operational testing in preparation for full-rate production. This paper addresses the SUAS effectiveness, suitability, and survivability evaluation strategy based on actual testing of the system. During the Initial Operational Test (IOT), the SUAS was found to be effective, with limitations, in the set of 21 close combat missions and two call-for-fire tests for which it was evaluated. A low Mean Time Between Operational Mission Failure (MTBOMF) and human factors issues make the system suitable with limitations. Acoustic (audible to the human ear) and electronic vulnerabilities make the system non-survivable in most combat scenarios. The SUAS was found to be useful as an extra asset in certain infantry company close combat missions where terrain and visual line of sight give the system an advantage over traditional reconnaissance patrols. Army aviation and infantry units uncover new ways every day to use this portable "eye in the sky", especially when unmanned aerial reconnaissance assets are in high demand. A discussion of changes in doctrine with the SUAS, how it will be integrated into future combat systems for the Army, and its likely benefits to the Soldier completes the evaluation analysis.
Results of a field demonstration of an air-to-ground communication link using an airborne bare
optical fiber are presented. The demonstration was conducted by the Johns Hopkins University,
Applied Physics Laboratory at the TCOM, L.P. Test Facility in Elizabeth City, NC in May 2006
using a 38 m tethered aerostat raised to an altitude of 2100 ft. A bare, single-mode optical fiber
attached between the aerostat and its mooring station was evaluated as an optical link for several
hours. Wavelength-division-multiplexed channels operating in the 1550 nm band at data rates of 1
and 10 Gbps were tested and achieved error-free data transfers. A separate continuous-wave channel
was also multiplexed for performance monitoring. BER-versus-link-power measurements and eye
diagrams are analyzed to characterize data transfer performance over the airborne bare optical fiber.
This paper presents a system that creates and navigates an unlimited-size mosaic with geographical information. The input is a sequence of airborne images with or without telemetry data, and the output is a mosaic with a combined geographical coordinate layer inherited from the input images. Rather than registering input images with an orthoimage, which is popular in existing applications, the proposed system makes use of telemetry data only as prior information. The airborne images embedded with geo-information are pairwise registered based on image feature correspondence. We extract feature points and form a modified EDGE-based descriptor for image registration. Subsequently, the geographical coordinate layers derived from the telemetry data stream are fused using a registration matrix computed in the previous step. However, due to the unreliability of the telemetry data, the new geodetic coordinate layer might be inconsistent with the image coordinate layer and therefore requires rectification to minimize the squared error between the mosaic coordinate layer and the warped geographical coordinate layer. The above process is incorporated into a cluster framework so that the output mosaic is extensible to an unlimited size. That is, once the current mosaic has expanded beyond computer memory limitations, the image is saved to a database. Its spatial relationship with respect to the world coordinate system is also saved to the database, so the system can navigate the collection of image mosaic data by querying the spatial database and retrieving the relevant mosaics. This method is especially suitable for video sequences spanning large regions, such as surveillance video from a micro UAV. Results with real-world UAV video are provided to demonstrate the performance of the proposed system.
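The squared-error rectification step described above can be illustrated as a least-squares fit; the sketch below assumes, for simplicity, an affine map from mosaic pixel coordinates to geodetic coordinates, with synthetic data standing in for the telemetry-derived layer (the paper's actual formulation may differ).

```python
import numpy as np

# Hypothetical sketch: estimate an affine map pixel -> (lon, lat) that minimizes
# the squared error against a noisy, telemetry-derived coordinate layer.
rng = np.random.default_rng(2)
pix = rng.uniform(0, 1000, size=(50, 2))               # mosaic pixel coordinates
true_A = np.array([[1e-5, 0.0], [0.0, -1e-5]])         # synthetic scale/orientation
true_t = np.array([-77.0, 38.0])                       # synthetic lon/lat offset
geo = pix @ true_A.T + true_t + rng.normal(0, 1e-6, (50, 2))  # noisy geo layer

# Solve geo ≈ [pix | 1] @ M for the 3x2 affine parameter matrix M.
design = np.hstack([pix, np.ones((50, 1))])
M, _, _, _ = np.linalg.lstsq(design, geo, rcond=None)
rectified = design @ M
rms = np.sqrt(np.mean((rectified - geo) ** 2))
print(rms)  # residual error between rectified and telemetry-derived layers
```

After the fit, the residual reflects only the telemetry noise, so warping the geodetic layer through the estimated map keeps it consistent with the image coordinate layer as new frames are appended.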
In this paper, we discuss the real-time compensation of air turbulence in imaging through long atmospheric paths. We
propose the use of a reconfigurable hardware platform, specifically field-programmable gate arrays (FPGAs), to reduce
costs and development time, as well as increase flexibility and reusability. We present the results of our acceleration
efforts to date (a 40x speedup) and our strategy to achieve a real-time atmospheric compensation solver for high-definition