Video tracking of rocket launches must inherently be done from long range. Due to the high temperatures produced, cameras are often placed far from launch sites, and their distance to the rocket increases as it is tracked through the flight. Consequently, the collected imagery is generally severely degraded by atmospheric turbulence. In this talk, we present our experience in enhancing commercial space flight videos. We will present the mission objectives, the unique challenges faced, and the solutions developed to overcome them.
Long range telescopic video imagery of distant terrestrial scenes, aircraft, rockets and other aerospace vehicles can
be a powerful observational tool. But what about the associated acoustic activity? A new technology, Remote
Acoustic Sensing (RAS), may provide a method to remotely listen to the acoustic activity near these distant objects.
Local acoustic activity sometimes weakly modulates the ambient illumination in a way that can be remotely sensed.
RAS is a new type of microphone that separates an acoustic transducer into two spatially separated components: 1) a
naturally formed in situ acousto-optic modulator (AOM) located within the distant scene and 2) a remote sensing
readout device that recovers the distant audio. These two elements are passively coupled over long distances at the
speed of light by naturally occurring ambient light energy or other electromagnetic fields. Stereophonic, multichannel, and acoustic beamforming configurations are all possible using RAS techniques, and when combined with high-definition video imagery they can help provide a more cinema-like, immersive viewing experience.
A practical implementation of a remote acousto-optic readout device can be a challenging engineering problem. The
acoustic influence on the optical signal is generally weak and often accompanied by a strong bias term. The optical signal is
further degraded by atmospheric seeing turbulence. In this paper, we consider two fundamentally different optical
readout approaches: 1) a low pixel count photodiode based RAS photoreceiver and 2) audio extraction directly from
a video stream. Most of our RAS experiments to date have used the first method for reasons of performance and
simplicity. But there are potential advantages to extracting audio directly from a video stream. These advantages include the straightforward ability to work with multiple AOMs (useful for acoustic beamforming), simpler optical configurations, and a potential ability to use certain preexisting video recordings. However, doing so requires
overcoming significant limitations typically including much lower sample rates, reduced sensitivity and dynamic
range, more expensive video hardware, and the need for sophisticated video processing. The ATCOM real-time image processing software environment provides many of the needed capabilities for researching video-acoustic signal extraction. ATCOM is currently a powerful tool for the visual enhancement of telescopic views distorted by atmospheric turbulence. In order to explore the potential of acoustic signal recovery from video imagery, we
modified ATCOM to extract audio waveforms from the same telescopic video sources. In this paper, we
demonstrate and compare both readout techniques for several aerospace test scenarios to better show where each has advantages.
Methods to reconstruct pictures from imagery degraded by atmospheric turbulence have been under development for
decades. The techniques were initially developed for observing astronomical phenomena from the Earth’s surface, but
have more recently been modified for ground and air surveillance scenarios. Such applications can impose significant
constraints on deployment options because they both increase the computational complexity of the algorithms
themselves and often dictate a requirement for low size, weight, and power (SWaP) form factors. Consequently,
embedded implementations must be developed that can perform the necessary computations on low-SWaP platforms.
Fortunately, there is an emerging class of embedded processors driven by the mobile and ubiquitous computing
industries. We have leveraged these processors to develop embedded versions of the core atmospheric correction engine
found in our ATCOM software. In this paper, we will present our experience adapting our algorithms for embedded
systems on a chip (SoCs), namely the NVIDIA Tegra that couples general-purpose ARM cores with their graphics
processing unit (GPU) technology and the Xilinx Zynq which pairs similar ARM cores with their field-programmable
gate array (FPGA) fabric.
Modern digital imaging systems are susceptible to imagery degraded by atmospheric turbulence. Notwithstanding significant improvements in resolution and speed, degradation of captured imagery still hampers system designers and operators. Several techniques exist for mitigating the effects of turbulence on captured imagery; here, we concentrate on the effects of the Bi-Spectrum Speckle Averaging approach to image enhancement, applied to a data set captured in conjunction with meteorological data.
Domain-specific languages are a useful productivity tool, allowing domain experts to program using familiar concepts and vocabulary while benefiting from performance choices made by computing experts. Embedding the domain-specific language into an existing language allows easy interoperability with non-domain-specific code and
use of standard compilers and build systems. In C++, this is enabled through the template and preprocessor features.
C++ embedded domain-specific languages (EDSLs) allow the user to write simple, safe, performant, domain-specific
code that has access to all the low-level functionality that C and C++ offer as well as the diverse set of libraries
available in the C/C++ ecosystem.
In this paper, we will discuss several tools available for building EDSLs in C++ and show examples of projects
successfully leveraging EDSLs. Modern C++ has added many useful new features to the language which we have
leveraged to further extend the capability of EDSLs.
At EM Photonics, we have used EDSLs to allow developers to transparently benefit from high performance computing (HPC) hardware. We will show ways EDSLs combine with existing technologies and EM Photonics' high performance tools and libraries to produce clean, short, high performance code in ways that were not previously possible.
Atmospheric turbulence degrades imagery by imparting scintillation and warping effects that can reduce the ability to
identify key features of the subjects. While a human can intuitively appreciate the visual improvement that turbulence mitigation techniques offer, this enhancement is rarely quantified in a meaningful way. In this paper, we discuss methods for measuring the potential improvement in system performance that video enhancement algorithms can provide. To accomplish this, we explore two metrics. We use resolution targets to
determine the difference between imagery degraded by turbulence and that improved by atmospheric correction
techniques. By comparing line scans between the data before and after processing, it is possible to quantify the
additional information extracted. Advanced processing of this data can provide information about the effective modulation transfer function (MTF) of the system when atmospheric effects are considered and removed. Using this data, we compute a second metric: the relative improvement in Strehl ratio.
When capturing imagery over long distances, atmospheric turbulence often degrades the data, especially when observation paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for postprocessing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005. We will compare techniques from the literature with our commercially available, real-time, GPU-accelerated turbulence mitigation software. These comparisons will be made using real (not synthetic), experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation. Additionally, we will present a technique for quantitatively comparing turbulence mitigation algorithms using real images of radial resolution targets.
Atmospheric turbulence degrades imagery by imparting scintillation and warping effects that blur the collected pictures and reduce the effective level of detail. While this reduction in image quality can occur in a wide range of scenarios, it is particularly noticeable when capturing over long distances, when close to the ground, or in hot and humid environments. For decades, researchers have attempted to correct these problems through device and signal processing solutions. While fully digital approaches have the advantage of not requiring specialized hardware, they have been difficult to realize in real-time scenarios due to a variety of practical considerations, including computational performance, the need to integrate with cameras, and the ability to handle complex scenes. We address these challenges and our experience overcoming them. We enumerate the considerations for developing an image processing approach to atmospheric turbulence correction and describe how we approached them to develop software capable of real-time enhancement of long-range imagery.
When capturing image data over long distances (0.5 km and above), images are often degraded by atmospheric turbulence, especially when imaging paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for post-processing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005 as a part of our ATCOM image processing suite. In this paper, we will compare techniques from the literature with our commercially available real-time, GPU-accelerated turbulence mitigation software suite, as well as in-house research algorithms. These comparisons will be made using real, experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation.
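As a point of reference for such comparisons, the simplest purely digital baseline is a sliding temporal average of short-exposure frames, which suppresses zero-mean warping at the cost of blurring scene motion. This is a toy sketch on 1-D "frames", not one of the ATCOM algorithms:

```python
def temporal_average(frames, window):
    """Sliding temporal mean over a fixed window of frames.

    frames: list of equal-length 1-D pixel lists (a stand-in for 2-D frames)
    Returns one averaged frame per window position.
    """
    n_pix = len(frames[0])
    out = []
    for i in range(len(frames) - window + 1):
        # Average pixel j across the current window of frames.
        out.append([sum(frames[i + k][j] for k in range(window)) / window
                    for j in range(n_pix)])
    return out
```

Real turbulence-mitigation algorithms register (dewarp) the frames before combining them; without that step, averaging trades scintillation for motion blur.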
The use of commodity mobile processors in wearable computing and field-deployed applications has risen as these processors have become increasingly powerful and inexpensive. Battery technology, however, has not advanced as quickly, and as the processing power of these systems has increased, so has their power consumption. In order to maximize endurance without compromising performance, fine-grained control of power consumption by these devices is highly desirable. Various methodologies exist to affect system-level bias with respect to the prioritization of performance or efficiency, but these are fragmented and global in effect, and so do not offer the breadth and granularity of control desired. This paper introduces a method of giving application programmers more control over system power consumption using a directive-based approach similar to existing APIs such as OpenMP. On supported platforms the compiler, application runtime, and Linux kernel will work together to translate the power-saving intent expressed in compiler directives into instructions to control the hardware, reducing power consumption when possible while still providing high performance when required.
Graph analytics is a key component in identifying emerging trends and threats in many real-world applications. Large-scale graph analytics frameworks provide a convenient and highly scalable platform for developing algorithms to analyze large datasets. Although conceptually scalable, these techniques exhibit poor performance on modern computational hardware. Another model of graph computation has emerged that promises improved performance and scalability by using abstract linear algebra operations as the basis for graph analysis as laid out by the GraphBLAS standard. By using sparse linear algebra as the basis, existing highly efficient algorithms can be adapted to perform computations on the graph. This approach, however, is often less intuitive to graph analytics experts, who are accustomed to vertex-centric APIs such as Giraph, GraphX, and TinkerPop. We are developing an implementation of the high-level operations supported by these APIs in terms of linear algebra operations. This implementation is backed by many-core implementations of the fundamental GraphBLAS operations required, and offers the advantages of both the intuitive programming model of a vertex-centric API and the performance of a sparse linear algebra implementation. This technology can reduce the number of nodes required, as well as the run-time for a graph analysis problem, enabling customers to perform more complex analysis with less hardware at lower cost. All of this can be accomplished without the requirement for the customer to make any changes to their analytics code, thanks to the compatibility with existing graph APIs.
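The correspondence this work relies on — vertex-centric traversal expressed as linear algebra — can be sketched for breadth-first search, where each frontier expansion is a Boolean vector-matrix product masked by the unvisited vertices. This is a dense toy version; GraphBLAS implementations use sparse structures:

```python
def bfs_levels(A, source):
    """BFS depths via repeated masked Boolean vector-matrix products.

    A: 0/1 adjacency matrix as a list of lists.
    Returns a list of depths from source; -1 marks unreachable vertices.
    """
    n = len(A)
    levels = [-1] * n
    levels[source] = 0
    frontier = [v == source for v in range(n)]
    depth = 0
    while any(frontier):
        depth += 1
        # next_frontier = (frontier x A) elementwise-ANDed with unvisited mask
        frontier = [levels[v] < 0 and
                    any(frontier[u] and A[u][v] for u in range(n))
                    for v in range(n)]
        for v in range(n):
            if frontier[v]:
                levels[v] = depth
    return levels
```

With a sparse matrix backend, the inner product over `u` touches only nonzero entries, which is where the performance advantage of the linear-algebra formulation comes from.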
Many ISR applications require constant monitoring of targets from long distance. When capturing over long distances, imagery is often degraded by atmospheric turbulence. This adds a time-variant blurring effect to captured data, and can result in a significant loss of information. To recover it, image processing techniques have been developed to enhance sequences of short exposure images or videos in order to remove frame-specific scintillation and warping. While some of these techniques have been shown to be quite effective, the associated computational complexity and required processing power limits the application of these techniques to post-event analysis. To meet the needs of real-time ISR applications, video enhancement must be done in real-time in order to provide actionable intelligence as the scene unfolds. In this paper, we will provide an overview of an algorithm capable of providing the enhancement desired and focus on its real-time implementation. We will discuss the role that GPUs play in enabling real-time performance. This technology can be used to add performance to ISR applications by improving the quality of long-range imagery as it is collected and effectively extending sensor range.
The transmission characteristics of millimeter waves (mmWs) make them suitable for many applications in defense and security, from airport preflight scanning to penetrating degraded visual environments such as brownout or heavy fog. While the cold sky provides sufficient illumination for these images to be taken passively in outdoor scenarios, this utility comes at a cost; the diffraction limit of the longer wavelengths involved leads to lower resolution imagery compared to the visible or IR regimes, and the low power levels inherent to passive imagery allow the data to be more easily degraded by noise. Recent techniques leveraging optical upconversion have shown significant promise, but are still subject to fundamental limits in resolution and signal-to-noise ratio. To address these issues we have applied techniques developed for visible and IR imagery to decrease noise and increase resolution in mmW imagery. We have developed these techniques into fieldable software, making use of GPU platforms for real-time operation of computationally complex image processing algorithms. We present data from a passive, 77 GHz, distributed aperture, video-rate imaging platform captured during field tests at full video rate. These videos demonstrate the increase in situational awareness that can be gained through applying computational techniques in real-time without needing changes in detection hardware.
Several image processing techniques for turbulence mitigation have been shown to be effective under a wide range of long-range capture conditions; however, complex, dynamic scenes have often required manual interaction with the algorithm’s underlying parameters to achieve optimal results. While this level of interaction is sustainable in some workflows, in-field determination of ideal processing parameters greatly diminishes usefulness for many operators. Additionally, some use cases, such as those that rely on unmanned collection, lack human-in-the-loop usage. To address this shortcoming, we have extended a well-known turbulence mitigation algorithm based on bispectral averaging with a number of techniques to greatly reduce (and often eliminate) the need for operator interaction. Automations were made in the areas of turbulence strength estimation (Fried’s parameter), as well as the determination of optimal local averaging windows to balance turbulence mitigation and the preservation of dynamic scene content (non-turbulent motions). These modifications deliver a level of enhancement quality that approaches that of manual interaction, without the need for operator interaction. As a consequence, the range of operational scenarios where this technology is of benefit has been significantly expanded.
The OpenCL API allows for the abstract expression of parallel, heterogeneous computing, but hardware implementations differ substantially. The abstractions provided by the OpenCL API are
often insufficiently high-level to conceal differences in hardware architecture. Additionally, implementations
often do not take advantage of potential performance gains from certain features due to hardware limitations
and other factors. These factors make it challenging to produce code that is portable in practice, resulting in
much OpenCL code being duplicated for each hardware platform being targeted. This duplication of effort
offsets the principal advantage of OpenCL: portability.
The use of certain coding practices can mitigate this problem, allowing a common code base to be adapted
to perform well across a wide range of hardware platforms. To this end, we explore some general practices
for producing performant code that are effective across platforms. Additionally, we explore some ways of
modularizing code to enable optional optimizations that take advantage of hardware-specific characteristics.
The minimum requirement for portability implies avoiding the use of OpenCL features that are optional,
not widely implemented, poorly implemented, or missing in major implementations. Exposing multiple levels of
parallelism allows hardware to take advantage of the types of parallelism it supports, from the task level down
to explicit vector operations. Static optimizations and branch elimination in device code help the platform
compiler to effectively optimize programs. Modularization of some code is important to allow operations to
be chosen for performance on target hardware. Optional subroutines exploiting explicit memory locality allow
for different memory hierarchies to be exploited for maximum performance. The C preprocessor and JIT
compilation using the OpenCL runtime can be used to enable some of these techniques, as well as to factor in
hardware-specific optimizations as necessary.
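The preprocessor/JIT technique in the last sentence amounts to composing build options at run time so that hardware-specific parameters become compile-time constants in the device code. A minimal sketch follows; the capability keys are hypothetical stand-ins for values queried from the runtime, not part of the OpenCL API:

```python
def build_options(caps):
    """Compose an OpenCL build-options string from queried device traits.

    caps: dict of (hypothetical) capability keys for the target device.
    Baking values in as -D macros lets the platform compiler unroll loops
    and eliminate branches statically.
    """
    opts = ["-D TILE=%d" % caps.get("tile", 16),
            "-D VEC_WIDTH=%d" % caps.get("vec_width", 1)]
    if caps.get("has_local_mem"):
        opts.append("-D USE_LOCAL_MEM")  # enable the local-memory subroutine
    return " ".join(opts)
```

The resulting string would be passed as the options argument when building the program with the OpenCL runtime, so each platform JIT-compiles a specialized kernel from the same source.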
The OpenCL standard for general-purpose parallel programming allows a developer to target highly parallel computations towards graphics processing units (GPUs), CPUs, co-processing devices, and field programmable gate arrays (FPGAs). The computationally intense domains of linear algebra and image processing have shown significant speedups when implemented in the OpenCL environment. A major benefit of OpenCL is that a routine written for one device can be run across many different devices and architectures; however, a kernel optimized for one device may not exhibit high performance when executed on a different device. For this reason, kernels must typically be hand-optimized for every target device family. Due to the large number of parameters that can affect performance, hand tuning for every possible device is impractical and often produces suboptimal results. For this work, we focused on optimizing the general matrix multiplication routine. General matrix multiplication is used as a building block for many linear algebra routines and often comprises a large portion of the run-time. Prior work has shown this routine to be a good candidate for high-performance implementation in OpenCL. We selected several candidate algorithms from the literature that are suitable for parameterization. We then developed parameterized kernels implementing these algorithms using only portable OpenCL features. Our implementation queries device information supplied by the OpenCL runtime and utilizes this as well as user input to generate a search space that satisfies device and algorithmic constraints. Preliminary results from our work confirm that optimizations are not portable from one device to the next, and show the benefits of automatic tuning. Using a standard set of tuning parameters seen in the literature for the NVIDIA Fermi architecture achieves a performance of 1.6 TFLOPS on an AMD 7970 device, while automatic tuning achieves a peak of 2.7 TFLOPS.
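Generating a constraint-pruned search space, as described above, can be sketched as follows: enumerate candidate tile shapes and discard any that exceed the work-group-size or local-memory limits reported by the runtime. The specific tile sizes and constraints here are illustrative, not the actual tuner:

```python
def gemm_search_space(max_work_group, local_mem_bytes, dtype_bytes=4):
    """Enumerate (tile_m, tile_n, tile_k) candidates satisfying device limits."""
    space = []
    for tm in (8, 16, 32, 64):
        for tn in (8, 16, 32, 64):
            for tk in (4, 8, 16, 32):
                if tm * tn > max_work_group:   # one work-item per output element
                    continue
                # Tiles of A (tm x tk) and B (tk x tn) staged in local memory.
                if (tm * tk + tk * tn) * dtype_bytes > local_mem_bytes:
                    continue
                space.append((tm, tn, tk))
    return space
```

Each surviving candidate is then compiled and timed on the target device, with the fastest configuration retained for that device family.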
Organic electro-optic material based optical modulators have been fervently pursued over the past two decades. The advantageous material properties of organic materials over crystalline electro-optic materials such as LiNbO3 have yielded devices with record low drive voltages and significant promise for high frequency operation that are ideal for implementation in many developing telecommunication technologies. This paper will discuss a TM electro-optic phase modulator based on a recently developed material, IKD-1-50. A simple fabrication process that is compatible with wafer scale manufacturability using commercially available cladding materials, spin processing, standard photolithography, and dry etching will be presented. Non-centrosymmetric order is induced in the core material via a thermally enabled poling process that was developed based on work in simple slab waveguide material characterization devices, and optimized for polymer stack waveguide architectures. Basic phase modulators are characterized for half wave voltage and optical loss. In-device r33 values are estimated from a combination of measured and simulated values. Additional work will be discussed including amplitude modulation and high frequency applications. The design for a Mach-Zehnder interferometer amplitude modulator that implements a multimode interference cavity splitter will be presented along with plans for a microstrip transmission line traveling wave modulator.
Silicon slot waveguides leverage the field enhancement provided by the continuity of normal electric flux density across a dielectric boundary to confine an optical mode to a void between two proximal silicon strips. Silicon-organic hybrid slot modulators make use of this mode profile by infiltrating the slot region with a non-linear organic electro-optic material (OEOM) for modulation. The dual slot modulator takes this idea a step further by similarly confining a propagating RF mode to the same slot region to increase modal overlap for improved modulation efficiency. This effect is achieved by aligning a titanium dioxide RF slot along a conventional silicon slot waveguide. The TiO2 has an optical refractive index lower than silicon, but a significantly higher index in the RF regime. As a result of the large modal overlap and high electro-optic activity of the OEOM, this design can produce a measured phase-modulation VπL of less than 1.40 V·cm. Furthermore, as the modulator operates without the introduction of a doping scheme, it can potentially realize high operational bandwidth and low loss. We present work towards achieving various working prototypes of the proposed device and progress towards high frequency operation.
Chalcogenide glasses, namely the amorphous compounds containing sulfur, selenium, and/or tellurium, have emerged as a promising material candidate for mid-infrared integrated photonics given their wide optical transparency window, high linear and nonlinear indices, as well as their capacity for monolithic integration on a wide array of substrates. Exploiting these unique features of the material, we demonstrated high-index-contrast, waveguide-coupled As2Se3 chalcogenide glass resonators monolithically integrated on silicon with a high intrinsic quality factor of 2 × 10⁵ at 5.2 micron wavelength, and what we believe to be the first waveguide photonic crystal cavity operating in the mid-infrared.
Dual vertical slot modulators leverage the field enhancement provided by the continuity of the normal electric flux density across a boundary between two dielectrics to increase modal confinement and overlap for the propagating optical and RF waves. This effect is achieved by aligning a conventional silicon-based optical slot waveguide with a titanium dioxide RF slot. The TiO2 has an optical refractive index lower than silicon, but a significantly higher index in the RF regime. The dual slot design confines both the optical and RF modes to the same void between the silicon ribs of the optical slot waveguide. To obtain modulation of the optical signal, the void is filled with an organic electro optic material (OEOM), which offers a high optical non-linearity. The optical and RF refractive index of the OEOM is lower than silicon and can be deposited through spin processing. This design causes an extremely large mode overlap between the optical field and the RF field within the non-linear OEOM material which can result in a device with a low Vπ and a high operational bandwidth. We present work towards achieving various prototypes of the proposed device, and we discuss the fabrication challenges inherent to its design.
As EO phase modulators become more prevalent components in optical and RF applications, the demand increases for high bandwidth and low drive voltage modulators that can easily be integrated into developing photonic technologies. The proposed paper will discuss a device architecture for a phase modulator based on a recently developed organic EO material (OEOM), IKD-1-50, integrated into a PMMA polymer host, using a low-index, photo-curable resin as the cladding layers, all on a Si platform. Designs for a TM waveguide and electrode configuration will be presented from theory and modeling, through fabrication to characterization. The EO material serving as the core of the waveguide is poled using a poling stage and monitoring apparatus with the same electrodes designed for modulation. Poling procedures have been optimized for this material based on experimentation in simple slab-capacitor characterization devices, and produce in-device r33 values that are comparable with attenuated total internal reflection measurements. The challenges presented by the instability of OEOMs in common processing conditions have been addressed and a very simple fabrication process has been developed using standard photolithography and reactive ion etching to define an inverted ridge waveguide structure, pattern surrounding electrodes, and prepare usable end facets. Phase modulator characterization results for fabricated and poled devices have been quantified and will be presented. The simplicity of this device architecture on a Si handle allows for integration into various photonic applications.
Organic EO materials, sometimes called EO polymers, offer a variety of very promising properties that have improved at remarkable rates over the last decade, and will continue to improve. However, these materials rely on a “poling” process to afford EO activity, which is commonly cited as the bottleneck for the widespread implementation of organic EO material-containing devices. The Solution Phase-Assisted Reorientation of Chromophores (SPARC) is a process that utilizes the mobility of chromophores in the solution phase to afford acentric molecular order during deposition. The electric field can be generated by a corona discharge in a carefully-controlled gas environment. The absence of a poling director during conventional spin deposition forms centric pairs of chromophores which may compromise the efficacy of thermal poling. Direct spectroscopic evidence of linear dichroism in modern organic EO materials has estimated the poling-induced order of the chromophores to be 10-15% of its theoretical maximum, offering the potential for a manyfold enhancement in EO activity if poling is improved. SPARC is designed to overcome these limitations and also to allow the poling of polymeric hosts with temporal thermal (alignment) stabilities greater than the decomposition temperature of the guest chromophore. In this report evidence supporting the theory motivating the SPARC process and the resulting EO activities will be presented. Additionally, the results of trials towards a device demonstration of the SPARC process will be discussed.
A technique is described for displaying polarization information from passive millimeter-wave (mmW) sensors. This technique uses the hue of an image to display the polarization information and the lightness of an image to provide the unpolarized information. The fusion of both images is done in such a way that minimal information is lost from the unpolarized image while adding polarization information within a single image. The technique is applied to experimental imagery collected in a desert environment with two orthogonal linear polarization states of light and the results are discussed. Several objects such as footprints, ground textures, tire tracks, and shrubs display strong polarization features that are clearly visible with this technique, while materials with low polarization signatures such as metal are also clearly visible in the same image.
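The hue/lightness mapping described above can be sketched per pixel with the standard HLS color model. The parameterization below (polarization angle to hue, degree of polarization to saturation, unpolarized intensity to lightness) follows the description, but the exact mapping used in the work may differ:

```python
import colorsys
import math

def fuse_pixel(unpol, degree_of_pol, pol_angle):
    """Fuse polarimetric and unpolarized mmW data into one RGB pixel.

    unpol, degree_of_pol: values in [0, 1]; pol_angle: radians in [0, pi).
    Unpolarized intensity drives lightness, so it survives unchanged
    wherever the polarization signature is weak.
    """
    hue = pol_angle / math.pi  # map polarization angle onto the hue circle
    return colorsys.hls_to_rgb(hue, unpol, degree_of_pol)
```

With zero degree of polarization the pixel reduces to a grayscale value equal to the unpolarized intensity, which is how the technique loses minimal information from the unpolarized image.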
Modern high frequency applications require the use of the millimeter wave band. Slot waveguides have previously been used for electro-optic modulators, as the enhancement of the electric field strength in the slot creates a large overlap with the electro-optic material. We present a design that utilizes the field enhancement provided by a slot waveguide geometry for both the optical field and the RF modulating field. The dual RF and optical slot configuration maximizes the overlap of the optical field and the modulating field in the electro-optic material, creating the maximum amount of phase change per applied volt of modulating signal. This design presents unique fabrication challenges.
An all-polymer high-frequency Mach-Zehnder modulator that can be fabricated using standard UV lithography is
proposed. The optical waveguide structure consists of three polymer layers, two low-index, outer cladding layers and an
organic-electro-optic material in a polymer host as the core. Lateral confinement is provided by a trench that is defined
in the lower cladding layer, resulting in an inverted electro-optic polymer ridge waveguide. The inverted nature of this
trench structure allows for a fabrication process in which the cladding layer is patterned, and the highly sensitive electro-optic material is simply spun on and cured. Microstrip transmission line electrodes patterned on the outer cladding, over
the optical waveguides provide the modulation field. Similar devices using CLD1 or AJL8 as the electro-optic material have been numerically analyzed at up to 260 GHz and characterized at frequencies up to 40 GHz, but to date no electro-optic polymer device has been characterized at such high frequencies. A recently developed material, IKD-1-50, with
electro-optic coefficients up to five times larger than CLD1 and AJL8 will be utilized as the core layer for the optical
waveguide. The greater nonlinearity of these materials will yield a device with a lower Vπ. Additionally, high-frequency characterization up to 300 GHz will demonstrate the high-bandwidth application possibilities of these new materials.
Organic electro-optic materials, or "EO polymers," offer much higher nonlinearities than traditional crystalline materials, making them ideal for next generation electro-optic modulators. These materials require an
additional processing step known as poling, which reorients the chromophores through the application of a high electric
field. This effort will focus on corona poling, where a gas is ionized and the electric field across the sample is applied
through the relocation of charged ions. The proposed technique avoids the need to raise the temperature of the material
by applying the electric field while the material is deposited in solution phase. This process can overcome the thermal
stability tradeoff in many organic electro-optic materials, and preliminary results indicate that it enhances the electro-optic activity of the material.