An experiment was conducted to study turbulence along a 149-km path between the Mauna Loa and Haleakala mountain tops using digital cameras and light-emitting diode (LED) beacons. Much of the path is over the ocean, and a large portion of the path is 3 km above sea level. On the Mauna Loa side, six LED beacons were placed in a roughly linear array with pair spacings from 7 to 62 m. From the Haleakala side, a pair of cameras separated by 83.8 cm observed these beacons. Turbulence along the path induces tilts on the wavefronts, which results in displacements of the LED spots in the images. Image motion is also caused by unwanted noise sources such as camera platform motion. Differential motion between spots cancels much of this noise, and this differential motion is weighted by the turbulence along the path in different ways depending on the geometry between the sources and the cameras. A camera-motion-insensitive weighting function is developed to deal with this observational issue. A linear combination of these weighting functions is then used to generate a composite weighting function, which better rejects turbulence near the sources and receivers and is most sensitive to turbulence in the portion of the path out over the ocean. This technique is used to estimate turbulence in this region. The long range involved caused very strong scintillation in the images, which added new challenges to the data processing. A resulting estimate for C_n^2 of 4 × 10^−17 m^−2/3 is in good agreement with the Hufnagel–Valley HV5/7 model and the results of numerical weather modeling.
Turbulence has been experimentally characterized along a 149-km path between the Mauna Loa and Haleakala mountain tops in Hawaii through an imaging experiment. Six light-emitting diode (LED) beacons were placed on the Mauna Loa side, which were observed by a pair of cameras on the Haleakala side. The relative motion between the imaged LED spots is used to estimate turbulence.
There exist many different techniques for turbulence measurement and estimation. Turbulence is being studied here because of its effects on optical system performance, and so optical means of estimating turbulence are especially attractive because they sense turbulence in the same way turbulence (usually) degrades imaging performance. Many optical approaches produce an integrated or path-averaged value for turbulence, albeit with different weighting functions along the path, depending on the method. Scintillometers1 and Hartmann turbulence sensors2 are two such methods, producing estimates weighted toward the center and detector ends of the path, respectively. Some optical approaches can profile turbulence along the path, producing independent estimates for turbulence at different locations along the path. The difference in differential tilt-variance technique3 and SCIDAR4,5 are examples of this. The LED array technique of Gladysz6 also uses wavefront tilts, like the method discussed in this paper; however, their work does not address the distribution of turbulence along the sensed path. Slope detection and ranging (SLODAR) estimates turbulence along the path using the Fourier transform of the slope covariances between the aperture spacings.7 This approach cannot be used here since only a pair of cameras is employed and thus only a single aperture spacing is present.
Nonoptical methods to measure turbulence include sonic anemometry,8 which is a point measurement technique, and weather radar, which can profile, but rather coarsely.9 Satellite measurements have also been used to estimate turbulence; while these measurements are optical in nature, it is temperature rather than a disturbance to an image that is initially measured, and the turbulence estimate is derived from the large-scale measured temperature gradient.10 Coarse resolution (5 to 45 km horizontally and 100 m to 1 km vertically) volume distributions of optical turbulence can also be derived from numerical weather prediction (NWP) temperature fields in the same manner as those estimated from satellite-derived temperature fields, but this NWP and satellite method of turbulence calculation has yet to be validated against optically- or image-derived turbulence estimates.
This paper explains the use of an imaging technique to quantify turbulence. This technique, as employed here, is not used to produce a turbulence profile, and it would not be well suited to that because the weighting functions involved are not sufficiently different from each other deep in the path. However, it does produce turbulence estimates against a number of different weighting functions, and these are used to reject the influences of turbulence near the surface and thus make estimates of turbulence that are most strongly influenced by the turbulence in the middle of the path. Given that the Hawaii path studied here is mostly about 3 km above the ocean surface, this paper is able to make an initial “validating” comparison of optically inferred turbulence at altitude with turbulence obtained from NWP temperature gradients at the day and time of the optical measurement.
The rest of this paper consists of three sections. Section 2 discusses the path weighting functions used and how they were modified to be insensitive to camera motion. The synthesis of a new weighting function from a linear combination of others to better reject ground-level turbulence is also discussed. The section concludes with a simplified derivation of the crossing path weighting functions based on scaling the results of Fried11 and Winick and Marquis.12 Section 3 provides a description of the experiment and the techniques used in image processing. How the turbulence estimates are obtained from the weighting functions used and the measured tilt variances is also explained. The section continues with comparisons of these results to the Hufnagel–Valley 5/7 model and turbulence estimates from NWP. The section concludes with the lessons learned and ideas for future improvements. Section 4 summarizes the results and the strengths and weaknesses of the techniques used.
Path Weighting Functions
Path weighting functions model how turbulence along the path contributes to the expected value of the observed tilt-variance. Following the approach of Fried11 and Winick and Marquis,12 the expected value of the differential tilt-variance between a pair of apertures looking at a pair of sources, which lie in the same plane as the apertures, is expressed as follows:13
The first term on the second line of this equation corresponds to the total tilt-variance seen in the apertures, while the remaining terms correspond to the covariance between these tilts. The first half of this statement can be verified by completing the integrals for this part of the weighting function expression, which recovers twice the single-aperture 2-D tilt-variance. The process of evaluating the definite integral using Mathematica proceeds as follows:
Multiplying out the left side of Eq. (1) and extracting just these tilt-only weighting function parts results in
Recognizing that the integration variable increases from the aperture to the source in Eq. (4), but from the source to the aperture in Eq. (5), the integrations in these equations are equivalent, and substitution results in twice the known expression14 for the 2-D tilt-variance.
Having identified the self- and cross-variance terms in the weighting function, more complicated cases can easily be handled. Figure 1 shows the composition of a crossing path and noncrossing path differential tilts to create a camera motion invariant second difference.
Using the above, we can now compute the path weighting function for cases like this. Writing out the difference between a pair of tilt differences and squaring this quantity inside the expectation value brackets, we get the result shown in Fig. 2.
Despite the fact that a double difference is being taken here, these weighting functions remain strong. When compared to the weighting functions (not shown) for the differential tilt-variance in a single aperture for a pair of beacons, the weighting functions in Fig. 2 have about twice the strength and better spatial diversity. If the weighting functions are computed for the simpler crossing case (i.e., the left side of Fig. 1), a notch appears where the beams cross and the weighting function goes to zero. Where the beams cross, the turbulence induces the same tilt on both paths, and this cancels out in the difference. This encodes information about the distribution of turbulence along the path in the differential tilt-variances. For the motion-invariant weighting functions used here, this notch does not go all the way down to zero, but it is still positioned at the crossing point. Armed with this information, it is clear that the lowest curve in Fig. 2 corresponds to the smallest separation between beacons since the crossing point moves toward the beacons as they get closer together. Another nice feature of these weighting functions for our purposes here is that they go to zero on either end of the path. That is, data derived through the camera-motion-invariant tilt calculation will reject turbulence near the ends of the path. Since the beam path is only near the ground at the start and end of the path, this feature rejects near-ground-level turbulence from the measurements. Another way to look at the composite weighting function in Fig. 1 is to note that the beam paths captured in a single camera have opposite signs, so camera motion cancels out in each camera. It is presumed here that this motion is of the camera’s pointing direction. If the camera were to twist about its pointing direction, this effect would not cancel out. However, a few microradians of pointing error would cause considerable apparent image motion, whereas a few microradians of twist would cause very little.
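To illustrate why the second difference cancels pointing motion, the combination can be sketched numerically. The frame count, noise levels, and array layout below are illustrative assumptions, not the experiment's values; only the algebra of the double difference follows the text.

```python
import numpy as np

# Hypothetical spot tilts, in pixels: tilts[camera, beacon, frame].
# Each camera's pointing jitter shifts both of its spots identically,
# while the turbulence-induced tilt differs from path to path.
rng = np.random.default_rng(0)
n_frames = 1200
turb = rng.normal(0.0, 0.3, size=(2, 2, n_frames))       # per-path turbulence
cam_motion = rng.normal(0.0, 5.0, size=(2, 1, n_frames))  # pointing jitter
tilts = turb + cam_motion  # jitter broadcasts to both beacons in a camera

# Second difference: subtracting within each camera removes that camera's
# pointing motion; subtracting across cameras forms the composite signal.
d = (tilts[0, 0] - tilts[0, 1]) - (tilts[1, 0] - tilts[1, 1])
var = d.var()  # camera motion is gone; only turbulence variance remains
```

Even though the simulated pointing jitter is much larger than the turbulence tilts, the variance of the double difference reflects only the turbulence terms.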
Synthesizing a New Weighting Function
The weighting functions in Fig. 2 can be combined to synthesize a new weighting function, which is most sensitive to turbulence deep in the path. There are only 15 different curves, and after about the first third of the path all are very similar, so this technique will clearly be incapable of accurately profiling the entire path. If we have a desired weighting function along the path, presumed to be a vector sampled along the path, we ask: what weights should we put on each of the 15 weighting function curves to best approximate it? We will reuse a technique we have previously employed to compute this.16 These unknown weights can be related to the desired function by a matrix whose entries are a discretized version of the weighting function curves. Each column of this matrix corresponds to one of the 15 weighting functions, and each row corresponds to the distance at which they were discretized. This relationship is given as Eq. (8).
We chose to sample the functions at every 100 m, so the matrix has 1490 rows and 15 columns. Now we use the Moore–Penrose pseudoinverse to compute the weights through the following equation. Figure 3 shows the weighting function resulting from this procedure for a desired weighting function, which is unity in the middle third of the path and zero elsewhere.
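A minimal sketch of this synthesis step follows. The 100-m sampling, the 1490 × 15 matrix shape, and the middle-third target come from the text; the Gaussian constituent curves are made-up stand-ins for the real weighting functions from Eq. (1).

```python
import numpy as np

n_samples, n_funcs = 1490, 15
z = np.linspace(0.0, 149e3, n_samples)        # path samples every ~100 m

# Placeholder constituent weighting functions: bumps peaking at varying depths.
peaks = np.linspace(30e3, 90e3, n_funcs)
A = np.exp(-((z[:, None] - peaks[None, :]) / 25e3) ** 2)  # 1490 x 15 matrix

# Desired weighting: unity over the middle third of the path, zero elsewhere.
d = ((z > 149e3 / 3) & (z < 2 * 149e3 / 3)).astype(float)

# Moore-Penrose pseudoinverse gives the least-squares weights,
# and the synthesized function is the weighted sum of the constituents.
w = np.linalg.pinv(A) @ d
synthetic = A @ w
```

The same pseudoinverse step, applied to the real discretized curves, produces the composite weighting function shown in Fig. 3.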
It can be seen that this synthetic weighting function does a better job of rejecting turbulence near the camera end of the path than the constituent weighting functions, and near the source side of the path it looks just like the constituent weighting functions. It is also apparent that this synthetic weighting function is a bit noisy near the camera end of the path.
Simplified Derivation of the Weighting Function
The tilt-variance expression given as Eq. (1) was derived13 by closely following the approach used by Fried11 and Winick and Marquis12 for the geometry involved here. It can also be derived directly from Fried’s result by substitution. By recognizing that Fried’s result remains correct even behind the aperture, the equation can be reinterpreted for a two-aperture case when the sources and apertures are all in the same plane. Fried and Winick and Marquis give the anisoplanatic tilt error or the differential tilt-variance for the case of a single aperture observing a pair of stars (plane waves) as
Some different choices were made in these equations from what is above: the separation between the optical paths is now given in multiples of the aperture diameter, together with the angle between these paths, and the weighting function has been defined without the prefactors used previously in Eq. (2). First, the spherical wave version of this weighting function is formed by scaling the corresponding appearances in the curly braces. This changes the diameter of the integration region at each plane between the light sources and the camera from the aperture diameter to the diameter of a cone whose base is at the aperture and whose apex is at the light source. This step should be seen as an explanation rather than a derivation. The separation parameter equals the distance between the optical paths at each plane divided by the aperture diameter. For a single-aperture case, this is given as Eq. (12). In the case of a pair of apertures, this is just a simple linear relationship:
The separation between subapertures has been given its own label to avoid confusion. Substituting these two changes into the weighting function yields
A roughly linear array of six LED beacons was placed at the Mauna Loa Observatory on the island of Hawaii. These beacons were imaged by a pair of cameras located 149 km away at the Haleakala Observatory on the island of Maui. The cameras were located outside on a single mount about 1 m above the ground and separated by 83.8 cm center to center. The cameras were simultaneously triggered by computer and captured a 10-frame burst at 40 Hz every 10 s. Both cameras were monochrome FLIR Grasshopper 3 GS3-U3-89S6M-C versions, whose pixel pitch gives a pixel field of view of 1.7 m at the source plane. The exposure time was held fixed at 5 ms throughout this experiment, set to be as short as possible while maintaining good image strength in the cameras. The resulting images appear to freeze the atmosphere well, but the atmospheric coherence time has not been estimated or measured. The cameras were affixed with identical f/4 Nikon telephoto lenses with 75-mm apertures and 300-mm focal lengths. On the Mauna Loa side, the LEDs were placed on the ground in small PVC pipe fittings for weather protection and were battery powered. The LEDs were near-infrared with an 850-nm nominal wavelength and deployed along a line perpendicular to the imaging direction. The LEDs were unevenly spaced along this line to provide a variety of different pair-to-pair spacings and to ensure some pairs remained resolvable even if close pairs blurred together. Figure 4 illustrates the imaging path used from overhead. The path was mostly over the ocean and mostly about 3 km above the surface. Figure 5 illustrates the imaging path from the side. Because of the curvature of the earth, the closest approach to sea level occurred near the middle of the path. Most of the path that was not over water was on the Mauna Loa side, where the technique is relatively insensitive to turbulence. This feature helps to naturally reject near-surface turbulence from the measurements and will be discussed further in later sections.
Image Processing Techniques
The desired signal from this experiment was the wavefront tilts on the beacons appearing in the camera apertures. The wavefront tilt is just the apparent position of the source in object space divided by the distance to the object, or equivalently the position of the LED image on the focal plane divided by the focal length. The small-angle approximation is well justified by the geometry involved, where the biggest raw angle will be the maximum object separation, 62 m, divided by the object distance of 149 km, or about 0.4 mrad. Camera pointing accuracy is likely a comparably large angle, but it will also be ignored here because it can have little effect on the results. Figure 6 shows an hour-long average of the images captured in each camera.
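The conversion from centroid shift to wavefront tilt can be sketched using only quantities stated above (1.7 m per pixel at the source plane, the 149-km range, and the 62-m maximum separation); the helper function name is ours, not from the processing code.

```python
# A pixel subtends 1.7 m at the 149-km source plane, so the per-pixel
# angular scale and the largest raw angle follow directly.
RANGE_M = 149e3
PIXEL_FOV_RAD = 1.7 / RANGE_M           # ~11.4 microradians per pixel

def tilt_from_shift(shift_px: float) -> float:
    """Wavefront tilt (radians) corresponding to a centroid shift in pixels."""
    return shift_px * PIXEL_FOV_RAD

# Largest raw angle in the geometry: 62-m beacon separation over the range,
# comfortably inside the small-angle regime.
max_raw_angle = 62.0 / RANGE_M          # ~0.42 mrad
```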
It is apparent in this figure that there is a problem with the data; the left camera shows lateral motion, which is not present in the right camera. The presumed explanation is that the left camera was not mounted securely on the stand. Since a 1-pixel image shift corresponds to a pointing change of only about 11 μrad (1.7 m at the source plane), other explanations such as a loose lens or detector are not ruled out. This motion was likely imperceptible during data collection. Changes to the camera pointing direction do not affect the difference between the image positions of the beacons in a single camera, but the difference of the image positions between the cameras will be corrupted. The camera-motion-invariant weighting functions shown in Fig. 2 were developed to deal with this issue. The spacing between the beacons from left to right was 15, 18, 7, 12, and 10 m.
There was also a great deal of scintillation present in the imagery, which made identifying the spots and computing their centroids challenging. When the algorithm could not identify a spot location, it used the previous centroid. The centroiding code was written in MATLAB using the image processing toolbox with some hand tuning of parameters based upon the camera imagery.
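A simplified Python analogue of such a centroiding step is sketched below. The actual pipeline was MATLAB with the image processing toolbox; this sketch assumes a plain intensity threshold and is not the authors' implementation, but it shows the fall-back to the previous centroid when scintillation fades a spot.

```python
import numpy as np

def centroid_or_previous(frame, previous, threshold):
    """Intensity-weighted centroid (x, y) of pixels above threshold,
    or `previous` if the spot has scintillated below detectability."""
    mask = frame > threshold
    if not mask.any():
        return previous                    # spot lost: reuse last centroid
    ys, xs = np.nonzero(mask)
    weights = frame[mask]
    return (np.average(xs, weights=weights),
            np.average(ys, weights=weights))

# A toy frame with one bright spot at (x=5, y=4).
frame = np.zeros((9, 9))
frame[4, 5] = 10.0
c = centroid_or_previous(frame, (0.0, 0.0), threshold=1.0)
```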
Turbulence Estimates from Differential Tilt-Variances
The centroids of the spot images were calculated on each individual frame of data over the first 20 min of the 1-h block used for Fig. 6. This produced 1200 centroid locations for each of the six beacons and two cameras. These positions were combined to produce a camera-motion-invariant tilt as given by Eq. (7) and illustrated on the right side of Fig. 1. The resulting variances, in pixel units and as a function of the source pair separation, are shown in Fig. 7, along with the variance that would be expected for a constant turbulence level along the path.
The constant turbulence level used in Fig. 7 was chosen to make the 7-m source separation match for both theory and experiment. For constant turbulence, the expected value of the desired variance is just C_n^2 times the area under the weighting function curve. Figure 8 shows these results for each of the 15 pairs of sources.
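The constant-turbulence prediction is just C_n^2 times the numerically integrated weighting curve, which can be sketched as below; the Gaussian curve and the C_n^2 level are placeholders for the real weighting functions and the fitted constant, not measured values.

```python
import numpy as np

# Path samples every ~100 m along the 149-km path.
z = np.linspace(0.0, 149e3, 1490)
dz = z[1] - z[0]

# Placeholder weighting function (the real curves come from Eq. (1)).
W = np.exp(-((z - 60e3) / 25e3) ** 2)

# Expected differential tilt-variance for constant turbulence:
# Cn2 times the area under the weighting function curve.
cn2 = 4e-17                               # m^(-2/3), illustrative constant
expected_variance = cn2 * W.sum() * dz    # simple Riemann-sum integration
```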
There is a general trend toward smaller estimated turbulence values for smaller source pair separations. This trend means the variances shrink with decreasing separation even more quickly than the areas under the weighting function curves do, since those areas also decrease as the pair separation decreases. This fact is apparent in the modeled variances in Fig. 7, which are proportional to the areas under the weighting function curves. From Fig. 2, it can be seen that smaller source separations give proportionally more weight to turbulence deeper in the path. This is most apparent in the motion of the main peak of the weighting function, but it could also be shown by calculating the first moment of the weighting functions.
Each of the computed variances can be used to produce an estimate for turbulence by dividing the computed differential tilt-variance by the area under the corresponding weighting function curve. Figure 8 shows these results. There is a clear, but noisy, trend toward smaller estimated turbulence values as the source pair separation decreases. Together with the weighting function trends discussed in the last paragraph, this suggests turbulence is lower in the middle of the path, which is expected, and that the technique used here is successfully rejecting turbulence near the beginning of the path. This raises the question: what can we say about the turbulence level out over the ocean and well above the ground? The synthetic weighting function from Sec. 2.2 will be used to address this question. This synthetic weighting function produces a single estimated turbulence value. If the small singular values in the pseudoinverse are not controlled first, a nonphysical negative number results for C_n^2; this can be corrected by applying a threshold to the pseudoinverse. By cutting off small singular values until a positive result emerges, the value estimated for turbulence is 4 × 10^−17 m^−2/3. This is within the range of the lower values found in Fig. 8. Since this value was obtained by constraining the output of the estimation process to be positive, it is probably safest to say that the turbulence averaged over the portion of the path over water is estimated to be less than this value.
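The singular-value thresholding can be sketched with the `rcond` argument of NumPy's pseudoinverse, which discards singular values below a relative cutoff; raising the cutoff corresponds to cutting off small singular values until the derived estimate is physical. The matrix, data, and the scalar formed from the weights are random placeholders here, not the experiment's curves or variances.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((1490, 15))     # stand-in for the discretized weighting curves
d = rng.random(1490)           # stand-in for the desired weighting vector

# Increase the singular-value cutoff until the derived scalar is positive.
for rcond in (1e-15, 1e-6, 1e-2, 1e-1):
    w = np.linalg.pinv(A, rcond=rcond) @ d
    estimate = w.sum()         # placeholder for the derived Cn2 value
    if estimate > 0:
        break
```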
Comparing Turbulence Estimates and Models
The result of the previous section can be compared to the amount of turbulence expected at this location from turbulence models and by other means. The Hufnagel–Valley 5/7 model is the standard profile used for turbulence.17 This model’s prediction at a height of 3000 m is quite close to what is estimated here. Turbulence can also be estimated from numerical weather models using the Tatarskii relation.10 This approach mostly uses the modeled temperature gradient to estimate turbulence. For the location and time of the experimental data presented here, LEEDR18 was used with numerical weather input from the Global Forecast System to produce a turbulence profile. At 3000 m of altitude, this prediction matched closely with the other estimates. These results are shown in Fig. 9.
Lessons Learned and Future Work
The cameras and lenses were mounted on a single platform and were relatively close together. This was done in the hope that any motion of the system would be common to both cameras, but it is clear from the data that this hope was unfounded. If the cameras were farther apart, the weighting functions would have been better for discriminating turbulence deep in the path, and since a camera-motion-invariant tilt has been discovered, there is no reason not to use a longer baseline.
The scatter of the results seen in Figs. 7 and 8 seems likely to be a noise issue, perhaps attributable to incorrectly assigned centroids. The points scattered highest above the general trend involve at least one of the LEDs from the closest pair, where misassignment or missing spots seems most probable. Spot checks of the values produced against the actual images have not revealed any obvious errors. Work on improved centroid measurement from these data is ongoing. More data from this experiment remain to be analyzed, and once the variances are better tamed this will proceed.
The Greenwood frequency has not been estimated for this experiment, but that can be done from the numerical weather data using both wind data and turbulence estimates. This could verify or refute the proposition that the 5-ms camera integration time was sufficiently fast to freeze the atmosphere.
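As a sketch of that check, the standard Greenwood-frequency expression f_G = 2.31 λ^(−6/5) [∫ C_n^2(z) V(z)^(5/3) dz]^(3/5) can be evaluated over the path; the constant C_n^2 level and the 10-m/s transverse wind below are illustrative assumptions, not NWP output.

```python
import numpy as np

lam = 850e-9                           # LED wavelength, m
z = np.linspace(0.0, 149e3, 1490)      # path samples, m
dz = z[1] - z[0]

# Illustrative constant profiles (the real ones would come from NWP).
cn2 = np.full_like(z, 4e-17)           # m^(-2/3)
wind = np.full_like(z, 10.0)           # transverse wind speed, m/s

# Greenwood frequency: 2.31 * lam^(-6/5) * (integral Cn2 * V^(5/3) dz)^(3/5).
integral = np.sum(cn2 * wind ** (5.0 / 3.0)) * dz
f_greenwood = 2.31 * lam ** (-6.0 / 5.0) * integral ** (3.0 / 5.0)
```

Comparing 1/f_greenwood with the 5-ms exposure would indicate whether the frames indeed freeze the atmosphere.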
In conclusion, a method has been developed and experimentally demonstrated to estimate turbulence along an optical path in a region far from the source and detector. The demonstration used relatively low-cost camera equipment and LED beacons along a 149-km path between mountain peaks in Hawaii. A composite differential tilt measurement was found, which is insensitive to camera motion. Estimates of turbulence agree well with other sources.
This work was supported by the Office of Naval Research and the Joint Directed Energy Technology Transition Office through a Multidisciplinary Research Initiative. The views expressed in this paper are those of the authors and do not necessarily reflect the official policy or position of the Air Force, the Department of Defense, or the U.S. Government.
Jack E. McCrae received his PhD in physics from the Air Force Institute of Technology (AFIT) in 1997, an MS degree in optical physics from AFIT in 1993, and a BS degree in physics from MIT in 1984. He is a retired Air Force Colonel and currently a research assistant professor in the Engineering Physics Department at AFIT. His research interests include optics, lasers, quantum and nonlinear optics, laser radar, atmospheric propagation, and imaging.
Santasri R. Bose-Pillai received her PhD in electrical engineering with a focus in optics in 2008 and her MSEE in 2005, both from New Mexico State University. She received her BSEE with honors from Jadavpur University, India, in 2000. Currently, she is a research assistant professor in the Engineering Physics Department at AFIT Center for Directed Energy. Her research interests include propagation and imaging through atmosphere, telescope pointing and tracking, rough surface scattering, and laser communications. She is a member of OSA, SPIE, and DEPS.
Steven T. Fiorino received his BS degrees in geography and meteorology from Ohio State (1987) and Florida State (1989) universities. He additionally holds an MS degree in atmospheric dynamics from Ohio State (1993) and a PhD in physical meteorology from Florida State (2002). He is a retired Air Force Lieutenant Colonel with 21 years of service and currently a professor of atmospheric physics in the Engineering Physics Department at AFIT and the director of the Center for Directed Energy.
Aaron J. Archibald currently serves as a research engineer for the Department of Engineering Physics, AFIT Center for Directed Energy. He earned his BS degree in engineering physics from Wright State University in 2010 and an MS degree in nanotechnology from Chung Yuan Christian University, Taiwan, in 2012. His research supports the efforts of the Airborne Aero-Optics Laboratory through design, fabrication, and operation of experimental laser tracking systems.
Joel Meoak received his BS degree in computer engineering from Cedarville University in 2018. He served as a programming research assistant in the Center for Directed Energy working on enhancements to the AFIT Directed Energy and Atmospheric Models (ADAM) and Physics-based Imaging and Tracking of Ballistics, UAVs, and LEO satellites (PITBUL) software packages. He is now on active duty as a Second Lieutenant in the US Air Force.
Brannon J. Elmore received his BS degree in computer science from Wright State University in 2014. He is the lead software developer for the Center for Directed Energy at the AFIT, Wright-Patterson Air Force Base, Ohio. His research efforts are focused on the continuous improvement of LEEDR and HELEEOS, the core applications of the AFIT directed energy and atmospheric models (ADAM) software package.
Thomas Kesler earned his BS degree in engineering physics with a computer science minor from Wright State University in 2014. He is a software engineer for the Center for Directed Energy at AFIT working on the LEEDR, HELEEOS, and PITBUL software packages. His work includes developing C++ implementations of LEEDR and HELEEOS, and incorporating optimizations that reduce code runtimes and CPU usage. His research efforts include applying scientific computing to solving problems in atmospherics and optics.
Christopher Rice currently serves as a research assistant professor in the Department of Engineering Physics at the AFIT. He earned his BS degree in electrical engineering from Cedarville University, his MS degree in electrical engineering from AFIT, and his PhD in optical sciences and engineering also from AFIT. He is interested in topic areas related to high-energy lasers, remote sensing, laser lethality, and optical diagnostics.