Long range imaging in the atmosphere is impacted, and often limited, by optical turbulence. Random variations in the index of refraction along the optical path are caused by temperature variations and convection.1 This leads to warping and blurring degradations. When the warping and blurring are spatially varying over the field of view of an imaging sensor, this is referred to as anisoplanatic imaging. Anisoplanatic conditions typically prevail when imaging extended scenes. With very small fields of view, the optical turbulence may be reasonably modeled as isoplanatic, using a spatially invariant point spread function (PSF). This is often the case with astronomical imaging.1
The simulation of imaging under isoplanatic atmospheric turbulence has been well studied.1 Commercial and free open-source software2,3 is available for this purpose. However, anisoplanatic imaging simulations using numerical wave propagation have only recently been described in the literature. Some of the first such papers include those of Carrano4 and Praus et al.5 Further wave propagation-based anisoplanatic simulation works include those of Bos and Roggemann6,7 and Monnier et al.8,9 Anisoplanatic simulations that do not involve wave propagation have also been developed.10–12 The ability to accurately simulate the degradation effects of optical turbulence is important for several reasons. First, such simulations allow us to study the impact of atmospheric turbulence under a wide variety of imaging scenarios. Second, we are able to use simulated data to quantitatively evaluate turbulence mitigation methods.4,13–15 Quantitative analysis is possible because we have an objective truth image from which the degraded images are generated. Without such a truth image, assessment of restoration algorithms is subjective, and optimization cannot be automated. We believe that quantitative performance analysis is critical to the advancement of image restoration methods.
The simulation methodology presented here is based on that of Bos and Roggemann.7 We expand on, and extend, that work in several ways. We also provide a complete description of the simulation, including the details of the wave propagation method used. Like Bos and Roggemann,7 our approach computes an array of PSFs for a two-dimensional (2-D) grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation with an ideal image to produce a simulated image with realistic optical turbulence degradation. The degradation includes spatially varying warping and spatially varying blurring.
To produce the PSF array, we generate a series of extended phase screen realizations. Simulated point sources are numerically propagated from different positions on the object plane, through the phase screens, and ultimately to the focal plane of the simulated camera. Note that the optical path for each PSF will be different and will thus pass through a different portion of the extended phase screens. These different paths give rise to spatially varying, but spatially correlated, PSFs. As we shall show, these PSFs may be used to generate accurate anisoplanatic effects. The method we use to define the individual phase screen statistics is distinct from that of Bos and Roggemann.7 Our approach is based on the constrained least squares optimization presented by Schmidt,2 but is extended to include isoplanatic angle in the cost function. Another unique feature of our method is that we exclude the phase screen at the pupil plane. This aids in generating the appropriate level of anisoplanatic PSF correlations.
We also present a validation analysis here. In particular, we compare the simulated outputs with the theoretical anisoplanatic tilt correlation,16 and a derived differential tilt variance (DTV) statistic. This is in addition to comparing the long- and average short-exposure PSFs and isoplanatic angle. We believe this analysis represents the most thorough validation of an anisoplanatic simulation to date. The current work is also unique in that we simulate and validate both constant and varying $C_n^2(z)$ profiles. Furthermore, we simulate sequences with both temporally independent and temporally correlated turbulence effects. Temporal correlation is introduced by generating larger extended phase screens and translating this block of screens in front of the propagation area. This approach is similar to that described by Dios et al.17 Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. Thus, we believe this approach can be used effectively to study anisoplanatic optical turbulence and to aid in the development of image restoration methods.
The rest of this paper is organized as follows. Key anisoplanatic turbulence statistics are presented and discussed in Sec. 2. These statistics are used in setting up the simulation and in the validation analysis. The details of the simulation are presented in Sec. 3. The experimental results are presented in Sec. 4. Finally, we offer conclusions in Sec. 5.
Anisoplanatic Optical Turbulence
Optical Turbulence Statistics
Variations in the index of refraction in the atmosphere can be modeled with a refractive index structure function. Using a Kolmogorov model,1 this is given by $D_n(r) = C_n^2 r^{2/3}$, where $r$ is the separation between two observation points and $C_n^2$ is the refractive index structure parameter. The units of $C_n^2$ are $\mathrm{m}^{-2/3}$, and typical values tend to range from $10^{-17}$ to $10^{-13}\ \mathrm{m}^{-2/3}$.2 When this quantity varies along the optical path, it may be expressed as a function $C_n^2(z)$, where $z$ is the distance from the source.
The atmospheric coherence diameter (or Fried parameter)2 can be expressed as a weighted integral of $C_n^2(z)$. For a spherical wave, this yields
$$r_0 = \left[0.423\, k^2 \int_0^L C_n^2(z) \left(\frac{z}{L}\right)^{5/3} dz\right]^{-3/5},$$
where $k = 2\pi/\lambda$ is the optical wavenumber and $L$ is the path length.
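As a concrete check of this weighting, the spherical-wave Fried parameter can be evaluated numerically. The sketch below assumes a constant $C_n^2$ profile; the wavelength, range, and turbulence strength are illustrative values, not the paper's Table 1 parameters.

```python
import numpy as np

# Hypothetical parameters for illustration (not the paper's Table 1)
wavelength = 0.525e-6          # m
L = 7e3                        # path length (m)
Cn2 = 1e-15                    # m^(-2/3), constant profile
k = 2 * np.pi / wavelength     # optical wavenumber

# Spherical-wave Fried parameter as a weighted path integral of Cn2(z),
# with z measured from the source; the (z/L)^(5/3) weight peaks at the
# camera end, so turbulence near the camera dominates r0.
z = np.linspace(0.0, L, 10000)
dz = z[1] - z[0]
integrand = Cn2 * (z / L) ** (5.0 / 3.0)
r0 = (0.423 * k**2 * np.sum(integrand) * dz) ** (-3.0 / 5.0)
print(f"r0 = {r0 * 100:.2f} cm")
```

For this constant-profile case, the integral can also be done in closed form (the weight integrates to $3L/8$), which is a useful sanity check on the numerical result.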
Another important statistic is the isoplanatic angle.1 Two point sources that are separated by less than the isoplanatic angle will have a mean wave function phase difference at the aperture of less than 1 rad.2,18 Another way to think of this is that points separated by less than this angle will have approximately the same PSF. The isoplanatic angle can also be expressed as a weighted integral of $C_n^2(z)$, yielding2,18
$$\theta_0 = \left[2.91\, k^2 \int_0^L C_n^2(z) (L - z)^{5/3} dz\right]^{-3/5}.$$
The path weighting functions for $r_0$, $\theta_0$, and the log-amplitude variance are shown in Fig. 1. Note that the isoplanatic angle is most impacted by turbulence at the source, while $r_0$ is most impacted by turbulence near the camera. The log-amplitude variance is impacted most by the center of the optical path. Since the weighting for these three statistics covers the optical path in this balanced manner, we use them to determine the phase screen Fried parameters in our simulation, as shown in Sec. 3.5.
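The isoplanatic angle integral can be checked numerically in the same way. The constant-profile parameters below are again hypothetical illustrations; note that the $(L-z)^{5/3}$ weight peaks at the source end ($z = 0$).

```python
import numpy as np

# Hypothetical parameters for illustration (not the paper's Table 1)
wavelength = 0.525e-6          # m
L = 7e3                        # path length (m)
Cn2 = 1e-15                    # m^(-2/3), constant profile
k = 2 * np.pi / wavelength     # optical wavenumber

# Isoplanatic angle as a weighted path integral of Cn2(z), with z
# measured from the source; the (L - z)^(5/3) weight peaks at the
# source, so turbulence near the source dominates theta0.
z = np.linspace(0.0, L, 10000)
dz = z[1] - z[0]
integrand = Cn2 * (L - z) ** (5.0 / 3.0)
theta0 = (2.91 * k**2 * np.sum(integrand) * dz) ** (-3.0 / 5.0)
print(f"theta0 = {theta0 * 1e6:.2f} microradians")
```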
Optical Transfer Functions
The overall optical transfer function (OTF) for an imaging system in optical turbulence may be modeled as the product of an atmospheric OTF and the diffraction OTF,1,19
$$H(\rho) = H_{\mathrm{atm}}(\rho)\, H_{\mathrm{dif}}(\rho),$$
where $\rho$ is the radial spatial frequency. The atmospheric OTF may be expressed as1,19
$$H_{\mathrm{atm}}(\rho) = \exp\left\{-3.44 \left(\frac{\lambda f \rho}{r_0}\right)^{5/3} \left[1 - \alpha \left(\frac{\lambda f \rho}{D}\right)^{1/3}\right]\right\},$$
where $f$ is the focal length, $D$ is the aperture diameter, $\alpha = 0$ yields the long-exposure OTF, and $\alpha = 1$ yields the near-field average short-exposure (tilt-removed) OTF.
Anisoplanatic Tilt Statistics
While the isoplanatic angle provides information about the level of anisoplanatism on a small scale, it does not provide insight into anisoplanatic behavior from points in the object plane with a large separation angle. Below, we describe two statistics that do capture large-scale anisoplanatic behavior. These are the two-axis Z-tilt correlation and the DTV. We shall use these as key validation metrics for the simulation.
To begin, let us define a two-axis Z-tilt vector for a source viewed from the direction angle vector $\boldsymbol{\theta}$. For a spherical wave characterized by the Kolmogorov power spectrum, an analytical expression for the Z-tilt correlation has been derived by Basu (now Bose-Pillai) et al.,16 following techniques outlined by Fried20 and Winick.21 The tilt correlation can be expressed as a weighted integral of $C_n^2(z)$.
Figure 2 shows a plot of the tilt correlation path weighting functions for different source separations, expressed in terms of pixel spacings. Note that the term “Nyquist pixels” here refers to pixel spacings corresponding to spatial sampling at the Nyquist rate, relative to the diffraction-limited optical cut-off frequency. The aperture size, path length, and Nyquist pixel spacing used in the evaluation are the same as those used in the simulations and are listed in Table 1. Note that the weight is maximum at the camera ($z = L$) and drops down to zero at the source ($z = 0$). This implies that the turbulence near the source does not contribute to the tilt correlation seen at the camera. It is also apparent from Fig. 2 that the tilt correlations decrease with increasing angular separation between the sources.
Table 1 also lists the Nyquist pixel spacing at the focal plane and at the object plane.
Let us now consider the DTV. This statistic is defined as the variance of the difference between the tilts from two point sources,22 and it may also be expressed as a weighted integral of $C_n^2(z)$.
Figure 3 shows a plot of the DTV path weighting functions for different source separations. The aperture size and path length used in the evaluation are the same as those in Fig. 2. Note that the weighting functions in Fig. 3 drop down to zero at both ends of the path. The zero weight at the source occurs because we have a point source with a spherical wave emanating from it. Angular variation due to turbulence immediately at the source is effectively like rotating the point source. This just directs a different ray emanating from the point source to the observer. It does not change the angle of arrival of the point source observed by the camera. On the other side of the path, tilts due to turbulence near the camera tend to be very similar across the field of view, owing to the convergence of the optical paths near the camera. This causes the differential signal to drop to zero at the camera end. It is also evident from Fig. 3 that the DTV grows with increasing angular separation between the sources. This is because the tilt correlation drops and the DTV approaches the tilt variance from Eq. (10). Furthermore, it is interesting to compare Fig. 3 to Fig. 1. Note that for small separations, the weighting function in Fig. 3 is weighted more heavily toward the source. This is also the case for the isoplanatic angle weighting in Fig. 1, which relates to small scale anisoplanatism. For large separations, the weighting in Fig. 3 appears to approach the $r_0$ weighting in Fig. 1. This is a satisfying result, given that $r_0$ governs tilt variance as seen in Eq. (13).
Overview of Simulation
As mentioned in Sec. 1, the proposed method is based on the method of Bos and Roggemann.7 Extended phase screens are generated as shown in Fig. 4. Points in the object plane are projected to the center of the camera pupil. Two such examples are shown in Fig. 4. The local phase screens are cropped from the extended phase screens within a specified distance of the optical path for each point. These local phase screen portions are shown with the blue and green squares in Fig. 4. The extended phase screen sizes are determined based on the cropped phase screen size and the object size, as shown in Fig. 4. More will be said about these dimensions shortly.
A simulated point source is numerically propagated from the source, through the cropped phase screens, to the pupil plane for each point in the object. The grid of object points is spaced according to the Nyquist sample spacing in the object plane. The Nyquist spacing in the focal plane is given by $\lambda f/(2D)$, and the Nyquist spacing at the object is this value scaled by the inverse of the system magnification. Our simulation includes a skip parameter, whereby we have the option to skip a specified number of Nyquist samples for the purposes of propagation, and then we interpolate the PSFs that are generated to obtain the complete set. A bilinear interpolation is employed here for each sample of the PSF, based on the samples from the four PSFs that surround the PSF being interpolated. Once the PSFs are all generated, the output image is formed by using the PSFs as weights in a spatially varying weighted sum of pixels from an ideal image.7
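The skip-and-interpolate step can be sketched as follows. This is a simplified illustration of per-sample bilinear PSF interpolation; the array shapes and helper name are our own, not the paper's code.

```python
import numpy as np

def interp_psf_grid(psf_grid, skip):
    """Fill in a PSF array computed on a coarse grid (one PSF every
    `skip` Nyquist samples) by bilinear interpolation of each PSF sample.

    psf_grid: shape (My, Mx, P, P), the propagated PSFs.
    Returns shape ((My-1)*skip+1, (Mx-1)*skip+1, P, P).
    """
    My, Mx, P, _ = psf_grid.shape
    Ny, Nx = (My - 1) * skip + 1, (Mx - 1) * skip + 1
    out = np.empty((Ny, Nx, P, P))
    for i in range(Ny):
        for j in range(Nx):
            # fractional position within the coarse grid cell
            y, x = i / skip, j / skip
            y0, x0 = min(int(y), My - 2), min(int(x), Mx - 2)
            dy, dx = y - y0, x - x0
            # weighted sum of the four surrounding PSFs, per sample
            out[i, j] = ((1 - dy) * (1 - dx) * psf_grid[y0, x0]
                         + (1 - dy) * dx * psf_grid[y0, x0 + 1]
                         + dy * (1 - dx) * psf_grid[y0 + 1, x0]
                         + dy * dx * psf_grid[y0 + 1, x0 + 1])
    return out
```

Because the PSFs are spatially correlated, interpolating the skipped positions from their four coarse-grid neighbors is a reasonable approximation that trades a small amount of accuracy for a large reduction in propagation cost.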
The propagation method described in Sec. 3.2 is applied to each of a grid of points in the object plane. The point source wave functions are propagated through the phase screens and to the pupil plane. The incoherent PSF is then computed as described in Sec. 3.3. The phase screens are generated with the appropriate statistics as described in Sec. 3.4. Finally, in Sec. 3.5, we describe how the individual phase screen Fried parameters are determined in our simulation.
Numerical Wave Propagation
The split-step propagation method used to form each PSF is illustrated in Fig. 5. It involves a point source and a series of phase screens that have been cropped based on the geometry shown in Fig. 4. Note that the path from the point in the object plane to the center of the camera aperture forms the centers of the cropped phase screen windows. We use nearest-neighbor interpolation here to speed computation. However, other forms of interpolation may be used.
The wave function for the propagating field in the $i$'th plane along the $z$-axis is defined following Schmidt.2

The propagation from one plane to the next is modeled as free-space Fresnel propagation followed by the phase perturbation of the next screen.2 In particular, the method described by Schmidt as angular-spectrum propagation is used to implement the free-space propagation.2 Absorbing borders using a Gaussian window may be applied in simulations in which a significant amount of signal energy reaches the borders of the simulation area.2
The sample spacing used to represent the phase screens and to implement Eq. (19) is based on Voelz's critical sampling rule (best use of bandwidth and spatial support).23 This gives a sample spacing for the phase screens of $\delta = \sqrt{\lambda Z / N}$, where $Z$ is the total path length and $N$ is the number of samples across the propagation grid. In particular, we use a phase screen width that is a multiplier parameter times the propagation grid width. We choose the grid size to be as small as possible such that the critical sampling constraint is met and $N$ is a power of two (to speed up FFT computations). We also use a constant screen spacing so that all propagations can be achieved with the same impulse response in Eq. (20).
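One stage of the split-step propagation described above can be sketched as follows. This is a minimal angular-spectrum (transfer-function) free-space step under the Fresnel approximation; it omits the absorbing boundaries and any scaling between planes that a full implementation would include.

```python
import numpy as np

def angular_spectrum_step(u, wavelength, delta, dz):
    """One free-space step of a split-step propagation using the
    angular-spectrum (transfer-function) method.

    u: complex field sampled with spacing `delta` (m).
    dz: distance to the next plane (m).
    A phase screen would be applied as u * np.exp(1j * phi) before
    each step. Simplified sketch, not the paper's exact implementation.
    """
    N = u.shape[0]
    fx = np.fft.fftfreq(N, d=delta)            # spatial frequencies (1/m)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    # Fresnel transfer function of free space
    H = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u) * H)
```

Because the transfer function has unit modulus, each step conserves energy; a uniform plane wave passes through unchanged, which is a quick way to sanity-check the implementation.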
Incoherent Point Spread Function
Following the propagations, the complex amplitude at the pupil plane is obtained. This is multiplied by the camera aperture mask, and a collimation-type phase compensation is used to allow the lens operation to focus the image at the focal length.7,19 The incoherent PSF is then obtained from the squared magnitude of the Fourier transform of the resulting pupil function. The final simulation output is computed with a spatially varying weighted sum of ideal image pixels, using the PSFs as weights.
Note that using the collimation in Eq. (22), the PSF image is focused at the focal length and not the image distance. This creates a magnification of $f/L$, where the in-focus modeled system would have a magnification of $f/(L - f)$. For large ranges, the difference is negligible. For shorter distances, the magnification can be corrected during the resampling step.
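The spatially varying weighted sum that forms the output image can be sketched as below. This is a direct (slow) implementation for clarity; the array shapes, edge padding, and unit-sum PSF normalization are our own assumptions.

```python
import numpy as np

def spatially_varying_blur(ideal, psfs):
    """Degrade an ideal image with a per-pixel PSF.

    ideal: (H, W) ideal image.
    psfs:  (H, W, P, P) with P odd; each PSF normalized to unit sum.
    Output pixel (m, n) is a weighted sum of ideal-image pixels in a
    P x P window centered on (m, n), weighted by that pixel's PSF.
    Sketch only; implementation details may differ from the paper's.
    """
    H, W, P, _ = psfs.shape
    r = P // 2
    padded = np.pad(ideal, r, mode="edge")
    out = np.empty((H, W))
    for m in range(H):
        for n in range(W):
            window = padded[m:m + P, n:n + P]
            # flip the PSF so the operation is a true convolution
            out[m, n] = np.sum(window * psfs[m, n, ::-1, ::-1])
    return out
```

Since each PSF already encodes its local tilt, this single operation produces both the spatially varying warping and the spatially varying blurring.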
Generating Phase Screen Realizations
Phase screen realizations are designed to follow a modified von Kármán phase power spectral density (PSD).2 This PSD includes the Kolmogorov PSD as a special case, but has additional parametric flexibility. The modified von Kármán PSD is given by2
$$\Phi_\phi(f) = 0.023\, r_0^{-5/3}\, \frac{\exp(-f^2/f_m^2)}{(f^2 + f_0^2)^{11/6}},$$
where $f$ is radial spatial frequency, $f_m = 5.92/(2\pi l_0)$, $f_0 = 1/L_0$, $l_0$ is the inner scale, and $L_0$ is the outer scale. Note that for $L_0 \rightarrow \infty$ and $l_0 \rightarrow 0$, the modified von Kármán PSD is equivalent to a Kolmogorov PSD.2 An example of the modified von Kármán PSD is shown in Fig. 6.
Generating a realization of a random process with a specified PSD can be done by generating white noise with a constant PSD of 1 and filtering it with a frequency response that is the square root of the desired PSD. Thus, we begin realizing the modified von Kármán phase screens by generating an array of independent and identically distributed Gaussian random samples with a standard deviation of 1, at the extended phase screen dimension, prior to cropping for propagation. We wish for these samples to correspond to a constant PSD value of 1 over the frequency range $-1/(2\delta)$ to $1/(2\delta)$ in each dimension, where $\delta$ is the sample spacing. This means the total power (i.e., the integral of the PSD) should be $1/\delta^2$. For our discrete samples, that means the variance should be $1/\delta^2$. Thus, we multiply the unit standard deviation samples by $1/\delta$. Next, we filter the array with a discrete-space impulse invariant24 version of a filter with a frequency response given by the square root of the modified von Kármán PSD.
One can see in Fig. 6 that the most dynamic part of the modified von Kármán PSD lies at low spatial frequencies. Faithful generation of low spatial frequency content is essential in matching theoretical statistics such as tilt variance and the long-exposure OTF. Since the sampling process limits the resolution at which the spatial frequencies can be evaluated in Eqs. (25) and (26), special subharmonic methods may be required.2,25 Note that evaluating Eqs. (25) and (26) for use with FFTs is done in frequency increments of . The subharmonic methods seek to produce appropriate spectral content at spatial frequencies below the limit. We employ a subharmonic generation method based on the technique presented by Schmidt.2 However, we use a real-valued 2-D Fourier series representation of the subharmonic spatial phase realizations, made up of a sum of 2-D cosines, to simplify the computations.
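The base FFT synthesis described above (without the subharmonic correction) can be sketched as follows, using a common formulation of the modified von Kármán phase PSD. The constants and FFT scaling follow Schmidt's convention and are an assumption here; without subharmonics, the low spatial frequency content will be under-represented, as noted above.

```python
import numpy as np

def mvk_phase_screen(N, delta, r0, L0=100.0, l0=0.01, rng=None):
    """One phase screen realization (radians) by filtering complex white
    Gaussian noise with the square root of the modified von Karman PSD.
    Base FFT method only (no subharmonics). Sketch, not the paper's code.

    N: grid size, delta: sample spacing (m), r0: Fried parameter (m),
    L0: outer scale (m), l0: inner scale (m).
    """
    rng = np.random.default_rng() if rng is None else rng
    df = 1.0 / (N * delta)                      # frequency grid spacing
    fx = np.fft.fftfreq(N, d=delta)
    FX, FY = np.meshgrid(fx, fx, indexing="ij")
    f = np.sqrt(FX**2 + FY**2)
    fm = 5.92 / l0 / (2 * np.pi)                # inner-scale frequency
    f0 = 1.0 / L0                               # outer-scale frequency
    # modified von Karman phase PSD
    PSD = (0.023 * r0 ** (-5.0 / 3.0)
           * np.exp(-(f / fm) ** 2)
           / (f**2 + f0**2) ** (11.0 / 6.0))
    PSD[0, 0] = 0.0                             # zero-mean screen
    cn = ((rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))
          * np.sqrt(PSD) * df)
    # inverse DFT as a Fourier-series sum (hence the N**2 factor)
    return np.real(np.fft.ifft2(cn)) * N**2
```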
An example phase screen realization is shown in Fig. 7. This has been generated from the modified von Kármán PSD shown in Fig. 6. The scale is in units of radians, and the camera aperture is shown for reference. A single PSF generated from this phase screen and aperture is shown in Fig. 8. Note that the PSF is downsampled to the Nyquist pixel spacing for the camera model, prior to applying it as part of the spatially varying weighted sum operation in Eq. (24) to produce the output image.
Determining Phase Screen Parameters
The individual phase screen Fried parameters are determined such that they are consistent with the global Fried parameter, log-amplitude variance, and isoplanatic angle. This approach is based on the method presented by Schmidt,2 but we have extended this to also include the isoplanatic angle. Based on the weightings shown in Fig. 1, we see that these three parameters give us a good balance of weighting along the optical path. This allows us to simulate variable profiles more accurately.
Following Schmidt’s method,2 the individual screen Fried parameters $r_{0i}$ are related to the global turbulence parameters through a system of equations that is linear in the quantities $r_{0i}^{-5/3}$.
We use a constrained least squares optimization, based on minimizing the squared error between the desired global parameters and the corresponding weighted sums of the screen parameters $r_{0i}^{-5/3}$ on the left side of Eq. (32). Our implementation uses the MATLAB function “fmincon” to perform the minimization. The constraints are user controlled in our simulation tool. For the results presented here, we use two constraints. One constraint is that the log-amplitude variance generated by any phase screen cannot exceed 20% of the total. The other constraint is that the pupil plane screen contributes no turbulence (i.e., no screen at the pupil). Since all point sources converge to the center of the pupil plane, they would all share the exact same pupil phase screen. This tends to create excess tilt correlation. This might be expected by looking at the weighting in Fig. 2. Note that this is an artifact of the discrete phase screen approach. With more screens, the common pupil plane screen would represent a smaller portion of the optical path and cause less excess correlation. However, by setting the final phase screen phases to zero and using Eq. (32), we find that we are able to achieve a good match with the theoretical predictions, even with a relatively small number of screens.
We initialize the minimization search with Eq. (27), using the average $C_n^2$ for the section of the profile represented by the $i$'th screen. We found that we are able to get reasonable results using these initial values for the simulation (especially for a large number of screens). However, we observed a better overall match with the validation metrics using the optimization.
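The constrained least squares step can be illustrated with a toy problem. Since we do not reproduce the paper's exact system matrix, the sketch below uses a small random linear system in the screen strengths $x_i = r_{0i}^{-5/3}$ and SciPy's optimizer in place of MATLAB's fmincon; only the structure (nonnegative strengths, pupil screen fixed at zero) mirrors the method.

```python
import numpy as np
from scipy.optimize import minimize

# Toy setup (assumed, not the paper's matrices): 3 global targets
# (r0, theta0, log-amplitude variance) as weighted sums over n screens.
n = 10
rng = np.random.default_rng(1)
A = np.abs(rng.standard_normal((3, n)))   # path weighting matrix
x_true = np.abs(rng.standard_normal(n))   # per-screen r0i^(-5/3) values
x_true[-1] = 0.0                          # no screen at the pupil plane
b = A @ x_true                            # desired global parameters

def cost(x):
    # squared error between achieved and desired global parameters
    r = A @ x - b
    return r @ r

# Bounds: strengths nonnegative, pupil-plane screen fixed at zero.
bounds = [(0, None)] * (n - 1) + [(0, 0)]
res = minimize(cost, x0=np.full(n, 0.1), bounds=bounds, method="L-BFGS-B")
print("residual:", cost(res.x))
```

In the actual simulation, the per-screen log-amplitude variance cap (20% of the total) would appear as additional inequality constraints, which fmincon and SciPy's constrained methods both support.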
Experimental Results and Validation
The experimental results presented have been generated using the optical parameters listed in Table 1 and the simulation parameters in Table 2. Note that we are simulating a range of 7 km for a visible wavelength telescope and camera. We have elected to use 9 nonzero phase screens, plus the zeroed pupil plane screen. We have chosen a very large outer scale, since we are validating against the Kolmogorov-based statistics in Sec. 2. Thus, for these results, the modified von Kármán PSD is acting essentially as a Kolmogorov PSD. The images are of size $257 \times 257$ pixels, and spatial sampling at the Nyquist frequency is used (based on the diffraction-limited optical cut-off frequency). Note that other degradations such as downsampling and noise can easily be added after the turbulence simulation. We use a pixel skip parameter of four pixels to speed up the processing, compared with generating one PSF for each pixel. This can be adjusted according to the needs of the simulation and the processing resources available. Our first set of results, in Sec. 4.1, is for a constant $C_n^2$ profile, and then we consider varying $C_n^2(z)$ profiles in Sec. 4.2. For each simulation, 200 frames are generated, and the theoretical validation metrics are compared with the parameters estimated from the simulation. Implementation issues and run times are discussed in Sec. 4.3.
Table 2 lists the simulation parameters: the cropped screen samples, propagation screen width, pupil plane point spread, propagation sample spacing, number of phase screens (9 nonzero), phase screen type (modified von Kármán with subharmonics), image size in pixels and in the object plane, and pixel skip (4 pixels).
We have simulated a constant $C_n^2$ profile with five levels of turbulence. Some of the validation results are presented in Table 3. The validation metrics listed are $r_0$, $\theta_0$, and the root-mean-squared (RMS) Z-tilt. The Fried parameter is estimated from the simulation long-exposure PSF, obtained by averaging the generated PSFs over the 200 frames. The isoplanatic angle is estimated by an analysis of the wave function phases over the aperture for point sources of varying angular separations. Finally, the Z-tilt is estimated by performing a correlation-based registration of the simulated PSFs. We have found that this type of correlation PSF registration corresponds well with Z-tilt, and PSF centroids correspond well to G-tilt. Note that we see a high level of agreement with regard to all of the metrics in Table 3.
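For reference, the centroid-based G-tilt proxy mentioned above can be sketched as a simple helper. This is hypothetical code, not the paper's implementation; the Z-tilt estimates reported here instead use correlation-based registration.

```python
import numpy as np

def psf_centroid(psf):
    """Intensity centroid of a PSF, a common proxy for G-tilt
    (gradient tilt). Returns the (row, col) centroid in pixel units;
    shifts of the centroid between frames track the tilt component
    of the turbulence-induced motion.
    """
    P, Q = psf.shape
    total = psf.sum()
    rows = np.arange(P)
    cols = np.arange(Q)
    cy = np.sum(psf.sum(axis=1) * rows) / total
    cx = np.sum(psf.sum(axis=0) * cols) / total
    return cy, cx
```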
Constant $C_n^2$ simulation results. Comparison of theoretical statistical parameters and those estimated from the simulation output.
|Percent error (%)||1.28||2.21||2.72||3.48|
|Percent error (%)||0.78||5.35||4.38||2.24||0.65|
|Percent error (%)||0.78||5.35||4.38||2.24||0.64|
|Theoretical RMS Z-tilt (pixels)||0.9026||1.4271||2.0182||2.8542||3.4957|
|Simulation RMS Z-tilt (pixels)||0.9044||1.4294||2.0151||2.8398||3.4692|
|Percent error (%)||0.20||0.16|
The phase screen Fried parameters, obtained using the procedure described in Sec. 3.5, are shown in Fig. 9 for two levels of turbulence. No value is plotted for the pupil location at $z = L$, since the Fried parameter there would be infinite, as described in Sec. 3.5. Note that even though we are simulating a constant $C_n^2$ profile, the $r_{0i}$ values in Fig. 9 are not constant. This is a consequence of the optimization procedure. One can see that the phase screen immediately before the pupil plane does have a lower $r_{0i}$ than the others. This is in response to the pupil plane screen constraint, and this final nonzero screen is effectively “responsible” for a larger portion of the optical path than the others.
The theoretical long- and short-exposure PSFs (with diffraction) are shown in Figs. 10 and 11, respectively. In particular, amplitude normalized cross sections of the theoretical and simulated PSFs are shown for two turbulence levels. Excellent agreement can be seen here in all cases. Note that without subharmonics, the simulated long-exposure PSFs tend to be too narrow, and the RMS Z-tilts tend to be too low. The relationship between these two parameters is explored in depth by Hardie et al.15
Structure functions of phase2 are shown in Fig. 12. These curves show the average squared phase difference over the aperture for two points separated by a given angle. Note that the isoplanatic angle is defined to be the angle where these structure functions have a value of 1 rad². These plots show fairly good agreement between the theory and simulation. Thus, the simulation appears to be capable of accurately capturing small scale anisoplanatic behavior. Note that the deviation of the simulated and theoretical curves for high source separation angles is due to phase wrapping. Phase wrapping is an unavoidable consequence of FFT processing during propagation. This artificially limits the estimated structure functions of phase in the simulation. It does not, however, compromise the generation of the PSFs in any way.
A comparison of theoretical and simulated -tilt correlations is shown in Fig. 13 for two turbulence levels. A similar comparison of the DTV is shown in Fig. 14. These curves show that the simulation is accurately producing the larger scale anisoplanatic behavior predicted by the theoretical expressions. Note that if a nonzero phase screen is placed at the pupil plane, we tend to see simulated correlation that is higher, and DTV that is lower, than the theoretical values. This is explained by the fact that all PSFs share the exact same phase screen at the pupil, because of the converging optical paths.
Let us now examine some image results. The ideal object image used in these simulations is shown in Fig. 15. Degraded images for two levels of turbulence are shown in Fig. 16. Also shown in Fig. 16 are the corresponding Z-tilt motion vectors, scaled for display. The level of blurring and warping clearly increases with $C_n^2$.
Validation analysis for the variable $C_n^2(z)$ profiles is shown in Table 4. The corresponding profiles are shown in Fig. 17. Path A has heavy turbulence at the source, and Path B has heavy turbulence at the camera. The screen Fried parameters for these two cases are shown in Fig. 18. The long- and short-exposure PSFs are shown in Figs. 19 and 20, respectively. The structure functions of phase are shown in Fig. 21. The tilt correlation and DTV plots for the two paths are shown in Figs. 22 and 23, respectively. As with the constant profile results, there is generally good agreement between theory and simulation. The largest deviation is seen with the tilt correlation and DTV for Path A. We shall continue to investigate this in future work.
Variable $C_n^2(z)$ profile simulation results. Comparison of theoretical statistical parameters and those estimated from the simulation output. Path average $C_n^2 = 1.00 \times 10^{-15}\ \mathrm{m}^{-2/3}$.
|(A) Heavy at source||(B) Heavy at camera|
|Percent error (%)||2.33||2.89|
|Percent error (%)||1.58||3.09|
|Percent error (%)||1.58||3.09|
|Theoretical RMS Z-tilt (pixels)||2.1080||3.4455|
|Simulation RMS Z-tilt (pixels)||2.1054||3.4071|
|Percent error (%)|
Image results for the variable $C_n^2(z)$ profiles are shown in Fig. 24. The blurring is much more significant for Path B (heavy turbulence at the camera). This is supported by the much smaller $r_0$, as shown in Table 4. The tilt vectors are also larger for Path B. This is consistent with the higher RMS Z-tilt reported in Table 4. On the other hand, Path A yields a smaller isoplanatic angle.
The simulation has been implemented in MATLAB 2016a and run on a PC with an Intel(R) Xeon(R) CPU E5-2620 v3 at 2.40 GHz and 16 GB RAM, running Windows 10. We employ an NVIDIA Quadro K-4200 GPU. We use the parallel computing toolbox for the PSF generation component, such that the FFTs are all computed on the GPU. This is done by simply assigning the corresponding numerical arrays to be “gpuArrays.” We have taken care to code the components efficiently by minimizing the use of “for” loops and maximizing the use of array operations. We used MATLAB’s profiler to look for bottlenecks and made adjustments for improved processing speed. The algorithm component run times are listed in Table 5 using the parameters listed in Tables 1 and 2. Note that the GPU provides a speedup of more than a factor of five for the PSF array computation. The algorithm processing time scales with the number of PSFs being generated. Processing larger images or using a smaller pixel skip will give a correspondingly larger run time.
Algorithm average run times for processing one simulated 257×257 pixel frame using the parameters listed in Tables 1 and 2.
|Algorithm component||Run time (s)|
|Phase screen generation||1.67|
|PSF array generation (w/GPU)||20.46|
|PSF array generation (w/o GPU)||115.73|
|Spatially varying convolution||2.23|
|Total (w/o GPU)||119.63|
We have presented a numerical wave propagation method for simulating imaging of an extended scene under anisoplanatic conditions. In the simulation, we compute an array of PSFs for a 2-D grid of points on the object plane. An ideal image is then degraded by applying the PSFs in a spatially varying weighted sum operation. This gives rise to a spatially varying blurring and warping degradation. The PSFs are generated by the propagation of point sources through an array of phase screens. The optical path for each point is somewhat different, by virtue of its originating position on the object plane, and passes through different portions of the phase screens. This produces distinct but spatially correlated PSFs. We have employed a modified approach for determining the phase screen Fried parameters that incorporates $r_0$, $\theta_0$, and the log-amplitude variance.
To see if the resulting PSFs exhibit accurate anisoplanatic statistical properties, we have conducted an extensive validation analysis. This analysis shows that the simulation is capable of generating accurate anisoplanatic effects on both a small and a large scale. Small scale anisoplanatism is validated with the isoplanatic angle. Large scale anisoplanatism is validated for the first time using tilt correlation,16 as well as with a newly derived DTV statistic for spherical waves. We also find excellent agreement between the simulated and theoretical long- and short-exposure PSFs.
We have demonstrated the simulation’s performance for both constant and varying $C_n^2(z)$ profiles. We have also generated sequences of independent frames and sequences with temporally correlated frames. The temporal correlation is achieved by generating extended phase screens and translating these according to a specified wind speed. The portion of the screens between the object and camera for a given frame is used for that frame’s PSF generation. We have included video results showing the temporally correlated sequence generation. Based on our analysis, we believe that this simulation approach can be used effectively to study anisoplanatic optical turbulence and to aid in the development of image restoration methods.
The authors would like to thank Dr. Craig Olson and Dr. Mike Theisen at L-3 Communications Cincinnati Electronics, who provided helpful input regarding wave propagation for the simulation. We would also like to thank Matthew D. Howard at AFRL for providing helpful suggestions related to sampling of the phase screens. Thanks also to Michael Rucci and Dr. Barry Karch for providing feedback regarding the simulation and the statistical validation. This work has been supported in part by funding from L-3 Communications Cincinnati Electronics and under AFRL Award No. FA8650-10-2-7028 and FA9550-14-1-0244. Approved for public release, Case Number 88ABW-2016-5066.
Russell C. Hardie is a full professor in the Department of Electrical and Computer Engineering at the University of Dayton, with a joint appointment in the Department of Electro-Optics. He received the University of Dayton’s Top University-Wide Teaching Award, the 2006 Alumni Award in teaching, the Rudolf Kingslake Medal and Prize from SPIE in 1998 for super-resolution research, the School of Engineering Award of Excellence in teaching in 1999, and the first annual Professor of the Year Award in 2002 from the student chapter of the IEEE.
Jonathan D. Power received his BSEE degree from Cedarville University and his MSEE degree from the University of Dayton. He is an electrical engineer working at Wright-Patterson Air Force Base. For his master’s, he completed a thesis and developed a simulation for modeling anisoplanatic atmospheric turbulence for imagery along slanted optical paths. His research interests include atmospheric turbulence, polarimetric imaging, and image processing.
Daniel A. LeMaster is the technical advisor for the EO Target Detection and Surveillance Branch in the Sensors Directorate of the Air Force Research Laboratory (AFRL). He has served as a conference chair, committee member, and guest editor for SPIE and has taught as an adjunct professor (Wright State University) and as a professional education instructor (Georgia Tech Research Institute). Prior to AFRL, he worked as a research engineer in the defense intelligence community and served in the US Army.
Douglas R. Droege received his BS degree in electrical engineering, his BS degree in computer science, his MS degree in electrical engineering, and his PhD degree in electrical engineering, all from the University of Dayton. He is a director of Advanced Programs at L-3 Communications Cincinnati Electronics (L-3). He received the L-3 Integrated Sensor Systems Sector Leadership Award in 2011 and the Corporate L-3 Engineer of the Year Award in 2012.
Szymon Gladysz received his PhD degree in physics from the National University of Ireland in Galway. He is head of the Adaptive Optics Group at Fraunhofer Institute of Optronics, System Technologies and Image Exploitation, Ettlingen, Germany. He has authored or coauthored around 60 publications on turbulence and adaptive optics in journals and conference proceedings. Additionally, he represents Germany on NATO SET-226 Research Task Group [turbulence mitigation for electro optics (EO) and laser systems] and is a committee member on various OSA and SPIE conferences.
Santasri Bose-Pillai received her BSEE degree (with Hons.) from Jadavpur University, India, in 2000, her MSEE degree in 2005, and her PhD in electrical engineering with a focus in optics in 2008 from New Mexico State University. Currently, she is a senior research associate at AFIT’s Center for Directed Energy within Engineering Physics Department. Her research interests include propagation and imaging through atmospheric turbulence, telescope pointing and tracking, rough surface scattering, and laser communications. She is a member of OSA, SPIE, and DEPS.