Graphics processor unit accelerated finite-difference time domain method for electromagnetic scattering from one-dimensional large scale rough soil surface at low grazing incidence

Chungang Jia, Lixin Guo, and Ke Li

Journal of Applied Remote Sensing 8(1), 084795 (5 November 2014)
Abstract
The graphics processor unit-based finite-difference time domain (FDTD) algorithm is applied to study the electromagnetic (EM) scattering from one-dimensional (1-D) large scale rough soil surface at a low grazing incident angle. The FDTD lattices are truncated by a uniaxial perfectly matched layer, and finite difference equations are employed in the whole computation domain for convenient parallelization. Using Compute Unified Device Architecture technology, we achieve significant speedup factors. Also, shared memory and asynchronous transfer are used to further improve the speedup factors. Our method is validated by comparing the numerical results with those obtained by using a CPU. The influences of the incident angle, correlation length l, and root-mean-square height δ on the bistatic scattering coefficient of a 1-D large scale rough surface at low grazing incidence are also discussed.

1. Introduction

Investigations of electromagnetic (EM) scattering from randomly rough surfaces have become a popular topic owing to significant applications in the fields of remote sensing, target identification, and radar detection.1–3 Many analytical and numerical approaches have been developed to deal with the EM scattering model. For example, the Kirchhoff approximation,4,5 which is valid when a rough surface is smooth on the scale of the wavelength, and the small-perturbation method,6 which applies where the standard deviation of a rough surface is small compared with the wavelength, are both invalid at low grazing incident angles. To solve this scattering problem, numerical methods, such as the parallel method of moments (MoM) based on the message passing interface (MPI) between personal computer (PC) clusters,7 the generalized forward-backward method,8 the multilevel sparse-matrix canonical-grid method,9 and the MPI-based parallel finite-difference time domain (FDTD) method,10 are extensively used.

This paper presents a graphics processor unit (GPU)-accelerated parallel FDTD method to study the bistatic scattering coefficient. The proposed approach differs from the previously mentioned methods in that it studies bistatic scattering from a one-dimensional (1-D) large scale rough surface on a GPU platform using Compute Unified Device Architecture (CUDA) technology.

Compared with other numerical methods, the FDTD method has its own advantages.10 When a large scale rough surface is investigated at low grazing incident angles, the generated rough surface should be as long as possible,11 which results in large numbers of unknowns. The traditional sequential FDTD method can hardly handle such problems because of the prohibitive computation time. With the MPI-based parallel FDTD mentioned above,10 the computation time is greatly reduced compared to that of a sequential implementation; however, the speedup factors of the MPI-based method are limited by the high cost of the hardware. Fortunately, CUDA technology based on the GPU has been extensively and successfully applied to large-scale FDTD simulations.12–14 Compared to MPI technology, the GPU can achieve large speedup factors at low cost thanks to its powerful computing capability, which is why we adopt GPU-based FDTD technology to extend the application of the FDTD method to scattering from a large scale rough surface at low grazing incident angles. To our knowledge, few studies have been reported that solve this problem with a GPU-based FDTD implementation. Here, a uniaxial perfectly matched layer (UPML) medium is used to truncate the FDTD lattices, and the finite difference equations of the UPML medium are applied over the total computation domain to facilitate the implementation of the parallel algorithm. All of our calculations use single-precision arithmetic.

The remainder of this paper is organized as follows. In Sec. 2, the theoretical equations for calculating EM scattering from a rough surface by FDTD are presented in detail. In Sec. 3, the programmable GPU-based CUDA architecture is introduced, and the implementation of the GPU-accelerated FDTD for a rough surface is described; shared memory and asynchronous transfer are used to improve the performance. The influences of the incident angle, correlation length, and root-mean-square (rms) height on the bistatic scattering coefficient are discussed in Sec. 4. Concluding remarks and proposed further investigations are given in Sec. 5.

2. Theoretical Analysis

2.1. Rough Surface Model

The profile of the 1-D rough surface is generated by the Monte Carlo method. Taking a TM incident wave as an example, the scattering model for a 1-D random rough surface with height profile function y = f(x) is shown in Fig. 1, where an incident wave impinges on the surface in the direction ki, which makes an angle θi with respect to the y-axis. The scattered direction is ks and the scattered angle is θs. f(x) is a Gaussian-distributed rough surface with the exponential power spectral density function W(K) expressed as follows:

$$W(K) = \frac{\delta^{2} l}{\pi \left(1 + K^{2} l^{2}\right)}, \tag{1}$$
where the quantities δ and l are the rms height and the correlation length, respectively, which determine the profile of the rough surface, and L is the length of the rough surface. As shown in Fig. 1, in order to avoid the edge diffraction effect, a Gaussian window function is introduced, expressed as15

$$G(x,y) = \exp\left\{ -\left[ (x - x_{\mathrm{cen}})^{2} + (y - y_{\mathrm{cen}})^{2} \right] \left( \frac{\cos\theta_{i}}{T} \right)^{2} \right\}, \tag{2}$$
where xcen and ycen are the center coordinates of the connective boundary. T is a constant that determines the tapering width of the window function; it is chosen so that the taper drops from unity to 10−3 at the edge, with cos θi/T = 2.6/ρm, where ρm is the minimum distance from the center coordinate to the edge of the connective boundary.16
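For illustration, the host-side routine below sketches this Monte Carlo generation by the standard spectral method: complex Gaussian noise is shaped by the square root of W(K) from Eq. (1) and inverse transformed back to the spatial domain. A direct O(N²) DFT is used for clarity only; all function names, the random number generator, and the normalization convention are our illustrative choices, not the authors' code.

```cpp
#include <cmath>
#include <complex>
#include <cstdlib>
#include <vector>

static const double PI = 3.14159265358979323846;

// Standard normal deviate via the Box-Muller transform.
static double randn() {
    double u1 = (std::rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (std::rand() + 1.0) / (RAND_MAX + 2.0);
    return std::sqrt(-2.0 * std::log(u1)) * std::cos(2.0 * PI * u2);
}

// Exponential power spectral density of Eq. (1).
static double W(double K, double delta, double l) {
    return delta * delta * l / (PI * (1.0 + K * K * l * l));
}

// One Monte Carlo realization f[0..N-1] of a Gaussian-distributed surface
// of length L with spectrum W(K); direct inverse DFT for clarity.
std::vector<double> generate_surface(int N, double L, double delta, double l) {
    std::vector<std::complex<double> > F(N);
    for (int j = 1; j < N / 2; ++j) {
        double K = 2.0 * PI * j / L;                 // discrete wavenumber
        double amp = std::sqrt(2.0 * PI * L * W(K, delta, l));
        F[j] = amp * std::complex<double>(randn(), randn()) / std::sqrt(2.0);
        F[N - j] = std::conj(F[j]);                  // Hermitian symmetry: f real
    }
    F[0] = 0.0;                                      // enforce zero mean height
    F[N / 2] = std::sqrt(2.0 * PI * L * W(PI * N / L, delta, l)) * randn();
    std::vector<double> f(N);
    for (int n = 0; n < N; ++n) {                    // inverse DFT, 1/L normalization
        std::complex<double> s(0.0, 0.0);
        for (int j = 0; j < N; ++j)
            s += F[j] * std::exp(std::complex<double>(0.0, 2.0 * PI * j * n / N));
        f[n] = s.real() / L;
    }
    return f;
}
```

In production code the O(N²) loop would be replaced by an FFT, since surfaces of up to 262,144 points are used later in the paper.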

Fig. 1. Geometry for electromagnetic scattering from a one-dimensional (1-D) rough surface (TM wave).

2.2. FDTD Method for Rough Surface

Figure 2 shows the division of the computation region for the FDTD algorithm used to calculate EM scattering from a rough surface. To simulate infinite free space within a finite computation domain, a virtual absorbing boundary is employed outside the FDTD region; we use the UPML absorbing medium17,18 to truncate the FDTD lattices. The connective boundary, where the incident wave is generated, divides the computation domain into the total field region and the scattered field region.19 After the near fields are obtained, the far fields can be determined by performing a near-to-far-field transformation at the output boundary.19 Finally, the bistatic scattering coefficient σ in the far zone is calculated by20

$$\sigma = \lim_{r \to \infty} \frac{2\pi r}{L} \, \frac{\left| E_{s} \right|^{2}}{\left| E_{i} \right|^{2}}, \tag{3}$$
where Es is the scattered electric field, Ei is the incident electric field, and r is the distance from the far-zone observation point to the origin.

Fig. 2. Finite-difference time domain (FDTD) model of a 1-D rough surface.

3. CUDA Implementation of FDTD for Rough Surface

This section introduces the PC platform and CUDA programming model. The parallelization strategy includes CUDA implementation and computing optimization. Also, the performance is further improved by using shared memory and asynchronous transfer.

The introduction of the GPU-based CUDA architecture by NVIDIA gave rise to a new era of general-purpose GPU computing that does not require esoteric knowledge of graphics programming models. CUDA is a highly parallel and efficient computing architecture with which GPUs can solve many complex problems through built-in streaming multiprocessors executing large numbers of threads in parallel.21 The CUDA programming model assumes that sequential code executes on the host (CPU) while instructions with high data parallelism execute on the device (a CUDA-enabled GPU). As illustrated in Fig. 3, a CUDA program begins with serial execution on the host, including CPU and GPU memory allocation, initialization, and deallocation. Kernels, defined as functions, are executed on the device by a large number of threads in parallel. The memories of the two platforms (host and device) are physically separated in this heterogeneous programming model. For further information about CUDA technology, the reader may refer to Ref. 21.

Fig. 3. Heterogeneous programming.
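As a minimal, self-contained sketch of this host/device pattern — allocation, host-to-device transfer, kernel launch, and transfer back — consider the following CUDA C++ program. The kernel body and all names are placeholders of our own, not the authors' code.

```cpp
#include <cuda_runtime.h>

// Placeholder device kernel: each thread scales one field sample.
__global__ void scaleFieldKernel(float *field, float coeff, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) field[i] *= coeff;                   // guard against overrun
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *h_field = new float[n];               // host allocation
    for (int i = 0; i < n; ++i) h_field[i] = 1.0f;

    float *d_field;
    cudaMalloc(&d_field, bytes);                 // device allocation
    cudaMemcpy(d_field, h_field, bytes, cudaMemcpyHostToDevice);

    dim3 block(256), grid((n + 255) / 256);      // launch configuration
    scaleFieldKernel<<<grid, block>>>(d_field, 0.5f, n);

    cudaMemcpy(h_field, d_field, bytes, cudaMemcpyDeviceToHost);
    cudaFree(d_field);                           // deallocation
    delete[] h_field;
    return 0;
}
```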

As illustrated in Fig. 4, an exponential-spectrum rough surface model is first built by the Monte Carlo method presented above. The CPU then allocates the host and device memory and sets the grid and block sizes based on the model. Parallel implementation is applied to the near-field iteration, which is by far the most time-consuming part of the whole FDTD computation. The near-field iteration includes the incident magnetic field update, the incident electric field update, the introduction of the incident wave at the connective boundary, the electric field component(s) update, and the magnetic field component(s) update. It is necessary for some threads to synchronize in order to share data with each other. Threads in the same block synchronize by using __syncthreads() through shared memory, whereas a new kernel function must be invoked to synchronize through global memory for threads belonging to different blocks. To enforce synchronization at the grid level, five kernels are utilized: IncidentHKernel (the incident magnetic field update), IncidentEKernel (the incident electric field update), ConnectionKernel (introducing the incident wave at the connective boundary), eKernel (the electric field component(s) update), and hKernel (the magnetic field component(s) update). When the near-field iteration is finished by the GPU, the far field can be obtained with great ease on the CPU platform.

Fig. 4. Flowchart of the graphics processor unit-based FDTD algorithm for a rough surface.
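Because a kernel boundary is the natural grid-wide synchronization point in CUDA, each update stage maps onto one launch per time step. The fragment below is a schematic of such a time loop using the five kernels named above; the launch configurations and argument lists are our guesses, not the authors' code.

```cpp
// Schematic FDTD time marching: each kernel launch is an implicit
// grid-level synchronization point, so stage k+1 never starts before
// every thread of stage k has finished. Device arrays (d_*) and launch
// configurations are assumed to be set up as in the earlier sketch.
for (int t = 0; t < nTimeSteps; ++t) {
    IncidentHKernel<<<grid1D, block1D>>>(d_Hinc, d_Einc, t);    // 1-D incident H update
    IncidentEKernel<<<grid1D, block1D>>>(d_Einc, d_Hinc, t);    // 1-D incident E update
    ConnectionKernel<<<gridConn, blockConn>>>(d_Ez, d_Hx, d_Hy,
                                              d_Einc, d_Hinc);  // inject incident wave
    eKernel<<<grid2D, block2D>>>(d_Ez, d_Hx, d_Hy, d_eps);      // electric field component
    hKernel<<<grid2D, block2D>>>(d_Hx, d_Hy, d_Ez);             // magnetic field components
}
cudaDeviceSynchronize();  // ensure the last step is complete before far-field work
```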

The CUDA implementation of FDTD for calculating EM scattering from the soil surface is performed on an NVIDIA Tesla K40c with 2880 CUDA cores, while the sequential program is executed on an Intel Xeon E5-2620 CPU at 2.10 GHz. The computing platform is summarized in Table 1. The speedup factor in this paper is defined as the ratio of the computation time for one surface sample by sequential FDTD to that by CUDA FDTD.
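Per-sample GPU times such as those in Tables 2 to 4 are commonly measured with CUDA events bracketing the time loop; the fragment below sketches that pattern and is not necessarily the authors' instrumentation.

```cpp
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);

cudaEventRecord(start);
// ... near-field iteration kernels for one surface sample ...
cudaEventRecord(stop);
cudaEventSynchronize(stop);              // wait until the stop event is reached

float ms = 0.0f;
cudaEventElapsedTime(&ms, start, stop);  // elapsed GPU time in milliseconds
cudaEventDestroy(start);
cudaEventDestroy(stop);
```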

Table 1. Parameters of the computing platform.

Host
  CPU: Intel Xeon CPU E5-2620
  Memory: 32 GB
Device
  GPU: NVIDIA Tesla K40c
  Number of CUDA cores: 2880
  Total global memory: 11,520 Mbytes
  Shared memory per multiprocessor: 49,152 bytes
  Registers available per multiprocessor: 65,536

Taking the TM case as an example, the CPU and GPU times are compared for calculating the EM scattering from a rough surface as the incident frequency increases from fi = 1 GHz to fi = 64 GHz at an incident angle of θi = 55 deg. The mesh along the x-direction increases from 4096Δ to 262,144Δ while the length of the rough surface is kept at L = 61.44 m. Table 2 compares the computation times of the serial FDTD method for one surface realization with those of the GPU implementation. As the table shows, the speedup factors increase with an increasing number of unknowns, but drop slightly between 131,072 and 262,144 unknowns (from 74.49 to 71.53), which demonstrates that large computations can make full use of the thousands of threads on the GPU, while the large data transfer between the host and the device reduces the speedup factors.

Table 2. Comparison of CPU and GPU times for one surface realization.

fi (GHz) | Mesh (Δ) | CPU time (s) | GPU time (s) | Speedup
1        | 4096     | 237.35       | 7.69         | 27.07×
4        | 16,384   | 3576.23      | 65.24        | 54.81×
8        | 32,768   | 14,532.75    | 224.49       | 64.73×
16       | 65,536   | 59,636.62    | 810.78       | 73.55×
32       | 131,072  | 241,638.54   | 3243.53      | 74.49×
64       | 262,144  | 1,039,048.13 | 14,524.21    | 71.53×

3.1. Further Improvement with Shared Memory

In order to boost the performance of the kernels, the on-chip shared memory is utilized to eliminate uncoalesced access. Shared memory is visible to an entire thread block, within which threads can share their results and synchronize at the block level. Taking the TM case as an example, Fig. 5 shows how the data are first loaded from global memory to shared memory when the electric and magnetic field updates are executed. When the magnetic components (Hx, Hy) are calculated, not only are the Ez values of the current block of threads copied to shared memory, but the values held by the left column of threads of the right-adjacent block and by the top row of threads of the adjacent block below are also loaded. When the electric field iteration function is invoked, not only are the Hx and Hy values of the current block transferred from global memory to shared memory, but the Hx values of the bottom row of threads of the adjacent block above and the Hy values of the right column of the left-adjacent block are also delivered. The speedup factors as improved by shared memory are given in Table 3.

Fig. 5. Data transfers from global memory to shared memory: (a) magnetic field iteration and (b) electric field iteration.
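To make the halo pattern of Fig. 5(a) concrete, the kernel below is a simplified sketch of a TM magnetic-field update with a shared-memory Ez tile extended by a one-cell halo; the tile sizes, merged update coefficients (chx, chy), row-major array layout, and kernel name are our illustrative choices rather than the authors' code.

```cpp
#define TILE_X 16
#define TILE_Y 16

// Sketch of the TM H-field update: Hx needs Ez at (i,j) and (i,j+1),
// Hy needs Ez at (i,j) and (i+1,j), so the shared tile carries a one-cell
// halo on the +x and +y sides. Arrays are row-major: idx = j*nx + i.
__global__ void hKernelShared(float *Hx, float *Hy, const float *Ez,
                              float chx, float chy, int nx, int ny) {
    __shared__ float ez[TILE_Y + 1][TILE_X + 1];
    int i = blockIdx.x * TILE_X + threadIdx.x;
    int j = blockIdx.y * TILE_Y + threadIdx.y;
    int tx = threadIdx.x, ty = threadIdx.y;
    bool in = (i < nx && j < ny);

    if (in) {
        ez[ty][tx] = Ez[j * nx + i];                // tile interior
        if (tx == TILE_X - 1 && i + 1 < nx)         // halo column from the
            ez[ty][tx + 1] = Ez[j * nx + i + 1];    //   adjacent block in +x
        if (ty == TILE_Y - 1 && j + 1 < ny)         // halo row from the
            ez[ty + 1][tx] = Ez[(j + 1) * nx + i];  //   adjacent block in +y
    }
    __syncthreads();                                // whole tile now visible

    if (in && i + 1 < nx && j + 1 < ny) {
        Hx[j * nx + i] -= chx * (ez[ty + 1][tx] - ez[ty][tx]);  // dEz/dy term
        Hy[j * nx + i] += chy * (ez[ty][tx + 1] - ez[ty][tx]);  // dEz/dx term
    }
}
```

With this tiling, each Ez value is fetched from global memory once per block instead of up to three times (by the updates at its own cell and its two neighbors), which is precisely the redundant, potentially uncoalesced traffic the shared-memory version removes.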

Table 3. Speedup improvement with shared memory.

fi (GHz) | Mesh (Δ) | CPU time (s) | GPU time (s) | Speedup
1        | 4096     | 237.35       | 7.69         | 30.86×
4        | 16,384   | 3576.23      | 56.25        | 63.57×
8        | 32,768   | 14,532.75    | 193.53       | 75.09×
16       | 65,536   | 59,636.62    | 704.95       | 84.59×
32       | 131,072  | 241,638.54   | 2810.73      | 85.97×
64       | 262,144  | 1,039,048.13 | 12,664.78    | 82.04×

3.2. Further Improvement with Asynchronous Transfer

As shown in Tables 2 and 3, when the number of mesh cells reaches 262,144, the time for data transfer between the CPU and GPU becomes prominent. Taking the TM case as an example, to obtain the far field, the values of the Ez and Hx components need to be copied back from the GPU to the CPU to perform the near-to-far-field transformation. Asynchronous transfer is used to hide data transfers between the GPU and CPU by concurrently executing CUDA streams. Using multiple streams, data transfer and computation can be overlapped. In this paper, the computation region is divided into n subgrids, where n is the number of streams. Figure 6 illustrates the C code for the asynchronous transfer. It should be pointed out that "offset_boundary" is the portion of the preceding subgrid needed in the current subgrid update. The speedup factors as improved by asynchronous transfer are listed in Table 4.

Fig. 6. C code for realizing the asynchronous transfer.
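Since the listing of Fig. 6 is not reproduced here, the fragment below sketches the stream pattern it describes: the domain is split into subgrids, and each stream queues a device-to-host copy and a field update so that copies in one stream overlap kernels in another. The kernel name hKernelStream, the chunking arithmetic, and the stream count are our assumptions, and the halo bookkeeping via offset_boundary is omitted for brevity.

```cpp
// Illustrative overlap of far-field data transfer with field updates.
// h_Ez is assumed to be pinned host memory (cudaHostAlloc); without
// pinning, cudaMemcpyAsync cannot actually overlap with kernels.
const int nStreams = 4;
cudaStream_t stream[nStreams];
for (int s = 0; s < nStreams; ++s) cudaStreamCreate(&stream[s]);

int chunk = ny / nStreams;                       // rows per subgrid
for (int s = 0; s < nStreams; ++s) {
    int offset = s * chunk * nx;                 // first cell of this subgrid
    // Copy this subgrid's Ez back for the near-to-far-field transformation;
    // queued in stream s, it overlaps kernels running in other streams.
    cudaMemcpyAsync(h_Ez + offset, d_Ez + offset,
                    chunk * nx * sizeof(float),
                    cudaMemcpyDeviceToHost, stream[s]);
    // Field update for the same subgrid, also queued in stream s.
    hKernelStream<<<gridChunk, block2D, 0, stream[s]>>>(
        d_Hx, d_Hy, d_Ez, chx, chy, nx, ny, offset);
}
for (int s = 0; s < nStreams; ++s) {
    cudaStreamSynchronize(stream[s]);            // wait for copies and kernels
    cudaStreamDestroy(stream[s]);
}
```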

Table 4. Speedup improvement with asynchronous transfer.

fi (GHz) | Mesh (Δ) | CPU time (s) | GPU time (s) | Speedup
1        | 4096     | 237.35       | 7.42         | 31.98×
4        | 16,384   | 3576.23      | 53.58        | 66.74×
8        | 32,768   | 14,532.75    | 179.21       | 81.09×
16       | 65,536   | 59,636.62    | 652.18       | 91.44×
32       | 131,072  | 241,638.54   | 2595.31      | 93.10×
64       | 262,144  | 1,039,048.13 | 11,144.73    | 93.23×

4. Electromagnetic Scattering From Soil Surface at Low Grazing Incidence

To ensure the accuracy and stability of the FDTD method, the spatial and time increments are taken as Δx = Δy = Δ = λ/20 and Δt = Δ/(2c), respectively, where λ is the incident wavelength and c is the speed of light in vacuum. The UPML thickness is 10Δ.

The accuracy of the CUDA implementation is verified by comparing the numerical results with those obtained by sequential execution on the CPU. Figure 7 shows the bistatic scattering from an exponential-spectrum soil surface with characteristic parameters δ = 0.1λ and l = 1.0λ under an incident angle θi = 40 deg at an incident frequency of fi = 1 GHz. The generated length of the rough surface is L = 204.8λ (4096Δ). The real and imaginary parts of the relative permittivity of the soil surface with 3.8% moisture are taken as εr = (2.5, 0.18).22 The results, averaged over 20 surface realizations, are in good agreement between the two implementations for both TM and TE incidence, demonstrating the accuracy of our FDTD-CUDA implementation. The times consumed by the traditional FDTD scheme are approximately 88.25 and 91.23 min for the TM and TE cases, respectively; by contrast, the computation times of the GPU-based FDTD are 2.59 and 2.43 min for the two incident cases. The time cost is thus dramatically reduced by the GPU implementation.

Fig. 7. Comparisons of the bistatic scattering from a soil surface by the two implementations: (a) TM case and (b) TE case.

Figure 8 investigates the scattering properties of a soil surface with length L = 6553.6λ (131,072Δ) for incident angles increasing from small incidence θi = 30 deg to low grazing incidence θi = 80 deg at an incident frequency of fi = 1.9 GHz, computed by the GPU-based FDTD implementation. Here, the surface characteristic and electrical parameters are δ = 0.1λ, l = 1.0λ, and εr = (2.5, 0.18) for both the TM and TE cases. Scattering in the specular direction is strongest at the grazing incident angle regardless of the polarization of the incident wave. Note that there is a pronounced specular peak for TM incidence; for the TE wave, the scattering at grazing incidence in the specular direction is likewise larger than that at small incident angles.

Fig. 8. Bistatic scattering from a rough surface under different incident angles: (a) TM case and (b) TE case.

Figure 9 compares the influence of the rough surface characteristic parameters, namely the correlation length l and the rms height δ, on the EM scattering from a 1-D large scale soil surface (L = 6553.6λ) under a low grazing incident angle θi = 80 deg for our implementation. The incident frequency is fi = 5.9 GHz.

Fig. 9. Bistatic scattering from a rough surface with different characteristic parameters: (a) TM (l = 0.8λ; δ = 0.1λ, 0.3λ, 0.5λ); (b) TE (l = 0.8λ; δ = 0.1λ, 0.3λ, 0.5λ); (c) TM (l = 0.5λ, 1.0λ, 1.5λ; δ = 0.1λ); and (d) TE (l = 0.5λ, 1.0λ, 1.5λ; δ = 0.1λ).

Figures 9(a) and 9(b) plot the bistatic scattering coefficient versus the scattering angle θs for different rms heights δ = 0.1λ, 0.3λ, and 0.5λ with the correlation length kept at l = 0.8λ for the TM and TE cases. For both cases, the specular scattering decreases as the rms height δ increases, because the rms slope grows with increasing rms height, which reduces the scattered energy in the coherent (specular) direction. Figures 9(c) and 9(d) show the dependence of the bistatic scattering coefficient on the correlation length l versus the scattering angle for TM and TE incident waves. As shown in Figs. 9(c) and 9(d), the specular scattering increases with increasing correlation length for both polarizations: the rms slope decreases with increasing correlation length, resulting in stronger scattering in the specular direction for both TM and TE modes.
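The slope argument can be made concrete. For a surface with Gaussian correlation function C(x) = δ² exp(−x²/l²) — a convenient closed-form stand-in, since the band-limited exponential-spectrum surface behaves in the same qualitative way — the slope variance is the negative curvature of C at the origin:

$$s^{2} = -\left.\frac{d^{2}C(x)}{dx^{2}}\right|_{x=0} = \frac{2\delta^{2}}{l^{2}} \quad\Longrightarrow\quad s = \sqrt{2}\,\frac{\delta}{l},$$

so the rms slope grows linearly with δ and falls with l, matching the trends observed in Fig. 9.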

5. Conclusions

In this paper, a GPU implementation of the FDTD method is applied to investigate the EM scattering from a large scale rough soil surface with an exponential spectrum at low grazing incidence. Shared memory and asynchronous transfer are utilized to optimize the implementation, and favorable speedup factors are achieved relative to sequential execution on the CPU, showing that GPU-based FDTD has a clear advantage over the sequential CPU implementation in the study of large scale surfaces. Finally, the influences of the incident angle, correlation length, and rms height on the bistatic scattering coefficient are investigated and analyzed with the algorithm. When a target above or below a rough surface is studied, traditional high-frequency techniques are ineffective for handling the model; future investigations on this topic will therefore focus on the composite scattering from a two-dimensional target above a 1-D randomly rough surface using the GPU-based FDTD method.

Acknowledgments

This work was supported by the National Science Foundation for Distinguished Young Scholars of China (Grant No. 61225002) and the Aeronautical Science Fund and Aviation Key Laboratory of Science and Technology on AISSS (Grant No. 20132081015).

References

1. M. Martorella, F. Berizzi, and E. D. Mese, "On the fractal dimension of sea surface backscattered signal at low grazing angle," IEEE Trans. Antennas Propag. 52, 1193–1204 (2004). http://dx.doi.org/10.1109/TAP.2004.827533
2. H. C. Ku et al., "Fast and accurate algorithm for electromagnetic scattering from 1-D dielectric ocean surface," IEEE Trans. Antennas Propag. 54, 2381–2391 (2006). http://dx.doi.org/10.1109/TAP.2006.879193
3. L. Tsang et al., "Electromagnetic computation in scattering of electromagnetic waves by random rough surface and dense media in microwave remote sensing of land surfaces," Proc. IEEE 101, 255–279 (2013). http://dx.doi.org/10.1109/JPROC.2012.2214011
4. E. I. Thorsos, "The validity of the Kirchhoff approximation for rough surface scattering using a Gaussian roughness spectrum," J. Acoust. Soc. Am. 83, 78–92 (1988). http://dx.doi.org/10.1121/1.396188
5. A. K. Sultan-Salem and G. L. Tyler, "Validity of the Kirchhoff approximation for electromagnetic wave scattering from fractal surfaces," IEEE Trans. Geosci. Remote Sens. 42, 1860–1870 (2004). http://dx.doi.org/10.1109/TGRS.2004.832655
6. L. X. Guo et al., "A high order integral SPM for the conducting rough surface scattering with the tapered wave incidence-TE case," Prog. Electromagn. Res. 114, 333–352 (2011).
7. L. X. Guo, A. Q. Wang, and J. Ma, "Study on EM scattering from 2-D target above 1-D large scale rough surface with low grazing incidence by parallel MOM based on PC clusters," Prog. Electromagn. Res. 89, 149–166 (2009). http://dx.doi.org/10.2528/PIER08121002
8. M. R. Pino et al., "The generalized forward-backward method for analyzing the scattering from targets on ocean-like rough surfaces," IEEE Trans. Antennas Propag. 47, 961–969 (1999). http://dx.doi.org/10.1109/8.777118
9. M. Y. Xia et al., "An efficient algorithm for electromagnetic scattering from rough surfaces using a single integral equation and multilevel sparse-matrix canonical-grid method," IEEE Trans. Antennas Propag. 51, 1142–1149 (2003). http://dx.doi.org/10.1109/TAP.2003.812238
10. J. Li et al., "Message-passing-interface-based parallel FDTD investigation on the EM scattering from a 1-D rough sea surface using uniaxial perfectly matched layer absorbing boundary," J. Opt. Soc. Am. A 26, 1494–1502 (2009). http://dx.doi.org/10.1364/JOSAA.26.001494
11. H. X. Ye and Y. Q. Jin, "Parameterization of the tapered incident wave for numerical simulation of electromagnetic scattering from rough surface," IEEE Trans. Antennas Propag. 53, 1234–1237 (2005). http://dx.doi.org/10.1109/TAP.2004.842586
12. P. Sypek, A. Dziekonski, and M. Mrozowski, "How to render FDTD computations more effective using a graphics accelerator," IEEE Trans. Magn. 45(3), 1324–1327 (2009). http://dx.doi.org/10.1109/TMAG.2009.2012614
13. W. W. Ma, D. Sun, and X. L. Wu, "UPML-FDTD parallel computing on GPU," in Microwave and Millimeter Wave Technology (ICMMT), 2012 Int. Conf. on, 1–4 (2012).
14. M. Livesey et al., "Development of a CUDA implementation of the 3D FDTD method," IEEE Antennas Propag. Mag. 54, 186–195 (2012). http://dx.doi.org/10.1109/MAP.2012.6348145
15. A. K. Fung, M. R. Shah, and S. Tjuatja, "Numerical simulation of scattering from three-dimensional random rough surface," IEEE Trans. Geosci. Remote Sens. 32, 986–994 (1994). http://dx.doi.org/10.1109/36.312887
16. J. Li, L. X. Guo, and H. Zeng, "FDTD investigation on bistatic scattering from a target above two-layered rough surfaces using UPML absorbing condition," Prog. Electromagn. Res. 88, 197–211 (2008). http://dx.doi.org/10.2528/PIER08110102
17. S. D. Gedney, "An anisotropic perfectly matched layer-absorbing medium for the truncation of FDTD lattices," IEEE Trans. Antennas Propag. 44, 1630–1639 (1996). http://dx.doi.org/10.1109/8.546249
18. S. D. Gedney, "An anisotropic PML absorbing media for the FDTD simulation of fields in lossy and dispersive media," Electromagnetics 16, 399–415 (1996). http://dx.doi.org/10.1080/02726349608908487
19. A. Taflove and S. C. Hagness, Computational Electrodynamics: The Finite-Difference Time-Domain Method, Artech House, Boston (2005).
20. J. A. Kong, Electromagnetic Wave Theory, Wiley, New York (1986).
21. NVIDIA CUDA C Programming Guide, Version 4.2, NVIDIA Corporation, Santa Clara, California (2012).
22. J. Curtis, Dielectric Properties of Soils: Various Sites in Bosnia (Data Rep.), US Army Corps of Engineers, Waterways Experiment Station, Washington, D.C. (1996).

Biography

Chungang Jia received a BS degree in 2009 from the School of Science, Taiyuan University of Technology, China, and he is currently pursuing a PhD degree at the School of Physics and Optoelectronic Engineering, Xidian University, China. His research interests include GPU high-performance computing in remote sensing and computational electromagnetics.

Lixin Guo received an MS degree in radio science from Xidian University, Xi'an, China, and a PhD degree in astrometry and celestial mechanics from the Chinese Academy of Sciences, Beijing, China, in 1993 and 1999, respectively. From 2001 to 2002, he was a visiting scholar at the School of Electrical Engineering and Computer Science, Kyungpook National University, Daegu, Republic of Korea. His research interests mainly include electromagnetic wave propagation and scattering in random media and inverse scattering.

Ke Li received a BS degree in electronic information science and technology from Xidian University, Xi'an, China, in 2010. He is currently pursuing a PhD degree in radio science at the School of Physics and Optoelectronic Engineering, Xidian University. His research interests lie in computational electromagnetics.

© 2014 Society of Photo-Optical Instrumentation Engineers (SPIE)
Chungang Jia, Lixin Guo, and Ke Li, "Graphics processor unit accelerated finite-difference time domain method for electromagnetic scattering from one-dimensional large scale rough soil surface at low grazing incidence," Journal of Applied Remote Sensing 8(1), 084795 (5 November 2014). https://doi.org/10.1117/1.JRS.8.084795
