Introduction and Motivation
This paper introduces a sensitivity test for the characterization of the laser damage behavior of a test sample. During a laser damage measurement, each test site is exposed to a prescribed fluence and then a new site is exposed. There are two outcomes for the laser exposure: damage or no damage. Laser damage tests are typically "one exposure" or "go, no-go" tests on a single site, similar to other instances where an exposed or stressed test specimen is not available for further testing. Sensitivity tests abound in such areas as testing of explosives, electronic components, fuses, and toxicology.1–4
Application of maximum-likelihood (ML) methods has a long history in the study of laser damage, especially laser safety measurements of laser damage to the eye, and dates back to at least 1970.5–7 The first application of ML methods to a nonbiological test sample was the determination of a laser damage threshold with a binomial model on multiple-shot testing done at China Lake in the late 1970s.8 Recently, there has been a renaissance in the application of ML techniques to various aspects of the laser damage problem by several groups.9–12 A group from the Lawrence Livermore National Laboratory has used ML damage characterization as part of a procedure to extract the distributions of defects on fused silica, an idea we will revisit later in this work.9 The group from Vilnius University in Lithuania has used ML methods applied to a degenerate defect model on data binned by fluence to study many aspects of the laser damage threshold determination problem.10,11 The results of the Vilnius group show a major increase in the quality of the determination of the threshold over damage frequency methods, frequently used in ISO standard measurements.12 The authors have been studying sensitivity test methods, whose results are also encouraging.13–15 All of the recent results emphasize the exciting possibilities that the application of ML methods can bring to the characterization of laser damage: increased threshold determination quality in terms of accuracy and repeatability, comparisons expanded beyond the threshold values, and increased fundamental knowledge of the damage behavior of a sample.
Following this introduction, the ML method will be briefly reviewed. Section 3 introduces the model of the damage performance for our virtual test article, namely, selecting the defect distribution and laser illumination profile to formulate the test optic's damage probability curve, P_T(φ). The candidate probability model that will be optimized using the ML process, P_c(φ), is also introduced. In Sec. 4, the Monte Carlo simulation of the laser damage experiments is shown and the results discussed. Section 5 summarizes the results and provides a look forward.
Review of the Maximum-Likelihood Method
The results of a hypothetical damage measurement are shown in Fig. 1. At each test site i, the optic is exposed to a single fluence, φ_i, and has one of two outcomes: x_i, which has the value 1 if the site is damaged and 0 if not. When N spots have been examined, the measurement is completed and the data analyzed. The result of the i-th event or exposure is an ordered pair, (φ_i, x_i), containing the information of the fluence level, φ_i, and the outcome, x_i. The entire set of N ordered pairs is called the experiment or measurement and each interrogation of the optic, (φ_i, x_i), is called an event.
The likelihood or probability, L, of observing a given experiment, all events being independent, is the product of the probabilities of the occurrence of each event, p_i:

L = \prod_{i=1}^{N} p_i .
The probability of an event occurring is given by the parameterized model for the probability of damage at the fluence for the i-th event, P_c(φ_i; a), where a denotes the vector of model parameters. For the case of a laser damage test, p_i is given by

p_i = P_c(\phi_i; \mathbf{a})^{x_i} \left[ 1 - P_c(\phi_i; \mathbf{a}) \right]^{1 - x_i} .  (4)
The model parameters, a, are varied to determine the parameter values that give the maximum value of L. The models for damage probability, P_c(φ; a), are substituted into Eq. (4), and the model parameters that maximize L are called the ML solution.
The two-parameter Weibull distribution is the damage probability model chosen for this study. This distribution is known for its flexibility17 and is written as

P_c(\phi; a, b) = 1 - \exp\left[ -\left( \phi / a \right)^{b} \right] ,  (5)

where a is the scale parameter and b is the shape parameter.
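As a concrete illustration of the ML procedure with the Weibull model, the likelihood maximization can be sketched in a few lines of Python. This is an illustrative sketch only, not the authors' code; the function names and the use of SciPy's Nelder-Mead minimizer are our own choices.

```python
import numpy as np
from scipy.optimize import minimize

def p_damage(phi, a, b):
    """Two-parameter Weibull damage probability, Pc(phi; a, b) = 1 - exp[-(phi/a)^b]."""
    return 1.0 - np.exp(-(phi / a) ** b)

def neg_log_likelihood(params, phi, x):
    """-ln L for L = prod_i Pc^x_i (1 - Pc)^(1 - x_i); minimizing this maximizes L."""
    a, b = params
    if a <= 0.0 or b <= 0.0:
        return np.inf  # keep the optimizer inside the physical parameter region
    pc = np.clip(p_damage(phi, a, b), 1e-12, 1.0 - 1e-12)  # guard against log(0)
    return -np.sum(x * np.log(pc) + (1.0 - x) * np.log(1.0 - pc))

def ml_fit(phi, x, guess=(1.0, 1.0)):
    """Return the (a, b) pair that maximizes the likelihood of the (phi_i, x_i) data."""
    result = minimize(neg_log_likelihood, guess, args=(phi, x), method="Nelder-Mead")
    return result.x
```

Given arrays of fluences and 0/1 outcomes, ml_fit returns the ML solution (a, b) of the kind discussed in the remainder of the paper.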
Damage Probability of the Test Optic and Monte Carlo Simulation of Damage Measurement
This section introduces the probability of damage of the test optic under illumination, P_T(φ), and the Monte Carlo simulation of the damage measurement.
P_T(φ) is derived from the conditions of illuminating a Gaussian distribution of defects in fluence18 with a flat-top beam. Our particular assumptions result in a probability of damage curve given by Eq. (7). A plot of Eq. (7) using the parameter values selected for this analysis is shown in Fig. 2. It should be noted that P_T(φ) was deliberately chosen to have a form different from P_c(φ; a, b), since in a real experiment the defect distribution is not known a priori.
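For concreteness, one curve of this general type can be written as follows. This is a sketch only, assuming Poisson defect statistics with a Gaussian cumulative threshold distribution; N_0, μ, and σ are illustrative names for the defect count, mean, and width rather than the values used in this analysis.

```latex
% Flat-top illumination of defects whose damage thresholds are
% Gaussian-distributed in fluence (illustrative form, not Eq. (7) itself):
\Pr{}_{T}(D \mid \phi) = 1 - \exp\!\left[-N_{0}\,\Phi\!\left(\frac{\phi-\mu}{\sigma}\right)\right],
\qquad
\Phi(z) = \frac{1}{2}\left[1 + \operatorname{erf}\!\left(\frac{z}{\sqrt{2}}\right)\right].
```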
To simulate a laser damage measurement with N events, the test fluences, φ_i, are randomly chosen with uniform density on the interval [0, φ_max]. The upper limit φ_max is selected to be sufficiently high that P_T(φ_max) ≈ 1. For each event, a second random number, r_i, uniformly distributed on the interval [0, 1], is selected. If P_T(φ_i) is greater than r_i, the i-th site is recorded as having damaged and x_i = 1; otherwise, x_i = 0. Figure 3 illustrates a Monte Carlo realization of a measurement and shows P_T(φ).
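The event-generation recipe above can be sketched as follows. This is a minimal illustration; the stand-in for P_T and its parameter values are invented for the example, not the ones used in this study.

```python
import numpy as np
from scipy.stats import norm

def simulate_measurement(n_sites, phi_max, p_true, seed=None):
    """Generate one N-event damage experiment as arrays of (phi_i, x_i).

    phi_i ~ Uniform(0, phi_max); a site damages (x_i = 1) when a second
    uniform draw r_i falls below the true damage probability p_true(phi_i).
    """
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, phi_max, size=n_sites)
    r = rng.uniform(size=n_sites)
    x = (r < p_true(phi)).astype(int)
    return phi, x

# Illustrative stand-in for PT(phi): Poisson damage statistics with a
# Gaussian threshold distribution (parameter values are invented).
def p_true(phi, n0=5.0, mu=12.0, sigma=3.0):
    return 1.0 - np.exp(-n0 * norm.cdf((phi - mu) / sigma))
```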
Monte Carlo Results
The Monte Carlo model described in Sec. 3 was evaluated for 100 trials for three values of N (100, 300, and 600), and the ML solution (a, b) was determined for each trial. Figures 4–6 show bubble plots of the (a, b) pairs determined from the maximization of L. In these plots, the size of the bubble is proportional to the frequency of occurrence, and the ellipse enclosing two standard deviations, 2σ, of the joint distribution of (a, b) is shown by the dashed curve. Figure 7 shows that as N increases, the area enclosed by the ellipse dramatically decreases. The decrease in the area of the error ellipse with increasing N is a clear trend. The error ellipses also decrease in eccentricity as N increases, indicating greater statistical independence between a and b. Figure 7 also shows that the ellipses fit inside one another, meaning that, in this case, the sensitivity method converges to an ultimate (a, b) pair with increasing N.
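The area of such error ellipses can be quantified directly from the cloud of ML solutions; the following is a minimal sketch, assuming the dashed curves are standard covariance ellipses of the fitted (a, b) pairs.

```python
import numpy as np

def error_ellipse_area(pairs, n_sigma=2.0):
    """Area of the n-sigma covariance ellipse of a cloud of (a, b) ML solutions.

    For a bivariate sample with covariance matrix C, the n-sigma ellipse has
    semi-axes n * sqrt(eigenvalues of C), so its area is pi * n^2 * sqrt(det C).
    """
    cov = np.cov(np.asarray(pairs).T)
    return np.pi * n_sigma ** 2 * np.sqrt(np.linalg.det(cov))
```

Evaluating this for the N = 100, 300, and 600 solution clouds would quantify the shrinking areas seen in Fig. 7.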
Figures 8–10 show the probability of damage curves, P_c(φ; a, b), determined from the ML solutions. The curves are calculated for each of the 100 (a, b) pairs for a given N, and the minimum, mean, and maximum values of P_c at a given φ are determined and plotted. Figures 8–10 also include P_T(φ), shown as the solid line trace. As N increases, the distance between the maximum and minimum values of P_c decreases. For N = 100 and 300, the range envelops P_T. At N = 600, P_c and P_T agree well at small fluences but differ at higher fluences. The fact that there are some regions where P_c and P_T disagree should be expected, as they do not have the same mathematical form.
Damage measurements made in the manner described in this paper can also yield important insights into the underlying defect distribution, allowing for comparisons among samples beyond the comparison of damage thresholds.9 Consider the formulation of the probability of damage curve, Pr(D), from Poisson statistics, expressed in terms of ⟨N_d⟩, the expected number of defects above threshold within the laser damage spot:19

\Pr(D) = 1 - \exp\left( -\langle N_d \rangle \right) .  (8)
The value of ⟨N_d⟩ is given by20

\langle N_d \rangle = \int_A n\left[ \phi(x, y) \right] dA ,  (11)

where n(φ) is the areal density of defects with damage thresholds at or below the local fluence φ(x, y) and the integral runs over the illuminated area A. Equating (5) with (8) gives

\langle N_d \rangle = \left( \phi / a \right)^{b} .  (12)

Since our hypothetical beam is a flat top, ⟨N_d⟩ is given by A n(φ), and Eq. (11) now becomes

n(\phi) = \frac{1}{A} \left( \frac{\phi}{a} \right)^{b} .  (13)
The defect density distribution, ρ(φ), is ultimately found by differentiation of Eq. (13) with respect to φ. Equation (13) shows that a laser damage sensitivity test provides an efficient, simple method to determine the defect density distribution, which is fundamental to understanding the damage behavior of an optic under any fluence profile.20
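Equation (13) and its derivative are simple enough to evaluate directly; the following sketch uses hypothetical values of the fitted Weibull parameters and spot area (the function names and numbers are ours, for illustration only).

```python
def cumulative_defect_density(phi, a, b, area):
    """n(phi) = (1/A) * (phi/a)^b: areal density of defects with threshold
    at or below phi, from equating the Weibull fit with the Poisson damage
    probability for a flat-top beam of area A (Eq. (13))."""
    return (phi / a) ** b / area

def defect_density(phi, a, b, area):
    """rho(phi) = dn/dphi = (b / (A * a)) * (phi/a)^(b - 1)."""
    return b * (phi / a) ** (b - 1) / (a * area)
```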
Figure 11 shows a plot of n(φ) obtained by solving Eq. (8) using the mean probability of damage for N = 100, 300, and 600. The estimates of n(φ) are very similar for the three values of N for all but the highest fluences. To calculate the range of n(φ), Eq. (8) is again solved for n(φ), but using the maximum and minimum probabilities at each fluence. Figure 12 shows the range of n(φ) and exhibits a strong trend of decreased range with increasing N.
Summary and a Look Ahead
This paper has introduced a sensitivity test for laser damage measurements. This technique processes unbinned damage results and uses ML methods to determine a parametric model for the probability of damage. We have shown how the results derived from this model can be used to directly extract the defect density distribution, providing fundamental insight into damage behavior. This study has also demonstrated that the results of the sensitivity test converge as the number of sites exposed, N, increases. The reader can glean a hint of the robustness of the sensitivity method from the fact that P_T and P_c had different forms, yet a reasonable result was obtained. All of these factors make it clear that the further development of a sensitivity test for laser damage is likely to produce useful outcomes for the entire laser damage community.
In our study, φ_i has no relation to the fluence of any of the previous events. This simple implementation is not the most efficient in terms of convergence of the answer with N. The selection of fluence or stress levels in similar types of tests has been an active area of research for many years in the literature of sensitivity testing.21–23 We plan to investigate the manner in which fluence is selected based on the test history as a means to increase the rate of convergence of the model parameters with N.
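One classical example from that literature is the "up-and-down" (staircase) rule of Dixon and Mood, in which each test fluence depends only on the previous outcome. The sketch below is our illustration of the general idea, not a method used in this paper.

```python
import numpy as np

def up_down_fluences(p_true, phi0, step, n_sites, seed=None):
    """Dixon-Mood style staircase: lower the fluence one step after a
    damage event, raise it one step after a survival. Returns the list
    of (phi_i, x_i) pairs; the fluences concentrate near the 50% point."""
    rng = np.random.default_rng(seed)
    phi, history = phi0, []
    for _ in range(n_sites):
        damaged = rng.uniform() < p_true(phi)
        history.append((phi, int(damaged)))
        phi = phi - step if damaged else phi + step
        phi = max(phi, step)  # keep the test fluence positive
    return history
```

Because the staircase spends most of its events near the steep part of the probability curve, schemes of this kind can converge on the model parameters with fewer sites than uniformly random fluence selection.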
The model for P_c used in this work is a two-parameter Weibull distribution. It was chosen for its simplicity and flexibility. Recall that for large values of the fluence, the true probability of damage, P_T, lies outside the tolerance limits for P_c; see Fig. 10. Since the form of the defect distribution, which determines P_T, is not generally known, determining a more flexible form for P_c than our current assumption will be another avenue of process development. Increased flexibility will come with the computational burden of additional model parameters, which will drive up computing time and will likely require a longer test to glean the additional information required to make a more complex P_c worthwhile.
Our research will mainly proceed along these two axes of advancement. We look forward to sharing our results in these pages.
The authors would like to acknowledge the helpful discussions and disagreements with our many colleagues at the Boulder Damage Symposium over the years. These interactions have inspired and guided this work. These discussions, questions, and challenges are of immense value to us, but too numerous to list, lest we forget someone. We would also like to acknowledge the two anonymous reviewers who made helpful and important suggestions for improvements in this paper. Thank you.
Jonathan W. Arenberg holds a BS degree in physics and MS and PhD degrees in engineering, all from the University of California, Los Angeles. He has been with Northrop Grumman Aerospace Systems since 1989. His work experience includes optical, space, and laser systems. He has worked on major high-energy and tactical laser systems, laser component engineering, and metrology issues. He is currently the chief engineer for the James Webb Space Telescope Program.
Michael D. Thomas received his BS degree in optical engineering from the University of Rochester in 1983. Upon graduation, he joined Sanders Associates of Nashua, NH, where he was involved in solid-state laser design and research and thin-film design. He has been president of Spica Technologies since the company was founded in 1990 and is currently involved in laser damage testing and the performance of precision optical measurements.