In previous years, this committee reported on the need for a US national laser damage standard to address the needs of domestic industry. Last year, a process was reported that connected the measurement of the active defect density in a small area, a, with the likely density of such defects over a larger area, A. This was presented as the basis of a Type 1, go/no-go test. The main issue, as reported last year, is that the proper flow of a standard is to start with the required properties of the larger area and design a robust test around them. The process presented in 2017 is hard to implement in a way convenient for the non-expert user, which is to say nearly all users. The main thrust of the work in 2018 is developing and evaluating options for implementing a useful, workable standard.
“Periodic Review of ISO 21254: US National Committee Proposal for Revision”, Jonathan W. Arenberg, Donna J. Howland, Christopher Wren Carr, Michael D. Thomas, John C. Bellum, Trey Robinson, and Jason Yager, presented at SPIE Laser Damage, Boulder, CO, 2016.
“U.S. National Committee proposed revision to the ISO Laser Damage Standard”, Jonathan W. Arenberg, Donna Howland, Michael Thomas, Trey Turner, John Bellum, Ella Field, C. Wren Carr, Gary Shaffer, Matthew Brophy, and Allen Krisiloff, Proc. SPIE 10447, Laser-Induced Damage in Optical Materials 2017, 104471E (21 November 2017).
This paper reports on the fundamental idea behind a US national committee proposal, from the Optics and Electro-Optics
Standards Council (OEOSC) Task Force (TF) 7, for a so-called Type 1 laser damage test procedure. A Type 1 test is
designed to give a simple binary, pass or fail, result. Such tests are intended for the transactional type of damage testing
typical of acceptance and quality control testing. As such, it is intended for bulk certification of optics for their ability
to survive a given fluence, useful to manufacturers of optics and their customers, the system builders. At the root of the
proposed method is the probability that an optic of area A will have R or fewer damage occurrences with a user-specified
probability P at test fluence Φ. This assessment is made by a survey of area a and the observation of n events. The paper
presents the derivation of the probability of N or fewer damage sites on A given n events observed in area a. The paper
concludes with the remaining steps in the development of a useful test procedure based on the idea presented.
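The extrapolation from a surveyed area a to a full aperture A can be made concrete with a simple Poisson defect model. The sketch below is illustrative only, not the committee's actual derivation: it assumes defects follow a spatial Poisson process and places a flat prior on the unknown areal density, under which the predictive count on A is negative binomial.

```python
from math import comb

def prob_at_most(R, n, a, A):
    """P(at most R damage sites on area A | n sites seen on surveyed area a).

    Assumed model: defects are a spatial Poisson process with unknown areal
    density; with a flat prior on that density, the predictive count on area
    A is negative binomial with r = n + 1 successes and p = a / (a + A).
    """
    p = a / (a + A)
    return sum(comb(k + n, k) * p ** (n + 1) * (1 - p) ** k
               for k in range(R + 1))

# Example: after surveying a = 1 cm^2 and observing n = 2 defects, the
# probability that a full A = 10 cm^2 aperture carries at most R = 30 defects:
print(prob_at_most(30, 2, 1.0, 10.0))
```

In a go/no-go setting one would fix P and R from the system requirement and solve for the survey area a and the allowable count n, rather than compute the probability after the fact.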
The laser damage performance of optical components is often limited by the presence of sparse defects rather than intrinsic material properties. In this regime, it is costly to perform destructive laser damage testing over areas large enough to make high-confidence statements of damage likelihood in non-tested parts or regions. Instead, one may record non-destructively the sizes and locations of all defects over a much larger area. It is also straightforward to do selective laser damage testing centered on defects (and defect-free sites) in a subregion. This latter measurement yields a table that quantifies damage probability as a function of fluence and defect size. Combining the complete defect map and the damage probability table allows laser damage prediction at every location over the whole area of interest. In this paper, large-area defect mapping of real-world coated optics is combined with previously established damage probability tables. The defect-driven contribution is shown to enhance the predictive power of the simulations as judged by standard damage testing.
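The combination step can be sketched in a few lines: each mapped defect is looked up in a (size, fluence) damage-probability table and the per-defect probabilities are summed into an expected damage count. The bin edges, probability values, and data layout below are placeholders for illustration, not the paper's measured table.

```python
import bisect

# Hypothetical table: bin edges and probabilities are made up for illustration.
SIZE_BINS = [0.0, 1.0, 3.0, 10.0]        # defect diameter bin edges, um
FLUENCE_BINS = [0.0, 5.0, 10.0, 20.0]    # fluence bin edges, J/cm^2
# PROB[i][j] = damage probability for size bin i at fluence bin j
PROB = [[0.00, 0.01, 0.05, 0.20],
        [0.00, 0.05, 0.30, 0.70],
        [0.01, 0.20, 0.80, 0.99],
        [0.05, 0.50, 0.95, 1.00]]

def damage_prob(size_um, fluence):
    """Look up the tabulated damage probability for one defect."""
    i = bisect.bisect_right(SIZE_BINS, size_um) - 1
    j = bisect.bisect_right(FLUENCE_BINS, fluence) - 1
    return PROB[min(i, len(PROB) - 1)][min(j, len(PROB[0]) - 1)]

def expected_damage_sites(defects, fluence_at):
    """defects: list of (x, y, size_um); fluence_at(x, y) -> local fluence.
    Returns the expected number of damage sites over the mapped area."""
    return sum(damage_prob(s, fluence_at(x, y)) for x, y, s in defects)
```

With a real beam-profile model for `fluence_at`, the same loop yields a spatially resolved damage prediction over the whole aperture.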
Standard techniques for characterizing laser damage are ill-suited to the regime in which sparse defects form the dominant damage mechanism. Previous work on this problem using REO’s automated laser damage threshold test system has included linking damage events in HfO<sub>2</sub>/SiO<sub>2</sub> high reflector coatings with visible pre-existing defects, and using a probability per defect based on size and local fluence to generate predictions of damage events in subsequent coating runs. However, in all this work the test sites were always in a predefined array, and the association of defects with damage events was done only after the fact. In an effort to make this process both more efficient and less susceptible to uncertainties, we have now developed an adaptive test strategy that puts defect identification and analysis into the loop. A map of defect locations and sizes on a test surface is compiled, and a set of test sites and corresponding fluences based on that map is then generated. With defects of interest now centered on the damaging beam, the problem of higher-order spatial variation in the beam profile is greatly reduced. Test sites in zones with no detectable defects are also included. This technique allows for the test regimen to be tailored to the specific surface under consideration. We report on characterization of a variety of coating materials and designs with this adaptive method.
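The adaptive site-selection step might look roughly like the sketch below: one test site is centered on each mapped defect, and defect-free control sites are drawn at random subject to a minimum separation from all known defects. The fluence assignment, spacing rule, and data layout are assumptions for illustration, not REO's actual algorithm.

```python
import random

def plan_test_sites(defect_map, n_clean_sites, fluences, extent, min_sep,
                    seed=0):
    """Build a test plan from a defect map.

    defect_map: list of (x, y, size_um) for detected defects
    n_clean_sites: number of defect-free control sites to add
    fluences: candidate test fluences (J/cm^2) to draw from
    extent: side length of the square test region
    min_sep: minimum distance between a clean site and any defect
    """
    rng = random.Random(seed)
    # One site centered on each defect, so beam-profile variation at the
    # defect location is minimized.
    plan = [{"x": x, "y": y, "defect_um": s, "fluence": rng.choice(fluences)}
            for x, y, s in defect_map]
    # Add control sites in zones with no detectable defects.
    clean = []
    while len(clean) < n_clean_sites:
        x, y = rng.uniform(0, extent), rng.uniform(0, extent)
        if all((x - dx) ** 2 + (y - dy) ** 2 >= min_sep ** 2
               for dx, dy, _ in defect_map):
            clean.append({"x": x, "y": y, "defect_um": 0.0,
                          "fluence": rng.choice(fluences)})
    return plan + clean
```

A real implementation would also bin defects by size and spread fluences systematically across each bin rather than drawing them at random.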
Laser damage testing is widely utilized by laser and laser system builders to ensure the reliability of their products.
When damage is due primarily to sparse defects, the relatively limited data sets acquired under typical testing protocols
tend to imply that laser damage probabilities go to zero below some reported damage threshold. However, this is rarely
an accurate picture of the actual damage characteristics of the sample set. This study attempts to establish a correlation
between observed coating defects and laser damage (from a 1064 nm laser in the nanosecond regime), utilizing a large
sample size from a single coating run, together with the actual fluence levels present at the defect sites. This correlation
is then used to predict damage for optics coated under different circumstances. Results indicate that it might be possible
to develop an alternate methodology for determining damage characteristics, based on observed defects, which is both
more reliable and less time-consuming than traditional laser damage testing.
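A minimal version of the correlation step is an empirical table of damage fraction versus local fluence at the defect sites; the binning scheme and record layout below are assumptions for illustration, not the study's actual analysis.

```python
from collections import defaultdict

def damage_fraction_by_fluence(events, bin_width):
    """events: list of (fluence_at_defect, damaged_bool) records, one per
    observed defect. Returns {bin_lower_edge: fraction damaged}, an
    empirical damage-probability curve built from binned counts."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for fluence, damaged in events:
        b = (fluence // bin_width) * bin_width   # lower edge of fluence bin
        totals[b] += 1
        hits[b] += bool(damaged)
    return {b: hits[b] / totals[b] for b in sorted(totals)}
```

Applied to a large single-coating-run sample, such a table can then be carried over to optics coated under different circumstances and checked against observed damage, which is the comparison the study reports.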
Laser induced damage of optical components is a concern in many applications in the commercial, scientific and military
market sectors. Numerous component manufacturers supply “high laser damage threshold” (HLDT) optics to meet the
needs of this market, and consumers pay a premium price for these products. While there’s no question that HLDT
optics are manufactured to more rigorous standards (and are therefore inherently more expensive) than conventional
products, it is not clear how this added expense translates directly into better performance. This is because the standard
methods for evaluating laser damage, and the underlying assumptions about the validity of traditional laser damage
testing, are flawed. In particular, the surface and coating defects that generally lead to laser damage (in many
laser-parameter regimes of interest) are widely distributed over the component surface with large spaces in between them. As
a result, laser damage testing typically doesn’t include enough of these defects to achieve the sample sizes necessary to
make its results statistically meaningful. The result is a poor correlation between defect characteristics and damage
events. This paper establishes specifically why this is the case, and provides some indication of what might be done to
remedy the problem.
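The under-sampling argument can be made concrete with a quick estimate. The defect density, spot size, and site count below are illustrative assumptions, not measured values, but they show the scale of the problem: a conventional raster test almost never lands on a defect.

```python
# Illustrative back-of-envelope estimate of how many sparse defects a
# conventional raster-style damage test actually samples.
defect_density = 0.5       # defects per cm^2 (assumed sparse-defect regime)
beam_area = 3.14e-4        # cm^2 (~200 um diameter test spot, assumed)
n_sites = 100              # test sites in a typical protocol (assumed)

expected_defects_tested = defect_density * beam_area * n_sites
print(expected_defects_tested)   # on the order of 0.02 defects sampled
```

With fewer than one defect expected in the entire test, any per-defect damage statistic from such a protocol is essentially unsampled, which is the statistical gap the abstract identifies.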
Light scatter due to surface defects on laser resonator optics produces losses which lower system efficiency and output
power. The traditional methodology for surface quality inspection involves visual comparison of a component to scratch
and dig (SAD) standards under controlled lighting and viewing conditions. Unfortunately, this process is subjective and
operator dependent. Also, there is no clear correlation between inspection results and the actual performance impact of
the optic in a laser resonator. As a result, laser manufacturers often overspecify surface quality in order to ensure that
optics will not degrade laser performance due to scatter. This can drive up component costs and lengthen lead times.
Alternatively, an objective test system for measuring optical scatter from defects can be constructed with a microscope,
calibrated lighting, a CCD detector and image processing software. This approach is quantitative, highly repeatable and
totally operator independent. Furthermore, it is flexible, allowing the user to set threshold levels as to what will or will
not constitute a defect. This paper details how this automated, quantitative type of surface quality measurement can be
constructed, and shows how its results correlate against conventional loss measurement techniques such as cavity
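The image-processing core of such a system reduces to thresholding the calibrated image and counting connected bright regions. The sketch below is a minimal stdlib version with a plain list-of-lists image and 4-connectivity; a production system would work on calibrated CCD frames and convert pixel counts to defect sizes.

```python
def find_defects(image, threshold):
    """Label connected bright regions in a 2D grayscale image (list of
    lists of intensities). Pixels at or above `threshold` count as defect
    pixels; returns a list of pixel counts, one per detected defect."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    sizes = []
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and not seen[y][x]:
                # Flood-fill one connected component (4-connectivity).
                stack, count = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    count += 1
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                sizes.append(count)
    return sizes
```

The user-settable `threshold` is exactly the knob the abstract describes: it determines what will or will not be counted as a defect, objectively and repeatably.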