Proceedings Article | 22 March 2008
KEYWORDS: Overlay metrology, Calibration, Metrology, Reticles, Semiconducting wafers, Image processing, Reflectivity, Signal to noise ratio, Diagnostics
As overlay budgets continue to shrink, there is an increasing need to more fully characterize the tools used
to measure overlay. In a previous paper, it was shown how a single-layer Blossom overlay target could be
used to measure aberrations across the field of view of an overlay tool in an efficient and low-cost
manner. In this paper we build upon that method, discussing the results obtained and the experience gained
in applying it to a fleet of currently operational overlay tools.
In particular, the post-processing of the raw calibration data is discussed in detail, and a number of different
approaches are considered. The quadrant-based and full-field based methods described previously are
compared, along with a half-field method. In each case we examine a number of features, including the
trade-off between ease of use (including the total number of measurements required) and sensitivity, i.e.
the potential signal-to-noise ratio. We also examine how some techniques are desensitized to specific types
of tool or mark aberration, and suggest how to combine them with non-desensitized methods to quickly
identify such anomalies.
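As a minimal illustration of this trade-off (a sketch only, not the exact post-processing applied to the tools; the grid size, field coordinates, synthetic residuals, and function names are all hypothetical), the fragment below contrasts a full-field polynomial fit, which uses every site and so has the best potential signal-to-noise ratio, with a quadrant-averaged summary, which is cheaper but blind to any aberration that averages to zero within each quadrant:

```python
import numpy as np

def full_field_fit(x, y, residual, degree=2):
    """Fit a 2-D polynomial to residuals across the whole field of view.

    Uses every measurement site, giving the best potential
    signal-to-noise ratio at the cost of more measurements.
    """
    # Design matrix with terms 1, x, y, x^2, x*y, y^2, ... up to `degree`.
    terms = [x**i * y**j
             for i in range(degree + 1)
             for j in range(degree + 1 - i)]
    A = np.column_stack(terms)
    coeffs, *_ = np.linalg.lstsq(A, residual, rcond=None)
    return coeffs

def quadrant_summary(x, y, residual):
    """Average the residuals within each field quadrant.

    Simpler and needs fewer measurements, but desensitized to any
    aberration that averages to zero within each quadrant.
    """
    quadrant = 2 * (y >= 0) + (x >= 0)   # quadrant index 0..3
    return {q: float(residual[quadrant == q].mean()) for q in range(4)}

# Hypothetical data: a 5x5 grid of sites with a synthetic saddle-shaped
# aberration plus Gaussian measurement noise (all values made up).
rng = np.random.default_rng(0)
gx, gy = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
x, y = gx.ravel(), gy.ravel()
residual = 0.5 * x * y + rng.normal(scale=0.1, size=x.size)

print(full_field_fit(x, y, residual))
print(quadrant_summary(x, y, residual))
```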
There are two distinct applications of these tool calibration methods. Firstly, they can be used as part of the
tool build and qualification process, to provide absolute metrics of imaging quality. Secondly, they can be
of significant assistance in diagnosing tool or metrology issues and in providing preventative-maintenance
diagnostics, since, as shown previously, under normal operation the results show very high consistency,
even when compared against aggressive overlay requirements.
Previous work assumed that the errors in calibration, from reticle creation through to the metrology itself,
would be Gaussian in nature; in this paper we challenge that assumption, and examine a specific scenario
that would lead to very non-Gaussian behavior. In the tool build / qualification application, most scenarios
lead to a systematic trend being superimposed over Gaussian-distributed measurements; these cases are
relatively simple to treat. However, in the tool-diagnosis application, typical behavior will be very non-
Gaussian in nature: for example, individual outlier measurements, or bimodal and other non-Gaussian
probability distributions.
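As a minimal sketch of this distinction (illustrative only; the linear trend model, the choice of the Shapiro-Wilk test, and the data are our assumptions, not the paper's exact procedure), a systematic trend can first be removed and the detrended residuals then tested for normality, so that outliers or multimodality show up as a rejected Gaussian hypothesis:

```python
import numpy as np
from scipy import stats

def detrend_and_test(positions, measurements):
    """Remove a linear systematic trend, then test the residuals
    for normality. Returns (residuals, p_value).

    A small p-value suggests non-Gaussian behavior, e.g. outlier
    measurements or a bimodal distribution. (Illustrative sketch only.)
    """
    # Fit and subtract a straight-line trend across the field.
    slope, intercept = np.polyfit(positions, measurements, 1)
    residuals = measurements - (slope * positions + intercept)

    # Shapiro-Wilk normality test on the detrended residuals.
    _, p_value = stats.shapiro(residuals)
    return residuals, p_value

# Hypothetical data: Gaussian noise on a trend, plus one gross outlier.
rng = np.random.default_rng(1)
pos = np.linspace(-1, 1, 40)
meas = 0.3 * pos + rng.normal(scale=0.05, size=pos.size)
meas[7] += 0.5                      # a single anomalous measurement

res, p = detrend_and_test(pos, meas)
print(f"Shapiro-Wilk p-value: {p:.3g}")   # small p flags non-Gaussianity
```

Note that a Gaussian fit alone would simply report a slightly inflated standard deviation here; it is the normality test that exposes the anomaly.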
In such cases, we examine the effect that this has on the analysis, and show that such anomalous behaviors
can occur "under the radar" of analyses that assume Gaussian behavior. Perhaps more interestingly, the
detection / identification of non-Gaussian behavior (as opposed to the parameters of a best fit Gaussian
probability density function) can be a useful tool in quickly isolating specific metrology problems. We also
show that deviation of a single tool, relative to the tool fleet, is a more sensitive indicator of potential
issues.
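A minimal sketch of this fleet-comparison idea (the robust z-score metric, the tool names, and the parameter values are illustrative assumptions, not the exact metric used in the paper) flags any tool whose calibration parameter deviates from the fleet by more than a few robust standard deviations:

```python
import numpy as np

def fleet_outliers(tool_values, threshold=3.0):
    """Flag tools whose calibration parameter deviates from the fleet.

    Uses the median and MAD rather than mean/std so that the fleet
    reference is not itself distorted by the anomalous tool.
    (Illustrative sketch only.)
    """
    values = np.asarray(list(tool_values.values()), dtype=float)
    center = np.median(values)
    # 1.4826 scales the MAD to match the std dev for Gaussian data.
    mad = 1.4826 * np.median(np.abs(values - center))
    return {tool: (v - center) / mad
            for tool, v in tool_values.items()
            if abs(v - center) > threshold * mad}

# Hypothetical per-tool calibration parameter (e.g. a field-distortion
# coefficient); the tool names and values are made up.
fleet = {"OVL-01": 0.11, "OVL-02": 0.12, "OVL-03": 0.10,
         "OVL-04": 0.13, "OVL-05": 0.45}   # OVL-05 is drifting

print(fleet_outliers(fleet))   # flags only OVL-05
```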