Reticle haze results from the deposition of a chemical residue of a reaction initiated by Deep Ultraviolet (DUV) or higher-frequency actinic radiation. Haze can form on the backside of the reticle, on the chrome side and on the pellicle.
The most commonly reported effect of haze is a gradual loss in transmission of the reticle that results in a need to
increase the exposure-dose in order to maintain properly sized features. Since haze formation is non-uniform across the
reticle, transmission loss results in an increase in the Across Chip Linewidth Variation (ACLV) that is accompanied by a
corresponding reduction in the manufacturing process window. Haze continues to grow as the reticle is exposed to
additional short-wavelength radiation through repeated use.
Early haze formation is a small-area phenomenon in comparison to the total area of the reticle and may initiate
simultaneously in separate areas. The early stages of reticle haze therefore result in a degradation of Best Focus, Depth
of Focus and the Exposure latitude of individual features in the "hazed" area prior to any noticeable large area
transmission loss. Production lots subject to reticle hazing on critical layers will experience a direct loss of lithographic
yields, loss of capacity, an increase in rework rates and an ultimate loss in overall final-test yield long before the need for
an overall image exposure-dose increase is detected.
Feature profiles and process response are degraded at the earliest stages of haze formation. While early hazing may occur
in a small area of the reticle, the area influenced by the initial deposition is relatively large in comparison to the size of
an individual circuit feature. A sampled metrological inspection of a regular array of points across the exposure field is
therefore able to detect any form of reticle haze if the analysis monitors the feature-profile response rather than simply
feature widths. A model-driven method for the early detection of reticle-haze using basic feature metrology is developed
in this study. Application results from a production reticle are used to validate the technique, which employs a highly accurate calculation of the uniformity of the reticle exposure-response for individual features across the exposure field.
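As an illustration of the sampled-metrology idea, the following is a minimal sketch, assuming sidewall-angle (SWA) readings on a regular grid of field points; the quadratic surface and the residual threshold are our own assumptions, not values from the study.

```python
import numpy as np

def flag_haze_sites(x, y, swa, threshold_deg=0.5):
    """x, y: 1-D numpy arrays of field coordinates; swa: measured
    sidewall angles (degrees) at those points."""
    # Fit a smooth quadratic surface to the profile response.
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coef, *_ = np.linalg.lstsq(A, swa, rcond=None)
    residuals = swa - A @ coef
    # Early haze shows up as localized profile outliers against the fit.
    return np.abs(residuals) > threshold_deg
```

Because early haze perturbs the local profile response before any large-area transmission loss appears, sites flagged by such a residual test are natural candidates for haze inspection.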
The current ITRS roadmap details the growing complexity of device design and device manufacturers' latest techniques for tuning their processes for each new design generation. In spite of the current desire to incorporate
techniques termed "Design for Manufacture" (DFM) into the sequence, simulations and the design cycle do little more
than optimize feature quality for ideal exposure conditions while testing for shorts, opens and overlay problems over
process variations. Testing in the DFM simulation is performed by the adaptation of a technique unchanged over the last 30 years, the Process Window analysis. As a result, the modest successes seen in chip design have not carried their share of the burden of technology advancement. Consequently, process adaptation to each new design has fallen to increasingly
complex setup procedures of the exposure toolsets that customize scanner performance for each new device.
Design optimization by simulation focuses on feature layout optimization for resolution. Design solutions that take
advantage of the full potential spectrum of mask-feature alternatives to increase functional process-space and simplify
setup in manufacturing do not exist since there is no method of feedback. A mechanism is needed that can quantify
design performance robustness, with mask-contributions, to variations in the user's specific manufacturing process.
In this study, a Process Behavior Model methodology is presented for the analysis of feature profiles and films to
derive the relative robustness of response to process variations for alternative OPC designs. Analysis is performed
without regard to the specific mechanics of the design itself. The design alternatives of each OPC feature are shown to
be strong contributors not only to resolution and depth-of-focus but also to the stability of final image response; that is,
the ability of the feature profile to remain at optimum under varying conditions of process exposure excursion.
Several different 70 nm multi-pitch OPC designs are compared for their response stability to fluctuations of the
process. The optimal process corrections on the reticle are shown to be dependent upon not only the final image size at
some optimal exposure point but also on the ability of the design to maintain feature size within tolerance across an
increasingly large process-space of the target production process. The failure of the classic Process Window analysis to
anticipate or provide corrective insight for performance improvement under these conditions is illustrated.
Models are presented that allow the extraction of the nonlinear but systematic interactions of several OPC designs with
the normal fluctuations experienced across the process exposure space plus those introduced by the toolset and filmstack
variation. A method of extracting the systematic component of each feature's design-iteration is derived,
providing the ability to quantify the specific OPC response sensitivity to changes in the exposure and process films as
well as drift introduced by the tools of the exposure set.
Process Bias is traditionally defined as a manufactured offset of the mask-features that induces the photoresist image size to more closely match the nominal or desired circuit design size. The metric is calculated as the difference between the size of the image on the wafer and that on the mask, with image reduction taken into consideration.
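A one-line illustration of this scalar definition, assuming a 4x reduction system; the function name and example values are ours:

```python
def process_bias(wafer_cd_nm: float, mask_cd_nm: float, reduction: float = 4.0) -> float:
    """Bias = wafer image size minus the mask feature size at wafer scale."""
    return wafer_cd_nm - mask_cd_nm / reduction

# Example: a 360 nm mask line (90 nm at wafer scale) printing at 84 nm
# carries a -6 nm bias.
print(process_bias(84.0, 360.0))  # -> -6.0
```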
Optical proximity corrections (OPC) in the mask design must consider not only the Bias but also the influence of aerial
image artifacts such as near-neighbor proximity, polarization and birefringence. The interactions are further complicated
by the wavefront's interaction with the imaging media and optical interactions with the translucent film stack on the
wafer. With the increased frequency of resolution enhancement (RET) artifacts on the mask, the concept of Bias as a
simple scalar becomes less clear.
In this study Bias is shown to exhibit the anticipated systematic response to all of the static exposure conditions of the
process. Variations across each field-of-exposure however behave nonlinearly with the range of fluctuations
encountered within the process-space experienced during device manufacture. A model is developed that allows the
Bias response to be comparatively measured for each mask feature-design, characterizing not only the behavior at optimum exposure but also each feature's stability across process and imaging perturbation sources.
The Bias models are applied to profile metrology gathered from matrix exposure data. Fine-structure perturbations in the Bias are extracted, and a comparison of their relative variation to process fluctuations in turn illustrates a strong individual feature construction-sensitivity. This analysis suggests that individual feature design is a strong contributor to the process-stability of a reticle. More significantly, the Static Bias variation across the exposure field of a reticle is shown to be inversely related to the dose-uniformity map needed to achieve uniform critical features at the process-target size.
A new metric is introduced to provide a means of modeling the non-linear local Bias Signature for IntraField feature
perturbations as a measure of the Bias Error Enhancement Function (BEEF). The BEEF metric is shown to be
relatively insensitive to variations in the manufacturing exposure process-space but strongly responsive to variations in
critical feature manufacture or design. The model is then extrapolated to define the relationship between Bias Response
and the Mask Error Enhancement Function (MEEF).
The base design of a photomask feature is shown to be a strong contributor not only to resolution and depth-of-focus but
also to the robustness of image response, or its ability to maintain stable resolution and depth of focus across the process-space. The proper selection of different feature design alternatives can greatly reduce photomask sensitivity to
process variations. The selection process for these designs as well as new reticle validation is simplified using the BEEF
metric as an evaluator. "BEEF" is a metric more closely tied to process response of a reticle design than MEEF and is
more easily extracted from in-process raw metrology.
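The abstract does not reproduce the BEEF formula, so the following is only a loose, hypothetical sketch of the idea as we read it: the local Bias fine structure of each field site, normalized against the common process response measured over a focus-dose matrix. All names and the normalization are our own assumptions.

```python
import numpy as np

def beef_score(bias):
    """bias: array of shape (n_sites, n_process_points), the measured Bias
    at each field site across a focus-dose matrix.

    Returns one score per site: the residual bias variation left after
    removing the common (field-average) process response.
    """
    bias = np.asarray(bias, dtype=float)
    common = bias.mean(axis=0)               # systematic process response
    local = bias - common                    # intrafield fine structure
    return local.std(axis=1) / common.std()  # site sensitivity vs. process swing
```

Under this reading, a feature design with a low score holds its bias across process fluctuations, which matches the abstract's claim that BEEF is insensitive to the exposure process-space but responsive to feature manufacture or design.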
The classic Bossung Curve analysis is the most commonly applied tool of the lithographer. The analysis maps a control surface for critical dimensions (CDs) as a function of the variables of focus and exposure (dose). Most commonly the technique is used to calculate the optimum focus and dose process point that yields the greatest depth-of-focus (DoF) over a tolerable range of exposure latitude. Recent ITRS roadmaps have cited the need to control CDs to less than 4 nm Across-Chip-Linewidth-Variation (ACLV). A closely related requirement to ACLV is the need to properly evaluate the implementation of Optical Proximity Correction (OPC) in the final resist image on the wafer. Calculation of ACLV and the process points is typically addressed with the use of theoretical simulator evaluations of the actinic wavefront and the photoresist's interactions. Engineers frequently prefer the clean results of the simulation over the more cumbersome and less understood perturbations seen in the empirical metrology data, resulting in a loss of valuable process-control information. Complexity increases when the analysis assumes a superpositioning of the responses of multiple feature-types in the search for an overlapping process window. Until recently, simulations rarely validated design response to the process and never incorporated the characteristics of the exposure tool and reticle. Fortunately, empirical Bossung curve calculations can supply valuable tool-, process- and reticle-specific interaction information if the techniques are expanded through the use of spatial and temporal perturbation models of the actinic image wavefront. In this implementation the classic focus-exposure matrix is shown to be a powerful tool for the determination of optimum focus and focus uniformity across the full exposure field. Although not the tool of choice for pupil aberration analysis, the method is the best implementation for determining the behavior of device critical-feature response when the constructs of OPC, forbidden-pitch and inherent reticle variability are involved. Improved process performance can be achieved with algorithms that provide a calculation of the optimum-focus ridge whose resulting feature response-to-dose curves are more easily traced to simulation. Response surface models are presented and applied to a calculation of the Best Focus surface for the exposure field. Unlike specialty reticles used for defocus-error measurement, the Bossung curve maps the response of the reticle-specific feature or OPC design and can provide information on errors induced by the lens/optomechanical system of the exposure tool. The Bossung curve delivers several additional response surfaces needed for proper qualification of any exposure-tool and reticle set. These include the ability to contour-map the critical Feature-Best-Focus surface response across the exposure field of the reticle, accounting for feature and process design variations; the Depth-of-Focus uniformity surface for each critical feature across the full exposure; an Isofocal ridge analysis of the process and the associated process perturbation response; and the effective dose-uniformity response needed to achieve target feature-size uniformity across the exposure. The Feature-Best-Focus response surface is critical to any systemic analysis because it is the optimum estimation of the reticle feature uniformity without the perturbations induced by exposure defocus.
It is shown that, when combined in the analysis, these techniques provide improved and rapid calculation of full-field, process-range feature control limits and tolerances for new designs. The exposure limits thus calculated can then provide a realistic and stable process-control set for use in the classic process window analysis. Finally, by deconvolving the systemic reticle signature, the original data provides a feature-specific analysis of Dose-Uniformity. The dose-maps created in this step can be linked to local variations in MEEF and can be used for IntraField Dose Compensation in advanced exposure tools.
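As a minimal sketch of how such an empirical Bossung analysis can begin, the snippet below fits the familiar parabolic CD-through-focus response at each dose of a focus-exposure matrix and returns the vertex as the best-focus estimate. The data layout and names are our own assumptions.

```python
import numpy as np

def best_focus_per_dose(focus, dose, cd):
    """For each dose level, fit CD = a*F^2 + b*F + c through focus and
    return the parabola vertex -b/(2a) as the best-focus estimate."""
    focus, dose, cd = map(np.asarray, (focus, dose, cd))
    out = {}
    for d in np.unique(dose):
        m = dose == d
        a, b, c = np.polyfit(focus[m], cd[m], 2)
        out[float(d)] = -b / (2.0 * a)
    return out
```

Repeating the per-dose fit at every sampled field point is one way to build up the Feature-Best-Focus surface discussed above.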
This report considers a detailed method of rapid and accurate experimental calculation of the Mask Error Enhancement Function (MEF or MEEF) using localized CD variation across the exposure field. MEEF is defined as the non-constant bias of wafer-image replication in response to small changes in the reticle image. The extraction of the MEEF response of a reticle to its process environment is shown to yield a measure of the robustness of the OPC design structures on the reticle and their ability to compensate wafer-image replication across the scope of production-process perturbations. This study demonstrates that a MEEF response can be characterized by a regressive comparison of reticle and wafer image sizes for any reticle OPC structure. Expanding the analysis to a focus-dose matrix that approximates normal production variations allows the MEEF response sensitivities to be deconvolved into their component contributions to critical feature variation across the wafer. IntraField dependencies such as sensitivity to the direction of the scan, and thus reticle-stage drive loading, are investigated and their contributions are presented at the end of the report. Process-induced perturbations such as focus and dose can also change the MEEF; their response is characterized and shown to be a significant contributing factor. An algorithm is then used to extract the full-wafer systematic sensitivity of MEEF to slowly changing perturbations such as film-thickness changes in the Anti-Reflective Coating (ARC) and photoresist thickness. Correlation of the MEEF response to film thickness is discussed and shown to be significant for some films. A budget summary of the systematic perturbation inherent in these MEEF factors is compared against the needs of sub-90 nm nodes with considerations toward the necessity of process-specific OPC design for critical layers.
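A minimal sketch of the regressive comparison described above: MEEF is taken as the fitted slope of wafer CD against mask CD expressed at wafer scale. The 4x reduction and the names are our assumptions.

```python
import numpy as np

def meef(mask_cd_nm, wafer_cd_nm, reduction=4.0):
    """mask_cd_nm, wafer_cd_nm: paired measurements for one OPC structure
    across its localized CD variation. Returns the fitted MEEF slope."""
    mask_at_wafer = np.asarray(mask_cd_nm) / reduction
    slope, _ = np.polyfit(mask_at_wafer, np.asarray(wafer_cd_nm), 1)
    return slope  # MEEF = d(CD_wafer) / d(CD_mask / M); 1.0 is ideal replication
```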
Most process window analysis applications are capable of deriving the functional focus-dose workspace available to any set of device specifications. Previous work in this area has concentrated on calculating the superpositioned optimum operating points of various combinations of feature orientations or feature types. These studies invariably result in an average performance calculation that is biased by the impact of the substrate-, reticle- and exposure-tool-contributed perturbations. Many SEMs and optical metrology tools now provide full-feature profile information for multiple points in the exposure field. The inclusion of field spatial information into the process window analysis results in a calculation of greater accuracy and process understanding because the capabilities of each exposure tool can now be individually modeled and optimized. Such an analysis provides the added benefit that after the exposure tool is characterized, its process perturbations can be removed from the analysis to provide greater understanding of the true process performance. Process window variables are shown to vary significantly across the exposure field of the scanner. Evaluating the depth-of-focus and optimum focus-dose at each point in the exposure field yields additional information on the imaging response of the reticle and the scan-linearity of the exposure tool's reticle stage. The optimal focus response of the reticle is then removed from a full-wafer exposure and the results are modeled to obtain a true process response and performance.
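One building block of such a per-field-point analysis is a local depth-of-focus estimate; a minimal sketch follows, assuming a focus matrix of CD readings at one field site and a +/-10% CD tolerance. Both the tolerance and the names are our assumptions.

```python
import numpy as np

def dof_at_point(focus, cd, target, tol=0.10):
    """Return the focus range over which CD stays within +/- tol of target
    at one field point (max minus min passing focus; 0.0 if none pass)."""
    ok = np.abs(np.asarray(cd) - target) <= tol * target
    if not ok.any():
        return 0.0
    f = np.asarray(focus)[ok]
    return float(f.max() - f.min())
```

Evaluated over a grid of field sites, this per-point DoF produces the across-field process window maps the abstract describes.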
Competitive high-volume semiconductor manufacturing yields require that critical feature profiles be continually monitored for uniformity and production control. Historically this has involved long and tedious analyses of Scanning Electron Microscope (SEM) photos that resulted in an average feature profile or a qualitative comparison of a matrix of black-and-white images. Many factors influence profiles, including wafer flatness, focus and film thicknesses. Characterizing profile uniformity as a function of these parameters not only stabilizes high product yields but also significantly reduces the time spent in problem aversion and solution discovery. Scatterometry uniquely provides the combination of feature metrics and spatial coverage needed to monitor production profiles. The vast amount of data gathered by these systems is not well handled by classic statistical methods. A more practical approach taken by the authors is to apply spatial models to the profile data to determine the relative stability and contributions of film, substrate and the exposure tool to process perturbations. Recent work performed by Agere and TEA Systems is shown to be capable of quantitatively modeling the relative contributions of lens slit, reticle-scan and lens degradation to feature size and side-wall angle (SWA). This work describes the models used and the slit-and-scan contributions that are unique to each exposure tool. Finally, it is shown that the direction and linearity of the reticle scan can be a contributing factor in the feature-profile error budget, with direct influence on production image stability.
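The slit-and-scan separation can be illustrated with a minimal sketch, assuming SWA (or CD) data on a rectangular field grid with rows along the scan direction and columns along the slit; the decomposition into additive profiles is our simplification of the models described.

```python
import numpy as np

def slit_scan_split(field):
    """field: 2-D array indexed [scan, slit]. Returns the mean level,
    the slit signature, the scan signature, and the residual map."""
    field = np.asarray(field, dtype=float)
    mean = field.mean()
    slit = field.mean(axis=0) - mean   # lens/illumination slit profile
    scan = field.mean(axis=1) - mean   # reticle-stage scan profile
    residual = field - mean - slit[None, :] - scan[:, None]
    return mean, slit, scan, residual
```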
It is commonly reported that a difference exists between directly measured reticle feature dimensions and those produced in the final lithographic image. Quantifying this mask error function (MEF) and the sources of the perturbation has been the topic of many papers over the past several years. Past studies have been content to evaluate these functions by statistical averaging, thereby neglecting the potential influence of process and exposure contributions.
The material presented here represents the findings of an extensive study of reticle-process interactions. Phase I of the evaluation consisted of focus and dose exposures of the reticle and subsequent modeling of the full-profile response. This analysis provided extensive information on the optimum-printed feature profiles while removing the contribution of across-field focus variations.
The reticle was directly characterized using both conventional SEM and a new Nanometrics OCD Scatterometer technique. The full-field modeled response surface of the directly measured feature characteristics is then used to calculate the across-field MEF and provide an improved estimate of the true response of the feature to exposure. Phase II of the analysis turns its attention to characterization of the full-wafer process response. Both the modeled and directly measured reticle surfaces were removed from Scatterometry-measured full-wafer exposures. Normal process variations consisting of photoresist and ARC thickness volatility are next used to show the response of the printed feature. Finally, a summary of the relative contribution of each process perturbation to the feature-profile error budget is discussed.
Device design criteria and product complexity have reduced the Focus Budget on today's technologies to near zero. Recent years have seen the introduction of a number of focus-monitor methods involving new designs and processes that attempt to define the focus performance of our imaging systems more accurately or more easily. We have evaluated several focus-monitoring techniques and compared their relative strengths and speed. The objective of this study is to demonstrate each technology's ability to evaluate exposure-tool lens performance and quantify those factors that directly degrade depth-of-focus in the process. Baseline focus for process exposure and lens aerial-image aberration analysis is evaluated using focus matrices. The remaining contributors to depth-of-focus (DOF) degradation are derived from the opto-mechanical interactions of the tool during full-wafer exposures. Full-wafer exposures, biased to -100 nm focus, were used in the determination of these error sources. Exposing all test sequences on the same 193 nm scanner provided consistency of the comparison. A valid analytical comparison of the technologies was further guaranteed by using a single software tool, Weir PSFM software from Benchmark Technologies, to calibrate, analyze and model all metrology. Two of the four techniques we evaluated were found to require focus matrices for analysis. This prohibited them from being able to analyze the fixed-focus exposure detractors to the DOF. One technique was found to be ineffective at 193 nm because of the high-contrast response of the photoresists used. An analysis of the aerial image was validated by comparison of each technique to the Z5 Zernike as measured by ASML's ARTEMIS analysis. The ASML FOCAL and Benchmark PGM targets, both replicating dense-packed feature response, best tracked the ARTEMIS signature. A whole-wafer, fixed-focus exposure analysis is used to evaluate wafer, photoresist and dynamic-scan contributions to the focus budget. Of the four techniques considered, only the PSFM and PGM patterns could be used for this evaluation. Performance response is reported for detractors involving the wafer as well as the mechanical scan direction of the reticle stage.
Feed-forward, as a method to control the lithography process for critical dimensions (CDs) and overlay, is well known in the semiconductor industry. However, the control provided by simple averaging feed-forward methodologies is not sufficient to support the complexity of a sub-0.18 micrometer lithography process. Also, simple feed-forward techniques are not applicable for logic and ASIC production due to the many different products, lithography-chemistry combinations and the short memory of the averaging method. In the semiconductor industry, feed-forward control applications are generally called APC, Advanced Process Control, applications. Today, there are as many APC methods as the number of engineers involved. To meet the stringent requirements of 0.18 micrometer production, we selected a method that is described in SPIE 3998-48 (March 2000) by Terrence Zavecz and Rene Blanquies from Yield Dynamics Inc. This method is called PPC, Predictive Process Control, and employs a methodology of collecting measurement results and the modeled bias attributes of expose tools, reticles and the incoming process in a signatures database. With PPC, before each lot exposure, the signatures of the lithography tool, the reticle and the incoming process are used to predict the setup of the lot process and the expected lot results. Benefits derived from such an implementation are very clear: there is no limitation on the number of products or lithography-chemistry combinations, and the technique avoids the short memory of conventional APC techniques. "... and what's next?" (Rob Morton, Philips assignee to International Sematech). The next part of the paper will try to answer this question. Observing that CMP and metal deposition significantly influence CDs and overlay results, and that even Contact Etch can have a significant influence on Metal 5 overlay, we developed a more general PPC for lithography. Starting with the existing lithography PPC applications database, the authors extended the access of the analysis to include the external variables involved in CMP, deposition, etc. We then applied yield-analysis methods to identify the significant lithography-external process variables from the history of lots, subsequently adding the identified process variables to the signatures database and to the PPC calculations. With these improvements, the authors anticipate a 50% improvement of the process window. This improvement results in a significant reduction of rework and improved yield, depending on process demands and equipment configuration. A statistical theory that explains the PPC is then presented. This theory can be used to simulate a general PPC application. In conclusion, the PPC concept is not limited to lithography or semiconductors. In fact, it is applicable to any production process that is signature-biased (chemical industry, car industry, etc.). Requirements for PPC are large-scale data collection, a controllable process that is not too expensive to tune for every lot, and the ability to employ feedback calculations. PPC is a major change in the process-management approach and therefore will first be employed where the need is high and the return on investment is very fast. The best industry to start with is semiconductors, and the most likely process area to start with is lithography.
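The signature-combination step of PPC can be pictured with a minimal sketch, assuming additive bias signatures stored per tool, reticle and incoming process; the database layout, key names and example values are entirely our own illustration, not the paper's implementation.

```python
def predict_lot_setup(db, tool_id, reticle_id, process_id):
    """Combine stored additive bias signatures into a predicted correction."""
    signature = {}
    for source_id in (tool_id, reticle_id, process_id):
        for term, value in db[source_id].items():   # e.g. 'dose', 'focus'
            signature[term] = signature.get(term, 0.0) + value
    # The lot is exposed with the negated prediction as its setup offset.
    return {term: -value for term, value in signature.items()}

db = {"scanner7": {"dose": 0.4, "focus": -15.0},
      "reticleM3": {"dose": -0.1, "focus": 5.0},
      "procA": {"dose": 0.2, "focus": 0.0}}
print(predict_lot_setup(db, "scanner7", "reticleM3", "procA"))
# -> {'dose': -0.5, 'focus': 10.0}
```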
An advanced control system providing modeling and predictive data simulation for pass-fail criteria of overlay production control has been used in 0.18 micrometer Design Rule production facilities for over a year. During this period, overlay was measured on both product wafers and during periodic process-qualification tests. The resulting raw data is modeled using exposure-tool-specific and layer-focused models. Modeled results, measured process statistics and tool signatures are combined in a real-time simulation to calculate the true overlay distribution over the entire wafer and lot. All results and raw data are automatically gathered and stored in a database for on-going analysis. In this manner, tool, product-technology and process performance data are gathered for every overlay process-step. The data provides valuable insights into not only tool stability but also the process-step characteristic errors that contribute to the overlay spectrum of distortions. Data gathered in this manner is very stable and can be used to predict a feed-forward correction for all correctable coefficients. The technique must take into consideration algorithm-modeled coefficient variations resulting from: (1) Reticle pattern-to-alignment mark design errors. (2) Process film variations. (3) Tool-to-tool static matching. (4) Tool-to-tool dynamic matching errors which are match-residual, process or time induced. This extensive database has resulted in a method of conducting Predictive Process Control (PPC) for overlay lithography within an advanced semiconductor line. Using PPC, the wafer production facility experiences: (1) Improved Yield: lots are always exposed with optimum setup. Optimized setups reduce rework levels and therefore wafer handling. (2) Capacity Improvement: elimination of rework tacitly improves capacity in the facility. WIP is also simplified because lots do not have to wait for a dedicated exposure tool to become available. (3) Dynamic Matching™: matching of multiple exposure tools is continuously monitored by the use of the feedback loop. Tool precision can be monitored as well as the setup systematic offsets. In this manner, the need to remove an exposure tool from production for match-maintenance can be predicted and scheduled. Residual matching errors can also be removed from the production cycle. The benefits of full production-lot modeling and the contributors to production errors are presented. Process and tool interactions as well as control-factor coefficient stability indicate the level of control to be well beyond manual methods. Calculations show that these contributors are predictable, stable and are a necessary tool for competitive sub-0.2 micron production. An analysis of the overlay error sources within two facilities results in consistent facility process response and a well-defined error-budget derivation. From this analysis, the control added to semiconductor overlay is shown capable of extending mix-and-match exposure-tool operations in production down to 0.12 micrometer design rules.
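For readers unfamiliar with the layer-focused overlay models referenced above, a minimal sketch of the classic linear fit follows: translation, magnification and a single rotation term regressed against measured (dx, dy) registration errors. The single-rotation simplification and all names are our assumptions; production models typically carry more terms.

```python
import numpy as np

def fit_overlay_model(x, y, dx, dy):
    """Least-squares fit of dx = Tx + Mx*x - R*y and dy = Ty + My*y + R*x.
    x, y, dx, dy: 1-D numpy arrays of equal length."""
    x, y, dx, dy = map(np.asarray, (x, y, dx, dy))
    n = len(x)
    A = np.zeros((2 * n, 5))                 # columns: Tx, Ty, Mx, My, R
    A[:n, 0] = 1.0;  A[:n, 2] = x;  A[:n, 4] = -y
    A[n:, 1] = 1.0;  A[n:, 3] = y;  A[n:, 4] = x
    coef, *_ = np.linalg.lstsq(A, np.concatenate([dx, dy]), rcond=None)
    return dict(zip(["Tx", "Ty", "Mx", "My", "R"], coef))
```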
Exposure-tool optimization in process development today extends beyond the classic concepts of exposure and focus setting. The lithographer must understand and tune the system for critical-feature performance using variables such as Numerical Aperture (NA), Partial Coherence (PC), and critical-level tool matching. In a previous study, the authors demonstrated that the phase-shift focal plane monitor (PSFM) accurately measures focal-plane variations when appropriate calibrations are employed. That paper also described the development of a model for classic aberrations such as Astigmatism, Field Curvature and Coma. The model considered geometrical aberrations (Seidel) with radial symmetry across the image plane as being the primary contributor to CD variation across the stepper image plane. The publication correlated image-plane focal results to an approximation of the stepper's Critical Dimension (CD) behavior for a matrix of NA and PC settings. In this study, we continue the analysis of the focus budget and CD uniformity using two generations of optical steppers in a 0.35 micrometer process. The analysis first addresses questions involving the use of the PSFM, including the variation of calibration across the exposure field and the advantages of using field-center or full-field calibrations. We describe a method of easily measuring the uniformity of NA and PC across the exposure field. These new tools are then applied as an aid in lens image-field characterization and tool-to-tool matching. The information gathered is then applied to measure image aberrations and predict CD variation across the image under various conditions of focus. The predictions are validated by a comparison against CD uniformity as measured by a commercial Scanning Electron Microscope. The present work confirmed previous work and recent assumptions that the Zernike diffraction theory of aberration is most appropriate for current stepper lenses, with local image-plane focal variations across the entire field being the major contributor to field CD variations.
In this study we evaluate the focus budget of an i-line stepper, examining the sources of focus erosion for a 0.40 micrometer process. The analysis first examines the best focus of the system as predicted by the several common tools currently used in the industry. Using the overlay focus-monitor, we then determine the value of lens aberrations such as astigmatism and field curvature. The results of lens heating are then examined for the lens using these common techniques. A model describing the focus aberrations is then developed and applied to the data. This model uses data derived from the focus monitor to determine lens errors such as coma, astigmatism and field curvature. Using this model, data gathered over various numerical-aperture and partial-coherence values are evaluated to determine the variation of lens aberrations over their range and the depth of focus. Finally, critical-dimension data gathered using a commercial, automated SEM is used to validate the predictions of the focus model.
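A minimal sketch of a focal-plane model of the kind these two studies describe: best-focus values measured across the field for horizontal and vertical features are fit to tilt and field-curvature terms, with astigmatism taken as the per-point H-V best-focus difference. Coma and higher-order terms are omitted here, and all names are our assumptions.

```python
import numpy as np

def fit_focal_plane(x, y, focus_h, focus_v):
    """x, y: field coordinates; focus_h, focus_v: best focus measured with
    horizontal and vertical features at each point (1-D numpy arrays)."""
    x, y, focus_h, focus_v = map(np.asarray, (x, y, focus_h, focus_v))
    mean_f = 0.5 * (focus_h + focus_v)      # astigmatism-free best focus
    r2 = x**2 + y**2
    A = np.column_stack([np.ones_like(x), x, y, r2])
    coef, *_ = np.linalg.lstsq(A, mean_f, rcond=None)
    astig = focus_h - focus_v               # per-point astigmatism estimate
    return {"piston": coef[0], "tilt_x": coef[1], "tilt_y": coef[2],
            "curvature": coef[3]}, astig
```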
Equipment for pattern generation in the semiconductor industry has long been a critical part of the manufacturing process. As the industry matured and finer features were required on larger exposure fields, the complexity of these machines grew. The requirements of calibration soon taxed the ability of statistics to understand the complexities of, and control required by, the tools. In response, the industry adopted an analytical method of data analysis. Mathematical models describing the operation, design and behavior of these tools were developed. These models are called machine models, and herein is contained a critical review of the technology to date. The development of machine models is shown from a historical standpoint. First, the concepts of metrology and a clear explanation of the analytical method are presented, followed by the development of modeling and how it was influenced by the automation of metrology in the mid-1980s. The later sections describe models and their application to machine characterization and matching. These are followed by recent concepts in model building and evaluation. In doing so, it is shown how the concept of machine models is critical to the continued advancement of lithography and equipment development. A final section presents some previously unpublished work on the determination of machine precision. The work is supported by an appendix of derivations for the basic model elements.
The considerations that drive an expert system for assisting in measurement-system characterization are described. The expert system employs several novel techniques for evaluating the integrity of a characterization analysis by determining the degree to which critical assumptions are satisfied and flagging weak points in the data-collection or analysis procedure. The properties of good characterization sampling plans are derived. Methods for formulating reliable characterization studies are described. The paper focuses on short-term studies intended for equipment comparisons and calibrations; however, with minor alterations the approach can be expanded to include longer-term stability studies.
Overlay, or pattern registration, is considered by some to be the most yield-critical metrology element monitored in the semiconductor manufacturing process. Over the years, the aggressive demands of competitive chip design have constantly maintained these specifications at the process-capability limit. This has driven the lithographer from somewhat simple process-control techniques, like optically read verniers, to computer-automated overlay measurement systems whose outputs are applied to the estimation and correction of full-field systematic error sources, primarily as modeled wafer and lens pattern distortions. When modeled pattern distortions are used to optimize the lithographic overlay process, the point measurement of registration error is no longer the parameter of interest. Instead, the lithographer wishes to measure and minimize the surface-modeled pattern distortions such as translation, rotation, and magnification. Yet often neglected is the fact that estimates of these parameters are influenced by measurement-system errors, resulting in a loss of precision in the estimate of the distortions and the false introduction of otherwise nonexistent distortions, leading to improper determination of the true values for the lens. This paper describes the results of a screening simulation designed to determine the relative effects of measurement-system errors on the distortion-coefficient estimates produced by a pattern-distortion model. The simulation confirms the somewhat obvious result that tool-induced shift (TIS) translates directly into the estimate of the offset term of the model. In addition, the simulation indicates that errors in the measurement system's pixel-scale calibration directly scale all distortion estimates by the same factor. The variance of the measurement system sums with the variance of the stepper and inflates the standard error of the regression as well as the uncertainty of each lens parameter's estimate. Higher-order nonlinearities or systematic errors in the response of the registration measurement system do not translate directly into distortion-coefficient estimates; rather, they also inflate the uncertainty associated with each distortion's estimate. Heuristic analytical considerations are presented which explain the behavior observed in the simulation and are used to demonstrate that these conclusions apply to the general case.
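The two headline results can be reproduced with a minimal one-dimensional sketch, assuming synthetic data with a known magnification perturbed by TIS and a pixel-scale calibration error; all values and units are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-10_000, 10_000, 200)    # field positions (um)
true_mag = 2e-6                          # 2 ppm magnification distortion
dx_true = true_mag * x                   # distortion (um)

tis = 0.005                              # tool-induced shift (um)
pixel_scale_err = 1.01                   # 1% pixel-scale calibration error
dx_meas = pixel_scale_err * dx_true + tis + rng.normal(0, 0.002, x.size)

mag_est, offset_est = np.polyfit(x, dx_meas, 1)
print(offset_est)            # ~0.005: TIS maps directly into the offset term
print(mag_est / true_mag)    # ~1.01: scale error multiplies the magnification
```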
A new figure of merit, the critical dimension capability factor, or CDC, is described. The CDC incorporates measurements taken over a range of linewidths and over a range of process variations that simulate normal and extreme process operating conditions. Under these conditions CDC uniquely quantifies the capability of the measurement instrument on a given substrate and for a given set of parameter settings. CDC is calculated by performing a linear regression between measurements generated by the instrument under test (IUT) and a set of reference values (internally generated standard values). The mean square error (MSE) between the regression line and the observed values is then partitioned into components that estimate the contribution to the MSE from various sources, based on a rigorous statistical analysis. The final CDC value is defined as the linewidth-to-uncertainty ratio and is a function of the uncertainty introduced in the characterization procedure as well as the uncertainty introduced when the IUT makes a measurement in practice. Since the CDC is a function of the overall uncertainty in the measurements of the IUT relative to the reference values, it can legitimately be compared from one instrument to another and used to evaluate alternative measurement methods and technologies.
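A minimal sketch of a CDC-style computation follows, assuming paired reference and IUT linewidth readings. It reproduces only the regression and the linewidth-to-uncertainty ratio; the rigorous partitioning of the MSE into component sources is not attempted here, and the names are ours.

```python
import numpy as np

def cdc(reference_nm, iut_nm):
    """Regress IUT readings on reference values and return the ratio of the
    mean linewidth to the residual uncertainty (RMSE) of the fit."""
    reference_nm, iut_nm = map(np.asarray, (reference_nm, iut_nm))
    slope, intercept = np.polyfit(reference_nm, iut_nm, 1)
    residuals = iut_nm - (slope * reference_nm + intercept)
    rmse = np.sqrt(np.mean(residuals**2))
    return np.mean(reference_nm) / rmse   # larger is better
```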
How ready are we to efficiently use the new metrics of film thickness, side-wall angle, profile, line-edge roughness and focus? These are all variables familiar to us, but never before have they been provided so abundantly and in so many formats. A decade ago many lithographers addressed the needs of production and process development with little or no automated metrology. Today it is common to have six or more types of metrology available for providing the raw data needed to control operations and adjust for product or process changes.
Lithographers work in an industry that has lived by the precepts of statistical control. This course addresses advanced techniques for production control and tool characterization/matching using focus, overlay and feature-profile models. Models and their interactions for metrology, aberrations, process window and distortions will be addressed. Through theory, statistics and real-data examples we will consider when to apply each, as well as the advantages and potential pitfalls of each technique. The course addresses and develops process, metrology and spatial models to measure these interactions from a control standpoint, using both classic metrics and new techniques in focus, scatterometric and CD-SEM data implementation.
SC110: Model-based Exposure Tool Characterization, Matching, and APC Methods
How ready are you, as a lithographer, to meet the demands of a technology confronted by increasing technical challenges and dwindling operational margins? Optical lithography will carry the industry through the first decade of the new millennium. New tools such as variable numerical aperture, partial coherence, active lens stacks and individual control of scan, platen, chuck and optics tilt will allow the engineer to fine-tune stepper and scanner performance as never before. Yet these new tools also contribute unexpected interactions in the focus budget, aberrations and distortions present in lithography. Real-time control of the production margin is best managed by a combination of experience-based models and performance simulation from sparse-data metrology. But what criteria are best employed in these environments, and how can the engineer control production without control-loop runaway? The next decade will require the lithographer to clearly understand key areas: equipment setup, tuning, calibration and matching; lens distortions that influence optical-train performance and multiple-tool matching; focal-plane aberrations and their influence on critical-dimension control; and the influence of the process and reticle tool-set on aberration and production control. This course addresses advanced techniques for lithography equipment characterization, optimization and matching. Through theory, statistics and real-data examples we will consider when to apply each, as well as the advantages and potential pitfalls of each technique.