Fully automated semiconductor manufacturing, becoming a reality with the ramping of 300mm fabricators throughout the world, demands the integration of advanced process control (APC). APC is particularly critical for the lithography sector, whose performance correlates to yield and whose productivity often gates the line.
We describe the implementation of a comprehensive lithography APC system at the IBM Center for Nanoelectronics, a 300mm manufacturing and development facility. The base lithography APC function encompasses closed-loop run-to-run control of exposure tool inputs to sustain overlay and critical dimension outputs consistent with product specifications. Automation demands that no decision regarding the appropriate exposure tool run-time settings be left to human judgment. For each lot, the APC system provides optimum settings based on existing data derived from pertinent process streams. Where insufficient prior data exists, the APC system invokes send-ahead processing, pre-determined defaults, or an appropriate combination of the two.
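The fallback logic described above can be sketched as follows; the function name, lot-count threshold, and scalar settings representation are illustrative assumptions, not IBM's implementation.

```python
# Hypothetical sketch of the lot-by-lot decision: use feedback from the
# process stream when enough history exists, otherwise fall back to
# send-ahead processing or pre-determined defaults.
def choose_settings(prior_lots, defaults, min_lots=3, send_ahead_ok=True):
    if len(prior_lots) >= min_lots:
        # Enough history: feedback average of prior corrections
        return sum(prior_lots) / len(prior_lots), "feedback"
    if send_ahead_ok:
        # Expose a pilot wafer first, then measure before committing the lot
        return defaults, "send-ahead"
    return defaults, "defaults"

settings, mode = choose_settings([21.0, 21.4], defaults=21.0)
```

The key point is that every branch is deterministic: no case is left to operator judgment.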
We give specific examples of the application of APC to stitched field and dose control, and quantify its technical benefits. Field matching of < 0.1 ppm and critical dimension control of < 2.5% are achieved across multiple exposure tools and masks.
This work integrates fundamental models and metrology sensors with state-of-the-art estimation and model-predictive control techniques in order to regulate overlay photolithography errors. Fundamental overlay models are presented that describe the relationship between the photolithography steppers and the metrology sensors. A Kalman filter that combines the process model and the sensor model is employed to automatically estimate uncertain states from metrology measurements. A model-predictive controller is employed that is very effective in rejecting disturbances in the overlay process, such as tool drift and model mismatch. All overlay errors have been driven to zero, within the measurement variance of the metrology tool. This level of control is achieved for every tool-device-layer-reticle combination.
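As a minimal illustration of the estimation step, a scalar Kalman filter tracking a single overlay term under a random-walk drift model might look like the following; the noise values, the random-walk assumption, and the data are illustrative, not the paper's actual multi-state model.

```python
# Minimal sketch, assuming a scalar random-walk drift model: a Kalman
# filter tracking one overlay term (e.g., x translation) from noisy
# metrology readings. Q (drift noise) and R (metrology noise) are
# illustrative values.
def kalman_update(x_hat, P, z, Q=1e-4, R=4e-4):
    P_pred = P + Q                     # predict: drift adds variance
    K = P_pred / (P_pred + R)          # Kalman gain
    x_new = x_hat + K * (z - x_hat)    # correct with measurement z
    P_new = (1.0 - K) * P_pred         # updated estimate variance
    return x_new, P_new

x_hat, P = 0.0, 1.0                    # diffuse initial state
for z in [0.012, 0.010, 0.013, 0.011]: # overlay readings (illustrative)
    x_hat, P = kalman_update(x_hat, P, z)
```

The estimate converges toward the measurement cluster while the estimate variance shrinks toward the metrology noise floor, which matches the paper's observation that errors are driven to zero within the measurement variance.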
Many state-of-the-art fabs are operating with increasingly diversified product mixes. For example, at Cypress
Semiconductor, it is not unusual to be concurrently running multiple technologies and many devices within each
technology. This diverse product mix significantly increases the difficulty of manually controlling overlay process
corrections. As a result, automated run-to-run feedforward-feedback control has become a necessary and vital
component of manufacturing.
However, traditional run-to-run controllers rely on highly correlated historical events to forecast process corrections.
For example, the historical process events typically are constrained to match the current event for exposure tool, device,
process level and reticle ID. This narrowly defined process stream can result in insufficient data when applied to low-volume or new-release devices.
The run-to-run controller implemented at Cypress utilizes a multi-level query (Level-N) correlation algorithm, where
each subsequent level widens the search criteria for available historical data. The paper discusses how best to widen the
search criteria and how to determine and apply a known bias to account for tool-to-tool and device-to-device differences.
Specific applications include offloading lots from one tool to another when the first tool is down for preventive
maintenance, utilizing related devices to determine a default feedback vector for new-release devices, and applying bias
values to account for known reticle-to-reticle differences. In this study, we will show how historical data can be
leveraged from related devices or tools to overcome the limitations of narrow process streams. In particular, this paper
discusses how effectively handling narrow process streams allows Cypress to offload lots from a baseline tool to an alternate tool.
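A Level-N query of this kind can be sketched as follows; the level definitions, data layout, and names are illustrative assumptions, not Cypress's production algorithm.

```python
# Hypothetical sketch: widen the historical-data query level by level
# until enough matching lots are found.
HISTORY = [
    {"tool": "T1", "device": "D1", "level": "M1", "reticle": "R1", "corr": 0.8},
    {"tool": "T1", "device": "D1", "level": "M1", "reticle": "R2", "corr": 0.7},
    {"tool": "T2", "device": "D2", "level": "M1", "reticle": "R3", "corr": 0.5},
]

# Each subsequent query level drops a constraint, widening the stream.
LEVELS = [
    ("tool", "device", "level", "reticle"),  # Level 0: exact context
    ("tool", "device", "level"),             # Level 1: any reticle
    ("tool", "level"),                       # Level 2: related devices
    ("level",),                              # Level 3: any tool
]

def query(context, min_lots=1):
    for n, keys in enumerate(LEVELS):
        hits = [h for h in HISTORY
                if all(h[k] == context[k] for k in keys)]
        if len(hits) >= min_lots:
            return n, hits
    return None, []

# A new-release device with no exact history falls through to Level 2,
# borrowing data from related devices on the same tool and level.
level, hits = query({"tool": "T1", "device": "D3",
                     "level": "M1", "reticle": "R9"})
```

A known tool-to-tool or reticle-to-reticle bias would then be applied to the borrowed data before use, as the paper describes.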
With each new technology node comes a corresponding tightening of the overlay requirements. To achieve these requirements in production, there is an increasing need to apply APC strategies to control overlay. However, to control overlay successfully using such APC strategies, it is critical to have a thorough understanding of all the sources of overlay error, both grid and intrafield, that contribute to the total overlay budget. Without this understanding, it becomes difficult to establish whether the APC strategy is actually reducing the sources of overlay variation or, in the worst case, is responsible for their increase. In this paper we present an analysis of the sources of overlay error for three ASML step-and-scan tools, rank their relative significance, and develop a methodology for controlling them by means of an APC strategy. The analysis is based on data collected over a period of more than four months using a baseline monitor. Stability is monitored both with and without feedback corrections from an APC system, in order to optimize the APC strategy. From the analysis we propose a knowledge-based APC methodology, using feedback optimization, for overlay control of ASML step-and-scan exposure tools.
Control of registration (overlay error between printed layers) is a key aspect of successfully manufacturing semiconductors. At Intel, registration control was formerly achieved through manual adjustments of the tool to account for the known effects of non-stationary drift. The objective of the stepper registration control (SRC) project was to create a robust algorithm and automated implementation to replace the manual adjustment process. This goal was accomplished at Intel by developing an automated product called SRC. At the heart of the SRC application is the SRC feedback algorithm. At the stepper, alignment settings are adjusted to correct for non-stationary drift. The SRC algorithm uses a weighted average of registration data from previous lots to determine the recommended alignment settings. The novel scheme weights prior lots using a combination of traditional EWMA-based weighting and variance-based weighting. After piloting and comparing the results against the manual algorithm, the SRC application has been shown to be at least as good as the manual algorithm. Thus the SRC application is being used by all 300mm Intel factories. Since high-volume manufacturing (HVM) factories cannot staff the same level of frequent manual adjustment, the benefits of reduced rework rate and increased process capability are more pronounced in HVM.
Modern lithographic manufacturing processes rely on various types of exposure tools, used in a mix-and-match fashion. The motivation to use older tools alongside state-of-the-art tools is lower cost and one of the tradeoffs is a degradation in overlay performance. While average prices of semiconductor products continue to fall, the
cost of manufacturing equipment rises with every product generation. Lithography processing, including the cost of ownership for tools, accounts for roughly 30% of wafer processing costs; hence the importance of mix-and-match strategies.
Exponentially Weighted Moving Average (EWMA) run-by-run controllers are widely used in the semiconductor manufacturing industry. This type of controller has been implemented successfully in volume manufacturing, improving Cpk values dramatically in processes like photolithography and chemical mechanical planarization.
This simple, but powerful control scheme is well suited for adding corrections to compensate for Overlay Tool Bias (OTB). We have developed an adaptive estimation technique to compensate for overlay variability due to differences in the processing tools.
The OTB can be dynamically calculated for each tool, based on the most recent measurements available, and used to correct the control variables. One approach to tracking the effect of different tools is adaptive modeling and control. The basic premise of an adaptive system is to change or adapt the controller as the operating
conditions of the system change. Using closed-loop data, the adaptive control algorithm estimates the controller parameters using a recursive estimation technique. Once an updated model of the system is available, model-based control becomes feasible. In the simplest scenario, the control law can be reformulated to include the
current state of the tool (or its estimate) to compensate dynamically for OTB. We have performed simulation studies to predict the impact of deploying this strategy in production. The results for high running parts show rework reductions of about 10%, while low running parts improve by over 50%.
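A minimal sketch of such per-tool bias tracking, assuming a simple EWMA estimator (the class, parameter names, and numbers are hypothetical, not the deployed algorithm):

```python
# Illustrative sketch: per-tool Overlay Tool Bias (OTB) tracked with an
# EWMA of the residual between measured and predicted overlay, updated
# from the most recent measurements available.
class ToolBiasEstimator:
    def __init__(self, lam=0.3):
        self.lam = lam
        self.bias = {}          # tool id -> current OTB estimate

    def update(self, tool, measured, predicted):
        residual = measured - predicted
        prev = self.bias.get(tool, 0.0)
        self.bias[tool] = self.lam * residual + (1 - self.lam) * prev
        return self.bias[tool]

    def correction(self, tool):
        # Subtract the estimated OTB from the control variables
        return -self.bias.get(tool, 0.0)

est = ToolBiasEstimator(lam=0.3)
for m in [1.0, 1.1, 0.9, 1.05]:    # tool "A" runs consistently high
    est.update("A", measured=m, predicted=0.0)
```

As the operating conditions of a tool change, its stored bias adapts, so the control law can be corrected dynamically for each tool.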
Traditional run-to-run controllers that rely on highly correlated historical events to forecast process corrections have been shown to provide substantial benefit over manual control in the case of a fab that is primarily manufacturing high-volume, frequently running parts (i.e., DRAM, MPU, and similar operations). However, a limitation of the traditional controller emerges when it is applied to a fab whose work in process (WIP) is composed primarily of short-running, high part count products (typical of foundries and ASIC fabs). This limitation exists because there is a strong likelihood that each reticle has a unique set of process corrections different from other reticles at the same process layer. Further limitations exist when it is realized that each reticle is loaded and aligned differently on multiple exposure tools. A structural change in how the run-to-run controller manages the frequent reticle changes associated with the high part count environment has allowed breakthrough performance to be achieved. This breakthrough was made possible by two realizations: (1) reticle-sourced errors are highly stable over long periods of time, allowing them to be deconvolved from the day-to-day tool and process drifts; and (2) reticle-sourced errors can be modeled as a feedforward disturbance rather than as discriminators in defining and dividing process streams. In this paper, we show how to deconvolve the static (reticle) and dynamic (day-to-day tool and process) components from the overall error vector to better forecast feedback for existing products, as well as how to compute or learn these values for new product introductions or new tool startups. Manufacturing data will be presented to support this discussion, with some real-world success stories.
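The static/dynamic deconvolution idea can be sketched as follows; the decomposition into a per-reticle long-horizon mean plus an EWMA of the corrected residual is an illustrative simplification, not the paper's exact method.

```python
import statistics

# Sketch (assumed decomposition): total error = static reticle term
# + dynamic tool/process term + noise. The reticle term, being stable,
# is estimated as a long-horizon mean; the dynamic term as an EWMA of
# the reticle-corrected residual.
def decompose(history, lam=0.4):
    """history: time-ordered list of (reticle_id, measured_error)."""
    by_reticle = {}
    for r, e in history:
        by_reticle.setdefault(r, []).append(e)
    # Static component: per-reticle mean over the full history
    reticle_bias = {r: statistics.mean(v) for r, v in by_reticle.items()}
    # Dynamic component: EWMA of the reticle-corrected residual
    dyn = 0.0
    for r, e in history:
        dyn = lam * (e - reticle_bias[r]) + (1 - lam) * dyn
    return reticle_bias, dyn

# Two reticles with very different static offsets share one drift signal
bias, drift = decompose([("R1", 2.1), ("R2", -0.9), ("R1", 2.3), ("R2", -0.7)])
```

Once separated, the reticle term is fed forward for any lot using that reticle (including on a new tool), while only the small dynamic term needs day-to-day feedback.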
Automated process control loops running in semiconductor manufacturing facilities must be able to compensate for machine variations as well as identify differences between products. Given the number of exposure tools, pattern levels, and active devices manufactured in a typical ASIC fab, the photo APC system must maintain thousands of control loops. Control loop context information in TI's Semiconductor Manufacturing System (SMS) is defined in a hierarchical fashion, which allows default values in a manufacturing specification to be overridden for particular products or lots. ProcessWORKS APC software automatically adds new control loops for new devices and pattern levels to a defined model structure. This system scales well and supports hundreds of thousands of control loops from a single database server in TI fabs. Application of product-specific control systems for alignment and exposure control has provided increased exposure capacity due to decreased rework and setup time, a substantial reduction in engineering maintenance, and improved process capability. The APC system has evolved into a requirement for leading-edge photolithography processes at Texas Instruments.
Control of DCCDs (Develop Check Critical Dimensions) is a key aspect of successfully manufacturing semiconductors at Intel. DCCD control was formerly achieved through manual adjustments of the exposure dose on the tool to account for the known effects of non-stationary tool/process drift. An automated application, EFCC (Exposure-Focus CD Control), was developed at Intel to create a robust algorithm and automated implementation, replacing the manual adjustment process.
The EFCC algorithm uses DCCD summary measurements as the feedback to the stepper. At the stepper, the exposure setting is adjusted to correct for non-stationary tool/process drift. A weighted average of data from previous lots is used to determine the recommended exposure dose settings. The feedback scheme weights prior lots using a combination of traditional EWMA based weighting and within lot (across sites on wafer) variance based weighting.
The EFCC implementation has benefits including increased Cpk, reduced rework, and continuous adjustment. Furthermore, as this is an automated control solution, it can easily be extended to support more sophisticated adjustment algorithms.
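One plausible form of such a combined weighting, shown purely as an illustrative sketch (the functional form, names, and numbers are assumptions, not Intel's EFCC algorithm):

```python
# Sketch of a combined weighting scheme: each prior lot's weight is the
# product of an EWMA age factor and an inverse within-lot-variance
# factor, so recent, consistent lots dominate the recommendation.
def recommended_dose(lots, lam=0.5):
    """lots: oldest-to-newest list of (dose_equivalent, within_lot_var)."""
    weights = []
    for age, (_, var) in enumerate(reversed(lots)):
        ewma_w = lam * (1 - lam) ** age        # newer lots weigh more
        weights.append(ewma_w / (var + 1e-9))  # noisier lots weigh less
    weights.reverse()                          # back to oldest-first order
    total = sum(weights)
    return sum(w * d for w, (d, _) in zip(weights, lots)) / total

# An old, noisy lot contributes far less than two recent, tight lots
dose = recommended_dose([(25.0, 0.20), (25.4, 0.05), (25.2, 0.05)])
```

Combining the two factors protects the feedback both from stale history (via the EWMA term) and from lots with poor across-site uniformity (via the variance term).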
Advanced integrated metrology capability is actively being pursued in several process areas, including etch, to shorten process cycle times, enable wafer-level advanced process control (APC), and improve productivity. In this study, KLA-Tencor's scatterometry-based iSpectra Spectroscopic CD was integrated on a Lam 2300 Versys Star silicon etch system. Feed-forward control techniques were used to reduce critical dimension (CD) variation. Pre-etch CD measurements were sent to the etch system to modify the trim time and achieve targeted CDs. CDs were brought to within 1 nm from a starting CD spread of 25 nm, showing the effectiveness of this process control approach together with the advantages of spectroscopic CD metrology over conventional CD measurement techniques.
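In code, the feed-forward step might look like the following minimal sketch; the linear trim model, the rate value, and the clamp limits are assumptions for illustration, not Lam's actual recipe interface.

```python
# Hypothetical sketch of the feed-forward step: the pre-etch CD from
# integrated scatterometry is fed to the etcher, and the trim time is
# set from an assumed linear trim model (trim_rate in nm/s) to hit the
# target CD.
def trim_time(pre_etch_cd_nm, target_cd_nm, trim_rate_nm_s=0.5,
              t_min=0.0, t_max=60.0):
    excess = pre_etch_cd_nm - target_cd_nm
    t = excess / trim_rate_nm_s
    return min(max(t, t_min), t_max)   # clamp to the valid tool range

t = trim_time(pre_etch_cd_nm=92.0, target_cd_nm=80.0)  # 12 nm excess
```

Because each wafer gets its own trim time, a wide incoming CD spread collapses toward the target rather than merely being shifted by a lot-level average, which is what enables the 25 nm to 1 nm reduction reported above.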
This paper presents a new first principles thermal model to predict wafer temperatures within a hot-wall Low Pressure Chemical Vapor Deposition (LPCVD) furnace based on furnace wall temperatures as measured by thermocouples. This model is based on an energy balance of the furnace system with the following features:
(a) the model is a transformed linear model which captures the nonlinear relationship between the furnace wall temperature distribution and the wafer temperature distribution, (b) the model can be solved with a direct algorithm instead of iterative algorithms used in all existing thermal models, eliminating potential problems with convergence and local minima related to optimization, and (c) finite area to finite area methods are applied to calculate configuration factors, avoiding the implementation difficulties of numerical integration. The simplicity of the model form makes the model useful for model based run-to-run control. The model predictions agree with experimental data very well. The sensitivity of wafer temperatures to furnace wall temperatures is given
analytically. A more uniform wafer temperature profile is obtained via optimization.
In recent years, run-to-run (R2R) control technology has received tremendous interest in semiconductor manufacturing. One class of widely used run-to-run controllers is based on the exponentially weighted moving average (EWMA) statistics to estimate process deviations. Using an EWMA filter to smooth the control action on a
linear process has been shown to provide good results in a number of applications. However, for a process with severe drifts, the EWMA controller is insufficient even when large weights are used. This problem becomes more severe when there is measurement delay, which is almost inevitable in the semiconductor industry. In order to control
drifting processes, a predictor-corrector controller (PCC) and a double-EWMA controller have been developed. Chen and Guo (2001) show that both the PCC and the double-EWMA controller are in effect integral-double-integral (I-II) controllers, which are able to control drifting processes. However, since the offset is often within the noise of the process, the second integrator can actually cause jittering. In addition, tuning the second filter is not as intuitive as tuning a single EWMA filter. In this work, we examine an alternative method, recursive least squares (RLS), to estimate and control the drifting process. EWMA and double-EWMA are shown to be the least squares estimates for the locally constant mean model and the locally constant linear trend model, respectively. Recursive least squares with an exponential forgetting factor is then applied to a shallow trench isolation etch process to predict the future etch rate. The etch process, which is a critical process in flash memory manufacturing, is known to suffer from significant etch rate drift due to chamber seasoning. In order to handle the metrology delay, we propose a new time update scheme. RLS with the new time update method gives very good results: the estimation error variance is smaller than that from EWMA, and the mean squared error decreases by more than 10% compared to that from EWMA.
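As a sketch of the estimation idea only (not the paper's time update scheme), RLS with an exponential forgetting factor fitting a locally linear trend to past etch rates might look like:

```python
import numpy as np

# Sketch: recursive least squares with a forgetting factor fitting a
# locally linear trend (rate = a + b*run) to predict the next etch rate
# of a drifting process. Data values are illustrative.
def rls_predict(rates, forget=0.9):
    theta = np.zeros(2)              # [intercept a, slope b]
    P = np.eye(2) * 1e3              # large (diffuse) initial covariance
    for k, y in enumerate(rates):
        x = np.array([1.0, float(k)])
        K = P @ x / (forget + x @ P @ x)     # gain vector
        theta = theta + K * (y - x @ theta)  # parameter update
        P = (P - np.outer(K, x @ P)) / forget
    # Predict the etch rate at the next run
    return float(theta @ np.array([1.0, float(len(rates))]))

pred = rls_predict([100.0, 100.5, 101.1, 101.4, 102.0])  # drifting rates
```

Because the trend slope is estimated explicitly, the one-step-ahead prediction tracks a drifting rate without the second integrator (and its jitter) that double-EWMA introduces.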
Process Control Systems (PCS) are becoming more crucial to the success of Integrated Circuit makers due to their direct impact on product quality, cost, and Fab output. The primary objective of PCS is to minimize variability by detecting and correcting non-optimal performance. Current PCS implementations are considered disparate, where each PCS application is designed, deployed, and supported separately. Each implementation targets a specific area of control such as equipment performance, wafer manufacturing, and process health monitoring. With Intel entering the nanometer technology era, tighter process specifications are required for higher yields and lower cost. This requires areas of control to be tightly coupled and integrated to achieve optimal performance. This requirement can be achieved via consistent design and deployment of the integrated PCS. PCS integration will result in several benefits such as leveraging commonalities, avoiding redundancy, and facilitating sharing between implementations. This paper will address PCS implementations and focus on benefits and requirements of the integrated PCS. Intel's integrated PCS architecture will then be presented and its components will be briefly discussed. Finally, industry direction and efforts to standardize PCS interfaces that enable PCS integration will be presented.
This paper takes published improvements in fabricator metrics that result from Advanced Process Control and, applying an International SEMATECH cost model to the results, quantifies the expected economic
impact. By converting the improvements in factory metrics to dollars, they can be compared. The benefits are given by equipment type, and by factory benefit mechanism. The majority of these calculations are
based on Run-to-Run control.
Knowledge-based process control integrates advanced sensors with tool and process models for enhanced fault detection and classification (FDC) performance. Rather than use a statistical or template-based control model, the knowledge-based approach is constructed around core information extracted from the process itself. The approach uses data from an advanced sensor that is known to be tool and process sensitive. In this way, the process itself does much of the data compression, rather than having to rely on statistical algorithms compiled from the tool inputs. Because it works with a knowledge of the tool itself, built through observations of the sensor data as systematic changes are made to tool and process conditions, data is used to construct a fault library upon which the FDC engine is based. A fundamental tool/process health indicator reports any excursions that match those in the library. The fault is detected and classified in real time.
This paper describes the development of a run-to-run control algorithm using a feedforward neural network, trained using the backpropagation training method. The algorithm is used to predict the critical dimension of the next lot using previous lot information. It is compared to a common prediction algorithm - the exponentially weighted moving average (EWMA) and is shown to give superior prediction performance in simulations. The manufacturing
implementation of the final neural network showed significantly improved process capability when compared to the case where no run-to-run control was utilised.
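A minimal sketch of such a predictor, assuming a one-hidden-layer network trained by full-batch backpropagation on toy data (the architecture, data, and hyperparameters are illustrative, not the paper's trained network):

```python
import numpy as np

# Illustrative sketch: a one-hidden-layer feedforward net trained by
# backpropagation to map recent lots' CD values to the next lot's CD.
rng = np.random.default_rng(0)

def train(X, y, hidden=4, lr=0.05, epochs=2000):
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, hidden); b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)              # hidden activations
        pred = X @ W1 @ np.zeros(0) if False else h @ W2 + b2
        err = pred - y                        # backprop of squared error
        gW2 = h.T @ err / len(y); gb2 = err.mean()
        dh = np.outer(err, W2) * (1 - h**2)   # tanh derivative
        gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return W1, b1, W2, b2

# Toy data: next CD is a noisy average of the last three (normalized)
X = rng.normal(0, 1, (64, 3))
y = X.mean(axis=1) + rng.normal(0, 0.05, 64)
W1, b1, W2, b2 = train(X, y)
mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

Unlike EWMA, the network can learn a nonlinear mapping from several past lots at once, which is the source of the prediction advantage reported in the simulations.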
APC system health should be monitored to ensure continuing effectiveness. The monitoring must include both input and output monitors. The proposed indicators of the overall health of the system include metrics to monitor the systematic variation of the output and the relationships of the input adjustments to the output as well as the acceptability of the output. Additional indicators are proposed to track the adjustments made by the APC system. Some indicators should be included to monitor the quality of the data, which is critical to the successful implementation of APC.
Process control in the fab today employs a wide range of techniques to gather data, monitor processes and adjust through
feed-forward and feed-back. This paper proposes that many substantial benefits could be derived from a broad
abstraction of process control statistics algorithms as well as of data collection and distribution, in a manner parallel to how software users benefit from object oriented concepts. The abstracted algorithmic approach is based on statistics fundamentals.
The paper first defines abstraction and discusses the benefits of its application to process control. It then defines a statistics experiment to test EWMA as one example of how a popular contemporary process control practice can misbehave when faced with four specific data attributes. The experiment quantifies the limitations of EWMA and indicates that its performance is greatly enhanced when the more fundamental approach pre-processes its data. EWMA is not being singled out; the results are generalizable to other methods. The last two sections summarize findings and draw conclusions.