The benefits of using a run-to-run control system for overlay and CD control have been well documented. However, before any of these benefits can be achieved, the run-to-run control system must first be integrated into the existing automation and manufacturing execution system (MES) environment. Integration details that are overlooked during the planning stages often create unnecessary challenges down the road that can delay reaching advantageous control results. INFICON has developed a novel methodology for documenting process and integration requirements. This method, termed Use Case Review, brings together the appropriate resources from the supplier and the customer to review and customize a predetermined set of documents that describe the run-to-run controller. Each use case contains a flow diagram and a detailed sequence of transactions documenting the actors (Automation PC, Process Equipment, MES, etc.) and variables (Lot ID, Process Level ID, Recipe ID, etc.) involved. The combined set of use cases covers all aspects of integrating a lithography run-to-run controller. During the implementation of NVS ARGUS, TOWER Semiconductor Ltd. benefited from use case review and customization.
Traditional semiconductor manufacturing relies on a fixed process recipe combined with classic statistical process control to monitor the production process. Leading edge manufacturing processes continue to require increasingly stringent critical dimension and overlay control, which in turn demands innovative methods for process control. Meeting tighter process specifications, while maintaining productivity, dictates implementation of Advanced Process Control (APC) methods. An active control method exercised in APC enables the user to modify recipe variables in order to compensate for various disturbances such as drift or step changes in tool operation, or in the conditions of incoming product. The automated version of this control methodology is termed Run-to-Run (R2R) control. R2R control systems compensate for many of the dynamic issues that stand in the way of high level tool dependability, leading to benefits such as compensation for process variation, improved overlay control, rework reduction, reduction in the use of send-ahead wafers, and increased exposure tool availability.
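To make the R2R idea concrete, the sketch below implements a basic single-variable EWMA feedback controller of the kind commonly used in run-to-run control. The class name, smoothing weight, and zero-error target are illustrative assumptions, not the specific controller described in this work:

```python
class EwmaController:
    """Minimal single-variable run-to-run (R2R) EWMA feedback sketch.

    Model assumption: measured_error = disturbance + applied_correction.
    The controller smooths its disturbance estimate and sets the next
    correction to cancel it.
    """

    def __init__(self, lam=0.5):
        self.lam = lam     # EWMA weight: higher = faster, noisier response
        self.a_hat = 0.0   # smoothed estimate of the process disturbance
        self.u = 0.0       # recipe correction applied to the next lot

    def update(self, y):
        # y: measured overlay error of the lot run with correction self.u.
        # The disturbance is what the process added on top of the correction.
        self.a_hat = self.lam * (y - self.u) + (1 - self.lam) * self.a_hat
        self.u = -self.a_hat  # drive the expected error of the next lot to zero
        return self.u
```

Run against a constant step disturbance, the correction converges to cancel it, which is the compensation-for-drift behavior described above.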
For R2R systems, the integrity of the data from metrology tools is critical. In an automated Fab environment, data is fed directly from measurement tools into databases, where it is used to generate feedback corrections on subsequent production material. Metrology measurements are often based on pattern recognition at the measurement site. Therefore, problems with pattern recognition can lead to flyer data, which in turn may impact the quality of data used in the feedback loop. Using operators to inspect and approve each measurement is costly.
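One common way to guard such a feedback loop against flyer data without per-measurement operator inspection is a robust outlier filter. The sketch below uses a median/MAD rule with an illustrative threshold; it is an assumption for illustration, not a specific tool's filter:

```python
import statistics

def remove_flyers(measurements, k=3.5):
    """Drop measurements more than k robust sigmas from the median.

    Median/MAD filtering is robust because a few flyers barely move the
    median, unlike a mean/standard-deviation rule. The cutoff k=3.5 is
    an illustrative choice.
    """
    med = statistics.median(measurements)
    mad = statistics.median(abs(m - med) for m in measurements)
    if mad == 0:
        return list(measurements)        # degenerate case: no spread
    robust_sigma = 1.4826 * mad          # MAD-to-sigma factor for normal data
    return [m for m in measurements if abs(m - med) <= k * robust_sigma]
```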
In a foundry environment, where multiple products are manufactured, an additional challenge is introduced. Historical data used to generate feedback for a given product can be out of date with respect to the current tool status. Routine Preventive Maintenance (PM) procedures may require updating machine constant values that are related to overlay performance. In these cases, the R2R controller should be reset and a new send-ahead wafer should be used.
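A minimal sketch of this reset behavior, assuming a feedback loop keyed by (tool, product, level); the key structure, EWMA step, and function names are all illustrative assumptions:

```python
# Hypothetical R2R feedback state keyed by (tool, product, level).
state = {}

def feedback(key, measured_error, lam=0.3):
    """Crude EWMA step: nudge the stored correction against the error."""
    prev = state.get(key, 0.0)
    state[key] = (1 - lam) * prev - lam * measured_error
    return state[key]

def reset_after_pm(tool, send_ahead=None):
    """Discard history for a tool after a PM changes machine constants.

    Pre-PM history is no longer representative, so each affected stream
    is dropped and, when available, re-seeded from a send-ahead wafer
    measurement ({key: error}).
    """
    for key in [k for k in state if k[0] == tool]:
        del state[key]
    if send_ahead:
        for key, error in send_ahead.items():
            feedback(key, error)
```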
At Tower, a R2R control system, which provides overlay process corrections, was integrated into the production environment. Overlay performance metrics were monitored before and after system introduction to show the benefit of R2R control. Additional work was done to characterize the performance benefit of introducing advanced data filters and tool PM data into the same R2R control system. Results from the additional work show how effectively identifying and removing outliers can improve data integrity, and how tool PM data can be used to appropriately respond to step functions following exposure tool PM adjustments.
With each new technology node, there is a corresponding tightening of the overlay requirements. To achieve these requirements in production there is increasingly a need to apply APC strategies in order to control overlay. However, in order to control overlay successfully using such APC strategies, it is critical to have a thorough understanding of all the sources of overlay error, both grid and intrafield, that contribute to the total overlay budget. Without this thorough understanding, it becomes difficult to establish whether the APC strategy is actually reducing the sources of overlay variation, or in the worst case, actually responsible for their increase. In this paper we present an analysis of the sources of overlay error for three ASML step and scan tools, rank their relative significance and develop a methodology for controlling them by means of an APC strategy. The analysis is based on data collected over a period of more than four months using a baseline monitor. Stability is monitored both with and without feedback corrections from an APC system, in order to optimize the APC strategy. From the analysis we propose a knowledge-based APC methodology, using feedback optimization, for overlay control of ASML step and scan exposure tools.
Many state-of-the-art fabs are operating with increasingly diversified product mixes. For example, at Cypress Semiconductor, it is not unusual to be concurrently running multiple technologies and many devices within each technology. This diverse product mix significantly increases the difficulty of manually controlling overlay process corrections. As a result, automated run-to-run feedforward-feedback control has become a necessary and vital component of manufacturing.
However, traditional run-to-run controllers rely on highly correlated historical events to forecast process corrections. For example, the historical process events typically are constrained to match the current event for exposure tool, device, process level and reticle ID. This narrowly defined process stream can result in insufficient data when applied to low-volume or new-release devices.
The run-to-run controller implemented at Cypress utilizes a multi-level query (Level-N) correlation algorithm, where each subsequent level widens the search criteria for available historical data. The paper discusses how best to widen the search criteria and how to determine and apply a known bias to account for tool-to-tool and device-to-device differences. Specific applications include offloading lots from one tool to another when the first tool is down for preventive maintenance, utilizing related devices to determine a default feedback vector for new-release devices, and applying bias values to account for known reticle-to-reticle differences. In this study, we will show how historical data can be leveraged from related devices or tools to overcome the limitations of narrow process streams. In particular, this paper discusses how effectively handling narrow process streams allows Cypress to offload lots from a baseline tool to an alternate tool.
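A Level-N widening query of this kind can be sketched as a cascade of match constraints, each level dropping one discriminator. The field names, level ordering, and minimum-lot threshold below are illustrative assumptions, not the Cypress implementation:

```python
# Each level constrains fewer fields; a known bias (reticle, device, or
# tool) would be applied when the corresponding field is dropped.
LEVELS = [
    ("tool", "device", "level", "reticle"),  # Level 1: fully constrained
    ("tool", "device", "level"),             # Level 2: any reticle
    ("tool", "level"),                       # Level 3: related devices
    ("level",),                              # Level 4: any tool
]

def query_history(history, context, min_lots=3):
    """Return (level_number, matches) for the first level with enough data.

    history: list of dicts with keys tool/device/level/reticle/correction.
    context: dict describing the current lot.
    """
    for n, keys in enumerate(LEVELS, start=1):
        matches = [h for h in history
                   if all(h[k] == context[k] for k in keys)]
        if len(matches) >= min_lots:
            return n, matches
    return None, []
```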
Driven by overlay shrinks and increasing product diversification in advanced fabs, automatic control of correctable overlay coefficients has become critical to semiconductor manufacturing. Although numerous reports have shown the compelling benefits of automatic run-to-run feedback control, one important issue has received very little attention to date. In many state-of-the-art fabs, reticle to wafer alignment is performed against marks that were printed at the first- or zero-level, whereas overlay is still measured between a target level and one or two reference levels. In many cases, perturbations of the reference level are unknown at the time of target level exposure. In this study, we will show how the perturbations of the reference level can impact overlay controllability at cascading levels (levels where overlay is measured against the reference level, but exposure tool alignment is done to the zero level). We will also show that once the perturbation is understood, it can be accounted for at the time of exposure, thus presenting an opportunity for additional overlay improvement.
Numerous reports have shown consistent evidence that automated run-to-run feedback control of overlay correctable coefficients will provide clear benefits when applied to fabs with steady-stream process flows that are associated with low part count WIP profiles. When the same methods are deployed in unsteady-flow, higher part count operations the results have been mixed. Within these high part count operations, some process flow streams show improvement while others do not. Attempts at optimizing the feedback loop have failed to achieve desirable results for all process streams - some process streams would benefit while others would lose ground. In this study, we will show how a structural change in the run-to-run control algorithm provided a breakthrough in both performance and understanding of the underlying system dynamics. The first step was to recognize the fundamental difference between reticle-sourced overlay errors versus tool- and process-sourced errors. The recognized difference was that the reticle-sourced errors were highly stable over long periods of time, thus enabling deconvolution of reticle effects from the higher frequency tool and process effects. The second step was to recognize that the frequent reticle changes that occur in a high part count fab could be modeled as a feedforward disturbance rather than as discriminates in defining and dividing process streams.
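The structural idea can be sketched as follows: learn a slowly varying bias per reticle and feed it forward on top of the stream's feedback, instead of splitting streams by reticle ID. The variable names, the slow-EWMA estimator, and the sign convention are assumptions for illustration:

```python
# Hypothetical per-reticle bias table; reticle errors are assumed stable,
# so the bias is learned with a deliberately slow EWMA.
reticle_bias = {}

def correction_for(lot_reticle, stream_feedback):
    """Feedforward the known reticle bias on top of the stream feedback."""
    return stream_feedback - reticle_bias.get(lot_reticle, 0.0)

def learn_reticle_bias(lot_reticle, residual, lam=0.05):
    """Slowly update the reticle bias from the lot residual.

    The small lam reflects the observation that reticle-sourced errors
    change little over long periods, unlike tool/process effects.
    """
    prev = reticle_bias.get(lot_reticle, 0.0)
    reticle_bias[lot_reticle] = (1 - lam) * prev + lam * residual
```

With the reticle term removed from the stream's residuals, the per-stream feedback can then track only the higher frequency tool and process effects.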
Overlay lot disposition algorithms in lithography occupy some of the highest leverage decision points in the microelectronic manufacturing process. In a typical large-volume sub-0.18 µm fab the lithography lot disposition decision is made about 500 times per day. Each decision will send a lot of wafers either to the next irreversible process step or back to rework in an attempt to improve unacceptable overlay performance. In the case of rework, the intention is that the reworked lot will represent better yield (and thus more value) than the original lot and that the enhanced lot value will exceed the cost of rework. Given that the estimated cost of reworking a critical-level lot is around $10,000 (based upon the opportunity cost of consuming time on a state-of-the-art DUV scanner), we are faced with the implication that the lithography lot disposition decision process impacts up to $5 million per day in decisions. That means that a 1% error rate in this decision process represents over $18 million per year in lost profit for a representative site. Remarkably, despite this huge leverage, the lithography lot disposition decision algorithm usually receives minimal attention. In many cases, this lack of attention has resulted in the retention of sub-optimal algorithms from earlier process generations and a significant negative impact on the economic output of many high-volume manufacturing sites. An ideal lot-dispositioning algorithm would result in the best economic decision being made every time: lots would only be reworked where the expected value (EV) of the reworked lot minus the expected value of the original lot exceeds the cost of the rework: EV(reworked lot) - EV(original lot) > COST(rework process). Calculating the above expected values in real-time has generally been deemed too complicated and maintenance-intensive to be practical for fab operations, so a simplified rule is typically used.
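The ideal decision rule stated above is a one-line comparison; the sketch below restates it directly, with dollar figures chosen as illustrative assumptions around the ~$10,000 rework cost cited above:

```python
def should_rework(ev_reworked, ev_original, rework_cost):
    """Ideal disposition rule from the text:
    rework only when EV(reworked lot) - EV(original lot) > COST(rework)."""
    return ev_reworked - ev_original > rework_cost

# Illustrative numbers (assumptions): a lot worth $80k as-is, worth $95k
# after rework, against a $10k rework cost -> rework is justified.
```

The hard part, as the text notes, is not this comparison but estimating the two expected values in real time, which is why simplified threshold rules are typically used instead.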