Since the beginnings of optical lithography simulation, the mask, though it actually has finite thickness, has been treated as purely two-dimensional; in other words, the mask thickness is assumed to be infinitesimally small. This is a good approximation for the real mask when the critical dimension is relatively large and the numerical aperture (NA) of the optical imaging system is smaller than 0.7, because the image distortion induced by the mask thickness and profile is then negligible. Even with the infinitely thin mask approximation described by the Kirchhoff approach, accurate simulation results can be achieved. However, for higher-NA microlithography processes, the polarization of the illumination light and the profile of the mask become important factors that are reflected in the final image formation. Thus, the infinitely thin mask approximation deviates more and more from the real-world process as the technology moves to the 65nm node and beyond.
To describe the 3D mask effect exactly, Maxwell's electromagnetic field equations must be solved. Solving them analytically is clearly impractical, since the pattern on the mask can be extremely complicated. Fortunately, there are many algorithms through which numerical solutions of Maxwell's equations can be obtained. We have developed a tool based on the finite-difference time-domain (FDTD) method, introduced by K. S. Yee in 1966. A second-order approximation of Mur's absorbing boundary condition is used to improve the convergence of the calculation. We demonstrate simulation results from this tool, including how the mask profile (footing, undercutting, and so on) affects the final image formed at the wafer level, and compare results from the Kirchhoff and FDTD approaches. A brief summary of the 3D mask effect in image formation is also given.
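The sketch below illustrates the Yee leapfrog update and a Mur-type absorbing boundary in the simplest possible setting: a one-dimensional grid in free space with normalized field units and a first-order Mur boundary. The grid size, time step, and source are illustrative assumptions; the tool described above is a full 3D solver with a second-order Mur boundary, so this is only a schematic of the underlying scheme.

import numpy as np

# 1D Yee-grid FDTD sketch in free space (normalized field units).
nz, nt = 400, 1000            # number of cells and time steps (illustrative)
c = 299792458.0               # speed of light (m/s)
dz = 10e-9                    # 10 nm spatial step (illustrative)
dt = dz / (2.0 * c)           # time step giving a Courant number of 0.5
S = c * dt / dz               # Courant number, also used by the Mur boundary

ez = np.zeros(nz)             # electric field samples
hy = np.zeros(nz - 1)         # magnetic field samples, staggered half a cell

coef = (S - 1.0) / (S + 1.0)  # first-order Mur coefficient
for n in range(nt):
    ez1_old, ezm2_old = ez[1], ez[-2]             # boundary-adjacent values before the update
    hy += S * (ez[1:] - ez[:-1])                  # update H from the curl of E
    ez[1:-1] += S * (hy[1:] - hy[:-1])            # update E from the curl of H (interior nodes)
    ez[20] += np.exp(-((n - 60) / 20.0) ** 2)     # soft Gaussian source near the left edge
    ez[0] = ez1_old + coef * (ez[1] - ez[0])      # first-order Mur ABC, left boundary
    ez[-1] = ezm2_old + coef * (ez[-2] - ez[-1])  # first-order Mur ABC, right boundary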
With the advent of advanced process technologies such as 65nm and below, designs become more and more sensitive to manufacturing process variation. Though complex design rules can guarantee process margin for most layout environments, some layouts that pass DRC still have narrow process windows. An effective layout optimization approach based on Litho Friendly Design (LFD), one of Mentor Graphics' products, was introduced to enhance design layout manufacturability. In addition to process window models and production-proven Optical Proximity Correction (OPC) recipes, LFD design kits are also generated and required; with these kits and rules, a design that passes the corresponding checks should have no process window issues. Finally, full-chip OPC and post-OPC checks across process variation were applied to a real 65nm product metal layer. Some narrow process window layouts were detected and identified, then optimized for a larger process window based on the guidance provided by LFD. Both simulation and in-line data showed that the DOF was improved after the layout optimization without changing the area, timing, or power of the original design.
In our continued pursuit to keep up with Moore's Law, we are encountering lower and lower k1
factors, resulting in increased sensitivity to lithography- and OPC-unfriendly designs, mask rule
constraints, and OPC setup file errors such as bad fragmentation, sub-optimal site placement, and
poor convergence during the OPC application process. While the process has become ever more
sensitive and more vulnerable to yield loss, the incurred costs associated with such losses are
continuing to increase in the form of higher reticle costs, longer cycle times for learning, increased
costs associated with the lithography tools, and most importantly lost revenue due to bringing a
product to market late. This has resulted in an increased need for virtual manufacturing tools that
are capable of accurately simulating the lithography process and detecting failures and weak points
in the layout so they can be resolved before committing a layout to silicon and / or identified for
inline monitoring during the wafer manufacturing process. This paper will attempt to outline a
verification flow that is employed in a high volume manufacturing environment to identify, prevent,
monitor and resolve critical lithography failures and yield inhibitors thereby minimizing how much
we succumb to the aforementioned semiconductor manufacturing vulnerabilities.
The most important task of the microlithography process is to make the manufacturable process latitude/window, including dose latitude and depth of focus (DOF), as wide as possible. Thus, performing a thorough source optimization during process development becomes more critical as we move to high-NA technology nodes. Furthermore, Optical Proximity Correction (OPC) is routinely used to provide a common process window for structures that would otherwise have no overlapping windows. But as the critical dimensions of IC designs shrink dramatically, the flexibility for applying OPC also decreases, so a robust microlithography process should also be OPC-friendly. This paper demonstrates our work on illumination optimization during process development. The Calibre ILO (Illumination Optimization) tool was used to perform the illumination optimization and provided plots of DOF versus various parametric illumination settings; these were used to screen the illumination settings for the one with optimum process margins. The resulting illumination conditions were then implemented and analyzed at the real wafer level on our 90/65nm critical layers, such as Active, Poly, Contact, and Metal. In conclusion, based on these results, a summary is provided highlighting how OPC can benefit from proper illumination optimization.
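As an illustration of the screening step, the sketch below sweeps a small grid of annular illumination settings and keeps the one with the largest simulated common DOF. The simulate_dof callable is a hypothetical stand-in for the lithography simulator (Calibre ILO performs this kind of parametric sweep internally), and the NA and sigma grids are illustrative values, not the settings used in this work.

import itertools

def screen_annular_illumination(simulate_dof, patterns, na=0.85,
                                sigma_out_grid=(0.70, 0.80, 0.90),
                                sigma_in_grid=(0.40, 0.50, 0.60)):
    """Return the (DOF, sigma_in, sigma_out) setting with the widest window."""
    best = None
    for s_out, s_in in itertools.product(sigma_out_grid, sigma_in_grid):
        if s_in >= s_out:                                # inner sigma must stay below outer sigma
            continue
        dof = simulate_dof(na, s_in, s_out, patterns)    # common DOF for this setting
        if best is None or dof > best[0]:
            best = (dof, s_in, s_out)
    return best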
To achieve advanced contact layer printing, there are always two key factors that need to be handled: resolution and
through-pitch common process window. Among all solutions, the most common approach is off-axis illumination
(OAI) + attenuated phase-shift mask (att-PSM) + sub-resolution assist features (SRAF). With adequate/high
Numerical Aperture (NA) and OAI settings of the leading edge scanners, the resolution should not be a problem,
while even with att-PSM + SRAF, the through-pitch common photo process window still leaves much to be desired.
This phenomenon is due to the existence of the forbidden pitch: under a given illumination condition, there always
exists a pitch range that has no room for SRAF insertion while its contrast is still poor, and which therefore needs
special treatment to enhance image quality.
This study uses a special SRAF, which we call the DAF (Diagonal sub-resolution Assist Feature), to enhance the
process performance of forbidden pitches. The main methodology, outlined schematically below, is to select the
so-called "forbidden-pitch" structures from the whole database and apply our DAF rules, then apply Conventional
sub-resolution Assist Feature (CAF) rules to the post-DAF full-chip database, and finally apply OPC treatment. With
this approach, we demonstrate excellent results on a 65nm contact layer, showing no forbidden pitch and a sufficiently
large through-pitch common photo process window using simple OAI (ArF, 0.82NA, 1/2Ann.) + att-PSM + SRAF.
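The outline below is a schematic of this flow only; the rule decks and the OPC engine are represented by placeholder callables, and only the ordering of the steps is taken from the description above.

def daf_flow(layout, find_forbidden_pitch, apply_daf_rules, apply_caf_rules, run_opc):
    """Schematic DAF insertion flow; all callables are hypothetical placeholders."""
    forbidden = find_forbidden_pitch(layout)        # 1. isolate forbidden-pitch structures
    layout = apply_daf_rules(layout, forbidden)     # 2. insert diagonal assist features (DAF)
    layout = apply_caf_rules(layout)                # 3. conventional assist features (CAF) on the rest
    return run_opc(layout)                          # 4. final OPC treatment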
All OPC model builders are in search of a physically realistic model that is adequately calibrated and contains the information that can be used for process predictions and analysis of a given process. But some physics in the process remain unknown, and wafer data sets are not perfect. In most cases, even using the average values of different empirical data sets still carries inaccurate measurements into the model fitting process (as in Fig. 1), which makes fitting more time consuming and may also cause loss of convergence and stability. In this work, different wafer data points are weighted with a weighting function that depends on the deviation (or range, or another statistical index) of each measurable symmetric feature in the sampling space of the model fitting. Using this approach, we can filter out erroneous process information and make the OPC model more accurate (as in Fig. 2). NanoScope-Modeler is the platform used in this study; it has proven to perform excellently for 0.13μm, 90nm, and 65nm production and development model setup. Leveraging its automatic optical-tuning function, we applied the best weighting approach to achieve the most efficient and convergent tuning flow.
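A minimal sketch of this weighting idea follows: each gauge's repeated CD measurements are collapsed to a mean, and the fitting cost penalizes residuals by a weight that shrinks as the spread of that gauge grows. The simulate_cd callable stands in for the OPC model evaluation, and the choice of standard deviation as the statistical index, the floor value, and the gauge data layout are assumptions for illustration, not the paper's exact formulation.

import numpy as np

def gauge_weights(repeated_cds, floor_nm=0.5):
    """repeated_cds: list of arrays, one per gauge, of repeated CD readings in nm."""
    sigma = np.array([max(np.std(cds), floor_nm) for cds in repeated_cds])
    return 1.0 / sigma ** 2                        # noisier gauges contribute less to the fit

def weighted_fitting_error(model_params, gauges, weights, simulate_cd):
    """Weighted sum-of-squares cost evaluated during model calibration."""
    predicted = np.array([simulate_cd(model_params, g) for g in gauges])
    measured = np.array([np.mean(g["cds_nm"]) for g in gauges])   # mean CD per gauge (assumed key)
    return float(np.sum(weights * (predicted - measured) ** 2))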
Sub-resolution assist features (SRAFs) are widely used to improve lithographic performance. Rule-based SRAF insertion has worked well for one-dimensional cases but becomes quite complex for arbitrary two-dimensional layouts. In addition, generating the best rules involves a large amount of simulation and empirical data collection. Therefore, model-based SRAF insertion is much more desirable, especially for the 65nm node and below. In this work we use the newly developed pixel inversion method for true model-based SRAF insertion. We extend our work from the contact layer to line/space layers to demonstrate the capability of this method for all critical layers of the 65nm node. The method is used in combination with model-based OPC to achieve the required overlapping process window and CD control. Furthermore, manufacturing issues such as mask writing time and mask inspection are examined and reported.
As the critical dimensions of IC designs decrease dramatically, resolution enhancement technologies have become extremely important for meeting manufacturing yield targets. For the 90nm technology node and below, sub-resolution assist features (SRAFs) are usually employed to enhance the robustness of the microlithography process. SRAF is a powerful methodology for pushing the process limit under given equipment conditions. However, it also has a drawback: it is very hard to check the reasonableness of SRAF placement, especially when SRAFs are applied at full-chip scale. This work demonstrates a model-based approach for full-chip checking of the SRAF insertion rules. First, we capture the lithography process information from real empirical wafer data. Then we check every SRAF location and find any hot spot that risks being printed on the wafer. Based on this approach, we can apply a full-chip check to reduce SRAF printability; furthermore, combined with DRC tools, we can find SRAFs that are inserted unreasonably and modify them.
As the critical dimensions of IC designs decrease dramatically, resolution enhancement technologies have become extremely important for meeting manufacturing yield targets. For the 90nm technology node and below, sub-resolution assist features (SRAFs) are usually employed to enhance the robustness of the microlithography process. SRAF is a powerful methodology for pushing the process limit under given equipment conditions. However, it also has a drawback: it is very hard to predict the printability of the SRAFs, especially when they are applied at full-chip scale. This work demonstrates a new approach to checking SRAF printability at the full-chip level. First, we capture the lithography process information from real empirical wafer data. Then we determine the margin of the conditions under which SRAFs can print on the wafer. Based on all this information, we can apply a full-chip optical rule check (ORC) of SRAF printability. With this approach, the risk of SRAFs printing can be reduced effectively with acceptable runtime.
Model-based OPC has become a standard practice and centerpiece for the 130nm technology node and below, and every model builder is trying to set up a physically realistic, adequately calibrated model that contains the information needed for process predictions and analysis of a given process. But some physics in the process, such as line edge roughness (LER), remain unknown or not well understood. LER is one of the most worrisome non-tool-related obstacles faced by next-generation lithography. Nowadays, considerable effort is devoted to moderating its effects, as well as to understanding its impact on devices. It is a persistent problem for 193nm microlithography, which will carry us for at least three more generations, culminating with immersion lithography. Studies have shown that LER has several sources and forms. It can be quantified by an LER measurement together with a top-down CD measurement, but it also shows up in other ways, such as line breakage resulting from insufficient resist or mask patterning processes, line-width aspect ratio, or topography. Here we collected a large amount of line-width ADI CD data together with the LER of each edge, and show that even using the average value of different datasets carries measurement inaccuracy into the model fitting process, which makes fitting more time consuming and may cause loss of convergence and stability. In this work, different wafer data points are weighted with a weighting function that depends on the LER value of each one-dimensional feature in the sampling space of the model fitting. With this approach, we can filter out erroneous process information and make the OPC model more accurate. Furthermore, we introduce this factor (LER) into the variable threshold modeling parameters and compare the result with other variable threshold model forms.
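The sketch below shows one common way to quantify LER from a top-down measurement and fold it into the gauge weighting described above. The 3-sigma definition of LER and the reciprocal-square weighting are conventional choices assumed here for illustration, not necessarily the exact formulation used in this work.

import numpy as np

def line_edge_roughness_nm(edge_positions_nm):
    """3-sigma deviation of a single edge from its mean position, in nm."""
    return 3.0 * float(np.std(np.asarray(edge_positions_nm)))

def ler_weight(left_edge_nm, right_edge_nm, floor_nm=1.0):
    """Weight for a 1-D gauge: lines with rougher edges get less influence in the fit."""
    ler = max(line_edge_roughness_nm(left_edge_nm),
              line_edge_roughness_nm(right_edge_nm),
              floor_nm)
    return 1.0 / ler ** 2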
This paper presents the results of applying inverse lithography technology (ILT) to SMIC's first 65nm tape-out. ILT mathematically determines the mask features that produce the desired on-wafer results for best pattern fidelity, largest process window, or a desired combination of both. SMIC applied this technology to its first 65nm tape-out to study its performance and benefits for deep sub-wavelength lithography. SMIC selected three SRAM designs as the first set of test cases, because SRAM bit-cells contain lithographically challenging features. First, three experiments were performed to optimize the illumination and mask design of a pair of layers by optimizing exposure energy, enabling SRAFs, and enforcing mask constraints. Second, the mask manufacturability (including fracturing and writing time) and wafer print performance of ILT were studied. Third, mask patterns generated by both conventional Optical Proximity Correction (OPC) and ILT, each using only its optical model, were placed side-by-side on the mask. The results demonstrated that ILT achieved better CD accuracy and produced a significantly larger process window than conventional OPC.
As critical dimensions decrease rapidly, scattering bars are widely implemented to increase the lithographic common process window. However, collecting rules for applying scattering bars is extremely time-consuming, because a huge number of scattering bar split conditions must be considered. The objective of this work is to use a calibrated OPC model to simulate and insert scattering bars for hole layers. Maximum/optimized process margin can be achieved (under a fixed process condition) by calculating the EPE variation due to dose and focus variation at different sets of sub-design-rule assist feature conditions, which we call pseudo process window simulation. A theoretically best condition for applying SRAFs can then be found. Based on this best condition, we can dramatically narrow the search range of the SRAF rules in wafer-level experiments. As a result, technology development cycle time can be shortened dramatically. Finally, the simulation data from our work are shown and compared down to the wafer level.
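The sketch below illustrates this pseudo process window screening: for each candidate SRAF rule, the edge placement error (EPE) is evaluated over a grid of dose and focus conditions, and the rule with the smallest EPE spread is kept. The simulate_epe callable is a hypothetical hook into a calibrated OPC model, and the dose and focus grids are illustrative, not the conditions used in this work.

import itertools
import numpy as np

def pseudo_process_window(sraf_rules, simulate_epe,
                          doses=(0.95, 1.00, 1.05),
                          focuses_um=(-0.10, 0.0, 0.10)):
    """Return the SRAF rule with the smallest EPE spread over the dose/focus grid."""
    best_rule, best_spread = None, float("inf")
    for rule in sraf_rules:
        epes = [simulate_epe(rule, dose, focus)
                for dose, focus in itertools.product(doses, focuses_um)]
        spread = float(np.ptp(epes))          # EPE variation across the grid
        if spread < best_spread:
            best_rule, best_spread = rule, spread
    return best_rule, best_spread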
SMIC is a pure-play IC foundry, and in foundry culture turn-around time is the fabs' foremost concern. Aggressive tape-out schedules require a significant reduction in GDS-to-mask flow run time. The objective of this work is therefore to evaluate the runtime performance of an OPC methodology and integrated mask data preparation flow on a so-called 1-IO-tape-out platform, and in doing so to achieve a fully automated OPC/MDP flow for production. For the evaluation we chose BEOL layers, since they are hit hardest by runtime: in FEOL, some non-critical layers sit between, for example, the Poly and Contact layers, so OPC mask making and wafer schedules are not as tight, whereas in BEOL the critical-layer OPC masks (M2, V2, then M3, V3, and so on) come one after another continuously. The integrated flow we evaluated included four metal layers with model-based OPC and six via layers with rule-based OPC. Our definition of success for this work is a runtime improvement of at least 2x. At the same time, model accuracy cannot be sacrificed, so maintaining equal or better model accuracy and OPC/mask-data output quality is also a must. For MDP, we also tested the advantages of the OASIS format compared with GDS.
In this paper, we evaluated and investigated techniques for performing fast full-chip post-OPC verification using a commercial product platform. A number of databases from several technology nodes, i.e. 0.13um, 0.11um, and 90nm, were used in the investigation. Although our OPC technology has proven robust in most cases, the variety of tape-outs with complicated design styles and technologies makes it difficult to develop a "complete or bullet-proof" OPC algorithm that covers every possible layout pattern. In the evaluation, among dozens of databases, errors were found in some OPC databases by model-based post-OPC checking; such errors could be costly in manufacturing - reticle, wafer processing, and, more importantly, production delay. From this full-chip OPC database verification, we have learned that optimizing OPC models and recipes on a limited set of test chip designs may not provide sufficient coverage across the range of designs to be produced in the process, and fatal errors (such as pinching or bridging), poor CD distribution, and process-sensitive patterns may still occur. As a result, more than one reticle tape-out cycle is not uncommon to prove models and recipes that approach the center of the process for a range of designs. We therefore describe a full-chip pattern-based simulation verification flow that serves both OPC model and recipe development and post-OPC verification after production release of the OPC. Lastly, we discuss the differences between the new pattern-based and conventional edge-based verification tools and summarize the advantages of our new tool and methodology: 1) accuracy: superior inspection algorithms, down to 1nm accuracy with the new pattern-based approach; 2) high-speed performance: pattern-centric algorithms that give the best full-chip inspection efficiency; 3) powerful analysis capability: flexible error distribution, grouping, interactive viewing, and hierarchical pattern extraction to narrow down to unique patterns/cells.
Accurately and efficiently verifying the device layout is a crucial step in semiconductor manufacturing. A single missed design violation carries the potential for a disastrous and avoidable yield loss. Typically, design rule checking (DRC) is accomplished by validating drawn layout geometries against pre-determined rules, the specifics of which are derived empirically or from lithographic first principles. These checks are intrinsically rigid, and, taken together, a set of DRC rules only approximate the manufacturable design space in the crudest manner. Process-specific effects are entirely neglected. But for leading-edge technologies, process variations significantly impact the manufacturability of a design, so traditional DRC becomes increasingly difficult to implement, or worse, speciously inaccurate. Fortunately, the rise of Optical Proximity Correction (OPC) has given manufacturers a means to accurately model optical and process effects, and, therefore, an opportunity to introduce this information into the layout validation flow. We demonstrate an enhanced, full-chip DRC technique, which utilizes process models to locate marginal or bad design features and classify them according to severity.
To shorten the turn-around time and reduce the effort of SRAF insertion and optimization on arbitrary layouts, a new model-based SRAF insertion and optimization flow is developed. It is based on the pixel-based mask optimization technique, which finds the optimal mask shapes that result in the best image contrast. The contrast-optimized mask is decomposed into main features and assist features. The decomposed assist features are then run through a simplification process for shot count reduction to improve mask writing throughput. Model-based Optical Proximity Correction (OPC) is applied last to achieve the pattern fidelity required for the current technology. In this flow, main features and assist features are optimized simultaneously, so that the effects of SRAF optimization and OPC are achieved together. Since the objective of the mask optimization is image fidelity, and no light comes through the assist features (in the dark-field case), the assist features are ensured not to print even at high dose. The results on a 65nm contact layer showed that this approach greatly reduces the total time and effort required for SRAF placement optimization compared to the rule-based method, while delivering better lithographic performance for various layout types.
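The outline below is a schematic of this flow only; each callable is a placeholder for a stage of the real tool chain (pixel-based optimization, decomposition, shot-count simplification, model-based OPC, and a high-dose printability check), and only the ordering of the stages is taken from the description above.

def model_based_sraf_flow(target_layout, optimize_pixels, decompose,
                          simplify_assists, run_opc, prints_at_high_dose):
    """Schematic model-based SRAF flow; all callables are hypothetical placeholders."""
    mask_image = optimize_pixels(target_layout)            # contrast-optimized pixel mask
    main, assists = decompose(mask_image, target_layout)   # split into main and assist features
    assists = simplify_assists(assists)                    # reduce shot count for mask writing
    corrected = run_opc(main, assists)                     # final model-based OPC on main features
    assert not prints_at_high_dose(assists), "assist features must not print at high dose"
    return corrected, assists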
This paper presents SMIC's first 65nm tape-out results, in particular those obtained using ILT. ILT mathematically determines the mask features that produce the desired on-wafer results with the best wafer pattern fidelity, the largest process window, or both. SMIC applied it to its first 65nm tape-out to study ILT performance and benefits for deep sub-wavelength lithography. SMIC selected three SRAM designs as the first test case, because SRAM bit-cells contain lithographically challenging features. Mask patterns generated from both conventional OPC and ILT were placed side-by-side on the mask. The mask manufacturability (including fracturing, writing time, inspection, and metrology) and wafer print performance of ILT were studied. The results demonstrated that ILT achieved better CD accuracy, produced a substantially larger process window than conventional OPC, and met SMIC's 65nm process window requirements.
Using the commercial Calibre OPC platform, optical and process models were built that accurately predict wafer-level phenomena for a sub-90nm poly process. The model fidelity relative to nominal wafer data is excellent, with EPE errors in the range of ±2nm for pitch features and ±7nm for line-end features. Furthermore, these models accurately predict defocus and off-dose wafer data. Overlaying SEM images with model-predicted print images for critical structures shows that the models are stable and accurate, even in areas especially prone to pinching or bridging. In addition, process window ORC is shown to identify potential failure points within representative designs, allowing the mask preparation shop to easily identify these areas within the fractured data. Finally, the data and images of mask hotspots are shown and compared down to the wafer level.
Semiconductor foundries need a single, standard mask preparation procedure to deal with the large number of designs they receive. This data is typically of two sorts: random logic, over which the foundry has little control of how the design intent is represented; and cells from dense arrays such as memory, often with design rule violations, whose OPC correction needs to be precisely optimized to achieve the best yield and device performance. Occasionally the input data will contain sub-resolvable notches and extensions which, while not violating DRC specifications, would result in DRC violations if filled. This may be due to a non-DFM-aware automated layout tool, or a designer aggressively trying to minimize circuit area. In practice it is worthwhile to clean up these notches to ease OPC correction. Doing so should not result in printability errors, as these notches typically represent a more complex curved design intent that cannot be accurately represented because of the limited number of polygon edge directions available for layout. Similarly, memory cell layouts often have significant implied curvature. These may only be corrected properly if the OPC target point is defined precisely for each individual segment. In general, letting the OPC correction engine correct a layout defined by a realistic, curved target shape gives better-quality corrections with a greater process window. The challenge for the OPC engineer working in a foundry is therefore to determine a clean-up methodology for incoming data and to correctly apply the design intent, where necessary, from the original pre-cleanup data. A programmable OPC engine gives the user flexibility in optimizing the set of rules embedded in the OPC cleanup and correction recipes. These embed the algorithms to interpret the rounding of the desired silicon image, not only for the line-ends and corners of random logic but also for the more complex curved silicon images and tolerances required by memory cells.
Extensive usage of litho RET, etch trimming, and OPC techniques has become common practice in the integrated patterning flow for 90nm and beyond. In this paper, we discuss our approach to using OPC for both etch and litho through-pitch bias correction for a 90nm contact layer. Instead of using a conventional lumped model [J.P. Stirniman, M.L. Rieger, SPIE Proc. Optical/Laser Microlithography X, Vol. 3051, p. 294, 1997], we introduce an alternative modeling approach that decomposes the correction into: Corrected Mask Layout = T_mask^-1(T_optical^-1(T_etch^-1(Design Layout))). Post-OPC checking using the Synopsys SiVL platform shows a through-pitch OPC residual error of CD 3σ = 7.82nm. This study also shows that an integrated patterning flow combined with LRC tools is useful for providing feedback to the designer and highlighting patterning process limitations that are design dependent.
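The decomposition above is simply a composition of three inverse transfer stages applied to the design target, as the sketch below makes explicit. The three inverse operators are placeholders for the calibrated mask, optical, and etch model stages; only the order of composition comes from the text.

def corrected_mask_layout(design_layout, inv_etch, inv_optical, inv_mask):
    """Corrected Mask = T_mask^-1( T_optical^-1( T_etch^-1( Design ) ) )."""
    return inv_mask(inv_optical(inv_etch(design_layout)))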
As semiconductor manufacturing moves to the 90nm node and below, shrinking feature sizes and increasing IC complexity have combined to significantly stretch out the time needed to optimize and qualify process-anchored OPC models and recipes. Process distortion and non-linearity become non-trivial issues and conspire to reduce the quality of the resulting corrections. Additionally, optimizing the OPC model and recipe on a limited set of test chip designs may not provide sufficient coverage across the range of designs to be produced in the process. Finally, the increased complexity of the transformation of the target pattern into a corrected mask pattern also increases the probability of systematic lithography errors. Fatal errors (pinching or bridging) or poor CD distribution may still occur. As a result, more than one reticle tape-out cycle is not uncommon to prove models and recipes that approach the center of the process for a range of designs. In this paper, we describe a full-chip simulation-based verification flow, using a commercial product, that serves both OPC model and recipe development and post-OPC verification after production release of the OPC.
As IC design rules shrink dramatically while the wavelength reduction of exposure systems cannot keep up, extensive usage of litho RET, etch trimming, and OPC techniques has become common practice in the integrated patterning flow. We examined a large number of CD measurement datasets of 90nm contact layer ADI and AEI CDs. Because the etch bias is not constant through pitch, the AEI contribution has to be incorporated into the OPC model. Based on these datasets, we developed a non-constant AEI model. In this paper, we investigated various strategies to streamline OPC modeling. A multiple regression method is used to fit the CTR and CTE models. It was revealed that adding an extra long-range loading kernel to a well-fitted ADI model may not meet the fitting criteria we want, mainly because models with too many eigenvectors tend to over-fit and over-correct the CD curves. We introduced an alternative approach that limits the number of parameters in our OPC model algorithm. We achieved a 90nm contact model with an OPC empirical data fitting error within ±2nm. Lastly, the wafer verification datasets showed a through-pitch OPC residual error of only 3σ = 7.82nm using this Constant Threshold Etch model, compared to a simulated residual error 3σ of 8nm.
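As a minimal illustration of the multiple-regression step, the sketch below fits the through-pitch etch bias (AEI CD minus ADI CD) against a small set of geometric descriptors. The descriptors used here (pitch and a local density proxy) are illustrative assumptions only; the actual model uses calibrated kernels with a deliberately limited number of parameters.

import numpy as np

def fit_etch_bias(pitch_nm, local_density, etch_bias_nm):
    """Least-squares fit of bias ~ b0 + b1*pitch + b2*density (inputs are 1-D arrays)."""
    X = np.column_stack([np.ones_like(pitch_nm), pitch_nm, local_density])
    coeffs, *_ = np.linalg.lstsq(X, etch_bias_nm, rcond=None)
    return coeffs

def predict_etch_bias(coeffs, pitch_nm, local_density):
    return coeffs[0] + coeffs[1] * pitch_nm + coeffs[2] * local_density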
Model-based Optical Proximity Correction has become standard practice for the 130nm technology node and below. A physically realistic model that is adequately calibrated contains the information that can be used for process predictions and analysis of a given process. But some physics in the process remain unknown, which is why we need to recommend methodologies for implementing calibrated models for low-k1 processes. On the other hand, the line-end is one of the most difficult 2-D configurations to model and simulate accurately, because of its intrinsically localized lower/higher threshold compared with 1-D structures. This problem is hard to avoid, especially when a constant threshold modeling approach is retained. The objective of this study is to provide a methodology for different line-end modeling gauge types and positions while still maintaining constant threshold modeling. Here, we choose an empirical dataset from a 0.7NA ArF process for the modeling experiments. Among all gauge types and modeling algorithms, the gauge placed off-center by 10% of the main feature line width, combined with the constant threshold model, has the best overall performance due to:
1) quick, convergent model fitting;
2) the best common fitting, simulation, and correction results;
3) greater stability than the variable threshold model.