We now arrive at the 45nm technology node, which should be the first generation of immersion microlithography. The brand-new lithography tools cause many optical effects that could be ignored at the 90nm and 65nm nodes to now have a significant impact on pattern transfer from design to silicon. Among all these effects, one that deserves attention is the mask pellicle's impact on critical-dimension variation. With the deployment of hyper-NA lithography tools, the approximation that light traverses the mask pellicle at normal incidence no longer holds, and the image blurring induced by the pellicle should be taken into account in computational microlithography. In this work, we investigate how the mask pellicle affects the accuracy of the OPC model, and we show that, given the extremely tight critical-dimension control specification of the 45nm node, including the mask pellicle effect in the OPC model has become necessary.
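The oblique-incidence mechanism can be illustrated with the standard plane-parallel-plate displacement formula (a first-order sketch only; the thickness, index, and angles below are illustrative placeholders, not measured pellicle parameters):

```python
import math

def pellicle_lateral_shift(thickness, n, theta):
    """Lateral ray displacement through a plane-parallel pellicle film of
    the given thickness (nm) and refractive index, for incidence angle
    theta (radians). Standard plate-displacement formula; the angle
    dependence is what smears the image at hyper-NA, since rays arrive
    over a cone of angles rather than at normal incidence.
    All parameter values used with this sketch are illustrative."""
    theta_r = math.asin(math.sin(theta) / n)   # Snell refraction into the film
    return thickness * math.sin(theta - theta_r) / math.cos(theta_r)
```

At normal incidence the shift vanishes, which is why the effect could be ignored at lower NA; across a hyper-NA cone, the shift varies with angle and blurs the image.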
A trade-off that all OPC engineers face every day is between the convergence of the OPC result and the control of the number of OPC iterations. In theory, infinitely many OPC iterations are needed to achieve a convergent, stable correction result. In practice there must always be a cut-off on the iteration count, because turnaround time is an important criterion for IC fabs. But as design layouts become more complicated and pattern density increases with shrinking critical dimensions, fragmentation control during the OPC procedure is also becoming more and more sophisticated. Achieving a convergent correction result for all OPC fragments within a limited number of correction iterations has therefore become a major challenge for OPC engineers. This work presents our study of a new OPC iteration-control methodology. It helps to find an algorithm that always converges, and reduces the excessive parameter settings, commands, and other involvement required of the user. With this, we can reduce the run time required to obtain a convergent OPC solution.
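The convergence trade-off above can be illustrated with a minimal damped-feedback loop (a toy sketch, not the production algorithm; the linear `simulate` stand-in and all constants are assumptions):

```python
def run_opc_iterations(target, simulate, max_iters=20, gain=0.5, tol=0.1):
    """Toy damped-feedback OPC loop: shift each fragment edge against its
    edge-placement error (EPE) until every fragment converges or the
    iteration budget is exhausted. `simulate` maps a mask edge position
    to its printed position (stand-in for a lithography simulation)."""
    edges = list(target)                 # current mask edge positions
    for it in range(max_iters):
        epe = [simulate(e) - t for e, t in zip(edges, target)]
        if max(abs(x) for x in epe) < tol:
            return edges, it             # all fragments converged
        # damped correction: a smaller gain trades speed for stability
        edges = [e - gain * x for e, x in zip(edges, epe)]
    return edges, max_iters              # budget exhausted (cut-off)
```

With a gain below 1, each iteration shrinks the EPE geometrically, which is the sense in which an "always convergent" setting trades iteration count against stability.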
Low-k1 photolithography increases the complexity of RET applications in IC designs. As the technology node shrinks, pattern density grows much denser along with much smaller geometry dimensions. Model-based OPC (Optical Proximity Correction) and post-OPC verification require more complex models and through-process-window compensation approaches, which significantly increase the computational burden. Both the lithographic challenges and the computational complexity associated with 45nm processes and below create a need for advanced capabilities in commercial OPC tools. To answer these challenges, hardware-accelerated OPC solutions have made their debut to solve runtime bottlenecks, but they came with very expensive price tags. To date, there has been no exploration of the linkage between design styles and layout-pattern OPC.
This paper introduces a new OPC flow with a pattern-centric approach that leverages OPC knowledge of repeated design cells and patterns to achieve fast full-chip OPC convergence, shorter cycle time, and better OPC quality, eventually leading to higher manufacturing yield. In this paper, the main concepts of the pattern-based OPC flow are demonstrated on 65nm customer memory designs. Pattern-based OPC is a natural extension of Anchor's pattern-centric approaches in DFM (Design for Manufacturing).
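The core idea of reusing OPC knowledge for repeated patterns can be sketched as a correction cache keyed on translation-normalized geometry (illustrative only; real pattern matching must also handle hierarchy, orientation, and proximity context):

```python
import hashlib

def pattern_key(polygon):
    """Canonical key for a layout pattern: coordinates normalized to the
    pattern's own origin, so translated copies hash identically.
    `polygon` is a tuple of (x, y) vertex pairs."""
    x0 = min(x for x, y in polygon)
    y0 = min(y for x, y in polygon)
    norm = tuple((x - x0, y - y0) for x, y in polygon)
    return hashlib.sha1(repr(norm).encode()).hexdigest()

def opc_with_pattern_cache(polygons, correct):
    """Run the expensive correction once per unique pattern and reuse the
    cached result for every repeated instance, which is how repeated
    memory cells can cut full-chip runtime."""
    cache, out, calls = {}, [], 0
    for poly in polygons:
        key = pattern_key(poly)
        if key not in cache:
            cache[key] = correct(poly)   # expensive model-based OPC call
            calls += 1
        out.append(cache[key])
    return out, calls
```

For memory designs, where one bit cell repeats millions of times, the number of expensive correction calls collapses to the number of unique patterns.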
Within the past several years, IC design and manufacturing technology has transitioned rapidly from 0.13um to 65nm and 45nm. Whatever the technology node, the common goal on which both designers and manufacturers expend most of their effort is raising chip yield as high as possible. A large body of evidence has shown that final yield is strongly related to the pattern transfer from design to wafer. But as critical dimensions shrink, the biggest challenge the whole industry faces is maintaining high fidelity while transferring the patterns. Since the process window is now very limited even with the assistance of various resolution enhancement technologies, a tiny process deviation may cause a large critical-dimension variation, which results in a significant change in device characteristics. Microlithography combined with optical proximity correction (OPC) is arguably the most critical stage of pattern transfer. But conventional OPC always uses a nominal model, which does not take random process variation into account when applying the correction. This work demonstrates our experiments with OPC using a process-window model, which is then shown to deliver an obvious improvement in pattern fidelity.
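The difference between nominal-only and process-window correction can be sketched as follows (a toy model; the `simulate` stand-in and the focus/dose corners are assumptions, not the paper's calibrated model):

```python
def pw_opc(target, conditions, simulate, iters=20, gain=0.5):
    """Process-window OPC sketch: instead of correcting only at the
    nominal condition, evaluate the edge-placement error (EPE) at every
    (focus, dose) corner and centre the mask edge so the extreme EPEs
    across the window are balanced. `simulate(edge, focus, dose)` returns
    the printed edge position and stands in for a litho simulation."""
    edge = target
    for _ in range(iters):
        epes = [simulate(edge, f, d) - target for f, d in conditions]
        # move by the midpoint of the EPE spread: balances the best and
        # worst corners rather than zeroing the nominal EPE alone
        edge -= gain * (max(epes) + min(epes)) / 2.0
    return edge
```

A nominal-only correction would zero the EPE at (0, 0) focus/dose and leave the corners unbalanced; centering over the window is one simple way to trade nominal fidelity for robustness.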
As design rules shrink rapidly, full-chip robust Optical Proximity Correction (OPC) inevitably takes longer due to increasing pattern density. Furthermore, achieving a perfect OPC control recipe becomes more difficult, because the critical dimensions of the design features are deeply sub-wavelength, leaving only limited room for the OPC correction. Usually very complicated fragmentation commands need to be developed to handle the shrinking designs, and these can become arbitrarily complicated. Even after debugging a sophisticated fragmentation script, one still cannot promise that the script is universal across all kinds of design. So when hotspots are found after applying OPC to a given design, the only remedy has been to modify the fragmentation script and re-apply OPC. But considering the growing time needed for full-chip OPC nowadays, re-applying OPC inevitably prolongs the tape-out schedule. We demonstrate here an approach through which simple hotspots such as pinching and bridging can be fixed automatically, so that re-running OPC on the full chip is no longer necessary. However, this work is only an early study of the auto-fixing of post-OPC hotspots; there is still a long way to go before a complete solution to this issue is available.
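A minimal sketch of the detect-and-locally-repair idea follows (the 40nm limits and the 2nm step are invented placeholders, not fab rules, and a real repair loop must re-simulate the neighborhood after each nudge):

```python
PINCH_MIN = 40.0    # nm: minimum allowed printed line width (assumed spec)
BRIDGE_MIN = 40.0   # nm: minimum allowed printed space (assumed spec)

def classify_hotspots(segments):
    """Scan simulated contour measurements and tag pinch/bridge hotspots.
    Each segment is (kind, value): kind 'width' or 'space', value in nm."""
    tags = []
    for i, (kind, value) in enumerate(segments):
        if kind == "width" and value < PINCH_MIN:
            tags.append((i, "pinch"))
        elif kind == "space" and value < BRIDGE_MIN:
            tags.append((i, "bridge"))
    return tags

def local_fix(segments, tags, step=2.0):
    """Repair only the flagged sites: widen a pinched line, open up a
    bridging space. Untouched segments keep their existing OPC result,
    which is why a full-chip re-run is unnecessary."""
    fixed = list(segments)
    for i, tag in tags:
        kind, value = fixed[i]
        limit = PINCH_MIN if tag == "pinch" else BRIDGE_MIN
        while value < limit:
            value += step        # nudge the local fragments outward
        fixed[i] = (kind, value)
    return fixed
```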
A more robust Optical Proximity Correction (OPC) model is increasingly required as integrated circuits' CDs (critical dimensions) shrink. Generally, a large amount of wafer data for line-end features needs to be collected for modeling. Scanning electron microscope (SEM) images are a source of vast 2D information, so adding SEM-image calibration to the current model flow is preferable. This paper presents a method using Mentor Graphics' Calibre SEMcal and ContourCal to integrate SEM calibration into the model flow. First, a simulated contour is generated and aligned with the SEM image automatically. Second, the contour is edited, for example by fixing gaps, and CD measurement sites are also applied to obtain a more accurate contour. Last, the final contour is extracted and fed into the model flow, and EPE is calculated from the SEM-image contour. Thus a more stable and robust OPC model is generated. SEM calibration can accommodate structures such as asymmetrical CDs, line-end pullback, and corner rounding, and it saves considerable time on measuring line-end wafer CDs.
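Once a contour is extracted, the EPE at a gauge can be computed roughly as the signed offset from the design edge to the nearest contour point along the gauge normal (a simplified sketch; the 2nm search band is an assumed tolerance, and the tool's actual algorithm is not reproduced here):

```python
def epe_at_gauge(gauge, contour):
    """EPE at a measurement gauge: signed distance from the design target
    edge to the nearest extracted SEM-contour point, measured along the
    gauge's outward normal. `gauge` is ((x, y), (nx, ny)) with a unit
    normal; `contour` is a list of (x, y) points."""
    (gx, gy), (nx, ny) = gauge
    best = None
    for cx, cy in contour:
        along = (cx - gx) * nx + (cy - gy) * ny        # signed offset on normal
        across = abs(-(cx - gx) * ny + (cy - gy) * nx)  # distance off the normal
        # 2nm band around the gauge line: assumed search tolerance
        if across < 2.0 and (best is None or abs(along) < abs(best)):
            best = along
    return best
```

A positive value means the printed contour sits outside the target edge along the normal; feeding these signed EPEs into the fit is what replaces discrete line-end CD measurements.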
All OPC model builders are in search of a physically realistic model that is adequately calibrated and contains information that can be used for process prediction and analysis of a given process. But some physics of the process remains unknown, and wafer data sets are not perfect. In most cases, even using the average values of different empirical data sets still carries inaccurate measurements into the model-fitting process, which makes the fitting more time-consuming and may also cost convergence and stability.
Image quality is one of the most worrisome obstacles facing next-generation lithography. Nowadays, considerable effort is devoted to enhancing contrast, as well as to understanding its impact on devices. This is a persistent problem for 193nm microlithography, which will carry us through at least three more generations, culminating in immersion lithography.
This work weights different wafer data points with a weighting function. The weighting function depends on the normalized image log slope (NILS), which reflects image quality. Using this approach, we can filter out erroneous process information and make the OPC model more accurate.
CalibreWorkbench is the platform used in this study; it has proven to deliver excellent performance in 0.13um, 90nm, and 65nm production and development model setups. Leveraging its automatic optical-tuning function, we determined the best weighting approach to achieve the most efficient and convergent tuning flow.
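A minimal sketch of NILS-dependent weighting in a least-squares fit follows (the quadratic weight is one plausible choice for illustration, not the weighting function used in the study):

```python
def nils_weighted_fit(gauges):
    """Weighted least-squares fit of a single model bias term. Each gauge
    is (measured_cd, simulated_cd, nils); low-NILS points (poor image
    quality, hence noisy metrology) get small weight, so they barely
    pull the fit. Weight = NILS^2 is an illustrative choice."""
    num = den = 0.0
    for measured, simulated, nils in gauges:
        w = nils * nils
        num += w * (measured - simulated)
        den += w
    return num / den   # weighted mean residual = fitted bias term
```

In an unweighted fit, a single low-contrast outlier can drag the bias far from the well-imaged gauges; the weight suppresses exactly those points.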
To achieve advanced contact-layer printing, there are always two key factors to be handled: resolution and the through-pitch common process window. Among all solutions, the most common approach is off-axis illumination (OAI) + attenuated phase-shift mask (att-PSM) + sub-resolution assist features (SRAF). With the adequately high numerical aperture (NA) and OAI settings of leading-edge scanners, resolution should not be a problem, while even with att-PSM + SRAF the through-pitch common photo process window still leaves much to be desired. This phenomenon is due to the existence of forbidden pitches: under a given illumination condition, there always exists a pitch range that has no spacing for SRAF insertion while contrast is still poor, so some special treatment is needed to enhance image quality.
This invention and study uses a special SRAF, which we call the DAF (diagonal sub-resolution assist feature), to enhance process performance at forbidden pitches. The main methodology is to select the so-called forbidden-pitch structures from the whole database and apply our DAF rules; then apply conventional sub-resolution assist feature (CAF) rules to the post-DAF full-chip database; and finally apply the OPC treatment. With this approach, we demonstrate excellent results on a 65nm contact layer, showing no forbidden pitch and a sufficiently large through-pitch common photo process window with simple OAI (ArF, 0.82NA, 1/2 annular) + att-PSM + SRAF.
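The rule-routing step of the flow can be sketched as a pitch classifier (the pitch thresholds below are invented placeholders, not the actual 65nm rule table):

```python
# Assumed rule table for illustration only (nm); real values come from
# litho simulation of the chosen OAI condition, not from this sketch.
FORBIDDEN = (250.0, 330.0)   # pitch range with poor contrast, no room for CAF

def assign_assist(pitch):
    """Route each pitch to the proper assist-feature rule set: dense
    pitches need none, forbidden pitches get diagonal assists (DAF),
    looser pitches get conventional assists (CAF)."""
    if pitch < FORBIDDEN[0]:
        return "none"        # dense: resolved by the OAI setting directly
    if pitch < FORBIDDEN[1]:
        return "DAF"         # forbidden pitch: diagonal assist features
    return "CAF"             # loose: conventional assist features fit

def sraf_flow(pitches):
    """Flow order matters: DAF rules first, then CAF on the post-DAF
    database, then OPC (the OPC step is omitted in this sketch)."""
    return [(p, assign_assist(p)) for p in pitches]
```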
The most important task of the microlithography process is to make the manufacturable process latitude/window, including dose latitude and depth of focus (DOF), as wide as possible. Thus, performing a thorough source optimization during process development becomes more critical as we move to high-NA technology nodes. Furthermore, optical proximity correction (OPC) is always used to provide a common process window for structures that would otherwise have no overlapping windows. But as the critical dimensions of IC designs shrink dramatically, the flexibility for applying OPC also decreases, so a robust microlithography process should also be OPC-friendly. This paper demonstrates our work on illumination optimization during process development. The Calibre ILO (Illumination Optimization) tool was used to perform the illumination optimization and to provide plots of DOF versus various parametric illumination settings. These were used to screen the illumination settings for the one with optimum process margins. The resulting illumination conditions were then implemented and analyzed at real wafer level on our 90/65nm critical layers, such as Active, Poly, Contact, and Metal. In conclusion, based on these results, a summary is provided highlighting how OPC can benefit from proper illumination optimization.
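The screening step can be sketched as a grid search over annular-illumination parameters (the DOF metric is passed in as a stand-in for the lithography simulation; all settings and values below are illustrative, not Calibre ILO's interface):

```python
def optimize_illumination(dof_of, sigma_outs, ratios):
    """Grid-search sketch of illumination optimization: evaluate a DOF
    metric over annular settings (outer sigma, inner/outer ratio) and
    return the setting with the widest depth of focus, along with the
    full table behind a DOF-vs-settings plot. `dof_of(so, r)` stands in
    for a full lithography simulation."""
    best, table = None, []
    for so in sigma_outs:
        for r in ratios:
            dof = dof_of(so, r)
            table.append((so, r, dof))       # data for the screening plot
            if best is None or dof > best[2]:
                best = (so, r, dof)
    return best, table
```

The returned table is what gets plotted for screening; the `best` tuple is the condition carried forward to wafer-level verification.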
With the advent of advanced process technologies such as 65nm and below, designs become more and more sensitive to manufacturing-process variation. Though complicated design rules can guarantee process margin for most layout environments, some layouts that pass DRC still have narrow process windows. An effective layout-optimization approach based on Litho Friendly Design (LFD), one of Mentor Graphics' products, was introduced to enhance design-layout manufacturability. In addition to process-window models and production-proven Optical Proximity Correction (OPC) recipes, LFD design kits are also generated and needed; with these kits and rules, a design that passes their checks should have no process-window issues. Finally, full-chip OPC and post-OPC checks against process variation were applied to a real 65nm product metal layer. Some narrow-process-window layouts were detected and identified, then optimized for a larger process window based on the advice provided by LFD. Both simulation and in-line data showed that the DOFs were improved after the layout optimization, without changing the area, timing, or power of the original design.
In our continued pursuit to keep up with Moore's Law, we are encountering ever lower k1 factors, resulting in increased sensitivity to lithography/OPC-unfriendly designs, mask-rule constraints, and OPC setup-file errors such as bad fragmentation, sub-optimal site placement, and poor convergence during the OPC application process. While the process has become ever more sensitive and more vulnerable to yield loss, the costs incurred by such losses continue to increase, in the form of higher reticle costs, longer cycle times for learning, increased costs for the lithography tools, and, most importantly, lost revenue from bringing a product to market late. This has resulted in an increased need for virtual-manufacturing tools capable of accurately simulating the lithography process and detecting failures and weak points in the layout, so that they can be resolved before a layout is committed to silicon and/or identified for inline monitoring during wafer manufacturing. This paper outlines a verification flow employed in a high-volume manufacturing environment to identify, prevent, monitor, and resolve critical lithography failures and yield inhibitors, thereby minimizing how much we succumb to the aforementioned semiconductor manufacturing vulnerabilities.
All OPC model builders are in search of a physically realistic model that is adequately calibrated and contains information that can be used for process prediction and analysis of a given process. But some physics of the process remains unknown, and wafer data sets are not perfect. In most cases, even using the average values of different empirical data sets still carries inaccurate measurements into the model-fitting process (as in Fig. 1), which makes the fitting more time-consuming and may also cost convergence and stability. This work weights different wafer data points with a weighting function. The weighting function depends on the deviation (or range, or another statistical index) of each measurable symmetric feature in the sampling space of the model fitting. Using this approach, we can filter out erroneous process information and make the OPC model more accurate (as in Fig. 2). NanoScope-Modeler is the platform used in this study; it has proven to deliver excellent performance in 0.13μm, 90nm, and 65nm production and development model setups. Leveraging its automatic optical-tuning function, we determined the best weighting approach to achieve the most efficient and convergent tuning flow.
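A minimal sketch of deviation-based weighting follows (inverse-variance weights from repeated measurements of the same gauge; one plausible statistical index among those mentioned above, not the exact scheme used in the study):

```python
def inverse_variance_weights(repeats):
    """Per-gauge weights from repeated measurements of the same feature:
    a gauge whose repeats scatter widely (large deviation) is strongly
    down-weighted, so one bad measurement set cannot dominate the model
    fit. `repeats` is a list of lists of repeated CD measurements."""
    weights = []
    for values in repeats:
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / len(values)
        weights.append(1.0 / (var + 1e-9))   # guard against zero variance
    total = sum(weights)
    return [w / total for w in weights]      # normalized to sum to 1
```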
In this paper, we evaluate and investigate techniques for performing fast full-chip post-OPC verification using a commercial product platform. A number of databases from several technology nodes, i.e., 0.13um, 0.11um, and 90nm, are used in the investigation. Although our OPC technology has proven robust in general for most cases, given the variety of tape-outs with complicated design styles and technologies, it is difficult to develop a "complete" or "bullet-proof" OPC algorithm that would cover every possible layout pattern. In the evaluation, among dozens of databases, model-based post-OPC checking found errors in some OPC databases that could have cost significantly in manufacturing: the reticle, the wafer process, and, more importantly, production delay. From such full-chip OPC database verification, we have learned that optimizing OPC models and recipes on a limited set of test-chip designs may not provide sufficient coverage across the range of designs to be produced in the process, and fatal errors (such as pinching or bridging), poor CD distribution, or process-sensitive patterns may still occur. As a result, more than one reticle tape-out cycle is not uncommon to prove models and recipes that approach the center of the process for a range of designs. We therefore describe a full-chip pattern-based simulation verification flow that serves both OPC model and recipe development and post-OPC verification after production release of the OPC. Last, we discuss how the new pattern-based tool differs from conventional edge-based verification tools and summarize the advantages of our new tool and methodology: 1) accuracy: superior inspection algorithms, down to 1nm accuracy with the new pattern-based approach; 2) high-speed performance: pattern-centric algorithms give the best full-chip inspection efficiency; 3) powerful analysis capability: flexible error distribution, grouping, interactive viewing, and hierarchical pattern extraction to narrow down to unique patterns/cells.
Model-based Optical Proximity Correction has become standard practice for the 130nm technology node and below. A physically realistic model that is adequately calibrated contains the information that can be used for process prediction and analysis of a given process. But some physics of the process remains unknown, which is why we need to recommend methodologies for implementing calibrated models for low-k1 processes. On the other hand, the line end is one of the most difficult 2D configurations to model and simulate accurately, because of its intrinsically lower/higher localized threshold compared with 1D structures. This problem is quite unavoidable, especially when a constant-threshold modeling approach is maintained. The objective of this study is to provide a methodology for different line-end modeling gauge types and positions while still maintaining constant-threshold modeling. Here, we choose a 0.7NA ArF process empirical dataset for the modeling experiments. Among all gauge types and modeling algorithms, the gauge placed off-center by 10% of the main-feature line width, with a constant-threshold model, has the best overall performance due to:
1) Quick, convergent model-fitting time;
2) Best common fitting, simulation, and correction results;
3) More stability than a variable-threshold model.
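The constant-threshold model underlying the gauge comparison can be sketched as a fixed-threshold crossing of the sampled aerial image (illustrative values; linear interpolation between samples):

```python
def threshold_cd(intensity, xs, threshold):
    """Constant-threshold model sketch: the printed edge sits wherever
    the sampled aerial-image intensity crosses a single fixed threshold;
    the CD of a dark line is the distance between the two crossings,
    located by linear interpolation between adjacent samples."""
    crossings = []
    for i in range(len(intensity) - 1):
        a, b = intensity[i], intensity[i + 1]
        if (a - threshold) * (b - threshold) < 0:   # sign change = an edge
            t = (threshold - a) / (b - a)
            crossings.append(xs[i] + t * (xs[i + 1] - xs[i]))
    # need two edges for a CD; return None if the feature did not resolve
    return crossings[1] - crossings[0] if len(crossings) >= 2 else None
```

The line-end difficulty described above arises because the same fixed threshold must also locate 2D edges whose local intensity profile differs from the 1D case.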