State-of-the-art OPC recipes for production semiconductor manufacturing are fine-tuned, often artfully crafted parameter sets designed to achieve design fidelity and maximum process window across the enormous variety of patterns in a given design level. In the typical technology lifecycle, the process for creating a recipe is iterative. In the initial stages, little to no "real" design content is available for testing. Therefore, an engineer may start with the recipe from a previous node, adjust it based on known ground rules and a few test patterns and/or scaled designs, and then refine it based on hardware results. As the technology matures, more design content becomes available to refine the recipe, but it becomes more difficult to make major changes without significantly impacting the overall technology scope and schedule. The dearth of early design information is a major risk factor: unforeseen patterning difficulties (e.g., due to holes in the design rules) are costly when caught late.
To mitigate this risk, we propose an automated flow that is capable of producing large-scale realistic design content, and then optimizing the OPC recipe parameters to maximize the process window for this layout. The flow was tested with a triple-patterned 10nm node 1X metal level. First, design-rule clean layouts were produced with a tool called Layout Schema Generator (LSG). Next, the OPC recipe was optimized on these layouts, with a resulting reduction in the number of hotspots. For experimental validation, the layouts were placed on a test mask, and the predicted hotspots were compared with hardware data.
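As a rough illustration of this flow, the recipe search can be viewed as generating a large sample of layouts and selecting the parameter set that minimizes the predicted hotspot count. The sketch below is self-contained but purely illustrative: the function names and the toy hotspot model are assumptions, not the actual LSG or OPC tool interfaces.

```python
# A minimal, self-contained sketch of the flow described above: generate (toy)
# layout samples, then grid-search OPC recipe parameters to minimize predicted
# hotspots. All names and the toy hotspot model are illustrative, not the
# actual LSG or OPC tool APIs.
import itertools
import random

def generate_toy_layouts(count, seed=0):
    """Stand-in for LSG: each 'layout' is just a random pattern-difficulty score."""
    rng = random.Random(seed)
    return [rng.uniform(0.0, 1.0) for _ in range(count)]

def predict_hotspots(layout_difficulty, recipe):
    """Toy stand-in for OPC + verification: more iterations and finer fragments
    reduce hotspots, with diminishing returns."""
    coverage = recipe["iterations"] / (recipe["iterations"] + 4)
    resolution = 40.0 / (40.0 + recipe["fragment_length_nm"])
    return max(0.0, layout_difficulty - coverage * resolution)

def optimize_recipe(layouts, grid):
    """Exhaustive search over the recipe grid; a real flow could use any optimizer."""
    best = None
    for values in itertools.product(*grid.values()):
        recipe = dict(zip(grid.keys(), values))
        total = sum(predict_hotspots(d, recipe) for d in layouts)
        if best is None or total < best[1]:
            best = (recipe, total)
    return best

layouts = generate_toy_layouts(count=1000)
recipe, hotspot_score = optimize_recipe(
    layouts, {"fragment_length_nm": [20, 30, 40], "iterations": [6, 8, 10]}
)
print(recipe, round(hotspot_score, 1))
```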
Design Technology Co-Optimization (DTCO) becomes more important with every new technology node. Complex patterning issues can no longer wait to be detected experimentally using test sites because of compressed technology development schedules. Simulation must be used to discover complex interactions between an iteration of the design rules and a simultaneous iteration of an intended patterning technology. The problem is often further complicated by an incomplete definition of the patterning space. The DTCO process must be efficient and thoroughly interrogate the legal design space for a technology to be successful. In this paper we present our view of DTCO, called Design and Patterning Exploration. Three emphasis areas are identified and explained with examples: Technology Definition, Technology Learning, and Technology Refinement. The Design and Patterning Exploration flows are applied to a logic 1.3x metal routing layer. Using these flows, yield-limiting patterns are identified faster using random layout generation, and can be ruled out or tracked using a database of problem patterns. At the same time, a pattern that is no longer allowed by the rules should not be considered during OPC tuning. The OPC recipe may then be adjusted for better performance on the legal set of pattern constructs. The entire system is dynamic, and users must be able to access related teams' output for a faster, more accurate understanding of design and patterning interactions. In the discussed example, the design rules and OPC recipe are tuned at the same time, leading to faster design rule revisions as well as improved patterning through more customized OPC and RET.
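To illustrate the idea that patterns ruled out by a design rule revision should stop driving OPC tuning, the following is a minimal sketch; the pattern database schema, rule names, and numeric values are purely hypothetical.

```python
# Illustrative sketch (hypothetical names and values) of keeping the OPC tuning
# set in sync with the current design rules: problem patterns are tracked in a
# simple database, and any pattern no longer rule-legal is excluded from tuning.
problem_patterns = [
    {"id": "tip2tip_28nm", "min_tip_to_tip_nm": 28, "status": "tracked"},
    {"id": "tip2tip_24nm", "min_tip_to_tip_nm": 24, "status": "tracked"},
]

def rule_legal(pattern, rules):
    """A pattern stays in the tuning set only if it satisfies the current tip-to-tip rule."""
    return pattern["min_tip_to_tip_nm"] >= rules["min_tip_to_tip_nm"]

current_rules = {"min_tip_to_tip_nm": 26}  # a revised rule that outlaws the 24nm construct

tuning_set = [p for p in problem_patterns if rule_legal(p, current_rules)]
print([p["id"] for p in tuning_set])  # only 'tip2tip_28nm' remains in the OPC tuning set
```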
Achieving faster Turn-Around-Time (TAT) is one of the most attractive objectives for silicon wafer manufacturers, regardless of the technology node they are processing. This holds for all active technology nodes, from 130nm down to the cutting-edge technologies. Several approaches have been adopted to cut down OPC simulation runtime without sacrificing OPC output quality; among them is the use of stronger CPU power and hardware acceleration, which makes good use of ever-advancing processing technology. Another favorable approach for cutting down the runtime is to look deeper inside the OPC algorithm and the implemented OPC recipe. The OPC algorithm comprises the convergence iterations and the distribution of simulation sites, and the OPC recipe is, in essence, how the OPC knobs are tuned to use the implemented algorithm efficiently. Many previous works have monitored OPC convergence through the iterations and analyzed the size of the edge shift per iteration; similarly, several works have tried to calculate the simulation capacity needed for all these iterations and how to reduce it.
The scope of the work presented here is an attempt to decrease the number of optical simulations by reducing the number of control points per site without affecting OPC accuracy. The concept is supported by extensive simulation results and analysis. Implementing this flow demonstrated the achievable simulation runtime reduction, which is reflected in faster TAT. In application, it is not just a runtime optimization; it also puts more intelligence into the sparse OPC engine by eliminating the headache of specifying the optimum simulation site length.
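As a rough sketch of what pruning control points per site could look like, the snippet below drops points in regions where the sampled intensity profile is nearly flat; the flatness criterion, threshold, and toy profile are assumptions, not the exact method used in this work.

```python
# Hedged sketch: reduce optical simulation work by pruning control points along
# each site, keeping only points where the intensity profile actually varies.
def prune_control_points(positions, intensities, flatness_threshold=0.002):
    """Keep endpoints plus any point whose local intensity change exceeds the threshold."""
    kept = [positions[0]]
    for i in range(1, len(positions) - 1):
        local_change = abs(intensities[i + 1] - intensities[i - 1]) / 2.0
        if local_change > flatness_threshold:
            kept.append(positions[i])
    kept.append(positions[-1])
    return kept

# Example: a 200nm site sampled every 10nm, with a mostly flat aerial image.
positions = [10 * i for i in range(21)]
intensities = [0.30 + (0.02 if 80 <= p <= 120 else 0.0) for p in positions]
print(prune_control_points(positions, intensities))
```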
OPC models have been improving their accuracy over the years by modeling more error sources in the lithographic systems, but model calibration techniques are improving at a slower pace. One area of model calibration that has garnered little interest is the statistical variance of the calibration data set. OPC models are very susceptible to parameter divergence driven by statistical variance, yet only modest attention is given to the data variance once the calibration sequence has started. Not only should the calibration data be a good representation of the design intent, but measurement redundancy is also required to take the process and metrology variance into consideration. Considering that it takes five to nine redundant measurements to generate a good statistical distribution for averaging, and tens of thousands of measurements to mimic the design intent, the data volume requirements become overwhelming. Typically, the data redundancy is reduced because of this data explosion, so some level of variance creeps into the model-tuning process.
This is a feasibility study of the treatment of data variance during model calibration. The approach was developed to improve the model fitness for the primary out-of-specification features present in the calibration test pattern by performing small manipulations of the measured data combined with data weighting during the model calibration process. This data manipulation is executed in image-parameter groups (Imin, Imax, slope, and curvature) to control model convergence. These critical-CD perturbations are typically fractions of a nanometer, which is consistent with the residual variance of the statistically valid data set. With this data-manipulation approach the critical features are pulled into specification without diverging other feature types. This paper will detail this model calibration technique and the use of imaging parameters and weights to converge the model for key feature types. It will also demonstrate its effectiveness on realistic calibration data sets.
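A minimal sketch of the kind of data treatment described above, assuming a simple gauge list with per-gauge image-parameter groups; the field names, perturbation size, and weights are illustrative placeholders, not the authors' exact procedure.

```python
# Illustrative sketch (not the authors' exact procedure): measured CDs are
# grouped by image parameters (Imin, Imax, slope, curvature), and out-of-spec
# groups receive a sub-nanometer CD perturbation plus a higher calibration
# weight so the optimizer pulls them into specification.
def adjust_calibration_data(gauges, critical_groups, perturbation_nm=0.3, weight=3.0):
    """Apply small CD shifts and extra weight to gauges in critical image-parameter groups."""
    adjusted = []
    for g in gauges:
        g = dict(g)  # copy so the raw data set is preserved
        if g["image_group"] in critical_groups:
            # Shift the measured CD toward the simulated CD by a fraction of a
            # nanometer, consistent with the residual variance of the data set.
            direction = 1.0 if g["sim_cd_nm"] > g["meas_cd_nm"] else -1.0
            g["meas_cd_nm"] += direction * perturbation_nm
            g["weight"] = weight
        else:
            g["weight"] = 1.0
        adjusted.append(g)
    return adjusted

gauges = [
    {"name": "iso_line", "image_group": "low_slope", "meas_cd_nm": 52.1, "sim_cd_nm": 53.0},
    {"name": "dense_line", "image_group": "high_imax", "meas_cd_nm": 48.4, "sim_cd_nm": 48.5},
]
print(adjust_calibration_data(gauges, critical_groups={"low_slope"}))
```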
Virtual manufacturing, enabled by rapid, accurate, full-chip simulation, is a main pillar in achieving successful mask tape-out in cutting-edge low-k1 lithography. It facilitates detecting printing failures before a costly and time-consuming mask tape-out and wafer print occur. The role of the OPC verification step is critical in the early production phases of a new process development, since various layout patterns are suspected to fail or cause performance degradation, and in turn need to be accurately flagged and fed back to the OPC engineer for further learning and enhancement of the OPC recipe. In the advanced phases of process development, the probability of detecting failures is much lower, but the OPC verification step still acts as the last line of defense for the whole RET flow.
In a recent publication, the optimum approach to responding to these detected failures was addressed, and a solution was proposed to repair these defects with an automated methodology that is fully integrated with and compatible with the main RET/OPC flow. In this paper the authors present further work and optimizations of this Repair flow.
An automated methodology for root-cause analysis of the defects and their classification, covering all possible causes, will be discussed. This automated analysis approach will include all the learning from previously highlighted causes as well as any new discoveries. Next, according to the automated pre-classification of the defects, the appropriate OPC repair approach (i.e., OPC knob) can be easily selected for each classified defect location, instead of applying all approaches at all locations. This will help cut down the runtime of the OPC repair processing and reduce the number of iterations needed to reach zero defects. An output report of the existing defect causes and how the tool handled them will be generated. The report will help further learning and facilitate the enhancement of the main OPC recipe. Accordingly, the main OPC recipe can become more robust, converging faster and probably in fewer iterations. This knowledge feedback loop is one of the fruitful benefits of the Automatic OPC Repair flow.
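The pre-classification and knob-selection step might be sketched as follows; the defect classes, thresholds, and repair-knob names are hypothetical placeholders rather than actual tool settings.

```python
# A simplified, hypothetical sketch of the pre-classification step: each
# verification defect is assigned a root-cause class, and only the repair
# approach (OPC knob) mapped to that class is applied at that location,
# instead of trying every approach everywhere.
REPAIR_KNOB_BY_CLASS = {
    "bridging": "increase_spacing_constraint",
    "necking": "tighten_cd_tolerance",
    "sraf_printing": "reduce_sraf_size",
    "low_contrast": "refine_fragmentation",
}

def classify_defect(defect):
    """Toy classifier using verification measurements; real analysis would use more signals."""
    if defect["min_space_nm"] < 20:
        return "bridging"
    if defect["min_width_nm"] < 20:
        return "necking"
    if defect.get("sraf_intensity", 0.0) > 0.25:
        return "sraf_printing"
    return "low_contrast"

def plan_repairs(defects):
    """Return per-location repair actions plus a summary report for recipe learning."""
    plan, report = [], {}
    for d in defects:
        cls = classify_defect(d)
        plan.append({"location": d["location"], "class": cls, "knob": REPAIR_KNOB_BY_CLASS[cls]})
        report[cls] = report.get(cls, 0) + 1
    return plan, report

defects = [
    {"location": (12.4, 88.0), "min_space_nm": 18, "min_width_nm": 40},
    {"location": (73.1, 10.5), "min_space_nm": 45, "min_width_nm": 16},
]
print(plan_repairs(defects))
```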
To maximize the process window and CD control of main features, sizing and placement rules for sub-resolution assist
features (SRAF) need to be optimized, subject to the constraint that the SRAFs not print through the process window.
With continuously shrinking target dimensions, generation of traditional rule-based SRAFs is becoming an expensive
process in terms of time, cost, and complexity. This has created an interest in other rule optimization methodologies, such as those driven by image contrast and other edge- and image-based objective functions.
In this paper, we propose using an automated model-based flow to obtain the optimal SRAF insertion rules for a design
and reduce the time and effort required to define the best rules. In this automated flow, SRAF placement is optimized by
iteratively generating the space-width rules and assessing their performance against process variability metrics. Multiple
metrics are used in the flow. Process variability (PV) band thickness is a good indicator of the process window
enhancement. Depth of focus (DOF), the total range of focus that can be tolerated, is also a highly descriptive metric for
the effectiveness of the sizing and placement rules generated. Finally, scatter bar (SB) printing margin calculations
assess the allowed exposure range that prevents scatter bars from printing on the wafer.
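A condensed illustration of the iterative rule search follows, using toy metric models and hypothetical numbers in place of real lithography simulation; it shows only the shape of the loop, not the actual flow.

```python
# Minimal illustration (toy models, hypothetical numbers): candidate SRAF
# space/width rules are scored against PV-band thickness, depth of focus, and
# SRAF printing margin, and the best non-printing rule is kept.
import itertools

def evaluate_rule(space_nm, width_nm):
    """Toy metric models: wider, closer SRAFs help DOF but risk printing."""
    pv_band_nm = 6.0 + 0.02 * space_nm - 0.03 * width_nm           # smaller is better
    dof_nm = 120.0 - 0.15 * space_nm + 0.8 * width_nm              # larger is better
    sb_print_margin = 0.10 - 0.002 * width_nm + 0.0005 * space_nm  # must stay positive
    return pv_band_nm, dof_nm, sb_print_margin

best_rule, best_dof = None, -1.0
for space_nm, width_nm in itertools.product(range(60, 121, 10), range(20, 41, 5)):
    pv_band, dof, margin = evaluate_rule(space_nm, width_nm)
    # Reject rules whose SRAFs would print; among the rest, prefer the largest
    # DOF with an acceptable PV band.
    if margin > 0.0 and pv_band < 8.0 and dof > best_dof:
        best_rule, best_dof = (space_nm, width_nm), dof
print(best_rule, round(best_dof, 1))
```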
Device extraction, and the quality of device extraction, is becoming of increasing concern for the integrated circuit design flow. As circuits become more complicated, with concomitant reductions in geometry, the design engineer faces an ever-burgeoning demand for accurate device extraction. For technology nodes of 65nm and below, the approximation of extracting device geometry from the polygons drawn in the design layout might not be sufficient to describe the actual electrical behavior of these devices; therefore, contours from lithographic simulations need to be considered for more accurate results.
Process window variations have a considerable effect on the shape of the device contour on the wafer, so even an accurate method for extracting device parameters from wafer contours would still need to know which lithographic condition to simulate. Many questions can be raised here: Are contours that represent the best lithography conditions enough? Is there a need to also consider process variations? How do we include them in the extraction algorithm?
In this paper we first present the method of extracting devices from the layout coupled with lithographic simulations. Afterwards, a complete flow for circuit timing/power analysis using lithographic contours is described. Comparisons between timing results from the conventional LVS method and the litho-aware method are performed to show the importance of considering litho contours.
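As a small illustration of contour-based device extraction (a toy calculation, not any specific extraction tool's algorithm), an effective channel length can be computed by slicing the simulated gate contour across the device width and averaging the local lengths, instead of using the single drawn dimension.

```python
# Simplified sketch: derive an effective channel length from a simulated gate
# contour sampled at several positions across the device width.
def effective_gate_length(contour_slices, drawn_length_nm):
    """contour_slices: local gate length (nm) sampled at positions across the width."""
    l_eff = sum(contour_slices) / len(contour_slices)
    delta = l_eff - drawn_length_nm
    return l_eff, delta

# Example: a 40nm drawn gate whose simulated contour shows corner rounding / necking.
slices_nm = [41.2, 40.5, 39.8, 39.1, 39.4, 40.2, 41.0]
l_eff, delta = effective_gate_length(slices_nm, drawn_length_nm=40.0)
print(round(l_eff, 2), round(delta, 2))
```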
The OPC treatment of aerial image ripples (local variations in the aerial contour relative to constant target edges) is one of the growing issues with very low-k1 lithography employing hard off-axis illumination. The maxima and minima points in the aerial image, if not optimally treated within the existing model-based OPC methodologies, could induce severe necking or bridging in the printed layout. The current fragmentation schemes and the subsequent site simulations are rule-based, and hence not optimized according to the aerial image profile at key points. The authors are primarily exploring more automated software methods to detect the locations of the ripple peaks, as well as a simplified, less costly implementation strategy. We define this to be an adaptive site placement methodology based on aerial image ripples. Recently, the phenomenon of aerial image ripples was considered within the analysis of the lithography process for cutting-edge technologies such as chromeless phase-shifting masks and strong off-axis illumination approaches [3,4]. Effort is spent during the process development of conventional model-based OPC with the mere goal of locating these troublesome points. This process leads to longer development cycles, and so far only partial success has been reported in suppressing them (the causality of ripple occurrence has not yet been fully explored). We present here our success in the implementation of a more flexible model-based OPC solution that dynamically locates these ripples based on the local aerial image profile near the feature edges. This model-based dynamic tracking of ripples will cut down some time in the OPC code development phase and avoid specifying some rule-based recipes. Our implementation includes classification of the ripple bumps within one edge and the allocation of different weights in the OPC solution. This results in a new strategy of adapting site locations and OPC shifts of edge fragments to avoid any aggressive correction that may increase the ripples or propagate them to a new location. A more advanced adaptation will be ripple-aware fragmentation as a second control knob, besides the automated site placement.
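A minimal sketch of the dynamic ripple detection idea, assuming a sampled intensity profile along an edge; the toy profile, amplitude threshold, and weighting scheme below are illustrative only, not the implemented recipe.

```python
# Hedged sketch: locate local maxima/minima (ripples) of the aerial image
# intensity sampled along an edge, and emit extra simulation sites there with
# weights proportional to ripple strength.
def find_ripple_sites(positions, intensities, min_amplitude=0.01):
    """Return (position, weight) pairs at local extrema of the intensity profile."""
    sites = []
    for i in range(1, len(intensities) - 1):
        prev_i, cur, next_i = intensities[i - 1], intensities[i], intensities[i + 1]
        is_peak = cur > prev_i and cur > next_i
        is_valley = cur < prev_i and cur < next_i
        if is_peak or is_valley:
            amplitude = abs(cur - (prev_i + next_i) / 2.0)
            if amplitude >= min_amplitude:
                # Weight stronger ripples more heavily in the OPC cost function.
                sites.append((positions[i], min(1.0, amplitude / 0.05)))
    return sites

positions = [5 * i for i in range(41)]                          # sample every 5nm along the edge
intensities = [0.30 + 0.03 * ((i % 8) == 3) for i in range(41)]  # toy rippled profile
print(find_ripple_sites(positions, intensities))
```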
At the 65 nm node and beyond, printing the dense and isolated pitches, as well as the 2D patterns, within tight tolerance across the full range of known process conditions becomes a major challenge, and is even more critical in the context of double exposure masks. Post-OPC simulation at nominal conditions is not sufficient to accurately assess and disposition severe errors or to monitor residual proximity effects and their implications, such as channel length variation.
In this paper, we explore a methodology that adopts multiple simulations to model the variability in the lithography process. This approach predicts the process behavior by modulating the related lithography parameters, such as dose, focus, and overlay. The goal is to identify unacceptable deviations of the printed image from the designed target due to process variations. The method also provides a better statistical evaluation of the quality and robustness of the implemented Resolution Enhancement Technique (RET) and Design for Manufacturability (DfM) solution.
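A simplified sketch of such a multi-condition evaluation follows, using a placeholder process model and hypothetical dose/focus/overlay ranges; it only illustrates the sweep-and-flag pattern, not a calibrated lithography model.

```python
# Condensed illustration: simulate the printed CD (here, a toy model) over
# dose/focus/overlay combinations and flag conditions whose deviation from
# target exceeds the tolerance.
import itertools

def toy_printed_cd(target_nm, dose_pct, focus_nm, overlay_nm):
    """Placeholder process model: dose scales CD, defocus and overlay erode it."""
    return target_nm * (1 + 0.01 * dose_pct) - 0.0005 * focus_nm**2 - 0.1 * abs(overlay_nm)

def flag_violations(target_nm, tolerance_nm):
    doses = [-3, 0, 3]       # percent
    focuses = [-60, 0, 60]   # nm
    overlays = [-4, 0, 4]    # nm
    violations = []
    for dose, focus, overlay in itertools.product(doses, focuses, overlays):
        cd = toy_printed_cd(target_nm, dose, focus, overlay)
        if abs(cd - target_nm) > tolerance_nm:
            violations.append({"dose": dose, "focus": focus, "overlay": overlay, "cd": round(cd, 2)})
    return violations

print(flag_violations(target_nm=45.0, tolerance_nm=2.0))
```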