Multiple-Patterning Technology (MPT) is still the preferred choice over EUV for advanced technology nodes, starting at the 20nm node. On the way down to the 7nm and 5nm nodes, Self-Aligned Multiple Patterning (SAMP) appears to be one of the most effective multiple patterning techniques for achieving small pitches of printed lines on wafer, yet its yield is in question. Predicting and enhancing the yield in the early stages of technology development are some of the main objectives for creating test macros on test masks. While conventional yield ramp techniques for a new technology node have relied on using designs from previous technology nodes as a starting point to identify patterns for Design of Experiment (DoE) creation, these techniques are challenging to apply when introducing an MPT technique like SAMP that did not exist in previous nodes.
This paper presents a new strategy for generating test structures based on the random placement of unit patterns that can be combined into larger, more meaningful patterns. The specifications governing the relationships between these unit patterns can be adjusted to generate layout clips that resemble realistic SAMP designs. A via chain can be constructed to connect the random DoE of SAMP structures through a routing layer to external pads for electrical measurement. These clips are decomposed, according to the decomposition rules of the technology, into the appropriate mandrel and cut masks. The decomposed clips can be tested through simulation, or electrically on silicon, to discover hotspots.
The hotspots can be used to optimize the fabrication process and models in order to fix them. They can also serve as learning patterns for DFM deck development. By expanding the size of the randomly generated test structures, more hotspots can be detected. This should provide a faster way to enhance the yield of a new technology node.
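To make the random-placement idea concrete, here is a toy sketch assuming SAMP-like clips are built by randomly placing unit line segments on a fixed-pitch track grid, with a minimum tip-to-tip spacing rule between segments on the same track. The pitch, spacing rule, and unit-pattern lengths below are illustrative assumptions, not the paper's actual specifications.

```python
# Toy generator for a random SAMP-like test clip (all dimensions assumed).
import random

PITCH_NM = 32          # assumed SAMP line pitch
TIP_TO_TIP_NM = 40     # assumed minimum end-to-end spacing on a track
CLIP_W_NM, TRACKS = 2000, 16

def random_clip(seed=0):
    """Return per-track lists of (start, end) line segments."""
    rng = random.Random(seed)
    clip = []
    for _ in range(TRACKS):
        segments, x = [], 0
        while x < CLIP_W_NM - TIP_TO_TIP_NM:
            length = rng.randrange(100, 600, 20)  # unit-pattern lengths
            end = min(x + length, CLIP_W_NM)
            segments.append((x, end))
            x = end + TIP_TO_TIP_NM               # enforce tip-to-tip rule
        clip.append(segments)
    # tracks sit at y = i * PITCH_NM, ready for mandrel/cut decomposition
    return clip

print(sum(len(t) for t in random_clip()))  # number of unit patterns placed
```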
As a result, low-fidelity patterns due to process variations can be detected, and eventually corrected by designers, as early in the tape-out flow as right after design rule checking (DRC): a step no longer capable of fully accounting for process constraints. This flow has proven to provide a more adequate level of accuracy when correlating systematic defects seen on wafer with those identified through LFD simulations. However, at 32nm and below, distorted patterns caused by process variation remain unavoidable, and given the current state of defect inspection metrology tools, these pattern failures are becoming more challenging to detect. In the framework of this paper, a methodology of advanced process window simulations with awareness of chip topology is presented. This method identifies the expected focal range that different areas within a design would encounter due to differences in topology.
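A minimal sketch of how such a topology-to-focus mapping could look is given below; the region names, heights, and scanner focus budget are illustrative assumptions standing in for the paper's actual topology model.

```python
# Hypothetical sketch: map chip topology to per-region focus ranges for
# process-window simulation (all values assumed for illustration).

# Estimated surface height per layout region, in nm, e.g. from a CMP model.
region_height_nm = {"sram_array": 12.0, "random_logic": 0.0, "analog_block": -8.0}

SCANNER_FOCUS_BUDGET_NM = 60.0  # assumed total focus window of the process

def focus_range(height_nm, budget_nm=SCANNER_FOCUS_BUDGET_NM):
    """Return the (min, max) focus a region sees: the nominal window
    shifted by the local topology height."""
    half = budget_nm / 2.0
    return (-half + height_nm, half + height_nm)

# Each region is then simulated only over the focal range it can actually
# encounter, instead of one worst-case range for the whole chip.
for region, h in region_height_nm.items():
    fmin, fmax = focus_range(h)
    print(f"{region}: simulate focus from {fmin:+.1f} to {fmax:+.1f} nm")
```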
A dynamic feedback controller for Optical Proximity Correction (OPC) in a random logic layout using ArF immersion lithography is presented. The OPC convergence, characterized by edge placement error (EPE), is optimized using optical and resist effects described by calibrated models (Calibre® simulation platform). By memorizing the EPE and displacement of each fragment from the preceding OPC iteration, a dynamic feedback controller scheme is implemented to achieve OPC convergence in fewer iterations. The OPC feedback factor is calculated for each individual fragment, taking care of cross-MEEF (mask error enhancement factor) effects. Because of the very limited additional computational effort and memory consumption, the dynamic feedback controller reduces the overall run time of the OPC compared to a conventional constant feedback factor scheme. In this paper, the dynamic feedback factor algorithm and its implementation, as well as testing results for a random logic layout, are compared and discussed with respect to OPC convergence and runtime.
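The per-fragment update can be pictured as a secant-style estimate: the local feedback factor is derived from how much the EPE changed per unit of fragment displacement in the previous iteration (an empirical proxy for the local MEEF). The sketch below is a minimal illustration under that assumption; the paper's actual update rule and clamp values may differ.

```python
# Minimal sketch of a per-fragment dynamic feedback update (assumed form).

def dynamic_feedback_move(epe_now, epe_prev, move_prev, default_factor=0.5,
                          clamp=(0.1, 2.0)):
    """Return the next displacement for one OPC fragment."""
    depe = epe_prev - epe_now
    if move_prev and abs(depe) > 1e-6:
        # nm of edge movement needed per nm of EPE, as observed locally
        factor = move_prev / depe
        factor = max(clamp[0], min(clamp[1], factor))
    else:
        factor = default_factor  # first iteration: constant-feedback fallback
    return factor * epe_now

# Example: the fragment moved 2.0 nm last iteration and EPE dropped from
# 4.0 to 1.5 nm, so the estimated local gain is 2.0 / 2.5 = 0.8.
print(dynamic_feedback_move(epe_now=1.5, epe_prev=4.0, move_prev=2.0))
```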
The process of preparing a sample plan for optical and resist model calibration has always been tedious, not only because it must accurately represent full-chip designs with countless combinations of widths, spaces and environments, but also because of the constraints imposed by metrology, which may limit the number of structures that can be measured. There are also limits on the types of these structures, mainly due to the variation in measurement accuracy across different types of geometries: pitch measurements, for instance, are normally more accurate than corner-rounding measurements. Thus, only certain geometrical shapes are usually considered when creating a sample plan. In addition, the time factor is becoming crucial as we migrate from one technology node to another, due to the increase in the number of development and production nodes, and the process becomes even more complicated if process-window-aware models are to be developed in a reasonable time frame. There is therefore a need for reliable methods of choosing sample plans that also help reduce cycle time.
In this context, an automated flow is proposed for sample plan creation. Once the illumination and film stack are defined, all errors in the input data are fixed and sites are centered; bad sites are then excluded. Afterwards, the clean data are reduced based on geometrical resemblance. An editable database of measurement-reliable and critical structures is also provided, and their percentage in the final sample plan, as well as the total number of 1D/2D samples, can be predefined. The flow has the advantage of eliminating manual selection and filtering techniques, it provides powerful tools for customizing the final plan, and the time needed to generate these plans is greatly reduced.
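One plausible reading of the "reduce by geometrical resemblance" step is shown below: sites whose defining dimensions fall into the same quantized bin are treated as duplicates, and one representative per bin is kept. The (width, space) signature and bin sizes are assumptions for the example, not the flow's actual criteria.

```python
# Illustrative de-duplication of calibration sites by geometric resemblance.

def reduce_by_resemblance(sites, width_tol=5.0, space_tol=5.0):
    """sites: list of dicts with 'width' and 'space' in nm.
    Returns one representative site per (width, space) bin."""
    representatives = {}
    for site in sites:
        key = (round(site["width"] / width_tol),
               round(site["space"] / space_tol))
        representatives.setdefault(key, site)  # keep first site seen per bin
    return list(representatives.values())

sites = [{"width": 60, "space": 90}, {"width": 61, "space": 91},
         {"width": 60, "space": 180}]
print(len(reduce_by_resemblance(sites)))  # -> 2: near-identical pair collapsed
```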
In the modern photolithography process for manufacturing integrated circuits, geometry dimensions that are much smaller than the exposure wavelength need to be realized on silicon. Thus, Resolution Enhancement Techniques (RET) play an indispensable role in the implementation of a successful technology process node. Finding an appropriate RET recipe that answers the needs of a certain fabrication process usually involves intensive computational simulations. These simulations have to reflect how the different elements in the lithography process under study will behave. To achieve this, accurate models are needed that truly represent the transmission of patterns from mask to silicon.
A common practice in calibrating lithography models is to collect data for the dimensions of some test
structures created on the exposure mask along with the corresponding dimensions of these test structures on
silicon after exposure. This data is used to tune the models for good predictions.
The models are guaranteed to accurately predict the test structures that were used in their tuning. However, real designs may contain a much greater variety of structures, many of which might not have been included in the test structures.
This paper explores a method for compiling the test structures to be used in the model calibration process, using design layouts as an input. The method relies on reducing the structures in the design layout to the essential unique structures from the lithography model's point of view, thus ensuring that the test structures represent what the model would actually have to predict during the simulations.
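A sketch of the reduction idea follows: two candidate test locations are redundant if their surrounding geometry is identical within the optical interaction radius, since the lithography model cannot distinguish them. The rasterized-window hash below is an illustrative stand-in for whatever geometric signature the actual flow uses; the radius and grid values are assumed.

```python
# Illustrative uniqueness filter over a rasterized layout (assumed signature).
import hashlib
import numpy as np

AMBIT_NM = 600        # assumed optical interaction radius
GRID_NM = 10          # rasterization grid for the signature

def environment_signature(raster, x, y):
    """raster: 2D 0/1 numpy array of the layout at GRID_NM resolution.
    Returns a hash of the window of radius AMBIT_NM around (x, y).
    Assumes the site lies at least AMBIT_NM from the raster border."""
    r = AMBIT_NM // GRID_NM
    window = raster[y - r:y + r + 1, x - r:x + r + 1]
    return hashlib.sha1(window.tobytes()).hexdigest()

def unique_structures(raster, candidate_sites):
    seen, kept = set(), []
    for (x, y) in candidate_sites:
        sig = environment_signature(raster, x, y)
        if sig not in seen:      # new environment -> keep as a test structure
            seen.add(sig)
            kept.append((x, y))
    return kept
```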
In the state-of-the-art integrated circuit industry, for transistor gate lengths of 45nm and beyond, the sharp distinction between design and fabrication phases is becoming inadequate for fast product development. Lithographical information, along with design rules, has to be passed from foundries to designers, as these effects have to be taken into consideration during the design stage to ensure a Lithographically Friendly Design. This in turn demands new communication channels between designers and foundries to provide the needed litho information. In the case of fabless design houses, this requirement faces problems such as incompatible EDA platforms at both ends, and confidential information that cannot be revealed by the foundry back to the design house.
In this paper we propose a framework that demonstrates a systematic approach to matching any lithographical OPC solution from different EDA vendors into Calibre™. The goal is to export how the design will look on wafer from the foundry to the designers without revealing how, and without requiring installation of the same EDA tools. In the developed framework, we demonstrate the flow used to match all the steps used in developing OPC, starting from the lithography modeling and going through the OPC recipe. This is done by means of automated scripts that characterize the existing foundry OPC solution and identify compatible counterparts in the Calibre™ domain, to generate an encrypted package that can be used at the designers' side. Finally, the framework is verified using a developed test case.
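One way to picture the matching step is as a fit: the Calibre-side model's free parameters are tuned so that its predicted CDs reproduce the reference CDs produced by the foundry's original OPC solution on a shared test case. The sketch below assumes exactly two illustrative knobs and a brute-force search; `simulate_cd` is a hypothetical stand-in for the actual simulator call, not a real API.

```python
# Hypothetical sketch of model matching against a foundry reference.
import itertools

def match_model(test_patterns, reference_cds, simulate_cd,
                threshold_grid, diffusion_grid):
    """Brute-force search over two assumed model knobs; returns the setting
    with the lowest RMS CD error against the foundry reference."""
    best = (float("inf"), None)
    for thr, diff in itertools.product(threshold_grid, diffusion_grid):
        errs = [simulate_cd(p, thr, diff) - ref
                for p, ref in zip(test_patterns, reference_cds)]
        rms = (sum(e * e for e in errs) / len(errs)) ** 0.5
        best = min(best, (rms, (thr, diff)))
    return best  # (rms_error_nm, (threshold, diffusion_length))
```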
Achieving faster Turn-Around-Time (TAT) is one of the most attractive objectives for silicon wafer manufacturers, regardless of the technology node they are processing. This is valid for all active technology nodes, from 130nm down to the cutting-edge technologies. Several approaches have been adopted to cut down the OPC simulation runtime without sacrificing OPC output quality; among them is using stronger CPU power and hardware acceleration, which makes good use of advancing processing technology. Another favorable approach to cutting down the runtime is to look deeper inside the OPC algorithm and the implemented OPC recipe. The OPC algorithm includes the convergence iterations and the distribution of simulation sites, while the OPC recipe is, by definition, how to smartly tune the OPC knobs to efficiently use the implemented algorithm. Many previous works have monitored the OPC convergence through iterations and analyzed the size of the shift per iteration; similarly, several works have tried to calculate the amount of simulation capacity needed for all these iterations and how to reduce it.
The scope of the work presented here is an attempt to decrease the number of optical simulations by reducing the number of control points per site, without affecting OPC accuracy. The concept is proven by extensive simulation results and analysis. Implementing this flow illustrated the achievable reduction in simulation runtime, which is reflected in a faster TAT. Beyond runtime optimization, it also puts more intelligence into the sparse OPC engine by eliminating the headache of specifying the optimum simulation site length.
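An illustrative version of the underlying idea is to extend the 1D site away from the fragment only until the sampled intensity profile flattens out, then drop the control points beyond that distance. The flatness tolerance below is an assumed knob, not a value from the paper.

```python
# Illustrative automatic trimming of control points along a simulation site.
import numpy as np

def trim_site(intensity, flat_tol=1e-3):
    """intensity: 1D samples along the site, fragment edge at index 0.
    Returns the number of control points worth keeping."""
    grad = np.abs(np.diff(intensity))
    for i in range(len(grad) - 1, -1, -1):
        if grad[i] > flat_tol:            # last point where the profile
            return i + 2                  # still carries information
    return 2                              # degenerate: keep a minimal site

profile = np.exp(-np.linspace(0, 5, 50))  # toy decaying intensity profile
print(trim_site(profile))                 # points to keep per site
```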
OPC models have been improving their accuracy over the years by modeling more error sources in lithographic systems, but model calibration techniques are improving at a slower pace. One area of model calibration that has garnered little interest is the statistical variance of the calibration data set. OPC models are very susceptible to parameter divergence under statistical variance, yet only modest attention is given to the data variance once the calibration sequence has started. Not only should the calibration data be a good representation of the design intent, but measurement redundancy is required to take the process and metrology variance into consideration. Considering that it takes five to nine redundant measurements to generate a good statistical distribution for averaging, and tens of thousands of measurements to mimic the design intent, the data volume requirements become overwhelming. Typically, the data redundancy is reduced because of this data explosion, so some level of variance will creep into the model-tuning process.
This is a feasibility study of the treatment of data variance during model calibration. The approach was developed to improve the model fitness for the primary out-of-specification features present in the calibration test pattern, by performing small manipulations of the measured data combined with data weighting during the model calibration process. This data manipulation is executed in image-parameter groups (Imin, Imax, slope and curvature) to control model convergence. These critical-CD perturbations are typically fractions of nanometers, which is consistent with the residual variance of the statistically valid data set. With this data-manipulation approach, the critical features are pulled into specification without diverging other feature types. This paper will detail this model calibration technique and the use of imaging parameters and weights to converge the model for key feature types. It will also demonstrate its effectiveness on realistic calibration data.
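A minimal sketch of the described treatment is given below, under the assumption that each measurement carries its image parameters and that out-of-spec feature groups get (a) a sub-nanometer CD nudge toward target and (b) a higher calibration weight. The magnitudes are illustrative only, chosen to stay within the stated residual variance.

```python
# Illustrative group-wise data manipulation for model calibration.

MAX_NUDGE_NM = 0.3   # keep perturbations within the residual data variance

def treat_group(measurements, critical=False):
    """measurements: list of dicts with 'cd', 'target', 'weight' (one
    image-parameter group, e.g. similar Imin/Imax/slope/curvature)."""
    for m in measurements:
        if critical:
            err = m["target"] - m["cd"]
            # nudge the measured CD a fraction of a nm toward the target
            m["cd"] += max(-MAX_NUDGE_NM, min(MAX_NUDGE_NM, 0.5 * err))
            m["weight"] *= 4.0   # pull the fit toward this feature group
    return measurements
```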
Virtual manufacturing, enabled by rapid, accurate, full-chip simulation, is a main pillar of achieving successful mask tape-out in cutting-edge low-k1 lithography. It facilitates detecting printing failures before a costly and time-consuming mask tape-out and wafer print occur. The role of the OPC verification step is critical in the early production phases of a new process development, since various layout patterns will be suspected of failing or causing performance degradation, and in turn need to be accurately flagged and fed back to the OPC engineer for further learning and enhancement of the OPC recipe. In the advanced phases of process development there is much less probability of detecting failures, but the OPC verification step still acts as the last line of defense for the whole RET flow.
In a recent publication, the optimum approach to responding to these detected failures was addressed, and a solution was proposed to repair these defects through an automated methodology, fully integrated with and compatible with the main RET/OPC flow. In this paper the authors present further work on and optimizations of this repair flow.
An automated methodology for root-cause analysis of the defects, and for their classification covering all possible causes, will be discussed. This automated analysis approach includes all the learning experience from previously highlighted causes and incorporates any new discoveries. Next, according to the automated pre-classification of the defects, the appropriate OPC repair approach (i.e. OPC knob) can easily be selected for each classified defect location, instead of applying all approaches at all locations. This helps cut down the runtime of the OPC repair processing and reduces the number of iterations needed to reach zero defects. An output report of the existing causes of defects, and of how the tool handled them, is generated. The report will help further learning and facilitate the enhancement of the main OPC recipe. Accordingly, the main OPC recipe can become more robust, converging faster and probably in fewer iterations. This knowledge feedback loop is one of the fruitful benefits of the Automatic OPC Repair flow.
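The pre-classification idea can be sketched as a mapping from root-cause class to repair knob, so that only the registered knob is applied at each location. The class names, signatures, and knobs below are assumptions standing in for the tool's own taxonomy.

```python
# Illustrative defect pre-classification and repair-knob dispatch.

REPAIR_KNOBS = {
    "bridging": "increase_spacing_constraint",
    "necking":  "refragment_and_bias_edge",
    "line_end": "extend_hammerhead",
}

def classify(defect):
    """Toy signature-based root-cause classification (assumed rules)."""
    if defect["min_space_nm"] < defect["space_limit_nm"]:
        return "bridging"
    if defect["min_width_nm"] < defect["width_limit_nm"]:
        return "necking"
    return "line_end"

def plan_repairs(defects):
    """Return (location, knob) pairs plus a cause histogram for the report."""
    plan, report = [], {}
    for d in defects:
        cause = classify(d)
        report[cause] = report.get(cause, 0) + 1
        plan.append((d["location"], REPAIR_KNOBS[cause]))
    return plan, report
```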
As technology shrinks toward the 65nm node and beyond, Optical Proximity Correction (OPC) becomes more important for ensuring proper printability of high-performance integrated circuits. This correction involves geometrical modifications to the mask polygons to account for light diffraction and etch biasing. Model-based OPC has proven to be a convenient, accurate, and efficient methodology. In this method, raw calibration data are measured from the process. These data are used to build a VT5 resist model that accounts for all proximity effects attendant to the lithography process. To ensure the reliability of the calibrated VT5 model, these data must be broad in the image parameter space (IPS) to account for the different one-dimensional and two-dimensional features of the design intent. Failure to provide sufficient IPS coverage (i.e. to mimic the design intent) during model calibration could marginalize the VT5 model during OPC, but it is difficult to judge when there is enough data volume to safely interpolate and extrapolate the design intent. In this paper we introduce a new metric called Safe Interpolation Distance (SID). It is a multi-dimensional metric which can be used to automatically detect the portions of the target design that are not covered well by the desired VT5 model.
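A sketch of how such a check could work is shown below, assuming each fragment is described by a normalized image-parameter vector (e.g. Imin, Imax, slope, curvature) and coverage is judged by the nearest calibration sample in that space. The SID threshold is an assumed knob, and the nearest-neighbor search is one possible realization, not the paper's algorithm.

```python
# Illustrative coverage check in normalized image-parameter space.
import numpy as np
from scipy.spatial import cKDTree

def flag_uncovered(calib_ips, design_ips, sid=0.15):
    """calib_ips, design_ips: (n, d) arrays, columns pre-normalized to [0, 1].
    Returns indices of design fragments farther than `sid` from any
    calibration point, i.e. where the model must extrapolate."""
    tree = cKDTree(calib_ips)
    dist, _ = tree.query(design_ips)
    return np.where(dist > sid)[0]
```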
Model-Based Optical Proximity Correction (MBOPC) is now found in nearly all resolution enhancement recipes used in leading-technology integrated circuit fabrication facilities. Many masks now have critical dimensions smaller than the exposure wavelength, which results in light diffraction that distorts the image projected onto the wafer. The industry is relying more and more on MBOPC to compensate for optical effects induced during the exposure of these masks. The MBOPC operation is usually the largest contributor to computational time in the RET flow. MBOPC procedures include the fragmentation of layout edges longer than a specified value into a number of sub-edges (fragments). The software engine can move and manipulate each fragment to improve the image transferred to the wafer. In the sparse MBOPC approach, each fragment receives one or more optical simulation sites; a site is a one-dimensional array of points at which light intensity is sampled and calculated. To correctly capture the resist behavior at each simulation site, there must be enough points to ensure extension of the site to a certain distance from the fragment. Adding more points beyond this distance does not add any benefit, but can significantly increase the runtime.
This paper presents an automated method that analyzes layouts for different technology nodes that depend on sparse simulation as their MBOPC engine, and reports the optimized number of simulation points needed in the simulation site to achieve the desired accuracy and optimum runtime performance.
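As a worked mini-example of why a point count falls out of the optical interaction distance and the sample spacing, consider the following; the diameter and spacing values are illustrative, not measured.

```python
# Point count per site from assumed optical diameter and sample spacing.
import math

def points_per_site(optical_diameter_nm=1200.0, spacing_nm=20.0):
    """Points needed for the site to span the optical influence radius;
    anything beyond adds runtime without changing the resist prediction."""
    radius = optical_diameter_nm / 2.0
    return math.ceil(radius / spacing_nm) + 1  # +1 for the edge point itself

print(points_per_site())  # -> 31 points per simulation site
```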
Process models are responsible for predicting the latent image in the resist in a lithographic process. In order for the process model to calculate the latent image, information about the aerial image at each layout fragment is evaluated first, and then certain aerial image characteristics are extracted. These parameters are passed to the process model to calculate the wafer latent image. The process model returns a threshold value that indicates the position of the latent image inside the resist; the accuracy of this value depends on the calibration data that were used to build the process model in the first place.
The calibration structures used in building the models are usually gathered in a single layout file called the test pattern. Real raw data from the lithographic process are measured and attached to the corresponding structures in the test pattern, and these data are then used in the calibration flow of the models.
In this paper we present an approach to automatically detect patterns that are found in real designs and have considerable aerial image parameter differences from the nearest test pattern structure, and to repair the test patterns to include these structures. This detect-and-repair approach is intended to guarantee accurate prediction of the different layout fragments, and therefore correct OPC behavior.
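The detect-and-repair loop can be sketched as follows, under the assumption that coverage is judged by a distance in aerial-image-parameter space: design fragments whose parameters sit far from every existing test-pattern structure are clipped out of the design and appended to the test pattern for the next calibration round. The tolerance and data layout are illustrative.

```python
# Illustrative detect-and-repair pass over the calibration test pattern.

def repair_test_pattern(test_structs, design_frags, distance, tol=0.1):
    """test_structs / design_frags: lists of (clip, ips_vector) pairs.
    `distance` compares two aerial-image-parameter vectors."""
    added = []
    for clip, ips in design_frags:
        nearest = min(distance(ips, t_ips) for _, t_ips in test_structs)
        if nearest > tol:                  # no similar calibration structure
            test_structs.append((clip, ips))
            added.append(clip)             # clip joins the test pattern
    return added
```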
The model calibration process, in a resolution enhancement technique (RET) flow, is one of the most critical steps in building an accurate OPC recipe. RET simulation platforms use models for predicting latent images in the wafer due to exposure of different design layouts. Accurate models can precisely capture the proximity effects of the lithographic process and help RET engineers build the proper recipes to obtain high yield. To calibrate OPC models, test geometries are created and exposed through the lithography environment that we want to model, and metrology data are collected for these geometries. These data are then used to tune, or calibrate, the model parameters. Metrology tools usually provide critical dimension (CD) data rather than edge placement error (EPE, the displacement between the polygon edge and the resist edge) data; however, model calibration requires EPE data for simulation. To work around this problem, only symmetrical geometries are used, since under this constraint EPE can easily be extracted from CD measurements.
In real designs it is more likely to encounter asymmetrical structures, as well as complex 2D structures that cannot easily be made symmetrical, especially for technology nodes of 65nm and beyond. The absence of 2D and asymmetric test structures in the calibration process requires the models to interpolate or extrapolate the EPEs for these structures in a real design.
In this paper we present an approach to extract the EPE information from both SEM images and contours extracted by metrology tools for structures on test wafers, and to use it directly in the calibration of a 55nm poly process. These new EPE structures mimic the complexity of real 2D designs. Each of these structures can be individually weighted according to the data variance. Model accuracy is then compared to the conventional method of calibration using symmetrical data only. The paper also illustrates the ability of the new flow to extract more accurate measurements from wafer data that are more immune to errors than with the conventional method.
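A minimal sketch of one way to extract EPE from an extracted wafer contour is shown below, assuming the contour is available as a polyline and EPE is measured as the signed distance from a target-edge gauge point to the contour along the edge normal. This stands in for the paper's actual extraction flow; the capture window is an assumed parameter.

```python
# Illustrative EPE extraction at one gauge point on the target edge.
import numpy as np

def epe_at_gauge(gauge_pt, edge_normal, contour_pts, capture_nm=2.0):
    """gauge_pt: (x, y) on the target polygon edge; edge_normal: unit vector
    pointing out of the polygon; contour_pts: (n, 2) array of extracted
    contour vertices. Returns the signed EPE (positive = resist extends
    beyond the target edge)."""
    rel = contour_pts - np.asarray(gauge_pt, dtype=float)
    normal = np.asarray(edge_normal, dtype=float)
    tangent = np.array([-normal[1], normal[0]])
    along = rel @ normal              # distance along the edge normal
    across = np.abs(rel @ tangent)    # lateral offset from the normal line
    near = across < capture_nm        # vertices close to the gauge normal
    if not near.any():
        raise ValueError("no contour vertex near the gauge normal")
    hits = along[near]
    return float(hits[np.argmin(np.abs(hits))])  # closest crossing wins
```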
Overlay variations between different layers in integrated circuit fabrication can result in poor circuit performance; even worse, they can cause circuit malfunction and consequently affect process yield. Coupled with other lithographic process variations, this effect can be highly magnified. It follows that the search for interconnect hotspots should take overlay variations into account. The accuracy gained by including the overlay variation effect comes at the expense of a more complex simulation setup: many issues must be considered, including runtime, the process combinations to examine, and the feasibility of providing a hint function for correction.
In this paper we present a systematic approach for classifying the durability of interconnects through the lithographic process, taking into account focus, dose and overlay variations. The approach also provides information about the cause of low durability, which can be useful for building a more robust design.
This classification is accessible at the layout design level. With this information in hand, designers can test the layout while building up their circuit, and modifications to the layout for higher interconnect durability can easily be made. These modifications would be extremely expensive if they had to be made after design house tape-out.
We verify this method by showing real wafer failures, due to bad interconnect design, against the interconnect durability classifications from our method.
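The corner sweep behind such a durability grade can be sketched as follows: each interconnect is simulated across focus/dose/overlay combinations and classified by its worst-case metrics, with the failing parameter reported as the likely cause. The corner values, limits, and the `simulate` callback are assumptions for illustration.

```python
# Illustrative durability classification over focus/dose/overlay corners.
import itertools

FOCUS = (-60, 0, 60)        # nm (assumed corner values)
DOSE = (0.97, 1.0, 1.03)    # relative
OVERLAY = (-8, 0, 8)        # nm shift of the via layer

def classify_net(simulate, net, width_limit_nm=30, overlap_limit_nm=15):
    """`simulate(net, f, d, o)` is a stand-in returning
    (min_wire_width_nm, min_via_overlap_nm) for one process corner."""
    worst_cause = None
    for f, d, o in itertools.product(FOCUS, DOSE, OVERLAY):
        width, overlap = simulate(net, f, d, o)
        if width < width_limit_nm:
            worst_cause = f"necking at focus={f}, dose={d}"
        elif overlap < overlap_limit_nm:
            worst_cause = f"via overlap loss at overlay={o}"
    return ("low durability", worst_cause) if worst_cause else ("robust", None)
```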
For a robust OPC solution, it is important to isolate and characterize the detractors from high-quality printability. Failure to correctly render the design intent in silicon can have multiple causes; the model's inability to predict lithographic and process implications is one of them. Process model accuracy is highly dependent on the quality of the data used in the calibration phase of the model. Structures encountered during OPC simulation that were not included in the calibration patterns, or even structures only somewhat similar to those used in calibration, are sometimes incorrectly predicted. In this paper a new method for studying VT5 model coverage during OPC simulations is investigated. The aerial image parameters for a large number of test structures used for model calibration are first calculated. A novel sorting and data-indexing algorithm is then applied to classify the computed data into fast-access look-up tables. These tables are loaded at the beginning of a new OPC simulation, where they are used as a reference for comparing the aerial image parameters calculated for new design fragments. This new approach enables real-time classification of design fragments based on how well they are covered by the VT5 model. Employing this method avoids catastrophic misses in the correction phase and allows for a robust approach to MBOPC.
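A sketch of the sorting/indexing idea: calibration fragments are binned by quantized aerial-image parameters once, up front, so that checking coverage of a new design fragment reduces to a constant-time table probe during OPC. The bin widths and the adjacent-cell tolerance below are illustrative assumptions.

```python
# Illustrative look-up-table coverage probe in 4D aerial-image space.
import itertools

BIN = {"imax": 0.05, "imin": 0.05, "slope": 0.1, "curvature": 0.1}
PARAMS = ("imax", "imin", "slope", "curvature")

def cell(ips):
    """Quantize an image-parameter dict to a table key."""
    return tuple(round(ips[p] / BIN[p]) for p in PARAMS)

def build_table(calib_ips_list):
    """One-time indexing pass over the calibration data."""
    return {cell(ips) for ips in calib_ips_list}

def covered(table, ips):
    """Constant-time probe: True if a calibration sample sits in the same
    cell or an adjacent one in the 4D parameter space."""
    k = cell(ips)
    for d in itertools.product((-1, 0, 1), repeat=len(k)):
        if tuple(k[i] + d[i] for i in range(len(k))) in table:
            return True
    return False
```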
Device extraction, and the quality of device extraction, is of increasing concern in the integrated circuit design flow. As circuits become more complicated, with concomitant reductions in geometry, the design engineer faces an ever-growing demand for accurate device extraction. For technology nodes of 65nm and below, approximating devices by the geometry drawn in the design layout polygons might not be sufficient to describe their actual electrical behavior; contours from lithographic simulations therefore need to be considered for more accurate results.
Process window variations have a considerable effect on the shape of the device wafer contour; even with an accurate method to extract device parameters from wafer contours, one still needs to know which lithographic condition to simulate. Many questions arise here: are contours representing the best lithography conditions enough? Is there a need to also consider process variations? How do we include them in the extraction algorithm?
In this paper we first present the method of extracting devices from layout coupled with lithographic simulations. Afterwards, a complete flow for circuit timing/power analysis using lithographic contours is described. Comparisons between timing results from the conventional LVS method and the litho-aware method are made to show the importance of considering litho contours.
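A sketch of contour-based device extraction is given below, under the common slicing assumption: the transistor is approximated as parallel slices of the simulated gate contour over active, each slice contributing width dW at its local channel length L(x), combined as parallel devices. The slice width and the drive ~ sum(dW/L) combination rule are illustrative simplifications, not the paper's exact algorithm.

```python
# Illustrative effective W/L extraction from a litho gate contour.

def effective_length(gate_lengths_nm, slice_width_nm=5.0):
    """gate_lengths_nm: local channel length sampled along the width at
    `slice_width_nm` steps, taken from the litho contour instead of the
    drawn rectangle. Returns (W_total, L_effective) for a parallel
    combination where drive ~ sum(dW / L)."""
    w_total = slice_width_nm * len(gate_lengths_nm)
    conductance = sum(slice_width_nm / l for l in gate_lengths_nm)
    return w_total, w_total / conductance

# Example: a contour that necks from 45 nm down to 40 nm in the middle
print(effective_length([45, 44, 42, 40, 42, 44, 45]))
```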
Conventional OPC, also known as site-based OPC, has relied on rule-based fragmentation and site placement since its inception. The issues that arose in earlier generations around imprecise site and fragmentation placement, relative to the exact location of proximity effects, have been illustrated in earlier works but generally did not produce catastrophic results. However, when coupled with the large process biases, strong RET, and accuracy requirements of the 45 nm and future nodes, this imprecision can produce catastrophic results. This work reports on efforts to use model-directed site and fragmentation placement, as well as the inclusion of process window knowledge in the site-based OPC flow, to address varied sources of errors, and compares the relative results of the different approaches.
In addition to the conventional site-based OPC, a new breed of tool that avoids sites in favor of fully gridded, or dense, simulation is rapidly maturing. The new approach allows more intelligence to be built into the OPC engine such that fragmentation and error sampling are more automated and thus less error prone. Using the same layout data, we will also present a snapshot of the new tool's results.
Cutting-edge technology node manufacturers are always researching how to increase yield while still making optimal use of silicon wafer area, so that these technologies will appeal more to designers. Many problems arise with such requirements; the most important is the failure of plain geometric layout checks to capture yield-limiting features in designs. If these features are recognized at an early stage of design, a great deal of effort can be saved at the fabrication end. A new trend in verification is to couple geometric checks with lithography simulations in the designer space.
A lithography process has critical parameters that control the quality of its output. Unfortunately, some of these parameters cannot be kept constant during the exposure process, and their variability should be taken into consideration during the lithography simulations; the simulations are therefore performed multiple times with these variables set to the different values they can take during the actual process. This significantly affects the runtime.
In this paper the authors present a methodology to carefully select only the needed values of the varying lithography parameters, values that capture the process variations while improving runtime through reduced simulation. The selected values depend on the desired variation range for each parameter considered in the simulations. The method is implemented as a tool for the qualification of different design techniques.
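One illustrative version of the value-selection idea: instead of a dense uniform sweep, each varying lithography parameter is sampled at a few quantiles of its assumed in-line distribution, so the conditions actually encountered in production are covered with far fewer simulations. The Gaussian assumption and the sigma value below are stand-ins for the method's actual selection criterion.

```python
# Illustrative quantile-based selection of simulation values.
from math import erf, sqrt

def gaussian_quantile(p, mu, sigma, lo=-10.0, hi=10.0):
    """Inverse standard-normal CDF by bisection (no scipy needed)."""
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + erf(mid / sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return mu + sigma * 0.5 * (lo + hi)

def select_values(mu, sigma, n=3):
    """n quantile-spaced values covering the bulk of the distribution."""
    return [gaussian_quantile((i + 0.5) / n, mu, sigma) for i in range(n)]

print(select_values(mu=0.0, sigma=30.0))  # e.g. three focus offsets in nm
```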
As the industry moves toward 45nm technology node and beyond, further reduction of lithographic process window is anticipated. The consequence of this is twofold: first, the manufactured chip will have pattern sizes that are different from the designed pattern sizes and those variations may become more dominated by systematic components as the process windows shrink; second, smaller process windows will lead to yield loss as, at small dimensions, lithographic process windows are often constrained by catastrophic fails such as resist collapse or trench scumming, rather than by gradual pattern size variation. With this notion, Optical Proximity Correction (OPC) for future technology generations must evolve from the current single process point OPC to algorithms that provide an OPC solution optimized for process variability and yield. In this paper, a Process Window OPC (PWOPC) concept is discussed, along with its place in the design-to-manufacturing flow. Use of additional models for process corners, integration of process fails and algorithm optimization for a production-worthy flow are described. Results are presented for 65nm metal levels.
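To picture how process-corner models might enter the correction itself, here is a toy cost sketch, assuming PWOPC drives each fragment by a weighted combination of the nominal-model EPE and the worst EPE over additional corner models. The weights and model names are illustrative assumptions, not the paper's formulation.

```python
# Hypothetical per-fragment process-window cost for PWOPC.

def pw_cost(epe_by_model, w_nominal=1.0, w_corner=0.5):
    """epe_by_model: dict like {'nominal': e0, 'defocus+': e1, ...} (nm).
    Returns the scalar error the OPC iteration tries to drive to zero."""
    corners = [e for name, e in epe_by_model.items() if name != "nominal"]
    worst = max(corners, key=abs) if corners else 0.0
    return w_nominal * epe_by_model["nominal"] + w_corner * worst

print(pw_cost({"nominal": 0.5, "defocus+": 2.0, "defocus-": -1.0}))  # -> 1.5
```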
The OPC treatment of aerial image ripples (local variations in the aerial contour relative to constant target edges) is one of the growing issues in very low-k1 lithography employing hard off-axis illumination. The maxima and minima points in the aerial image, if not optimally treated within existing model-based OPC methodologies, can induce severe necking or bridging in the printed layout. The current fragmentation schemes and the subsequent site simulations are rule-based, and hence not optimized according to the aerial image profile at key points. The authors primarily explore more automated software methods to detect the locations of the ripple peaks, together with a simplified, less costly implementation strategy. We define this as an adaptive site placement methodology based on aerial image ripples. Recently, the phenomenon of aerial image ripples was considered within the analysis of the lithography process for cutting-edge technologies such as chromeless phase-shifting masks and strong off-axis illumination approaches [3,4]. During conventional model-based OPC process development, effort is spent with the mere goal of locating these troublesome points; this leads to longer development cycles, and so far only partial success has been reported in suppressing them (the causality of ripple occurrence has not yet been fully explored). We present here our success in implementing a more flexible model-based OPC solution that dynamically locates these ripples based on the local aerial image profile near the feature edges. This model-based dynamic tracking of ripples cuts time from the OPC code development phase and avoids specifying some rule-based recipes. Our implementation includes classification of the ripple bumps within one edge and the allocation of different weights in the OPC solution. This results in a new strategy of adapting the site locations and OPC shifts of edge fragments to avoid any aggressive correction that might increase the ripples or propagate them to a new location. A more advanced adaptation will be ripple-aware fragmentation as a second control knob, beside the automated site placement.
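A minimal sketch of locating ripple peaks is shown below, assuming the aerial-image intensity has been sampled along a constant-target edge: interior local maxima and minima of the profile mark candidate bridging/necking points where extra sites or fragments should be placed. The sign-change test is one straightforward realization, not the paper's detection algorithm.

```python
# Illustrative ripple-peak detection along an edge intensity profile.
import numpy as np

def ripple_peaks(profile):
    """profile: 1D intensity samples along the edge. Returns indices of
    local maxima and minima (excluding the endpoints)."""
    d = np.diff(profile)
    sign_change = np.sign(d[1:]) * np.sign(d[:-1]) < 0
    idx = np.where(sign_change)[0] + 1
    maxima = [i for i in idx if profile[i] > profile[i - 1]]
    minima = [i for i in idx if profile[i] < profile[i - 1]]
    return maxima, minima

x = np.linspace(0, 4 * np.pi, 80)
print(ripple_peaks(0.3 + 0.02 * np.sin(x)))  # toy rippled edge profile
```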
In recent years, design for manufacturability (DfM) has become an important focus item of the semiconductor industry and many new DfM applications have arisen. Most of these applications rely heavily on the ability to model process sensitivity and here we explore the role of through-process modeling on DfM applications. Several different DfM applications are examined and their lithography model requirements analyzed. The complexities of creating through-process models are then explored and methods to ensure their accuracy presented.
At the 65 nm node and beyond, printing the dense and isolated pitches, as well as the 2D patterns, within tight tolerance across the full range of known process conditions becomes a major challenge, and it is even more critical in the context of double-exposure masks. Post-OPC simulation at nominal conditions is not sufficient to accurately assess and disposition severe errors and to monitor residual proximity effects and their implications, such as channel length variation.
In this paper, we explore a methodology that adopts multiple simulations to model the variability in the lithography process. This approach predicts the process behavior by modulating the related lithography parameters, such as dose, focus, and overlay. The goal is to identify unacceptable deviations of the printed image from the designed target due to process variations. The method also provides a better statistical evaluation of the quality and robustness of the implemented Resolution Enhancement Techniques (RET) and Design for Manufacturability (DfM) solution.
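The multi-simulation check can be sketched as follows, assuming printed-edge positions are simulated at each (dose, focus, overlay) modulation, the deviation from target is taken as the worst case over all conditions, and sites exceeding a tolerance are flagged. The parameter values, tolerance, and `simulate_epe` callback are illustrative assumptions.

```python
# Illustrative worst-case deviation check across modulated conditions.
import itertools

def worst_deviation(simulate_epe, site,
                    doses=(0.97, 1.0, 1.03),
                    foci=(-50, 0, 50),
                    overlays=(-6, 0, 6)):
    """`simulate_epe(site, d, f, o)` is a stand-in returning the signed EPE
    (nm) at one measurement site for one process condition."""
    return max(abs(simulate_epe(site, d, f, o))
               for d, f, o in itertools.product(doses, foci, overlays))

def flag_sites(simulate_epe, sites, tol_nm=4.0):
    """Sites whose printed image deviates beyond tolerance at any corner."""
    return [s for s in sites if worst_deviation(simulate_epe, s) > tol_nm]
```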