Traditional segment-based, model-based OPC methods have been the mainstream mask layout optimization techniques in volume production for memory and embedded memory devices for many device generations. These techniques have been continually optimized over time to meet the ever-increasing difficulties of memory and memory-periphery patterning. Patterning embedded memories successfully presents a range of difficult issues. These include the need for a very high level of symmetry and consistency (both within memory cells themselves and between cells) due to circuit effects such as noise-margin requirements in SRAMs. Memory cells and access structures consume a large percentage of area in embedded devices, so there is a very high return from shrinking the cell area as much as possible. This aggressive scaling leads to very difficult resolution, 2D CD control, and process window requirements. Additionally, the range of interactions between mask synthesis corrections of neighboring areas can extend well beyond the size of the memory cell, making it difficult to fully take advantage of the inherent designed cell hierarchy in mask pattern optimization. This is especially true for non-traditional (i.e., less dependent on geometric rules) OPC/RET methods such as inverse lithography techniques (ILT), which inherently involve more model-based decisions in their optimizations. New inverse methods such as model-based SRAF placement and ILT are, however, well known to have considerable benefits in finding flexible mask pattern solutions that improve process window, 2D CD control, and resolution in ultra-dense memory patterns. They are also known to reduce recipe complexity and provide natively MRC-compliant mask pattern solutions. Unfortunately, ILT is also known to be several times slower than traditional OPC methods due to the increased computational lithographic optimization it performs.
In this paper, we describe and present results for a methodology to greatly improve the ability of ILT to optimize advanced embedded memory designs while retaining significant hierarchy and cell design symmetry, thereby achieving good turnaround time and CD uniformity. This paper explains the enhancements that have been developed to overcome the traditional difficulties listed above. These enhancements fall into the categories of local CD control, global chip processing options, process window benefit, turnaround time, and hierarchy retention.
As the industry pushes to ever more complex illumination schemes to increase resolution for next-generation memory
and logic circuits, subresolution assist feature (SRAF) placement requirements become increasingly severe. Therefore,
device manufacturers are evaluating improvements in SRAF placement algorithms which do not sacrifice main feature
(MF) patterning capability. AF placement algorithms can be categorized broadly as either rule-based (RB) or model-based
(MB). However, combining these different algorithms into new integrated solutions may enable a more optimal overall result.
RBAF is the baseline AF placement method for many previous technology nodes. Although RBAF algorithm
complexity limits its use with very extreme illumination, RBAF is still a powerful option in certain scenarios. One
example is repeating patterns in memory arrays. RBAF algorithms can be finely optimized and verified
experimentally without building complex models. RBAF also guarantees AF placement consistency based only
on the very local geometric environment, which is important in applications where consistent signal propagation is critical.
MBAF algorithms deliver the ability to reliably place assist features for enhanced process window control across a wide
variety of layout feature configurations and aggressive illumination sources. These methods optimize sophisticated AF
placement to improve main feature PW but without performing full main feature OPC. The flexibility of MBAF allows
for efficient investigations of future technology nodes as the number of interactions between local layout features
increases beyond what RBAF algorithms can effectively support.
A hybrid approach, combining features of both RBAF and MBAF methods, can be a good alternative for SRAF
generation and placement. Combining the two kinds of SRAF placement options can yield an improved process window
compared to either approach alone, since the two methods supplement each other with complementary advantages.
In this paper we evaluate the impact of SRAF configuration on pattern profile and CD margin window, and the
manufacturing applications of MBAF and hybrid-approach algorithms compared to the current OPC without AF. In
conclusion, we suggest a methodology for setting up the optimum SRAF configuration using these AF methods with
regard to pattern profile and CD margin.
This work presents software tools that enable engineers to make relevant SEM measurement decisions in the EDA
environment, presented in the optimal context for the engineer, and pass them seamlessly into the SEM environment. We
present the tools and interfaces leveraged in this solution and explore the benefits of enabling OPC modeling engineers
to make metrology-related decisions within the OPC environment. New opportunities for automation of metrology-related
OPC tasks are also discussed.
Pattern selection for OPC model calibration is frequently done by image parameter space (IPS) coverage methods. These ensure that the images of the chosen test patterns cover important regions of an n-dimensional parameter space spanned by image parameters, such as minimum and maximum intensity I_min, I_max, curvature, slope, and image density. But such a small number of parameters is often insufficient for finding a good set of patterns for the calibration process. We present results for the extended nPS method, which ensures coverage of a high-dimensional parameter space with a high number of parameters, even permitting the use of all pixels of the aerial images (n >> 1000) as parameters.
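As a rough illustration of coverage-driven selection, the sketch below uses a greedy farthest-point heuristic: each new test pattern is the one whose image-parameter vector is farthest from everything already selected, so the chosen set spreads across the parameter space. This is a generic coverage heuristic under stated assumptions, not the specific nPS algorithm; all names are illustrative.

```python
import numpy as np

def select_patterns(features, k, seed=0):
    """Greedily pick k patterns whose image-parameter vectors best
    cover the parameter space (max-min distance selection).
    `features` is an (n_patterns, n_params) array; each row could hold
    a few scalars (I_min, I_max, slope, ...) or thousands of
    aerial-image pixel values."""
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(features)))]      # arbitrary start
    # distance from every pattern to its nearest chosen pattern
    d = np.linalg.norm(features - features[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))                      # least-covered pattern
        chosen.append(nxt)
        d = np.minimum(d, np.linalg.norm(features - features[nxt], axis=1))
    return chosen
```

In practice the feature rows would be standardized first so that no single image parameter dominates the distance metric.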
Proc. SPIE 7638, Metrology, Inspection, and Process Control for Microlithography XXIV
KEYWORDS: Metrology, Diffractive optical elements, Data modeling, Calibration, Scanning electron microscopy, Data visualization, Optical proximity correction, Statistical modeling, Process modeling, Data analysis
Modern OPC modeling relies on substantial volumes of metrology data to meet pattern coverage and precision
requirements. This data must be reviewed and cleaned prior to model calibration to prevent bad data from adversely
affecting calibration. We propose implementing specific tools in the metrology flow to improve metrology engineering
efficiency and resulting data quality. The metrology flow with and without these tools will be discussed, and the inherent tradeoffs will be identified. To demonstrate the benefit of the proposed flow, engineering efficiency and the impact of better data on model calibration will be quantified.
EUV lithography is widely viewed as a leading contender for 16nm-node device patterning.
However, EUV has several complex patterning issues which will need accurate compensation in mask
synthesis development and production steps. The main issues are: high flare levels from optical element
roughness, long range flare scattering distances, large mask topography, non-centered illumination axis
leading to shadowing effects, new resist chemistries to model very accurately, and the need for full reticle
optical proximity correction (OPC). Compensation strategies for these effects must integrate together to
create final user flows which are easy to build and deploy with reasonable time and cost. Accuracy,
usability, speed, and cost are therefore all important for methods that are considerably more complex
than current optical lithography mask synthesis flows.
In this paper we analyze the state of the art in accurate prediction and compensation of several of these
complex EUV patterning issues, and compare that to 16nm node expected production needs. Next we
provide a description of integration issues and solutions which are being implemented for 16nm EUV
process development. This includes descriptions of OPC model calibration with flare, shadowing, and
topography effects. We also propose a realistic (in terms of accuracy and mask area) flare parameter calibration flow to improve short and longer range flare correction accuracy above what can be achieved with only a measured EUV flare PSF.
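Long-range flare is commonly modeled as the convolution of the reflective pattern density with a flare point-spread function. A minimal FFT-based sketch of that computation is shown below; it assumes a periodic field, a centered PSF normalized to its total scatter fraction, and illustrative function names (the actual calibration flow described above involves far more than this convolution).

```python
import numpy as np

def flare_map(density, psf):
    """Estimate the flare intensity map as the convolution of the
    reflective pattern density with a flare PSF, via FFT.
    Circular convolution is assumed, which is adequate away from
    the field edges."""
    D = np.fft.rfft2(density)
    K = np.fft.rfft2(np.fft.ifftshift(psf))  # move PSF peak to origin
    return np.fft.irfft2(D * K, s=density.shape)
```

A measured EUV flare PSF typically follows a long-range power law rather than the short Gaussian used in the test below; the sketch only illustrates the convolution structure of the flare model.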
Now that full-field alpha EUV scanners are available to lithographers at multiple sites around the world,
there is greatly increased demand for full-field EUV circuit and test-structure wafer images. Successful
patterning of these circuit and test-structure wafer images requires mask layout data which accurately
compensates for all expected process transformations occurring in the EUV patterning process. These
process transformations include flare, optical diffraction, resist behavior, mask shadowing, and 3D mask
electromagnetic effects. In this paper, we present a complete full-field EUV mask data correction flow
which incorporates compensation for patterning transformations due to very long range flare, reflective
multi-layer masks, thick mask absorbers, off-axis EUV scanner illumination, field-dependent shadowing
and orientation dependent shadowing. Optimized algorithms for flare and mask effects now enable both
fast and accurate full-chip process effect compensation. Results are shown for both the 22nm and 16nm
logic device nodes. The results are presented by error component category to highlight the relative
importance of each effect.
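The orientation- and field-dependent shadowing mentioned above is often approximated, to first order, as a geometric bias proportional to the absorber height and the tangent of the chief-ray angle, projected onto the edge normal. The sketch below implements only that simplified geometric picture; the function name and the model itself are illustrative assumptions, not the rigorous electromagnetic treatment used in the correction flow.

```python
import math

def shadow_bias_nm(absorber_height_nm, chief_ray_deg, azimuth_deg):
    """First-order geometric shadowing bias: the absorber sidewall
    casts a shadow of length h*tan(theta), projected onto the edge
    normal. `azimuth_deg` is the angle between the incidence plane and
    the edge normal (0 => fully shadowed edge, 90 => no shadowing).
    Simplified illustrative model only."""
    return (absorber_height_nm
            * math.tan(math.radians(chief_ray_deg))
            * abs(math.cos(math.radians(azimuth_deg))))
```

This captures why horizontal and vertical lines on an EUV mask need different biases: edges parallel to the incidence plane see essentially no shadow, while perpendicular edges see the full h·tan(θ) bias.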
A precise lithographic model has always been a critical component of Optical Proximity Correction
(OPC) since the technique was introduced a decade ago. As semiconductor manufacturing moves to the 32nm and 22nm technology
nodes with 193nm wafer immersion lithography, there is unprecedented demand for more accurate models to predict
complex imaging phenomena at high numerical aperture (NA) with aggressive illumination conditions necessary for
these nodes. An OPC model may comprise all the physical processing components from mask e-beam writing steps to
final CDSEM measurement of the feature dimensions. In order to provide a precise model, it is desired that every
component involved in the processing physics be accurately modeled using minimal metrology data. In recent years,
much attention has been paid to studying mask 3-D effects, mask writing limitations, laser spectrum profile, lens pupil
polarization/apodization, source shape characterization, stage vibration, and so on. However, relatively few studies
have been devoted to modeling the development of the resist film, though it is an essential processing step that
cannot be neglected. Instead, threshold models are commonly used to approximate resist development behavior. While
resist models capable of simulating the development path are widely used in many commercial lithography simulators,
this component is absent from current OPC modeling because direct adoption of those development models into
OPC modeling compromises its full-chip simulation capability. In this work, we have successfully incorporated a
photoresist development model into production OPC modeling software without sacrificing its full chip capability. The
resist film development behavior is simulated in the model to incorporate observed complex resist phenomena such as
surface inhibition, developer mass transport, HMDS poisoning, development contrast, etc. The necessary parameters are
calibrated using metrology data in the same way that current model calibration is done. The method is validated with a
rigorous lithography process simulation tool which is based on physical models to simulate and predict effects during the
resist PEB and development process. Furthermore, an experimental lithographic process was modeled using this new
methodology, showing significant improvement in modeling accuracy in comparison to a traditional model. Layout
correction tests have shown that the new model form is equivalent to traditional model forms in terms of correction
convergence and speed.
We evaluate the relationship between the number of measurements used to create each data point in an OPC model data
set and the resulting model quality for target 32-nm logic node applications. Generated data sets will range from single-measurement,
unfiltered data sets to many-measurement averages based on filtered results. Intermediate measurement-count
averages will also be evaluated in an attempt to quantify the tradeoff between raw measurements per data point
and resulting model quality. Finally, other variations will also be considered, such as automated versus manual data
filtering. The auto-fitted OPC models will be compared to identify metrology recommendations for the 32-nm logic node.
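The measurement-count tradeoff follows directly from averaging statistics: under the usual assumption of independent, identically distributed measurement noise, the standard error of an N-measurement average is sigma/sqrt(N). A minimal sketch of the resulting sizing rule (the function name and precision targets are illustrative):

```python
import math

def required_measurements(sem_sigma_nm, target_sigma_nm):
    """Number of repeat CD-SEM measurements N needed so that the
    standard error of the averaged CD, sigma/sqrt(N), meets a target
    precision. Assumes i.i.d. measurement noise."""
    return math.ceil((sem_sigma_nm / target_sigma_nm) ** 2)
```

For example, halving the effective noise of a single measurement requires four repeats, and reducing it threefold requires nine, which is why many-measurement averaging quickly becomes expensive in raw metrology throughput.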
OPC model calibration requires thousands of experimental data points, which are then used to calibrate an OPC model. Today, the majority of these steps are performed manually. Metrology, for example, involves taking the CD-SEM out of production while an operator programs it; considerable time savings are possible by instead writing the CD-SEM recipe offline. Experimental data preparation is also often performed manually. Manual review of thousands of data points is a tedious task prone to human error. Here again, automation can greatly reduce engineering effort, shorten cycle time, and improve data quality. Data quality improvement alone has been shown to have a significant benefit on model calibration accuracy and predictability.
In this paper we present an automated solution for the currently effort-intensive components of the OPC model calibration flow. The flow we present is integrated inside the OPC environment. We suggest best practices identified through the implementation of an automated flow, and discuss the benefits. Our results demonstrate the capability and quantify the benefits which automation brings in reduced human effort, reduced time to an accurate model, and improved model quality.
High NA and Ultra-High NA (NA>1.0) applications for low k1 imaging strongly demand the adoption of polarized
illumination as a resolution enhancement technology since proper illumination polarization configuration can greatly
improve the image contrast, and hence pattern printing fidelity and the effectiveness of optical proximity correction (OPC).
However, current OPC/RET modeling software can only model simple types of light-source polarization, such as TE,
TM, X, Y, or sector polarization with relatively simple configurations. Realistic polarized light used in scanners is more
complex than the aforementioned simple ones. As a result, simulation accuracy and quality of the OPC result will be
compromised by the simplification of the light-source polarization modeling in the traditional approach. With the ever-shrinking
CD error budget in the manufacturing of ICs at advanced technology nodes, more accurate and
comprehensive illumination source modeling for lithography simulations and OPC/RET is needed. On the other hand,
for polarized illumination to be fully effective, ideally all the components in the optical lithography system should not
alter the polarization state of light during its propagation from illuminator to wafer surface. In current OPC modeling
tools, it is typically assumed that the amplitude and polarization state of the light do not change as it passes through the
projection lens pupil, i.e., the polarization aberration of the projection lens pupil is ignored. In reality, however, the
scanner's projection lens pupil does change the amplitude and the polarization state to some extent, and ignoring
projection-pupil-induced polarization-state and amplitude changes will cause CD errors that cannot be tolerated at the 45nm
device generation and beyond.
We developed an OPC-deployable modeling approach to model arbitrarily polarized light source and arbitrarily
polarized projection lens pupil. Based on polarization state vector descriptions of a general illumination source, this
modeling approach unifies optical simulations of unpolarized, partially polarized, and completely polarized
illuminations. The polarization aberration imposed by the projection lens pupil is modeled via Jones matrix format, and
it is applicable to arbitrary polarization aberrations imposed by any components in the lithography system that can be
characterized in Jones matrix format. Numerical experiments were performed to study the CD impact of illumination
polarization and projection lens pupil polarization aberrations; an impact of up to several nanometers on the optical proximity
effect (OPE) was observed, which is not negligible given the extremely stringent CD error budget at the 45nm node and
beyond. Based on an experimentally measured Jones matrix pupil which intrinsically provides a much better
approximation to the physical scanner projection pupil, we propose a more physics-centric methodology to evaluate the
optical model accuracy of OPC simulator.
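The Jones-matrix formalism used above is compact: a polarization state is a two-component complex vector [Ex, Ey], and each optical element (including an aberrated pupil) is a 2x2 complex matrix applied to it. The sketch below shows this mechanically; the identity "ideal pupil" and the weakly diattenuating example matrix are illustrative assumptions, not measured scanner data.

```python
import numpy as np

def transmit(jones, field):
    """Propagate a Jones vector [Ex, Ey] through a 2x2 Jones matrix
    representing an element's polarization behavior."""
    return jones @ field

def intensity(field):
    """Detected intensity |Ex|^2 + |Ey|^2 of a Jones vector."""
    return float(np.vdot(field, field).real)

# An ideal (aberration-free) pupil is the identity matrix; a weakly
# diattenuating pupil transmits Y slightly less than X. Values below
# are illustrative only.
IDEAL = np.eye(2, dtype=complex)
DIATTENUATOR = np.diag([1.0, 0.95]).astype(complex)
```

A unitary Jones matrix preserves intensity for every input state, while any diattenuation or retardance makes the transmitted image depend on the source polarization, which is exactly the pupil effect the abstract argues cannot be ignored at the 45nm node.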
An approach to parameter sensitivity methodology for OPC modeling is enhanced, automated, and applied to generate
production-quality models for a 32-nm logic node poly layer. Two parameter sensitivity models are generated and
compared to a baseline model from the same experimental dataset. The three models are thoroughly investigated to
demonstrate that parameter sensitivity can enhance advanced OPC models with essentially no impact on the time
required for model optimization. Results also indicate that parameter sensitivity, if used improperly, can decrease model quality.
Test pattern data set filtering based on the concept of parameter sensitivity is proposed to reduce OPC time-to-model
requirements. The concept of parameter sensitivity-based filtering is discussed briefly, followed by a methodology to
apply the filtering to test pattern sets prior to data measurement along with a number of potential data filtering
algorithms. The proposed methodology is then applied to an experimental data set targeted for a 32nm logic process.
Qualitative observations are made on the initial data filtering, followed by quantitative metrics based on best-fit models
for each of the data filtering algorithms. Results demonstrate that a comparable model is achievable using the proposed
data filtering methods and one of the filtering algorithms.
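One way to read "parameter sensitivity-based filtering" concretely is: rank each test pattern by how strongly its simulated CD responds to perturbations of the model parameters, and keep only the most informative patterns for measurement. The sketch below uses finite differences for the ranking; the function names and the gradient-norm score are illustrative assumptions, not the paper's specific filtering algorithms.

```python
import numpy as np

def sensitivity_filter(simulate_cd, params, patterns, keep, eps=1e-3):
    """Rank test patterns by the norm of dCD/dparams (estimated by
    finite differences) and keep the `keep` most parameter-sensitive
    ones. `simulate_cd(params, pattern)` is any callable returning a
    scalar CD for that pattern."""
    scores = []
    for pat in patterns:
        base = simulate_cd(params, pat)
        grad = [(simulate_cd(params + eps * e, pat) - base) / eps
                for e in np.eye(len(params))]
        scores.append(np.linalg.norm(grad))
    order = np.argsort(scores)[::-1]          # most sensitive first
    return [patterns[i] for i in order[:keep]]
```

Patterns whose CD barely moves under any parameter perturbation contribute little to constraining the fit, so dropping them before metrology is the source of the time-to-model savings.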
Delays in equipment availability for both Extreme UV and High index immersion have led to a growing
interest in double patterning as a suitable solution for the 22nm logic node. Double patterning involves
decomposing a layout into two masking layers that are printed and etched separately so as to provide the
intrinsic manufacturability of a previous lithography node with the pitch reduction of a more aggressive
node. Most 2D designs cannot be blindly shrunk to run automatically on a double patterning process, so
designers need a set of guidelines for how to lay out for this type of flow. While certain classes of
layout can be clearly identified and avoided based on short-range interactions, compliance issues can also
extend over large areas of the design and are hard to recognize. This means certain design practices should
be implemented to provide suitable breaks, or layout should be performed with tools that are
double-patterning-compliance aware. The most striking set of compliance errors results in layout on one of the masks that is at
the minimum design space rather than the relaxed space intended. Another equally important class of
compliance errors is that related to marginal printability, be it poor wafer overlap and/or poor process
window (depth of focus, dose latitude, MEEF, overlay). When decomposing a layout the tool is often
presented with multiple options for where to cut the design thereby defining an area of overlap between the
different printed layers. While these overlap areas can have markedly different topologies (for instance the
overlap may occur on a straight edge or at a right angled corner), quantifying the quality of a given overlap
ensures that more robust decomposition solutions can be chosen over less robust solutions. Layouts which
cannot be decomposed or which can only be decomposed with poor manufacturability need to be
highlighted to the designer, ideally with indications on how best to resolve this issue. This paper uses an
internally developed automated double pattern decomposition tool to investigate design compliance and
describes a number of classes of non-conforming layout. Tool results then help the designer
achieve robust, design-compliant layouts.
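At its core, the decomposition step can be viewed as 2-coloring a conflict graph: features closer than the single-mask pitch are connected by an edge and must land on different masks, and an odd cycle in that graph is exactly a layout that cannot be decomposed. A minimal BFS sketch under that graph abstraction (ignoring cut/stitch insertion and all the overlap-quality scoring discussed above):

```python
from collections import deque

def decompose(n, conflicts):
    """Assign each of n layout features to mask 0 or 1 such that no
    two conflicting features (edges in `conflicts`) share a mask:
    BFS 2-coloring of the conflict graph. Returns the mask assignment,
    or None if an odd cycle makes the layout non-decomposable."""
    adj = [[] for _ in range(n)]
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)
    color = [None] * n
    for start in range(n):
        if color[start] is not None:
            continue
        color[start] = 0
        q = deque([start])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    q.append(v)
                elif color[v] == color[u]:
                    return None   # odd cycle: compliance violation
    return color
```

The None case corresponds to the compliance errors described above: three mutually close features form a triangle in the conflict graph, so at least one pair is forced onto the same mask at sub-design-rule spacing unless the designer breaks the cycle.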
An alternative method of OPC model fitting based on model parameter sensitivity is presented. Theoretical advantages
are discussed, including improved model quality and time to results. The parameter sensitivity method is applied using a
basic optical model to 32nm logic node experimental data. Results include standard and parameter sensitivity model fits
using both constant and variable threshold models. The results show that the parameter sensitivity methodology enables
an overall model fit that is more physically-predictive than a standard OPC model fit.
An important outcome of the 90nm and 65nm device generations was the realization that new methods for predicting and controlling patterning were required to ensure successful transfer of all design-rule-compliant features through the required process window. This realization led to a strong increase in the use of CD-based and process window aware post-optical proximity correction (OPC) verification in production mask tapeouts. Accurate post-OPC verification is a necessity, but many patterning issues could have been detected and removed earlier in the product development lifecycle. Of course, the 45nm and 32nm device generations are only expected to further strain the ability of device manufacturers to predict process control requirements, robust patterning design rules, and first-time-right reticle enhancement technology (RET) recipes. Therefore, improvements to the traditional process, OPC, and design rule prediction/evaluation steps are needed.
In this paper we propose a patterning and CD control prediction methodology which incorporates not only the traditional dose, defocus and mask variation parameters but also implements RET parameter variations such as layout edge discretization, model inaccuracy, metrology error and assist feature placement. This methodology allows a more accurate prediction of process control requirements, worst case CD control layout geometries and RET subsystem accuracy/control requirements. Lithography engineers have long operated with specific (if not always fully understood) dose and focus control success requirements. To efficiently determine real worst design situations, we utilize a new methodology for quickly verifying the RET-ability of a lithography process + design rule set + OPC correction recipe based on coupling iterative layout generation with OPC testing. Our aim in this paper is to provide additional engineering rigor to the traditional experience-based OPC success requirements by looking at the total Litho + RET + metrology patterning problem and analyzing the individual component control needs.