Sub-resolution assist feature (SRAF) insertion techniques have long been used effectively to increase process latitude in the lithography patterning process. Rule-based SRAF and model-based SRAF are complementary solutions, and each has its own benefits, depending on the application objectives and the criticality of the impact on manufacturing yield, efficiency, and productivity. Rule-based SRAF provides superior geometric output consistency and faster runtime performance, but the associated recipe development time can be a concern. Model-based SRAF provides better coverage for more complicated pattern structures in terms of shapes and sizes, with considerably less time required for recipe development, although consistency and performance may be impacted. In this paper, we introduce a new model-assisted template extraction (MATE) SRAF solution, which employs decision tree learning in a model-based solution to provide the benefits of both rule-based and model-based SRAF insertion approaches. The MATE solution is designed to automate the creation of rules/templates for SRAF insertion, based on the SRAF placement predicted by model-based solutions. The MATE SRAF recipe achieves optimum lithographic quality across various manufacturing aspects in a fraction of the time required by traditional rule optimization. Experiments were performed on memory device pattern layouts to compare the MATE solution to existing model-based SRAF and pixelated SRAF approaches in terms of lithographic process window quality, runtime performance, and geometric output consistency.
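The decision-tree idea behind template extraction can be illustrated with a minimal sketch: geometric context features observed around an edge serve as inputs, and the model-based SRAF placement serves as the training label, from which a threshold rule is learned. The feature, data values, and single-split ("stump") depth here are illustrative assumptions, not the MATE implementation.

```python
# Sketch: extract a rule-based SRAF insertion threshold from model-based
# placements by learning a depth-1 decision tree (a "stump").
# The feature (space to neighbor) and the sample data are illustrative.

def gini(labels):
    """Gini impurity of a list of 0/1 labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2.0 * p * (1.0 - p)

def learn_threshold(samples):
    """samples: list of (space_nm, inserted) pairs taken from a
    model-based SRAF run. Returns the split threshold t minimizing the
    weighted Gini impurity, i.e. the rule 'insert SRAF if space >= t'."""
    best_t, best_score = None, float("inf")
    for t in sorted({s for s, _ in samples}):
        left = [y for s, y in samples if s < t]
        right = [y for s, y in samples if s >= t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(samples)
        if score < best_score:
            best_t, best_score = t, score
    return best_t

# Model-based output: SRAFs appear once the space to the neighbor is wide enough.
data = [(80, 0), (90, 0), (100, 0), (120, 1), (140, 1), (200, 1)]
print(learn_threshold(data))   # 120
```

The extracted threshold then becomes a fast, deterministic rule that reproduces the model-based behavior with rule-based consistency and runtime.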
Traditionally, optical proximity correction (OPC) on cell array patterns in memory layouts uses simple bias rules to correct hierarchically placed features, but requires intensive, rigorous lithographic simulations to maximize the wafer process latitude. This process requires time-consuming procedures to be performed on the full chip (excluding the cell arrays) to handle unique cell features and layout placements before (and sometimes even after) OPC. The time required limits productivity for both mask tapeouts and wafer process development. In this paper, a new cell array OPC flow is introduced that reduces turnaround time for mask tapeouts from days to hours, while maintaining acceptable OPC quality and the perfect geometric consistency in the OPC output that is critical for memory manufacturing. The flow comprises effective sub-resolution assist feature (SRAF) insertion and OPC for both the cell array and the peripheral pattern areas. Both simulation and experimental results from actual wafer verification are discussed.
As technology advances to the 45 nm node and below, the induced effects of the etch process contribute an increasing share of
the device critical dimension error budget. Traditionally, original design target shapes are drawn based on the etch target.
During mask correction, etch modeling is essential to predict the new resist target that will print on the wafer. This step
is known as "Model Based Retargeting" (MBR). During the initial phase of process characterization, the sub-resolution
assist features (SRAFs) are optimized based either on the original design target shapes or on a biased version of
the design target (the resist target). The goal of this work is to study the different possibilities of SRAF placement to
maximize the accuracy and process window immunity of the final resist contour image. We statistically analyze
and compare process window simulation results for various SRAF placements by changing the reference layer used.
As the IC Industry moves towards 32nm technology node and below, it becomes important to study the impact of
process window variations on yield. Process variability bands (PVBands) are a technique for expressing the effects of
process parameter variations such as dose, focus, and mask size. However, PVBand width and area ratio alone are
insufficient as quantitative measures of PVBand performance, as they do not take into consideration how far
the contours are from the target.
In this paper, a novel mathematical formulation is developed to better judge the PVBands performance. It expresses the
PVBand width and symmetry with respect to the target through a single score. This score can be used in OPC (Optical
Proximity Correction) iterations instead of working with the nominal EPE (Edge Placement Error). Not only does this
approach provide a better measure of the PVBands performance through the value of the score, but it also presents a
straightforward method for PWOPC optimization by using the PV Score directly in the iterations.
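One way such a single score could be formulated (an illustrative formulation under our own assumptions, not necessarily the paper's exact definition): combine the band width with the band's asymmetry about the target edge, so that a wide band and an off-center band are both penalized.

```python
# Illustrative PV score at a single measurement site. epe_min/epe_max
# are the signed edge placement errors (nm) of the innermost and
# outermost PVBand contours relative to the design target edge.
# Weights and functional form are assumptions for illustration.

def pv_score(epe_min, epe_max, w_width=1.0, w_sym=1.0):
    width = epe_max - epe_min            # PVBand width at this site
    asymmetry = abs(epe_max + epe_min)   # 0 when the band is centered on target
    return w_width * width + w_sym * asymmetry

# A band centered on the target scores lower (better) than an equally
# wide band that is shifted off-target:
print(pv_score(-2.0, 2.0))   # 4.0
print(pv_score(0.0, 4.0))    # 8.0
```

A score of this shape could drive PWOPC iterations directly, replacing the nominal EPE as the quantity the correction loop minimizes.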
Double patterning (DP) technology is one of the main candidates for RET of critical layers at 32nm hp. DP technology is
a strong RET technique that must be considered throughout the IC design and post tapeout flows. We present a complete
DP technology strategy including a DRC/DFM component, physical synthesis support and mask synthesis.
In particular, the methodology contains:
- DRC-like layout DP compliance and design verification functions;
- A parameterization scheme that codifies manufacturing knowledge and capability;
- Judicious use of physical effect simulation to improve double-patterning quality;
- An efficient, high capacity mask synthesis function for post-tapeout processing;
- A verification function to determine the correctness and quality of a DP solution.
Double patterning technology requires decomposition of the design to relax the pitch, effectively allowing processing
with k1 factors smaller than the theoretical Rayleigh limit of 0.25. The traditional DP process, Litho-Etch-Litho-Etch
(LELE), requires an additional develop and etch step, which eliminates the resolution degradation that occurs when
multiple exposures are processed in the same resist layer. The theoretical k1 for a double-patterning technology applied to a
32nm half-pitch design using a 1.35NA 193nm imaging system is 0.44, whereas the k1 for single patterning of this
same design would be 0.22, which is sub-resolution.
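These k1 values follow directly from the Rayleigh criterion, k1 = half-pitch × NA / λ; decomposition doubles the effective half-pitch on each mask:

```python
# Rayleigh criterion: k1 = half_pitch * NA / wavelength.
def k1(half_pitch_nm, na, wavelength_nm=193.0):
    return half_pitch_nm * na / wavelength_nm

# Single patterning of a 32 nm half-pitch design at NA = 1.35:
print(round(k1(32, 1.35), 2))   # 0.22  (below the 0.25 resolution limit)

# Double patterning relaxes each mask to a 64 nm half-pitch:
print(round(k1(64, 1.35), 2))   # 0.45  (~0.44 in the text, depending on rounding)
```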
This paper demonstrates the methods developed at Mentor Graphics for double patterning design compliance and
decomposition in an effort to minimize the impact of mask-to-mask registration and process variance. It also
demonstrates verification solution implementation in the chip design flow and post-tapeout flow.
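The decomposition step described above can be illustrated as two-coloring of a conflict graph, where nodes are polygons and edges connect polygons spaced below the single-exposure pitch limit. This is a simplified sketch of the general technique, not the Mentor Graphics implementation:

```python
# Sketch of DP decomposition as graph 2-coloring: polygons closer than
# the minimum single-exposure spacing must land on different masks.
# Returns a mask assignment, or None when an odd cycle makes the
# layout non-decomposable (a DP compliance violation).
from collections import deque

def decompose(num_polygons, conflicts):
    """conflicts: list of (i, j) pairs of polygons that are too close."""
    adj = [[] for _ in range(num_polygons)]
    for i, j in conflicts:
        adj[i].append(j)
        adj[j].append(i)
    color = [None] * num_polygons
    for start in range(num_polygons):
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]   # opposite mask
                    queue.append(v)
                elif color[v] == color[u]:
                    return None               # odd cycle: not DP-compliant
    return color

# Three lines at sub-pitch spacing alternate masks; a triangle of
# mutual conflicts cannot be split onto two masks.
print(decompose(3, [(0, 1), (1, 2)]))           # [0, 1, 0]
print(decompose(3, [(0, 1), (1, 2), (0, 2)]))   # None
```

The None case corresponds to the DRC-like compliance check in the methodology: such conflicts must be reported back to the designer rather than silently resolved.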
Model based optical proximity correction (MB-OPC) is essential for the production of advanced integrated circuits
(ICs). As the speed and functionality requirements of IC production necessitate continual reduction of the critical
dimension (CD), there is a heightened demand for more accurate and sophisticated OPC models.
OPC is applied to the design data through a rule deck. The parameters in this rule deck, which we will call
"setup parameters", describe the fundamental way in which the OPC engine distinguishes which edges to move,
their restrictions to movement, and how the targets for the OPC are chosen. Optimizing these setup parameters
to customize how the OPC engine treats specific designs is an essential step in maximizing the benefit of the
OPC model. Improper or deficient selection of the setup parameters strongly affects whether the OPC model and
engine achieve the desired design shapes.
In this paper, we investigate the ability of setup parameter optimization to compensate for a weak OPC model and,
conversely, how inadequately selected setup parameters can cause a very good OPC model to function poorly. Our
approach is to use two OPC models: a good OPC model and a weak OPC model. The setup parameters will be
optimized for the weak OPC model to investigate any improvements in the overall OPC performance. Alternatively,
setup parameters chosen poorly will be used with the good OPC model to see how this will adversely affect the OPC
performance. A comparative study will be carried out in order to fully understand the effect of setup file parameters
on the overall OPC performance.
The general goal of this study is to help OPC modelers and setup parameter optimizers improve the quality
and performance of the OPC solution and weigh the tradeoffs associated with different OPC solution choices.
One of the challenges associated with shrinking design dimensions is finding photomask inspection settings which
achieve sufficient defect detection capabilities while supporting aggressive Optical Proximity Correction (OPC). The
most recent technology nodes require very aggressive and advanced Resolution Enhancement Techniques (RETs) which
involve printing small features that are challenging for mask inspection tools. We examine the problems associated with
constraining Model-Based OPC with mask inspection driven rules. We give examples of a 45nm technology node
contact layer design which will receive sub-optimal OPC treatment due to mask inspection constraints. We then take the
mask defect specification typically used for this mask layer, and use Monte Carlo simulation methods to place minimum
sized simulated defects in various locations in close proximity to these sensitive layouts. Simulations of the optimal OPC
are compared to optimal OPC with defects, and to the sub-optimal constrained OPC. Using knowledge about the
frequency of small defects on masks, one can compare the risks associated with small mask defects to the risks
associated with sub-optimal OPC. This exercise demonstrates that there are some instances where mask rules based on
inspection capabilities and defect sensitivity alone can be problematic, and that OPC requirements need to be taken into
account when choosing a defect specification and an inspection strategy. We conclude by proposing a strategy for
balancing these requirements in a practical manner.
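The Monte Carlo placement step above can be sketched as uniform random sampling of defect centers within a halo around a critical feature. The geometry, halo size, and defect size below are illustrative assumptions, not the paper's actual defect specification:

```python
# Sketch: randomly place minimum-sized square defects in a halo
# around a contact (all dimensions in nm; values are illustrative).
import random

def place_defects(n_trials, contact=(0, 0, 60, 60), halo=40,
                  defect_size=20, seed=0):
    """Sample n_trials defect centers uniformly within the halo region
    around the contact bounding box (x0, y0, x1, y1)."""
    rng = random.Random(seed)  # fixed seed for a reproducible experiment
    x0, y0, x1, y1 = contact
    placements = []
    for _ in range(n_trials):
        x = rng.uniform(x0 - halo, x1 + halo)
        y = rng.uniform(y0 - halo, y1 + halo)
        placements.append((x, y, defect_size))
    return placements

samples = place_defects(1000)
print(len(samples))   # 1000
```

Each sampled placement would then be merged into the mask layout and run through lithography simulation to assess its printability impact, building up the statistics the risk comparison relies on.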
Performing model based optical proximity correction (MB-OPC) is an essential step in the production of advanced integrated circuits manufactured with optical lithography technology. The accuracy of these models highly depends on the experimental data used in the model development and on the appropriate selection of the model parameters. The optical and resist model parameters selected during model build have a significant impact on the OPC model accuracy, run time, and stability. In order to avoid excessively high run times as well as ensure acceptable results, a compromise must be made between OPC run time and model accuracy. The modeling engineer has to optimize the necessary model parameters in order to find a good trade-off that achieves acceptable accuracy with reasonable run time. In this paper, we investigate the effect of some selected optical and resist model parameters on the OPC model accuracy, run time, and stability.
Current state-of-the-art OPC (optical proximity correction) for 2-dimensional features consists of optimized
fragmentation followed by site simulation and subsequent iterations to adjust fragment locations and
minimize edge placement error (EPE). Internal and external constraints have historically been available in
production quality code to limit the movement of certain fragments, and this provides additional control for
OPC. Values for these constraints are left to engineering judgment, and can be based on lithography
process limitations, mask house process limitations, or mask house inspection limitations. Oftentimes,
mask house inspection limitations are used to define these constraints. However, these inspection
restrictions are generally more complex than the 2 degrees of freedom provided in existing standard OPC
software. Ideally, the most accurate and robust OPC software would match the movement constraints to
the defect inspection requirements, as this prevents over-constraining the OPC solution.
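The two degrees of freedom described above amount to clamping each proposed fragment displacement between its internal and external limits. This is a schematic sketch of that mechanism; real OPC engines expose richer, context-dependent controls:

```python
# Sketch: clamp a proposed OPC fragment shift to its movement
# constraints. Positive shift = outward (external), negative = inward
# (internal); the limits might be derived from mask inspection rules.

def clamp_shift(proposed, internal_limit, external_limit):
    """Restrict a fragment displacement to [-internal_limit, external_limit]."""
    return max(-internal_limit, min(external_limit, proposed))

print(clamp_shift(12.0, internal_limit=8.0, external_limit=10.0))   # 10.0
print(clamp_shift(-9.5, internal_limit=8.0, external_limit=10.0))   # -8.0
```

Matching these limits to the actual inspection requirements, rather than to a single worst-case pair of values, is what prevents over-constraining the OPC solution.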
This work demonstrates significantly improved 2-D OPC correction results based on matching movement
constraints to inspection limitations. Improvements are demonstrated on a created array of 2D designs as
well as critical level chip designs used in 45nm technology. Enhancements to OPC efficacy are proven for
several types of features. Improvements in overall EPE (edge placement error) are demonstrated for
several different types of structures, including mushroom type landing pads, iso crosses, and H-bar
structures. Reductions in corner rounding are evident for several 2-dimensional structures, and are shown
with dense print image simulations. Dense arrays (SRAM) processed with the new constraints receive
better overall corrections and convergence. Furthermore, OPC and ORC (optical rules checking)
simulations on full chip test sites with the advanced constraints have resulted in tighter EPE distributions,
and overall improved printing to target.
The OPC treatment of aerial image ripples (local variations in the aerial contour relative to constant target edges) is one of the growing issues with very low-k1 lithography employing hard off-axis illumination. The maxima and minima points in the aerial image, if not optimally treated within existing model-based OPC methodologies, can induce severe necking or bridging in the printed layout. Current fragmentation schemes and the subsequent site simulations are rule-based, and hence not optimized according to the aerial image profile at key points. The authors are primarily exploring more automated software methods to detect the locations of the ripple peaks, together with a simpler, less costly implementation strategy. We define this to be an adaptive site placement methodology based on aerial image ripples. Recently, the phenomenon of aerial image ripples was considered within the analysis of the lithography process for cutting-edge technologies such as chromeless phase-shifting masks and strong off-axis illumination approaches [3,4]. In conventional model-based OPC, considerable effort is spent during process development with the sole goal of locating these troublesome points. This leads to longer development cycles, and so far only partial success in suppressing them has been reported (the causality of ripple occurrence has not yet been fully explored). We present here our success in implementing a more flexible model-based OPC solution that dynamically locates these ripples based on the local aerial image profile near the feature edges. This model-based dynamic tracking of ripples reduces time spent in the OPC recipe development phase and avoids the need to specify rule-based recipes. Our implementation includes classification of the ripple bumps along one edge and the allocation of different weights in the OPC solution.
This results in a new strategy of adapting site locations and OPC shifts of edge fragments to avoid any aggressive correction that may increase the ripples or propagate them to a new location. A more advanced adaptation will be ripple-aware fragmentation as a second control knob, besides the automated site placement.
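The dynamic ripple tracking can be illustrated with simple local-extremum detection on a sampled aerial-image intensity profile along an edge. This is a schematic sketch of the general idea, not the authors' production implementation; the profile values and prominence threshold are illustrative:

```python
# Sketch: locate ripple peaks/valleys on a 1-D aerial image intensity
# profile sampled along a feature edge; simulation sites are then
# placed at the detected extrema instead of at fixed rule-based positions.

def ripple_sites(profile, min_prominence=0.01):
    """Return (index, 'max'|'min') for each local extremum whose
    height difference from both neighbors exceeds min_prominence."""
    sites = []
    for i in range(1, len(profile) - 1):
        left, mid, right = profile[i - 1], profile[i], profile[i + 1]
        if mid - left > min_prominence and mid - right > min_prominence:
            sites.append((i, "max"))   # candidate bridging risk
        elif left - mid > min_prominence and right - mid > min_prominence:
            sites.append((i, "min"))   # candidate necking risk
    return sites

# A rippling intensity profile along one edge (illustrative values):
profile = [0.30, 0.33, 0.31, 0.27, 0.30, 0.34, 0.32]
print(ripple_sites(profile))   # [(1, 'max'), (3, 'min'), (5, 'max')]
```

Classifying the detected bumps (e.g., by prominence) then allows different weights to be assigned to each site in the OPC solution, as described above.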