This paper presents an automated DFM solution to generate Bit Line Pattern Dummy (BLPD) for memory devices. Dummy shapes are aligned with memory functional bits to ensure a uniform and reliable memory device. We present a smarter, analysis-based approach that adds dummy shapes of different types according to the space available. Experimental results are presented based on the layout of a mobile dynamic random access memory (DRAM) device.
Controlling the critical dimension (CD) of implant blocking layers during photolithography has been challenging due to reflection caused by wafer topography. Unexpected reflection from wafer topography causes severe CD variation in the mask patterns of the implant layer. Using bottom antireflective coatings (BARCs) can reduce the topography effect, but the BARC dry etch can damage the wafer surface. Developable BARCs (D-BARCs) could be an alternative solution to the wafer topography effect; however, the D-BARC process raises issues such as sensitive temperature control and defect management. There are also papers introducing model-based topography-aware OPC as a solution to the wafer topography effect on implant layers, but building a topography-aware OPC model is very complex and time-consuming.
In this paper, we introduce experimental results on the wafer topography effect using various test patterns and propose a simple method that can effectively reduce it.
As device design rules shrink, it is hard to obtain a sufficient process window in terms of depth of focus (DOF) and exposure latitude (EL). From the standpoint of device integration, the interaction between the lithography and etch processes has become increasingly important. It has been claimed that the photoresist profile is closely related to etch bias and the vertical post-etch profile: resist top-loss and bottom slope seriously affect the after-etch profile. To address these problems, a new model-based verification method is needed to prevent hot spots.
In this paper, we propose a more practical model-based verification method using rigorous simulation and wafer verification results. A highly accurate model is obtained by fitting a physical model to a minimal experimental data set. Virtual data are then extracted from the rigorous simulation model to build the full-chip model-based verification model. Two data sets are needed to verify the 2-level model, which detects resist top-loss and bottom slope. Finally, this article compares model-based verification results with real wafer inspection and shows that the newly proposed method is another good candidate for addressing problems such as pinching and bridging after etch and CMP.
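The 2-level check described above can be sketched as a simple rule: compare simulated resist CDs at a top plane and a bottom plane against thresholds for top-loss and bottom slope. A minimal illustration, assuming hypothetical function names and threshold values not taken from the paper:

```python
def check_resist_profile(cd_top_nm, cd_bottom_nm,
                         min_top_cd_nm=30.0, max_slope_loss_nm=8.0):
    """Flag hot spots from a 2-level resist model.

    cd_top_nm / cd_bottom_nm: simulated resist CDs at the top and
    bottom measurement planes of the resist (illustrative units: nm).
    """
    flags = []
    if cd_top_nm < min_top_cd_nm:
        # severe resist top-loss -> pinching risk after etch
        flags.append("top-loss")
    if cd_bottom_nm - cd_top_nm > max_slope_loss_nm:
        # large top-to-bottom CD delta -> shallow bottom slope
        flags.append("bottom-slope")
    return flags

# Example: strong top-loss, acceptable slope
print(check_resist_profile(25.0, 31.0))
```

A full-chip flow would evaluate this check at every simulated site; the thresholds would come from correlating the 2-level model against wafer inspection.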
The insertion of sub-resolution assist features (SRAFs) is one of the most frequently used methods to enlarge the process window. In most cases, the size of the SRAF is proportional to the focus margin of the drawn patterns. However, there is a trade-off between SRAF size and SRAF printing, because SRAFs are not supposed to be patterned on the wafer. For this reason, many OPC engineers have tried to insert bigger and more numerous SRAFs within the limits of printability, and the many papers on predicting SRAF printability published in recent years reflect this circumstance. Pattern dummy is inserted to enhance the lithographic process margin and CD uniformity, unlike CMP dummy, which is inserted for uniform metal line height. Pattern dummy is ordinarily placed at designated locations at the design step, considering the pitch of the real patterns. However, from a lithographic point of view, it is not always desirable to generate pattern dummies based on rules. In this paper, we introduce a model-based pattern dummy insertion method, which places pattern dummies at the locations where model-based SRAFs would be located. We applied model-based pattern dummy to layers in logic devices and studied which layer benefits most from the insertion of dummies.
There are strong demands for techniques which are able to extend application of ArF immersion lithography.
Especially, the leading edge techniques are required to make very small hole patterns below 50nm. Several
techniques such as double patterning technique, free-form illumination and resist shrinkage technology are
considered as viable candidates. Most of all, NTD (negative tone development) is regarded as the most promising technology for realizing small hole patterns.
When the NTD process is applied, hole patterns are defined by island-type features on the reticle, and consequently its optical performance is better than that of the PTD (positive tone development) process. However, it is still difficult to define extremely small hole patterns below 40 nm, so a new combination process of NTD with RELACS is being introduced to overcome the resolution limit. NTD combined with RELACS, the most advanced lithography technology, enables us to generate smaller hole patterns on the wafer.
A chemical shrinkage technology, RELACS (Resolution Enhancement Lithography Assisted by Chemical Shrink),
utilizes a cross-linking reaction catalyzed by the acid component in a predefined resist pattern. In the case of the PTD-plus-RELACS process, we already know that the CD change after the shrink process is not influenced by the duty ratio, so the RELACS bias can easily be reflected to meet the CD target during the OPC (optical proximity correction) procedure.
However, the NTD-plus-RELACS process has not been clearly understood or verified; more investigation of the physical behavior during the combined process is required to define exact hole patterns. The newly introduced process might require an additional OPC modeling procedure to satisfy the target CD when the NTD RELACS bias takes different values according to pitch and shape.
This study covers the investigation of two types of resist shrink process, PTD and NTD. The optimized OPC methodology will be discussed through the evaluation of simple array hole patterns and random hole patterns.
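The OPC retargeting idea above can be sketched as follows: for PTD-plus-RELACS the shrink is duty-ratio independent, so a constant bias suffices, while for NTD the paper anticipates a pitch-dependent bias. All names, bias values, and the linear pitch dependence below are illustrative assumptions, not measured data:

```python
def mask_target_cd(final_cd_nm, pitch_nm, process="PTD"):
    """Return the pre-shrink resist CD that OPC should target so the
    post-RELACS hole meets final_cd_nm (all numbers hypothetical)."""
    if process == "PTD":
        shrink_nm = 20.0  # constant: shrink is duty-ratio independent
    else:
        # NTD: assumed pitch-dependent shrink, to be characterized
        shrink_nm = 20.0 + 0.02 * (pitch_nm - 100.0)
    return final_cd_nm + shrink_nm

print(mask_target_cd(40.0, 100.0, "PTD"))  # 60.0
print(mask_target_cd(40.0, 200.0, "NTD"))
```

In a real flow the shrink table would be measured per pitch and shape and folded into the OPC target layer before correction.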
As EUV lithography nears pilot-line stage, photolithography modeling becomes increasingly important in order for
engineers to build viable, production-worthy processes. In this paper, we present a comprehensive, calibrated
lithography model that includes optical effects such as mask shadowing and flare, combined with a stochastic resist
model that can predict effects such as line-edge roughness. The model was calibrated to CD versus pitch data with
varying levels of flare, as well as dense lines with varying degrees of mask shadowing. We then use this model to
investigate several issues critical to EUV. First, we investigate EUV photoresist technology: the impact of
photoelectron-PAG exposure kinetics on photospeed, and then we examine the trade-off between LWR and photospeed
by changing quencher loading in the photoresist model. Second, we compare the predicted process windows for dense
lines as flare and lens aberrations are reduced from the levels in the current alpha tools to the levels expected in the beta
tools. The observed interactions between optical improvements and resist LWR indicate that a comprehensive model is
required to provide a realistic evaluation of a lithography process.
In this paper, we evaluate model-assisted rule-based SRAF. Model-assisted rule-based SRAF combines the advantages of model-based SRAF and rule-based SRAF to ensure a high process margin and stable wafer output without mask-making difficulty. The model assists in generating a common rule for the rule-based SRAF. The method for extracting the rules from the models is discussed first. Model-assisted rule-based SRAF is then applied to a 3X nm DRAM contact layer, and evaluation and analysis of the simulated and actual wafer results are discussed. Our wafer results showed that model-assisted rule-based SRAF achieves nearly equal performance to model-based SRAF, with clearly better stability and easier mask fabrication.
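One way to extract a common rule from model-based SRAF output, sketched here under assumed names and toy numbers, is to bin the model's SRAF placements by main-feature pitch and take a robust statistic (the median offset) per bin as the rule-table entry:

```python
from collections import defaultdict
from statistics import median

def extract_sraf_rules(placements, bin_nm=40):
    """Derive a rule table (pitch bin -> SRAF offset) from model-based
    SRAF placements.  `placements` is a list of (pitch_nm, offset_nm)
    pairs taken from model-based SRAF output; all values illustrative."""
    bins = defaultdict(list)
    for pitch, offset in placements:
        bins[int(pitch // bin_nm) * bin_nm].append(offset)
    # one robust rule per pitch bin: the median model offset
    return {b: median(v) for b, v in sorted(bins.items())}

samples = [(130, 55), (135, 57), (150, 58), (175, 62), (170, 60)]
print(extract_sraf_rules(samples))
```

The resulting table can then drive a conventional rule-based SRAF engine, which is what gives the stable, mask-friendly output the abstract reports.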
Scanner mismatch has become one of the critical issues in high-volume memory production. Several components contribute to scanner CD mismatch; one of the major ones is the illumination pupil difference between scanners. Because of the accelerating dimensional shrink of memory devices, CD mismatch has become more critical to electrical performance and process window.
In this work, we demonstrate computational-lithography-model-based scanner matching for sub-3x nm memory devices. We used an ASML XT:1900Gi as the reference scanner and an ASML NXT:1950i as the to-be-matched scanner. Wafer metrology data and scanner-specific parameters are used to build a computational model and to determine, by model simulation, the optimal settings that minimize the CD difference between the scanners. Nano Geometry Research (NGR) was used as the wafer CD metrology tool for both model calibration and verification of the matching result. The pupil parameters extracted from source maps measured before and after matching are inspected and analyzed. Simulated and measured process window changes from applying the matching sub-recipe are also evaluated.
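The core of model-based matching is an optimization: find the scanner setting that minimizes the predicted CD difference to the reference tool. The sketch below replaces the calibrated computational model with a crude linear CD-sensitivity approximation and a grid search over one hypothetical knob; every number and name is a toy assumption:

```python
def match_scanner(ref_cds, sens, knob_range, base_cds):
    """Pick the knob setting minimizing the summed squared CD mismatch
    between the to-be-matched scanner and the reference.

    base_cds: CDs of the to-be-matched scanner at knob = 0
    sens:     per-feature CD sensitivity to the knob (nm per unit)
    A real flow would call the calibrated model here instead."""
    def cost(k):
        return sum((b + s * k - r) ** 2
                   for b, s, r in zip(base_cds, sens, ref_cds))
    return min(knob_range, key=cost)

knobs = [x / 100 for x in range(-50, 51)]  # candidate settings
best = match_scanner(ref_cds=[45.0, 50.0], sens=[2.0, 2.0],
                     knob_range=knobs, base_cds=[45.4, 50.4])
print(best)
```

With a 0.4 nm offset and 2 nm/unit sensitivity, the search lands on the knob value that cancels the offset; a production sub-recipe would optimize several knobs jointly against simulated CDs over many features.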
It is necessary to apply extreme illumination conditions to real devices as the minimum feature size shrinks. As k1 decreases, ultra-extreme illumination has to be used; however, with such illumination, CD and process windows fluctuate dramatically as the pupil shape changes slightly. Over the past several years, Pupil Fit Modeling (PFM) has been developed to analyze pupil shape parameters that are independent of each other. The first objective of this work is to distinguish the pupil shapes of different scanners by separating more parameters. Pupil parameter analysis makes the major factors behind CD or process window differences between two scanner systems apparent, and because pupil parameters correlate with scanner knobs, it clearly identifies which scanner knob should be compensated. The second objective is to define a specification for each parameter using a CD-budget analysis per pupil parameter. By periodically monitoring the pupil parameters against these specifications, scanner systems in production lines can be maintained in an ideal state. Additionally, OPC model accuracy can be enhanced by using a highly accurate fitted pupil model. Recently, other applications of the pupil model have been reported for improving the accuracy of OPC and model-based verification models: modeling with average optics and scanner-specific hot-spot detection are easily adopted using a pupil fit model. Therefore, applying pupil fit parameters to process models is very useful for improving model accuracy.
In our study, the model accuracy enhancement obtained with PFM is investigated and analyzed, and OPC and hot-spot detection results with the pupil fit model are shown. The trends of CD and process window for each scanner parameter are also evaluated using the pupil fit model. As a result, we found which pupil parameters influence critical-layer CD, and applying this result gave better accuracy in detecting hot spots with model-based verification.
As the k1 factor for mass production of memory devices has decreased to almost its theoretical limit, the lithography process window is getting much smaller and production yield has become sensitive to even small process variations in lithography. So it is necessary to control process variations more tightly than ever. In mass production, it is very hard to extend production capacity if the tool-to-tool variation of scanners and/or scanner stability over time is not minimized. One of the most critical sources of variation is the illumination pupil, so it is critical to qualify the pupil shapes in scanners to control tool-to-tool variations.
Traditionally, the pupil shape has been analyzed using classical pupil parameters, but these basic parameters sometimes cannot distinguish tool-to-tool variations. It has been found that the pupil shape can be changed by illumination misalignment or damage in the optics, and these changes can have a great effect on critical dimension (CD), pattern profile, and OPC accuracy. Such imaging effects are not captured by the basic pupil parameters. The correlation between CD and pupil parameters will become even more difficult with the introduction of more complex (freeform) illumination pupils.
In this paper, illumination pupils were analyzed using a more sophisticated parametric pupil description (Pupil Fit Model, PFM), and the impact of pupil shape variations on the CD of critical features is investigated. Tool-to-tool mismatch in the gate layer of a 4X nm memory device is demonstrated as an example. We also interpret which parameter is most sensitive to CD for different applications. It was found that the more sophisticated parametric pupil description is much better than the traditional way of pupil control. However, our examples also show that tool-to-tool pupil variation and the pupil variation of a scanner over time cannot be adequately monitored by pupil parameters alone; the best pupil control strategy is a combination of pupil parameters and CDs simulated using measured or modeled illumination pupils.
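The limitation of classical parameters can be illustrated with a small numerical example: two sampled pupil cross-sections can share the same first-moment (centroid) parameter yet differ in a higher-order shape parameter of the kind a parametric fit exposes. The profiles and parameter choices below are invented for illustration only:

```python
def centroid(profile):
    # classical first-moment parameter of a sampled pupil cross-section
    tot = sum(profile)
    return sum(i * p for i, p in enumerate(profile)) / tot

def width(profile):
    # higher-order (PFM-like) parameter: RMS width about the centroid
    c, tot = centroid(profile), sum(profile)
    return (sum(p * (i - c) ** 2
                for i, p in enumerate(profile)) / tot) ** 0.5

# Two hypothetical cross-sections with identical classical centroid
narrow = [0, 0, 4, 0, 0]
broad = [1, 0, 2, 0, 1]
print(centroid(narrow), centroid(broad))  # identical
print(width(narrow), width(broad))        # clearly different
```

The first metric cannot tell the two pupils apart while the second can, which is why the abstract argues for richer parameterization plus simulated CD rather than classical parameters alone.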
In this study, to accurately predict the shadowing and flare effects in EUVL, we compared and analyzed wafer and simulation results for the shadowing and flare effects of the EUV alpha demo tool at IMEC. The flare distribution of the EUV alpha demo tool was measured and used in the simulation tool to simulate several test cases. The shadowing effect of an in-house mask was also measured and compared with simulation results to assess the predictability of the simulation tool.
The shadowing comparison of wafer to simulation showed that simulation with a resist model gives a better overall fit to the actual wafer results. Both the aerial-image and resist-model simulation results were within 2.33 nm of the wafer results. Comparing measured wafer CDs to simulated CDs for flare, the average RMS error of the three test cases was 0.52, 2.05, and 3.47 nm, respectively. For higher flare simulation accuracy, a larger diameter for the flare profile is necessary. The shadowing test also showed that the resist model fits the wafer trend better than the aerial image alone. The EUV tool showed very promising results for printing sub-30 nm DRAM critical layers, and with proper flare and shadowing correction, reasonable results are expected for sub-30 nm and beyond critical layers of DRAM using EUV lithography. Further work will be done to compensate for the flare and shadowing effects of EUV.
One of the major issues introduced by the development of extreme ultraviolet lithography (EUVL) is the high level of flare and shadowing introduced by the system. The high flare level degrades the aerial image and may cause degraded critical dimension uniformity (CDU), among other effects. Also, due to the configuration of the EUV tool, pattern shadowing is another concern added by EUVL: shadowing causes CD variation depending on pattern orientation and position along the slit. Therefore, to obtain high-resolution wafer results, correction of the shadowing and flare effects is inevitable for EUV lithography.
In this study, we will analyze the effect of shadowing and flare effect of EUV alpha demo tool at IMEC. Simulation and wafer testing will be analyzed to characterize the effect of shadowing on angle and slit position of the pattern. Also, flare of EUV tool will be plotted using Kirk's disappearing pad method and flare to pattern density will also be analyzed. Additionally, initial investigation into actual sub 30nm Technology DRAM critical layer will be performed. Finally simulation to wafer result will be analyzed for both shadowing and flare effect of EUV tool.
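Kirk's disappearing-pad method estimates flare from the ratio of the dose-to-clear of a large open area to the dose at which a resist pad of a given size vanishes. A minimal sketch of that arithmetic, with hypothetical dose values:

```python
def kirk_flare_percent(dose_to_clear, dose_pad_disappears):
    """Kirk disappearing-pad flare estimate, in percent: the ratio of
    the dose-to-clear of a large open area to the dose at which a
    resist pad of a given size disappears."""
    return 100.0 * dose_to_clear / dose_pad_disappears

# Hypothetical doses (mJ/cm^2): pad vanishes at 25x the clearing dose
print(kirk_flare_percent(2.0, 50.0))  # 4.0 -> 4% flare
```

Repeating this for pads at many field positions gives the flare map that the study feeds into the simulator.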
During the past few years, feature-size shrinkage has brought about new problems, such as mask error enhancement factor (MEEF) and overlay control. These are crucial because a large MEEF makes it difficult to hit the CD target and brings about large CD variation; it can also degrade CD uniformity, with undesired influence on device properties. Recently, 2-D random contact holes have become critical because they normally have very large MEEF and cause asymmetric proximity effects, which can lead to large CD variation and layer-to-layer misalignment. In other words, the optical proximity correction method and an accurate OPC model for 2-D random contact hole patterns could be key factors in obtaining better CD uniformity with enhanced overlay margin. Furthermore, to obtain tangible performance data, a design-based metrology (DBM) system is used to evaluate process performance. Design-based metrology systems can extract whole-chip CD variation information; on top of that, OPC abnormalities can be identified and design feedback can also be provided.
In this paper, we investigate a novel method for sub-45 nm 2-D random contact hole printing. First, the optical proximity effect (OPE) for two-dimensional layouts is investigated. Second, the results of variable threshold modeling (VTM) for various slit contact hole patterns are analyzed. Third, model-based verification is run and analyzed over the full chip before creating the full-chip mask. Finally, sub-45 nm 2-D random contact hole printing performance is presented using DBM.
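Evaluating printing performance with DBM ultimately reduces to statistics over large CD data sets: summarize the distribution and flag off-target sites. A minimal sketch with invented measurements and spec limits:

```python
from statistics import mean, stdev

def cd_uniformity(cds, target, tol):
    """Summarize full-chip DBM CD measurements and flag sites whose CD
    deviates from target by more than tol (all values illustrative)."""
    m, s = mean(cds), stdev(cds)
    outliers = [i for i, cd in enumerate(cds) if abs(cd - target) > tol]
    return {"mean": m, "three_sigma": 3 * s, "outliers": outliers}

report = cd_uniformity([44.0, 45.0, 46.0, 45.5, 49.0],
                       target=45.0, tol=3.0)
print(report)
```

In practice the outlier list would be mapped back to design coordinates so each off-target contact can be classified as an OPC or design feedback item.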
Recently, a dramatic acceleration in the dimensional shrink of DRAM memory devices has been observed. For a sub-60 nm memory device, we suggest the following optical proximity correction (OPC) method to enhance critical dimension uniformity (CDU). To reduce the CD variation of each transistor, hundreds of thousands of transistor CD measurements were collected through a design-based metrology (DBM) system. With a traditional OPC modeling method, it is difficult to improve on-chip CD variation because of the limited OPC feedback data.
Even though optical properties are well understood from recent computational lithography models, there are many other effects, such as mask effects, thermal effects from the wafer process, and etch bias variation in the etching process. Etch bias in particular is too complicated to predict, since it depends on variables such as the space to adjacent patterns and the density of neighboring patterns.
In this paper, process proximity correction (PPC) adopting a pattern-to-pattern matching method is used with a huge amount of CD data from real wafers. This method corrects the CD bias of each pattern by matching the same coordinates. A new PPC method for improving full-chip CD variation is proposed, which automatically corrects off-target features using the full-chip CD measurement data of the DBM system. The gate CDU of a sub-60 nm node is thus reduced by the new PPC method; analysis showed that it reduced full-chip CD variation by up to 20 percent.
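The pattern-to-pattern matching step can be sketched as a per-coordinate correction table: each measured site gets its own edge bias that moves its CD toward target. Names, coordinates, and the half-CD-per-edge convention below are illustrative assumptions:

```python
def ppc_bias_table(measurements, target_cd):
    """Pattern-to-pattern PPC: one correction per measured coordinate.

    measurements: {(x, y): measured_cd} from full-chip DBM data.
    Returns the edge bias to apply to each pattern so its CD moves
    toward target_cd (half the CD error per edge, toy numbers only)."""
    return {xy: (target_cd - cd) / 2.0 for xy, cd in measurements.items()}

meas = {(10, 20): 48.0, (10, 60): 44.0}
print(ppc_bias_table(meas, target_cd=46.0))
```

A site printing 2 nm large gets a -1 nm bias per edge, and vice versa; applied across hundreds of thousands of transistors, this is what drives the reported full-chip CDU improvement.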
As semiconductor feature sizes continue to shrink, electrical resistance is becoming one of the industry's dreaded problems. To overcome it, many of the top semiconductor manufacturers have turned their interest to the copper process. The copper process is widely known as a trench-first damascene process, which utilizes a dark-tone mask instead of the widely used clear-tone mask. Because dark-tone mask technology is less familiar and less developed than clear-tone masks, many have reported patterning defect issues with dark-tone masks. Therefore, a DFM approach to designs that work with both dark and clear tones is strongly needed in the development of copper-process-based devices.
In this study, we propose a process-friendly design-for-manufacturing (DFM) rule for dual-tone masks. The proposed method guides the layout rules so that dark-tone and clear-tone masks give the same performance from the same design layout. Our proposed method is analyzed with respect to photolithography process margin factors such as depth of focus (DOF) and exposure latitude (EL) on a sub-50 nm flash memory interconnection layer.
Model based OPC has been generally used to correct proximity effects down to ~50 nm critical dimensions at
k1 values around 0.3. As design rules shrink and k1 drops below 0.3, however, it is very hard to obtain a sufficient process window and acceptable MEEF (Mask Error Enhancement Factor) with conventional model based OPC. Recently, ILT
(Inverse Lithography Technology) has been introduced and has demonstrated wider process windows than conventional
OPC. The ILT developed by Luminescent uses level-set methods to find the optimal photo mask layout, which
maximizes the process window subject to mask manufacturing constraints.
We have evaluated performance of ILT for critical dimensions of 55nm, printed under conditions
corresponding to k1 ~ 0.28. Results indicated a larger process window and better pattern fidelity than obtained with
other methods. In this paper, we present the optimization procedures, model calibration and evaluation results for 55 nm
metal and contact layers and discuss the possibilities and the limitations of this new technology.
In the past, when design rules were not tight, CD-based OPC modeling was acceptable. But design rule shrinkage eventually led to a small process window, which in turn increased the MEEF (mask error enhancement factor). Hence, the data for OPC modeling have also become more complex and diverse in order to characterize critical OPC models: the number of measurement points for OPC model evaluation has increased to several hundred per layer, and metrology requests for capturing pattern shapes on the wafer are no longer simple one-dimensional measurements. Traditional CD-based OPC modeling is based on one-dimensional parameter fitting and carries limited information, so model accuracy has intrinsic limitations. Recently, modeling methodology development has produced SEM image calibration, which uses SEM images to calibrate against a large volume of two-dimensional information. SEM image calibration is based on real SEM images, each containing several thousand CD data points; it needs only SEM images instead of several hundred CD measurements, so data feedback is easier. However, this approach makes it difficult to achieve a confident level of predictability because a SEM image is restricted to a local region, and modeling accuracy is highly dependent on SEM image quality and local position.
In this paper, we propose a SEM image calibration method that feeds the SEM-image-calibrated model back to model-based verification. With this method, modeling accuracy increases and better post-OPC verification can be performed. We discuss application results on a sub-60 nm device and the feasibility of this approach.
As the minimum transistor length gets smaller, the variation and uniformity of transistor length seriously affect device performance, so the importance of optical proximity correction (OPC) and resolution enhancement technology (RET) cannot be overemphasized. However, the OPC process is regarded by some as a necessary evil for device performance. In fact, every group, including process and design, is interested in the whole-chip CD variation trend and CD uniformity, which represent the real wafer.
Recently, design-based metrology systems have become capable of detecting differences between the design database and wafer SEM images. Design-based metrology systems can extract whole-chip CD variation information; based on such results, OPC abnormalities have been identified and design feedback items disclosed. Other approaches come from EDA companies, such as model-based OPC verification, which is run over the full-chip area using a well-calibrated model. The object of model-based verification is to predict potential weak points on the wafer and feed them back quickly to OPC and design before reticle fabrication. To achieve robust design and sufficient device margin, an appropriate combination of the design-based metrology system and model-based verification tools is very important.
Therefore, we evaluated a design-based metrology system and a matched model-based verification system for the optimum combination of the two. In our study, a huge amount of wafer data is classified and analyzed by statistical methods and sorted into OPC-feedback and design-feedback items. Additionally, a novel DFM flow is proposed that combines design-based metrology and model-based verification tools.
As the k1 factor and minimum feature sizes decrease, the use of optical proximity correction (OPC) is increasing and becoming more complex. This complexity increases the possibility of correction errors, such as improper placement of edges in the OPC output data, so that the printed results deviate from the target design.
In this paper we describe a new modeling method using 2-dimensional test structures for model-based verification of post-OPC data. Recently, most semiconductor companies have implemented model-based verification (MBV) of post-OPC data in their manufacturing data flow.
In model-based verification, the most important thing is the accuracy of the model used to detect potential hot spots and critical errors such as pinching/bridging errors and CD variation. For good model accuracy, process changes have to be fed back to the model generation step by injecting real wafer information. Therefore, an optimization process for the 2-dimensional data set is needed.
We propose a new modeling method using an optimization process for the calibration data set, which consists of 2-dimensional structures. We also present MBV results and discuss the constraints and considerations of model-based verification.
Recently, as the design rule shrinks, so does the CD tolerance; therefore, the importance of simulation and OPC accuracy is increasing. In the past, when pattern sizes were large, rule-based OPC was acceptable, but as design rules shrank, OPC moved to model-based OPC, and almost all devices now use this method. Because model-based OPC is based on parameter fitting, it carries model residual error (MRE), which limits model accuracy. Usually a variable-threshold or vector model is applied to reduce the MRE, but the MRE is still too large compared to the CD tolerance. Further development of model-based OPC has produced a combined model- and rule-based OPC, called hybrid OPC. Hybrid OPC is based on model OPC, but the MRE can be lowered by using a rule bias to retarget the design data. However, this makes retargeting difficult, because the result of rule biasing is hard to predict after the model-based OPC operation.
In this paper, we propose a new hybrid OPC method that feeds the MRE-calibrated data set back to the model-based OPC. With this method, a better OPC model can be made. We present the results of applying the method to a sub-60 nm device and discuss its capability.
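The MRE feedback idea can be sketched as a retargeting pass: subtract the expected residual error from each design target so the model-based correction lands on the original design CD. The pitch bins, MRE values, and function names are illustrative assumptions:

```python
def retarget(design_cds, mre_by_pitch):
    """Feed model residual error (MRE) back into the OPC target:
    subtract the expected residual so the corrected result lands on
    the original design CD.  mre_by_pitch maps a pitch bin to its
    measured MRE; all numbers are illustrative."""
    return [(pitch, cd - mre_by_pitch.get(pitch, 0.0))
            for pitch, cd in design_cds]

design = [(130, 60.0), (200, 60.0)]
mre = {130: 1.5, 200: -0.5}   # model prints 1.5 nm large at pitch 130
print(retarget(design, mre))  # [(130, 58.5), (200, 60.5)]
```

Unlike a post-OPC rule bias, retargeting before the model-based correction keeps the final result predictable, which is the motivation the abstract gives for the feedback approach.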
As the minimum feature sizes of memory devices get smaller, model-based OPC accuracy requirements call for highly accurate process modeling and modeling strategies. Therefore, the model-based OPC verification process requires high accuracy because of unexpected errors in the low-k1 process regime.
The model used in model-based OPC verification (MBV) has to be accurate in order to detect potential hot spots and human errors, including physical design rule violations, mask fabrication rule violations, and database handling errors; it must also be fast enough for quick feedback to the OPC and design sides in view of DFM.
Recently, model-based OPC tools have progressively advanced in terms of modeling. Nevertheless, because we applied extreme off-axis illumination to sub-70 nm gate levels, the models cannot exactly predict the wafer results and have low accuracy.
In this paper, we evaluate several commercial model-based OPC verification (MBV) tools for a sub-70 nm memory device and compare their review results with real wafer results. With these results, we analyze and discuss the major factors behind poor OPC and MBV model accuracy in the low-k1 regime. We also discuss the appropriate speed of feedback to the OPC and design sides, including methods for analyzing and categorizing the huge number of reported errors.
We focus on these two goals for MBV, accuracy and feedback speed, and discuss the major factors to consider. Finally, we suggest an optimized OPC verification procedure using calibrated models for a sub-70 nm memory device.
In recent years, more burden has been placed on OPC (optical proximity correction) and ORC (optical rule check) than ever before, due to the low process margin caused by the adoption of low-k1 lithography. A chip is normally composed of cell, core, and periphery regions. Each region has different patterning characteristics, but the high-density regions usually have a much higher chance of pinch, bridge, or killer errors and a smaller process window. So verification of OPCed data must be highly accurate and fast. In this paper we develop a full-chip hybrid ORC that satisfies both needs, accuracy and speed. Results of pinch, bridge, and small-process-window verification with the hybrid ORC are shown, followed by a comparison of rule-based and model-based ORC methods.
In this paper, we discuss the feasibility of KrF and ArF technology for the 100 nm node. Simulation and experiments for this study were performed from the viewpoint of the mask error factor (MEF). Lithography simulation was done with the Hyundai OPC Simulation Tool (HOST), based on the diffused aerial image model (DAIM). For k1 factors below 0.33, the photolithography process has no margin because of the high MEF value. Therefore, the numerical aperture for KrF and ArF needs to be over 0.95 and 0.75, respectively, for the 100 nm node. In practice, it is impossible to build an exposure system with 0.95 NA. The mask error factor has a severe influence on lithographic performance; considering the MEF, ArF lithography is more appropriate than KrF for the 100 nm node.
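The mask error factor itself is the sensitivity of the printed wafer CD to the mask CD at wafer scale. It can be estimated by finite difference from two mask/wafer CD pairs; the CD values below are invented for illustration:

```python
def meef(wafer_cd_small, wafer_cd_large,
         mask_cd_small, mask_cd_large, magnification=4.0):
    """Mask error (enhancement) factor from two mask/wafer CD pairs:
    MEEF = d(wafer CD) / d(mask CD at 1x).  Toy values below."""
    d_wafer = wafer_cd_large - wafer_cd_small
    d_mask_1x = (mask_cd_large - mask_cd_small) / magnification
    return d_wafer / d_mask_1x

# 8 nm mask delta at 4x (= 2 nm at wafer scale) prints as 6 nm delta
print(meef(98.0, 104.0, 400.0, 408.0))  # 3.0
```

A MEF of 1 means mask errors transfer one-to-one; the values well above 1 at k1 below 0.33 are what erase the process margin discussed above.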
To improve overlay accuracy in electron-beam lithography, we have investigated the optimization of the alignment key, including the ratio of alignment key size to scanning beam size, the optimum key depth/width, and material dependency. An alignment key whose size matches the scanning beam size shows better alignment repeatability than other ratios. The scanning beam size also correlates with the alignment key width. Through the process sequence of a CMOS device, the key width of an under-layer is changed by the thickness of deposited materials because of deposition on the sidewall; therefore, the scanning beam size should be optimized for each step. For each material, there exists a critical thickness below which the alignment-reading repeatability is not affected. The standard deviation, calculated from measurements of the key position at the critical thickness, is less than 20 nm. We obtained the critical thickness for various materials: SiO2 and Si3N4 do not affect the alignment signal, but doped WSix, Al, and doped poly-silicon are very sensitive because of back-scattered electrons. Using the optimized alignment key of WSix/doped poly-Si, the standard deviation was less than 10 nm. Otherwise, the non-conducting layer must be etched by more than 7000 angstroms; in that case, the standard deviation is larger than that of conducting materials, at more than 20 nm. We have established the optimum alignment key conditions to enhance overlay accuracy. The standard deviation of the total overlay accuracy is less than 50 nm, which corresponds to 150 nm design-rule device fabrication.
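The repeatability figures quoted above are standard deviations over repeated position readings of one alignment key. A minimal sketch of that computation, with invented readings:

```python
from statistics import stdev

def key_repeatability_nm(readings_nm):
    """Alignment-reading repeatability: the sample standard deviation
    of repeated position measurements of one alignment key.  The
    readings used below are invented for illustration."""
    return stdev(readings_nm)

readings = [1002.0, 1008.0, 995.0, 1001.0, 999.0]
print(round(key_repeatability_nm(readings), 1))
```

Comparing this statistic across key geometries and film stacks is how the optimum key ratio and the per-material critical thickness would be identified.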