As semiconductor manufacturing approaches advanced technology nodes, scattering bars (SBs) are more crucial than ever to ensure good on-wafer printability of line/space and hole patterns. Main patterns with small pitches require a very narrow process variation (PV) band, so a delicate SB addition scheme is needed to maintain a sufficient process window (PW) for semi-isolated and isolated patterns. In general, SBs that are wider, longer, and closer to the main feature are more effective in enhancing printability; on the other hand, they are also more likely to print on the wafer, resulting in undesired defects that transfer to subsequent processes. In this work, we have developed a model-based approach for scattering-bar printing avoidance (SPA). A specially designed optical model was tuned on a broad range of test patterns covering a variation of CDs and SB placements, including both printing and non-printing scattering bars. A printing threshold is then obtained to check for extra printing of SBs, and the accuracy of this threshold is verified with pre-designed test patterns. The printing threshold, together with our novel SPA model, allows us to set up a proper SB rule.
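As a hypothetical illustration of how such a printing threshold might be applied in an SPA check (the threshold value and intensity profiles below are invented for this sketch, not the calibrated numbers from the work):

```python
# Illustrative sketch of a scattering-bar (SB) printing check: an SB is flagged
# as "printing" when the simulated aerial-image intensity at its location
# crosses a calibrated printing threshold. All values here are hypothetical.

PRINT_THRESHOLD = 0.22  # assumed, obtained from printing / non-printing test patterns

def sb_prints(intensity_profile, threshold=PRINT_THRESHOLD):
    """For a dark SB on a clear-field mask, the bar starts to print when its
    intensity minimum dips below the resist threshold."""
    return min(intensity_profile) < threshold

# Intensity samples across two SBs (illustrative)
safe_sb = [0.45, 0.31, 0.27, 0.30, 0.44]
printing_sb = [0.40, 0.24, 0.18, 0.23, 0.41]

print(sb_prints(safe_sb))      # False: stays above the threshold
print(sb_prints(printing_sb))  # True: dips below, so tighten the SB rule here
```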
In advanced technology nodes, the photoresist absorbs light reflected by the underlying topography during optical lithography of implantation layers. An anti-reflective coating (ARC) helps suppress these reflections, but ARC removal may damage transistors, not to mention its relatively high cost. ARC is therefore usually not used, and topography modeling becomes obligatory for printing implantation shapes. Furthermore, the presence of fin field-effect transistors (FinFETs) makes modeling of non-uniform substrate reflections exceptionally challenging.
In realistic designs, the same implantation shape may be found in a vertical or in a rotated horizontal orientation. This creates two types of relationships between the critical dimension (CD) and the FinFET, namely parallel to and perpendicular to the fins. The measurement data show that CDs differ between these two orientations. This discrepancy is also revealed by our Rigorous Optical Topography simulator. Numerical experiments demonstrate that the shape orientation may introduce CD differences of up to 45 nm with 248 nm illumination for the 14 nm technology. These differences are highly dependent on the enclosure (the distance between the implantation shape and the active area). One of the major causes of the differences is that in the parallel orientation the shape faces the solid sidewalls of the fins, while the perpendicularly oriented shape "sees" only the perforated sidewalls of the fin structure, which reflect much less energy.
Carefully designed numerical experiments helped us thoroughly understand the anisotropic behavior of CD measurements. This allowed us to account more accurately for FinFET-related topography effects in compact implantation modeling for optical proximity correction (OPC). The improvement is validated against wafer measurement data.
Optical Proximity Correction (OPC) has continually improved in accuracy over the years by adding more physically based models. Here, we further extend OPC modeling by adding the Analytical Linescan Model (ALM) to account for systematic biases in CD-SEM metrology. The ALM was added to a conventional OPC model calibration flow and the accuracy of the calibrated model with the ALM was compared to the standard model without the ALM using validation data. Without using any adjustable parameters in the ALM, OPC validation accuracy was improved by 5%. While very preliminary, these results give hope that modeling metrology could be an important next step in OPC model improvement.
Resist profile shapes become important for 22nm node and beyond as the process window shrinks. Degraded profile shapes for example may induce etching failures. Rigorous resist simulators can simulate a 3D resist profile accurately but they are not fast enough for correction or verification on a full chip. Compact resist models are fast but have traditionally modeled the resist in two dimensions. They provide no information on the resist loss and sidewall angle. However, they can be extended to predict resist profiles by proper setting of optical parameters and by accounting for vertical effects. Large resist shrinkages in NTD resists can also be included in the compact model. This article shows how a compact resist model in Calibre can be used to predict resist profiles and resist contours at arbitrary heights.
As critical dimensions decrease for 32-nm node and beyond, the resist loss increases and resist patterns become more vulnerable to etching failures. Traditional optical proximity correction (OPC) models only consider two-dimensional (XY) contours and neglect height (Z) variations. Rigorous resist simulators can simulate a three-dimensional (3-D) resist profile, but they are not fast enough for correction or verification on a full chip. However, resist loss for positive-tone resists is mainly driven by optical intensity variations, which are accurately modeled by the optical portion of an OPC model. We show that a compact resist model can be used to determine resist loss by properly selecting the optical image plane for calibration. The model can then be used to identify toploss hotspots on a full chip and, in some cases, for correction of these patterns. In addition, the article will show how the model can be made more accurate by accounting for some 3-D effects like diffusion through height.
As Critical Dimension (CD) sizes decrease for 32 nm node and beyond, resist loss increases and resist patterns
become more vulnerable to etching failures. Traditional OPC models only consider 2D contours and neglect height
variations. Rigorous resist simulators can simulate a 3D resist profile but they are not fast enough for correction or
verification on a full chip. However, resist loss for positive tone resists is mainly driven by optical intensity
variations which are accurately modeled by the optical portion of an OPC model. In this article, we show that a
Calibre™ CM1 resist model can be used to determine resist loss by properly selecting the optical image plane for
calibration. The model can then be used to identify toploss hotspots on a full chip and, in some cases, to correct
these patterns. In addition, the article will show how the model can be made more accurate by accounting for some
3D effects like diffusion through height.
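One way to picture the "diffusion through height" extension mentioned above is as a Gaussian-weighted blend of optical image planes through the resist thickness. The sketch below is a hypothetical illustration only; the plane intensities, heights, and diffusion length are assumed values, not the CM1 formulation.

```python
import math

# Illustrative sketch (not the Calibre implementation): fold a "diffusion
# through height" 3D effect into a compact 2D resist model by replacing the
# single optical image plane with a Gaussian-weighted average of planes
# through the resist height. sigma_z is a hypothetical diffusion length.

def blended_intensity(image_planes, z_heights, z_eval, sigma_z):
    """Gaussian-weighted mix of intensities from several image planes,
    centered on the evaluation height z_eval."""
    weights = [math.exp(-((z - z_eval) ** 2) / (2 * sigma_z ** 2)) for z in z_heights]
    norm = sum(weights)
    return sum(w * i for w, i in zip(weights, image_planes)) / norm

# Intensities at the bottom, middle, and top of the resist (illustrative)
planes = [0.30, 0.26, 0.20]
z = [0.0, 50.0, 100.0]  # nm

# Evaluating near the resist top mixes in the dimmer top-of-resist image,
# which is what drives toploss for positive-tone resists.
print(blended_intensity(planes, z, z_eval=100.0, sigma_z=30.0))
```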
This paper extends the state of the art by demonstrating performance improvements in the Domain
Decomposition Method (DDM) from a physical perturbation of the input mask geometry. Results from four
testcases demonstrate that small, direct modifications in the input mask stack slope and edge location can result in
model calibration and verification accuracy benefits of up to 30%. All final mask optimization results from this
approach are shown to be valid within measurement accuracy of the dimensions expected from manufacture. We
highlight the benefits of a more accurate description of the 3D EMF near field with crosstalk in model calibration
and impact as a function of mask dimensions. The result is a useful technique to align DDM mask model accuracy
with physical mask dimensions and scattering via model calibration.
The Domain Decomposition Method (DDM) for approximating the impact of 3DEMF effects was introduced nearly ten years ago as an approach to deliver good accuracy for rapid simulation of full-chip applications. This approximation, which treats mask edges as independent from one another, provided improved model accuracy over the traditional Kirchhoff thin mask model for the case of alternating aperture phase shift masks which featured severe mask topography. This aggressive PSM technology was not widely deployed in manufacturing, and with the advent of thinner absorbing layers, the impact of mask topography has been relatively well contained through the 32 nm technology node, where Kirchhoff mask models have proved effective. At 20 nm and below, however, the thin mask approximation leads to larger errors, and the DDM model is seen to be effective in providing a more accurate representation of the aerial image. The original DDM model assumes normal incidence, and a subsequent version incorporates signals from oblique angles. As mask dimensions become smaller, the assumption of non-interacting mask edges breaks down, and a further refinement of the model is required to account for edge-to-edge crosstalk. In this study, we evaluate the progression of improvements in modeling mask 3DEMF effects by comparing to rigorous simulation results. It is shown that edge-to-edge interactions can be accurately accounted for in the modified DDM library. A methodology is presented for the generation of an accurate 3DEMF model library which can be used in full-chip OPC correction.
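The edge-superposition idea behind DDM can be caricatured in a few lines. The following toy sketch is built entirely on assumptions (the real library attaches rigorously precomputed complex near-field signals to each edge; here each edge just emits a decaying real-valued bump): it adds an independent, localized correction per mask edge to the Kirchhoff field, and when openings shrink these edge signals overlap, which is the crosstalk regime the modified library addresses.

```python
import math

# Toy 1D sketch of the DDM idea (not the library implementation): the mask
# near field is approximated as the Kirchhoff (thin-mask) field plus an
# independent, precomputed perturbation attached to each edge.

def kirchhoff_field(x, edges):
    """Thin-mask transmission: 1 in openings, 0 under absorber.
    edges is a list of (left, right) opening intervals."""
    return 1.0 if any(l <= x < r for l, r in edges) else 0.0

def edge_perturbation(x, edge_pos, amp=0.15, decay=20.0):
    """Toy localized correction emitted by one edge (illustrative values)."""
    return -amp * math.exp(-abs(x - edge_pos) / decay)

def ddm_field(x, edges):
    f = kirchhoff_field(x, edges)
    for l, r in edges:
        f += edge_perturbation(x, l)   # left-edge correction
        f += edge_perturbation(x, r)   # right-edge correction
    return f

# One 100 nm opening: near an edge the DDM field deviates from Kirchhoff;
# far from all edges the two agree. For narrow openings the two edge signals
# overlap, i.e. the non-interacting-edge assumption breaks down.
edges = [(0.0, 100.0)]
print(ddm_field(0.0, edges), kirchhoff_field(0.0, edges))
```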
In this work, 3D mask modeling capabilities of Calibre will be used to assess mask topography impact on EUV imaging.
The EUV mask absorber height, combined with the non-telecentric illumination at mask level, modulates the intensity
captured from the shadowed mask area through the reflective optics onto the wafer, an effect known as mask shadowing.
On the other hand, thinning the mask absorber results in unwanted background intensity, known as flare. A true
compromise therefore has to be found for the height of an EUV mask absorber. We will discuss the state-of-the-art 3D
mask modeling capabilities, and will present methodologies to tackle the described EUV mask shadowing effect in
Calibre software. The findings will be validated against experiments on ASML's NXE:3100 EUV scanner at imec.
Masks with two different absorber heights will be evaluated on various combinations of features containing line/space
As mask feature sizes have shrunk well below the exposure wavelength, the thin-mask (Kirchhoff) approximation
breaks down and 3D mask effects contribute significantly to the through-focus CD behavior of specific features.
While full-chip rigorous 3D mask modeling is not computationally feasible, approximate simulation methods do
enable the 3D mask effects to be represented. The use of such approximations improves model prediction capability.
This paper looks at 28nm darkfield and brightfield layer datasets that were calibrated with a Kirchhoff model
and with two different 3D-EMF models. Both model calibration accuracy and verification fitness improvements are
realized with the use of 3D models.
Through a series of experiments and simulation studies, this paper will explore the lithographic impact of absorber
thickness choice on an EUV photomask and highlight the trade-offs that exist between thick and thin absorbers.
Fundamentally, thinning the absorber modifies the intensity and phase of light reflected from the absorber while
simultaneously decreasing the influence of feature edge topography. The decision to deploy a thinner absorber
depends on which imaging effect has a smaller impact after practical mitigation and correction strategies are employed.
These effects and the ability to correct for them are investigated by evaluating the absorber thickness impact on
lithographic imaging performance, stray light effects, topography effects, and CD variability. Although various tradeoffs
are described, it is generally concluded that thinning the absorber below about 68 nm is not
recommended for a TaBN/TaBO absorber stack.
The introduction of EUV lithography into the semiconductor fabrication process will enable a continuation
of Moore's law below the 22nm technology node. EUV lithography will, however, introduce new sources
of patterning distortions which must be accurately modeled and corrected with software. Flare caused by
scattered light in the projection optics results in pattern-density-dependent imaging errors. The combination
of non-telecentric reflective optics with reflective reticles results in mask shadowing effects. Reticle
absorber materials are likely to have non-zero reflectivity due to a need to balance absorber stack height
with minimization of mask shadowing effects. Depending upon placement of adjacent fields on the wafer,
reflectivity along their border can result in inter-field imaging effects near the edge of neighboring
exposure fields. Finally, there are the ever-present optical proximity effects caused by diffraction-limited
imaging and resist and etch process effects. To enable EUV lithography in production, it is
expected that OPC will be called-upon to compensate for most of these effects. With the anticipated small
imaging error budgets at sub-22nm nodes it is highly likely that only full model-based OPC solutions will
have the required accuracy. The authors will explore the current capabilities of model-based OPC software
to model and correct for each of the EUV imaging effects. Modeling, simulation, and correction
methodologies will be defined, and experimental results of a full model-based OPC flow for EUV
lithography will be presented.
Proc. SPIE 7969, Extreme Ultraviolet (EUV) Lithography II
KEYWORDS: Calibration, Distortion, Scanning electron microscopy, Printing, Photomasks, Extreme ultraviolet, Extreme ultraviolet lithography, Optical proximity correction, Photoresist processing, Back end of line
For the logic generations of the 15 nm node and beyond, the printing of pitches of 64 nm and below is needed.
For EUV lithography to replace ArF-based multi-exposure techniques, it is required to print these patterns in
a single exposure process. The k1 factor is roughly 0.6 for 64nm pitch at an NA of 0.25, and k1 ≈ 0.52 for
56nm pitch. These k1 numbers are of the same order as those at which model-based OPC was introduced in KrF and
ArF lithography a decade or so earlier. While we have done earlier work that used model-based OPC for the
22nm node test devices using EUV,1 we used a simple threshold model without further resist model calibration.
For 64 nm pitch at an NA of 0.25, the OPC becomes more important, and at 56nm pitch it becomes critical.
For 15 nm node lithography, we resort to a full resist model calibration using tools that were adapted from
conventional optical lithography. We use a straight shrink 22 nm test layout to assess post-OPC printability of
a metal layer at pitches at 64 nm and 56 nm, and we use this information to correct test layouts.
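The k1 values quoted above follow directly from k1 = half-pitch × NA / λ, with λ = 13.5 nm for EUV. A quick check:

```python
# Verify the k1 numbers quoted in the abstract: k1 = half-pitch * NA / wavelength,
# using the EUV wavelength of 13.5 nm and NA = 0.25.
WAVELENGTH_NM = 13.5
NA = 0.25

def k1(pitch_nm, na=NA, wl=WAVELENGTH_NM):
    return (pitch_nm / 2) * na / wl

print(round(k1(64), 2))  # 0.59, i.e. roughly 0.6 for 64 nm pitch
print(round(k1(56), 2))  # 0.52 for 56 nm pitch
```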
The introduction of EUV lithography into the semiconductor fabrication process will enable a continuation
of Moore's law below the 22 nm technology node. EUV lithography will, however, introduce new and
unwanted sources of patterning distortions which must be accurately modeled and corrected on the
reticle. Flare caused by scattered light in the projection optics is expected to result in several nanometers of
on-wafer dimensional variation, if left uncorrected. Previous work by the authors has focused on
combinations of model-based and rules-based approaches to modeling and correction of flare in EUV
lithography. Current work to be presented here focuses on the development of an all model-based approach
to compensation of both flare and proximity effects in EUV lithography. The advantages of such an
approach in terms of both model and OPC accuracy will be discussed. In addition, the authors will discuss
the benefits and tradeoffs associated with hybrid OPC approaches which mix both rules-based and model-based
OPC. The tradeoffs to be explored include correction time, accuracy, and data volume.
The continuous reduction of device dimensions and densities of integrated circuits increases the demand for accurate
process window models used in optical proximity correction. Beam focus and dose are process parameters that make a
significant contribution to the overall critical feature dimension error budget. The increased number of process
conditions adds to the model calibration time since a new optical model needs to be generated for each focus condition.
This study shows how several techniques can reduce the calibration time by appropriate selection of process conditions
and features while maintaining good accuracy. Experimental data is used to calibrate models using a reduced set of data.
The resulting model is compared with the model calibrated using the full set of data. The results show that using a
reduced set of process conditions and using process sensitive features can yield a model as accurate as the model
calibrated using the full set but in a shorter amount of time.
The critical role of flare in extreme ultraviolet (EUV) lithography is well known. In this work, the implementation of a robust flare metrology is discussed, and the proposed approach is qualified in terms of both precision and accuracy. The flare measurements are compared to full-chip simulations using a simplified single fractal point-spread function (PSF), and the parameters of the analytical PSF are optimized by comparing the simulation output to the experimental results. After flare map calibration, the matching of simulation and experiment in the flare range from 4 to 12% is quite good, apart from an offset of about 3%. The origin of this offset is attributed to the presence of DUV light. An experimental estimate of the DUV component is found to be in good agreement with the predicted value.
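The flare-map construction described above, convolving pattern density with a single fractal PSF, can be sketched as follows. The PSF amplitude, exponent, core radius, grid size, and normalization are illustrative assumptions, not the calibrated values from the work.

```python
import numpy as np

# Illustrative sketch of flare-map generation with a single fractal PSF:
# flare = (pattern density) convolved with PSF(r) ~ a / r**n outside a core
# radius. All parameter values are hypothetical.

def fractal_psf(shape, pixel_um, a=1e-3, n=2.2, r_min_um=1.0):
    yy, xx = np.indices(shape)
    cy, cx = shape[0] // 2, shape[1] // 2
    r = np.hypot(yy - cy, xx - cx) * pixel_um
    r = np.maximum(r, r_min_um)           # clip the singular core
    return a / r ** n

def flare_map(density, pixel_um):
    psf = fractal_psf(density.shape, pixel_um)
    psf /= psf.sum()                      # illustrative normalization
    # circular convolution via FFT (adequate for a sketch)
    return np.real(np.fft.ifft2(np.fft.fft2(density) * np.fft.fft2(np.fft.ifftshift(psf))))

density = np.zeros((64, 64))
density[16:48, 16:48] = 0.5               # a 50%-dense block in the field center
fl = flare_map(density, pixel_um=10.0)
print(fl[32, 32] > fl[0, 0])              # flare peaks where density is high
```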
With the push toward 32 nm half-pitch, OPC models will need to account for a wider range of sources of imaging
variability in order to meet the CD budget requirements. The effects of chromatic aberration on imaging have been a
recent area of interest but little work has been done to include this effect in OPC models. Chromatic aberrations in the
optical system give rise to a blurring of the intensity distribution in the imaging plane even for highly line-narrowed
immersion laser sources. The resulting focus blur can introduce a feature-dependent CD bias of several nanometers.
Usually, the empirical components of the resist model can reduce or completely compensate for this imaging effect.
However, it is not well known if including a more physical image model over a large range of laser bandwidth conditions
will improve the OPC accuracy or process-variability robustness.
This study demonstrates the correlation of physical laser bandwidth perturbations with perturbations of the optical model
in Calibre. The laser bandwidth is experimentally perturbed to obtain several sets of CD measurements for different
bandwidths. These are then used in model calibration with the corresponding perturbation in the optical model. Finally,
we quantify the improvement in model accuracy obtained when including an input of laser bandwidth.
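The bandwidth-to-focus-blur mechanism can be illustrated with a small sketch: the effective image is approximated as a spectrum-weighted average of through-focus images. The Lorentzian spectrum shape, chromatic coefficient, and toy through-focus curve below are all assumptions for illustration, not the Calibre model.

```python
# Illustrative sketch: chromatic aberration maps laser bandwidth into focus
# blur, so the effective image can be approximated as a spectrum-weighted
# average of through-focus images. All numbers are assumed for illustration.

def lorentzian(dl, fwhm):
    return 1.0 / (1.0 + (2.0 * dl / fwhm) ** 2)

def blurred_intensity(image_at_focus, focus_per_pm, bandwidth_fwhm_pm, samples=21):
    """Average image_at_focus(defocus) over the laser spectrum, with
    defocus = focus_per_pm * (wavelength offset in pm)."""
    offsets = [(-2 * bandwidth_fwhm_pm) + i * (4 * bandwidth_fwhm_pm) / (samples - 1)
               for i in range(samples)]
    weights = [lorentzian(dl, bandwidth_fwhm_pm) for dl in offsets]
    total = sum(weights)
    return sum(w * image_at_focus(focus_per_pm * dl)
               for w, dl in zip(weights, offsets)) / total

# Toy through-focus behavior: peak intensity falls off quadratically with defocus (nm)
peak = lambda defocus_nm: 1.0 - (defocus_nm / 400.0) ** 2

sharp = blurred_intensity(peak, focus_per_pm=150.0, bandwidth_fwhm_pm=0.1)
blurry = blurred_intensity(peak, focus_per_pm=150.0, bandwidth_fwhm_pm=0.5)
print(sharp > blurry)  # larger bandwidth -> more focus blur -> lower peak
```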
As patterning technology advances beyond 45-nm half-pitch, the process window shrinks dramatically even with
advanced resolution enhancement techniques. Beam focus is one of the process parameters with a significant
contribution to the overall critical feature dimension error budget. In building an optical model for proximity correction,
the final model quality strongly depends on matching the focus used in the simulation to the experimental focus
conditions. In this paper, we present a new method to determine the best beam focus and verify its accuracy using actual
test pattern measurements.
While the lithography R&D community at large has already set its sights on 32nm, all eyes are on the 22nm node.
The current consensus is to employ computational lithography to meet wafer CD uniformity (CDU) requirements. Resolution
enhancement technologies (RET) and model-based OPC are the two fundamental components of computational lithography.
Today's full-chip CDU specifications are already pushing physical limits at extreme lithography k1 factors. While
increasingly aggressive RET, whether by double exposure or double patterning, is enabling imaging performance, for CDU
control we need ever more accurate OPC at greater computational efficiency.
In this report, we discuss the need for more robust and accurate OPC models. One important trend is toward
predictive OPC models that allow accurate OPC results to be obtained much faster, shortening the qualification process
for exposure tools. We investigate several key parameters that contribute to the accuracy achievable in computational
lithography, such as the choice of image pixel size, the number of terms needed for the transmission cross coefficients (TCC),
and the "safe" ambit radius for assuring accurate CD prediction. The selections of image pixel size and "safe" ambit radius
together determine the utilization percentage of the 2D fast Fourier transform (FFT) for efficient full-chip OPC computation. For
IC manufacturing beyond ArF, we make initial observations and estimates on EUV computational lithography. These
discussions pave the way for developing a computational lithography roadmap that extends to the end of Moore's Law. This
computational lithography roadmap aims to complement the current ITRS roadmap on what it takes to
achieve CD correction accuracy.
The accuracy of a fast 3D thick mask model is evaluated for 6% AttPSM having sub-resolution assist features (SRAF).
The main features and SRAFs are designed to print 40nm lines or spaces on wafer (k1~0.28) through pitch from 100nm
to 500nm. The resulting optimum SRAF sizes vary from 10nm to 48nm depending on the main feature pitch, mask tone
and illuminator shape. The model accuracy is evaluated on both main feature CDs and SRAF side lobe intensities by
comparing with a rigorous model. The fast 3D model shows improvements in both areas over the thin-mask model,
particularly in SRAF printability prediction.
In the application of model-based optical proximity correction (OPC) to a full chip layout, lithography simulators
require fast imaging algorithms to quickly obtain the critical dimensions (CDs) of the printed features. Model accuracy
is frequently traded off for speed in order to shorten the computation time for full-chip design. The sum-of-coherent-systems
approximation represents the current standard for fast image computation. This approximation decomposes the
optical system response function in the Hopkins imaging equation into a sum of products of its eigenfunctions, or
kernels, via singular value decomposition. The partially-coherent optical imaging system is then represented as a sum of
images formed by coherently illuminated optical systems with transfer functions corresponding to the kernels of the
optical system response. The eigenvalues usually decay quickly, depending on the properties of the optical system.
Current models will typically use the first few dominant kernels since each additional kernel adds to the computational
time. However, there is no general guideline that indicates where to cut off the series in order to obtain the necessary
accuracy. In this paper, we propose a generally applicable heuristic for choosing the number of kernels.
We describe a few heuristics that show how to truncate the number of kernels that are included in a lithography model
calibration, resulting in a more efficient model for OPC treatment. The heuristics are based on various eigenvalue
measures such as the energy or the degree of coherence and express the CD error as a function of these measures. The
heuristics then show the number of kernels needed for a given accuracy.
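As a rough illustration of such an energy-based truncation heuristic, the sketch below eigendecomposes a toy Hermitian matrix standing in for a real TCC and keeps the smallest number of kernels whose eigenvalues capture a target fraction of the total energy. The toy matrix and the 98% target are assumptions, not the paper's heuristics.

```python
import numpy as np

# Energy-based kernel truncation sketch: keep the fewest dominant kernels
# (eigenvalues of a Hermitian TCC stand-in) reaching a target energy fraction.

def kernels_for_energy(eigvals, target=0.98):
    ev = np.sort(np.abs(eigvals))[::-1]          # dominant kernels first
    cum = np.cumsum(ev) / ev.sum()
    return int(np.searchsorted(cum, target) + 1)

rng = np.random.default_rng(0)
A = rng.standard_normal((32, 8))
tcc = A @ A.T                                    # Hermitian, rank 8: fast eigenvalue decay
eigvals = np.linalg.eigvalsh(tcc)

n = kernels_for_energy(eigvals, target=0.98)
print(n)  # number of coherent kernels needed for 98% of the energy
```

The fast eigenvalue decay of the toy matrix mimics the behavior described above: most of the energy sits in a handful of dominant kernels, so the truncation point depends on the accuracy target rather than the full matrix size.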