At advanced nodes, the definition of design rules and process options must be tightly optimized to deliver the best tradeoff between performance, power, area, and manufacturability. However, implementation platforms do not typically have access to process information, and process teams do not have design knowledge, so the optimization loops required for Design-Technology Co-Optimization (DTCO) are either impossible or, at best, long and expensive for fabless design houses.
Joining forces, ASML, IMEC and Cadence Design Systems developed an in-design and signoff lithography physical analysis flow well suited for 7/5nm and below. The Tachyon OPC+ engine used for the IMEC 7/5nm process has been integrated into Cadence Litho Physical Analyzer (LPA) to perform lithography checks using the foundry process models, recipes, and hotspot detectors. This flow leverages the existing LPA infrastructure for both custom and digital design platforms, as well as standalone signoff.
Depending upon the end application, LPA can be launched from place & route, from custom layout, or standalone. LPA first processes the design database to identify hierarchy, decomposes the layout for coloring, and applies pattern matching to identify locations requiring simulation. The layout is then passed to the Tachyon OPC tool to perform optical proximity correction and model-based litho verification that has been validated on silicon. The hotspots and contours are processed by LPA to generate hotspot markers and fixing guidelines, and all this information is provided back to the design environment.
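For orientation, the orchestration of such an in-design check can be summarized in a short sketch. This is illustrative Python pseudocode only, not the actual LPA or Tachyon API; every object and method name here is a hypothetical placeholder.

```python
# Illustrative sketch of an in-design litho-check pipeline (hypothetical
# names, not the LPA/Tachyon API): pattern matching narrows the layout to
# candidate windows, an external OPC+verification engine simulates them,
# and hotspots come back as markers the layout editor can display.

from dataclasses import dataclass

@dataclass
class Hotspot:
    x: float          # hotspot location in layout coordinates (um)
    y: float
    severity: float   # e.g., simulated CD deviation in nm
    hint: str         # fixing guideline for the designer

def litho_check(design_db, pattern_library, opc_engine):
    # 1. Identify hierarchy and decompose the layout for coloring.
    cells = design_db.flatten_hierarchy()
    colored = [design_db.decompose(cell) for cell in cells]

    # 2. Pattern matching: only windows matching known risky
    #    configurations are sent to full model-based simulation.
    windows = [w for layout in colored
                 for w in pattern_library.match(layout)]

    # 3. External OPC + model-based verification on the candidates.
    hotspots = []
    for w in windows:
        contour = opc_engine.correct_and_simulate(w)
        for v in contour.check_against(w.target):
            hotspots.append(Hotspot(v.x, v.y, v.delta_cd, v.fix_guideline))

    # 4. Return markers + fixing guidelines to the design environment.
    return hotspots
```

The key design point is step 2: pattern matching confines the expensive model-based simulation to a small fraction of the layout, which is what makes the check fast enough to run in-design.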
The flow has been developed and demonstrated on IMEC 7nm, and can be ported to smaller or larger technology nodes. The paper presents the results of this in-design and signoff lithography physical analysis flow, showing how DTCO and design teams can add manufacturability to PPA.
Current patterning technology for manufacturing memory devices is being developed toward enabling higher density and higher resolution. However, as applying high-resolution technology results in decreased process margin, OPC has to compensate for this effect. Since the process margin decreases greatly for contact layers, technologies such as RBAF (rule-based assist features), MBAF (model-based assist features), and ILT (inverse lithography technology) are considered to maximize the process margin [1, 2, 3]. Although ILT is the best solution in terms of process margin, it has several disadvantages, such as long OPC run time, mask complexity, and unstable mask fidelity. The MBAF method is a good compromise among the more advanced techniques, mitigating those risks (though not eliminating them), which is why it is often used for contact layers.
When setting up the rules for RBAF, not all patterns are considered. Thus, applying RBAF to contact layers may result in decreased process margin for certain patterns, since the same rule is applied globally. MBAF, on the other hand, can maximize the process margin for various patterns because it places AFs (assist features) at the locations that maximize the margin for the patterns considered. However, the MBAF method is very sensitive to even a slight change of the target, which influences the locations of the AFs. This leads to different OPCed CDs for the main features, even for those that should not be affected by the changed target. Once the OPCed CD changes, it is impossible to obtain the same mask CD even when the mask is manufactured with the same method. If this occurs during mass production, the entire layer needs to be confirmed after each revision, which leads to unnecessary time loss.
In this paper, we suggest a new OPC method to prevent this issue. In this flow, the OPCed shapes of unchanged patterns remain the same, only the changed targets are OPCed and replaced at the corresponding locations, and the boundaries between those regions are corrected using model-based boundary healing. This method can reduce the overall OPC TAT (turnaround time) as well as the time spent verifying the entire layout after each revision. Details of these results are described in this paper. After further study, this flow can also be applied to ILT.
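The exact implementation is left to the paper, but the replace-and-heal idea can be sketched as follows; all helper names (diff_regions, opc.run, opc.refine, and so on) are hypothetical placeholders assuming a region-based layout API.

```python
# Conceptual sketch of incremental OPC (hypothetical helpers, not a vendor
# API): reuse the previous OPCed mask wherever the target is unchanged,
# re-run OPC only on changed regions, then heal the stitch boundaries.

def incremental_opc(old_target, new_target, old_mask, opc, halo=0.2):
    changed = diff_regions(old_target, new_target)   # changed-target areas
    mask = old_mask.copy()                           # reuse prior correction

    for region in changed:
        # OPC the changed region with a halo of context around it,
        # then swap the fresh correction into the reused mask.
        patch = opc.run(new_target.clip(region.grow(halo)))
        mask.replace(region, patch)

    # Model-based boundary healing: re-optimize only the segments that
    # straddle a stitch boundary so old and new corrections re-converge.
    for region in changed:
        opc.refine(mask, area=region.boundary_band(halo))
    return mask
```

Because only the changed regions and their seams are re-simulated, both the OPC runtime and the post-revision verification scope shrink accordingly.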
At the 20nm technology node, it is challenging for simple resolution enhancement techniques (RET) to achieve sufficient process margin due to significant coupling effects between dense features. Advanced computational lithography techniques, including source-mask optimization (SMO), thick-mask modeling (M3D), model-based sub-resolution assist features (MB-SRAF), and process window solver (PW Solver) methods, are now required in mask correction processes to achieve optimal lithographic goals. An OPC solution must not only converge to the nominal condition with high fidelity, but also maintain this fidelity over an acceptable process window. The solution must also be sufficiently robust to tolerate potential scanner or OPC model tuning. In many cases, even a small change in OPC parameters can change the mask correction significantly, making OPC optimization quite challenging. On top of this, different patterns may have significantly different optimum source maps and different optimum OPC solution paths. Consequently, finding a globally optimal OPC solution becomes important. In this work, we introduce a holistic solution including source and mask optimization (SMO), MB-SRAF, conventional OPC, and Co-Optimization OPC, in which each technique plays a unique role in process window enhancement: SMO optimizes the source to find the best source solution for all critical patterns; Co-Optimization provides the optimized location and size of scattering bars and guides the optimized OPC solution; MB-SRAF and MB-OPC then utilize all information from the advanced solvers and produce a globally optimized production solution.
In previous work [1], we introduced a new technology called Flexible Mask Optimization (FMO) that was successfully used for localized OPC correction. OPC/RET techniques such as model-based assist features and process-window-based OPC solvers have become essential for addressing critical patterning issues at 2× and lower technology nodes. With an FMO flow, critical patterns were identified, classified, and corrected in localized areas only, using advanced techniques. One challenge with this flow is that once the hotspots are identified, a user still has to come up with OPC solutions to address them. This process can be cumbersome and time consuming, as different types of hotspots in new designs may require different recipes, causing delays to tapeout. What is required is a robust, powerful, and automated OPC technique that can handle various types of hotspots, so that an automatic hotspot correction flow can be established. In this work, we introduce a new cost-function-based OPC technique called Co-optimization OPC that can be used to correct various types of hotspots with minimum tuning effort. In this approach, the OPC solver simultaneously solves for all the segments in a patch, including main and sub-resolution assist features (SRAF), applying additional user-defined cost function constraints such as MEEF, PV band, MRC, and SRAF printability. Unlike conventional OPC solvers, Co-optimization solvers can also move and grow SRAFs, which further improves the process window. The key benefit of the Co-optimization OPC solution is that it can be used in a standard recipe to resolve many different hotspots encountered across various designs for a given layer. In this study, we demonstrate that Co-optimization OPC can be successfully used to address various types of hotspots across designs for selected 2× nm node line/space layers, as an example. These layers have been particularly challenging as they use single-exposure lithography with k1 around 0.3, and aggressive RET solutions are required to address their patterning challenges. Finally, we report on the implementation of the Co-optimization OPC recipe within the FMO framework for hotspot correction.
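The abstracts do not disclose the exact objective function, but a cost-function-based co-optimization of this kind can be illustrated schematically: all main-feature and SRAF segment variables x are solved jointly against a fidelity term plus penalties for the user-defined constraints named above (MEEF, PV band, MRC, SRAF printability). The sim object, thresholds, and weights below are illustrative assumptions, not the published recipe.

```python
import numpy as np

# Illustrative composite cost for co-optimization OPC: a single scalar
# objective over all segment positions x, combining nominal fidelity
# with penalty terms for each user-defined constraint. A generic
# optimizer (e.g., scipy.optimize.minimize) would minimize this.

def co_opt_cost(x, sim, w_epe=1.0, w_pvb=0.5, w_meef=0.3,
                w_mrc=10.0, w_sraf=5.0):
    epe  = sim.edge_placement_errors(x)   # nominal-condition fidelity
    pvb  = sim.pv_band_widths(x)          # process-window variation
    meef = sim.meef(x)                    # mask error enhancement factor
    mrc  = sim.mrc_violations(x)          # mask-rule violations (>= 0)
    sraf = sim.sraf_print_margin(x)       # SRAF printability margin

    return (w_epe  * np.sum(epe ** 2)
          + w_pvb  * np.sum(pvb ** 2)
          + w_meef * np.sum(np.maximum(meef - 3.0, 0) ** 2)  # cap MEEF ~3
          + w_mrc  * np.sum(mrc ** 2)
          + w_sraf * np.sum(np.maximum(-sraf, 0) ** 2))      # no SRAF print
```

Because the SRAF positions and widths are part of x rather than fixed inputs, the solver can move and grow SRAFs along with the main-feature segments, which is the distinguishing property described above.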
This paper investigates the application of source-mask optimization (SMO) techniques for 28 nm logic devices and beyond. We systematically study the impact of source and mask complexity on lithography performance. For the source, we compare SMO results for the new programmable illuminator (ASML's FlexRay) and standard diffractive optical elements (DOEs). For the mask, we compare SMO results of different mask complexity by enforcing the sub-resolution assist feature (SRAF, or scattering bar) configuration to be either rectangular or freeform while varying the mask manufacturing rule check (MRC) criteria. As lithography performance metrics, we evaluate the process windows and MEEF for different source and mask complexities across different k1 values. Mask manufacturability and mask writing time are also examined. From the results, cost-effective approaches for logic device production are shown, based on the balance between lithography performance and source/mask (OPC/SRAF) complexity.
Application-specific aberration resulting from localized heating of lens elements during exposure has become more significant in recent years due to the increasing number of low-k1 applications. Modern scanners are equipped with sophisticated lens manipulators that are optimized and controlled by scanner software in real time to reduce this aberration. Advanced lens control options can even optimize the lens manipulators to achieve better process window and overlay performance for a given application. This is accomplished by including litho metrics as part of the lens optimization process. Litho metrics refer to any lithographic properties of interest (e.g., CD variation, image shift) that are sensitive to lens aberrations. However, there are challenges that prevent effective use of litho metrics in practice. There are often a large number of critical device features that need monitoring, and the associated litho metrics (e.g., CD) generally show a strongly non-linear response to Zernikes. These issues greatly complicate the lens control algorithm, making real-time lens optimization difficult. We have developed a computational method to address these issues. It transforms the complex physical litho metrics into a compact set of linearized "virtual" litho metrics, ranked by their importance to the process window. These new litho metrics can be readily used by existing scanner software for lens optimization. Both simulations and experiments showed that the litho metrics generated by this method improved aberration control.
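The abstract does not spell out the transform, but one standard way to obtain a compact, ranked, linearized metric set is a weighted singular value decomposition of the metric-to-Zernike sensitivity matrix; the numpy sketch below illustrates that reading and is an assumption, not the paper's disclosed method.

```python
import numpy as np

# Linearize each physical litho metric around the nominal lens state,
# then use an SVD of the weighted sensitivity (Jacobian) matrix to get
# a small set of orthogonal linear "virtual" metrics, ranked by impact.

def virtual_metrics(jacobian, weights, n_keep=5):
    # jacobian[i, j] = d(metric_i) / d(zernike_j), from simulation
    # weights[i]     = process-window importance of metric_i
    J = weights[:, None] * jacobian
    U, s, Vt = np.linalg.svd(J, full_matrices=False)

    # Rows of Vt are linear combinations of Zernikes; the singular
    # values s rank their importance. Keep the top few as the
    # "virtual" litho metrics handed to the lens-control software.
    return Vt[:n_keep], s[:n_keep]
```

Each kept row is linear in the Zernike coefficients by construction, which is what makes it usable inside a real-time lens control loop.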
Photomask inspection requires a combination of high resolution and high throughput. Scanning electron microscopy (SEM) has excellent resolution but at high throughput yields noisy images. Hence we are developing algorithms for extracting pattern information from noisy SEM images.
One big challenge in processing SEM images is edge extraction. SEM images have their own characteristics, so many existing edge extraction algorithms based on gradient signal analysis do not work well: they either yield a strong signal in non-edge areas or a weak signal at true edges. We describe several new edge extraction algorithms targeting noisy SEM images. The essence of these new algorithms is analyzing the "ridge" signal, i.e., the bright stripes.
We first propose edge extraction based on second-order polynomial regression. Based on the observation that the pixel values around edges in SEM images behave approximately as second-order polynomial functions of the coordinates, we compute the "ridge" signal from the coefficients of such polynomial functions obtained by regression. This algorithm generally yields very accurate estimates of the edge locations, especially for straight edges.
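A one-dimensional sketch of this idea under the stated parabolic-profile assumption: fit p(t) = at² + bt + c in a sliding window, treat strong concavity (a < 0) as ridge evidence, and read the sub-pixel peak position from the parabola vertex at t = -b/(2a). The window size and acceptance test below are illustrative choices, not the paper's parameters.

```python
import numpy as np

# 1-D ridge detector by second-order polynomial regression: a bright
# stripe shows up as a strongly concave parabola fit, and its sub-pixel
# position is the vertex of that parabola.

def ridge_signal_1d(profile, half=3):
    t = np.arange(-half, half + 1)
    A = np.vstack([t**2, t, np.ones_like(t)]).T
    pinv = np.linalg.pinv(A)              # precomputed least-squares solve

    strength = np.zeros(len(profile))
    for i in range(half, len(profile) - half):
        a, b, c = pinv @ profile[i - half:i + half + 1]
        if a < 0 and abs(-b / (2 * a)) <= 0.5:   # concave, peak in-pixel
            strength[i] = -a                     # concavity = ridge strength
    return strength
```

The regression also explains the noise robustness: averaging the fit over a window acts as a matched low-pass filter for parabola-shaped stripes.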
In the approach based on second-order polynomial regression, it is implicitly assumed that the edge is (approximately) straight. We thus propose a further improvement on this algorithm and assume that edge shapes can be well approximated by second-order curves, even at sharp turns. This approximation leads to a fourth-order polynomial regression with better performance around edges with a sharp turn.
A third algorithm is based on image segmentation. Image segmentation, which is mostly used in image content analysis, is defined as the partition of a digital image into multiple regions (sets of pixels) so that the objects of interest are separated from the background. In our approach, we adapt image segmentation to edge extraction. In particular, we apply a fast segmentation algorithm to separate the bright area from the dark area, and use the difference of average pixel values as the "ridge" signal. The advantage of this approach is that no assumption on the edge shape is involved and the computational complexity is low.
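A minimal sketch of this reading, substituting a simple window-mean threshold for the fast segmentation step (the specific segmentation algorithm is not named above): the ridge signal at each pixel is the difference between the bright-class and dark-class means of its local window.

```python
import numpy as np

# Segmentation-based ridge signal: split each local window into bright
# and dark pixel classes and use the difference of class means as the
# ridge strength. No assumption about the edge shape is needed.

def seg_ridge(image, half=4):
    h, w = image.shape
    out = np.zeros((h, w))
    for y in range(half, h - half):
        for x in range(half, w - half):
            win = image[y - half:y + half + 1, x - half:x + half + 1]
            thr = win.mean()                     # cheap stand-in for Otsu
            bright, dark = win[win >= thr], win[win < thr]
            if bright.size and dark.size:
                out[y, x] = bright.mean() - dark.mean()
    return out
```

A production version would vectorize the window loop or reuse running sums, but the per-pixel work is already simple enough to make the low-computational-complexity claim plausible.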
Finally, we propose a hybrid algorithm combining the segmentation approach and the polynomial regression approach, yielding a "segmentation-assisted" algorithm that incorporates the advantages of both approaches. Simulation on a wide range of SEM image types yields quite satisfactory results, even for very noisy images. We will present detailed algorithm flows and demonstrate extraction results from real images.
To minimize or eliminate lithography errors associated with optical proximity correction, integrated circuit manufacturers need an accurate, predictive, full-chip lithography model which can account for the entire process window (PW). We have validated the predictive power of a novel focus-exposure modeling methodology with wafer data collected across the process window at multiple customer sites. Tachyon Focus-Exposure Modeling (FEM) uses first-principle, physics-driven simulations to deliver accurate and predictive full-chip lithography modeling for producing state-of-the-art circuits.
Lithography simulation is an integral part of semiconductor manufacturing. It is required not only in lithography process development, but also in RET design, RET verification, and process latitude analysis, from library cells to full-chip tapeout. Two RET design checking flows are examined and compared. In the first flow, an image contour is simulated from post-OPC GDSII data at best focus and exposure conditions, and RET design defects are identified by comparing the calculated contours with the pre-OPC design data. To check lithography manufacturability across the typical IC process window, the second RET verification flow simulates image contours at multiple focus and exposure conditions. These RET design checking flows are implemented on a new platform that combines a hardware-accelerated computational engine with a new analysis method to numerically evaluate the lithographic printing and mask manufacturing challenges for a given design layout. The algorithmic approach in this new system is based on image processing, which is fundamentally different from conventional edge-based analysis. Specific examples of a mask-aware RET verification flow leveraging this new platform and method are provided, with speed and accuracy benchmarks. Through high-speed computation of lithographic images from full-chip data, many opportunities for novel and cost-effective post-layout lithography verification become available. By combining the new platform with analysis steps relevant to leading-edge photomask manufacturing, it may become possible to reduce the risks inherent in advanced technology tapeouts while improving layout-to-mask fabrication cycle time and cost.
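Schematically, both checking flows reduce to the same loop and differ only in the set of (focus, dose) conditions scanned; the sketch below is illustrative pseudocode with hypothetical helpers, not the product API, and the 2 nm spec is a made-up number.

```python
# Flow 1: conditions = [(0.0, 1.0)]           (best focus/exposure only)
# Flow 2: conditions = grid over focus/dose   (full process window)

def ret_check(post_opc, pre_opc_target, simulate, conditions, spec_nm=2.0):
    worst = {}
    for focus, dose in conditions:
        contour = simulate(post_opc, focus, dose)
        # Compare the simulated contour against the pre-OPC design intent
        # and keep the worst-case edge placement error per site.
        for site, epe in contour.epe_against(pre_opc_target):
            worst[site] = max(worst.get(site, 0.0), abs(epe))
    return {site: e for site, e in worst.items() if e > spec_nm}
```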
Lithography simulation is an increasingly important part of semiconductor manufacturing due to the decreasing k1 value. It is required not only in lithography process development, but also in RET design, RET verification, and process latitude analysis, from library cells to full chip. As design complexity grows exponentially, pure software-based simulation tools running on general-purpose computer clusters face increasing challenges in meeting today's requirements for cycle time, coverage, and modeling accuracy. We have developed a new lithography simulation platform (Tachyon™) which achieves orders-of-magnitude speedup compared to traditional pure-software simulation tools. The platform combines innovations at all levels of the system: algorithm, software architecture, cluster-level architecture, and proprietary acceleration hardware using application-specific integrated circuits. The algorithmic approach is based on image processing, fundamentally different from conventional edge-based analysis. The system achieves better model accuracy than conventional full-chip simulation methods, owing to its ability to handle hundreds of TCC kernels, using either a vector or a scalar optical model, without impacting throughput. Thus first-principle aerial image simulation at the full-chip level can be carried out within minutes. We describe the hardware, algorithms, and models used in the system and demonstrate its application to full-chip verification.
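The image-based computation rests on the standard Hopkins sum-of-coherent-systems (SOCS) decomposition: the partially coherent aerial image is a weighted sum of coherent convolutions of the mask with the TCC eigenkernels. A minimal numpy sketch, assuming the kernels are precomputed and origin-centered so that FFT multiplication implements the (circular) convolution:

```python
import numpy as np

# SOCS aerial image: I = sum_k sigma_k * |m (*) phi_k|^2, where m is the
# mask transmission, phi_k the TCC eigenkernels, sigma_k their weights.
# Handling hundreds of kernels just extends the loop; accuracy grows
# with the number of kernels retained.

def socs_image(mask, kernels, weights):
    M = np.fft.fft2(mask)
    image = np.zeros(mask.shape)
    for phi, sigma in zip(kernels, weights):
        field = np.fft.ifft2(M * np.fft.fft2(phi, s=mask.shape))
        image += sigma * np.abs(field) ** 2
    return image
```

Because the cost is a fixed number of FFTs per kernel regardless of pattern complexity, the approach scales with area rather than with edge count, which is what distinguishes it from edge-based analysis.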
A two-dimensional self-calibration experiment obtains Cartesian traceability for high-precision tools. The calibration procedure incorporates group theory principles to solve our industry's two-dimensional calibration problem. With group theory, a Cartesian system is obtainable through mathematics alone, thus eliminating the need for any certified standards. The calibration algorithm was developed by Jun Ye at Stanford University and funded by the Semiconductor Research Corporation (SRC) with collaboration from Hewlett Packard (HP) and IBM. The data were collected on Leica's LMS2000 and LMS2020 systems.
A single-mode optical fiber is a convenient and efficient transmission medium for optical signals. However, the optical insertion phase written onto the light field by the fiber is very sensitive to the surrounding environment, such as temperature or acoustic pressure. This phase-noise modulation tends to corrupt the original delta-looking Hz-level optical spectrum by broadening it toward the kilohertz domain. Here we describe a simple and effective technique for accurate cancellation of such induced phase noise, thus allowing fiber-based optical signal transmission in very demanding high-precision frequency-based applications where optical phase noise is critical. The system is based on a double-pass heterodyne measurement and digital phase division by two to obtain the correction signal for the phase-compensating AOM. The underlying physical principle is the fact that an optical fiber path ordinarily possesses an excellent degree of linearity and reciprocity, such that two counterpropagating signals experience the same phase perturbations. Overall, the fiber's kilohertz-level broadening is reduced to the sub-millihertz domain by our correction.
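The correction bookkeeping is compact. Under the reciprocity assumption above (forward and return passes accumulate the same fiber phase φ_f), the double-pass heterodyne beat divided by two equals the one-way phase error, which the servo then nulls; a sketch:

```latex
% Phase bookkeeping for double-pass fiber noise cancellation.
% \phi_0: source phase; \phi_c: AOM correction; \phi_f: one-way fiber noise.
\begin{align*}
  \phi_{\text{out}} &= \phi_0 + \phi_c + \phi_f
      && \text{(one pass: AOM, then fiber)} \\
  \phi_{\text{ret}} - \phi_0 &= 2\,(\phi_c + \phi_f)
      && \text{(round trip: AOM and fiber twice)} \\
  \tfrac{1}{2}\,(\phi_{\text{ret}} - \phi_0) &\;\xrightarrow{\;\text{servo}\;}\; 0
      && \Longrightarrow\ \phi_c = -\phi_f,\quad \phi_{\text{out}} = \phi_0 .
\end{align*}
```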
A frequency-doubled Nd:YAG laser has been stabilized to hyperfine transitions in molecular iodine near 532 nm via modulation transfer spectroscopy. This technique, together with the low noise of the source, yields excellent SNR (500 in a kHz bandwidth); thus, an impressive frequency stability is achieved. The nearly systematic-free resonance signals obtained by modulation transfer spectroscopy give a correspondingly encouraging reproducibility, estimated to be about ±300 Hz. With two such stabilized lasers, we found a pressure shift of only -1.3 kHz/Pa over the range 0.4-4.0 Pa and a power-dependent frequency shift of 2.1 kHz/mW. We have also measured the absolute frequency of the a10 component of the R(56)32-0 transition, using the D2 line in Rb at 780 nm and an iodine-stabilized 633 nm He-Ne laser as references. The measured frequency is 563 260 223.471 MHz ± 40 kHz. In turn, the absolute frequency of the D2 line was measured via the frequency difference between the D2 line and the two-photon 5S1/2 - 5D5/2 transition at 778 nm in Rb. Thus we have now realized a pure frequency measurement of this interval and of the 532 nm frequency.
Even at 0.5 µm design rules, the specifications on 5X reticles are set extremely tightly, on the grounds that ULSI patterns are so complex that tight specifications are essential to obtain acceptable yield. It is usually assumed that these specifications scale with the design rules, and that they should be even tighter for 1X reticles. As a consequence, it has been argued that 1X reticles for 0.25 µm design rules are impracticable. A statistical analysis, starting from first principles and assuming point-independent, normally distributed errors, supports the way in which mask specifications are currently set. The assumptions of spatial invariance and normal distribution are crucially important in the analysis. However, it is far from clear that they are valid. Consequently, mask specifications in general, as they are currently set, may be unnecessarily severe.
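To make the statistical argument concrete: under the stated assumptions, independent, normally distributed error contributions add in quadrature, and mask CD errors reach the wafer demagnified by the reduction factor M. The budget below is an illustrative form of that reasoning, not the paper's exact analysis.

```latex
% Quadrature error budget for wafer-level CD, reduction factor M.
\[
  \sigma_{\text{wafer}}^{2}
    = \Bigl(\frac{\sigma_{\text{mask}}}{M}\Bigr)^{2}
    + \sigma_{\text{litho}}^{2} + \sigma_{\text{process}}^{2},
  \qquad M = 5 \ \text{(5X)}, \quad M = 1 \ \text{(1X)} .
\]
```

If the errors are not spatially invariant or not normally distributed, the quadrature sum no longer holds, and the mask specification derived from it may indeed be unnecessarily severe.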