This PDF file contains the front matter associated with SPIE Proceedings Volume 9427, including the Title Page, Copyright information, Table of Contents, Authors, Introduction (if any), and Conference Committee listing.
This paper reviews the most critical components of a ‘holistic’ DTCO flow for an advanced technology node and, in doing so, quantifies the differences between 7nm technology node definitions implemented with extreme ultraviolet and with 193nm immersion lithography. The DTCO topics covered include: setting scaling targets for critical pitches, gear ratios, and cell height; defining a set of patterning solutions, the required RET restrictions, and the resulting patterning cost; compiling physical design objectives to achieve power, performance, and area scaling; developing a set of standard logic cell architectures; and finally assessing achievable cell-level as well as macro-level scaling.
Achieving lithographic printability at advanced nodes (14nm and beyond) can impose significant restrictions on physical design, including large numbers of complex design rule checks (DRC) and compute-intensive detailed process model checking. Early identification of yield-limiting hotspots is essential for both foundries and designers to significantly improve process maturity. A real challenge is to scan the design space to identify hotspots and decide the proper course of action for each one. Building a scored pattern library of realistic hotspot candidates is therefore of great value to both parties: foundries look for the most frequently used patterns to optimize their technology around, and for patterns that should be forbidden, while designers look for patterns that are sensitive to their neighboring context, so that lithographic simulation in context can decide whether they are true hotspots [1]. In this paper we propose a framework to data-mine designs and obtain a set of representative patterns for each design; our aim is to sample the designs at locations that are potential yield limiters. Although we aim to keep the total number of patterns as small as possible to limit complexity, a designer is still free to generate layouts that yield several million patterns defining the whole design space. To handle the large number of patterns that represent the design's building-block constructs, we prioritize the patterns according to their importance. The proposed pattern classification methodology assigns each pattern a score based on the severity of the hotspots it causes, the probability of its presence in the design, and the likelihood of it causing a hotspot. The paper also shows how the scoring scheme helps foundries optimize their master pattern libraries and prioritize their efforts at 14nm and beyond. Moreover, the paper demonstrates how hotspot scoring improves the runtime of lithographic simulation verification by identifying which patterns need to be optimized to correctly describe candidate hotspots, so that only potentially problematic patterns are simulated.
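A minimal sketch of the kind of composite scoring the abstract describes, assuming (as an illustration, not the paper's exact formula) that a pattern's rank is driven by the product of hotspot severity, occurrence probability, and hotspot likelihood:

```python
from dataclasses import dataclass

@dataclass
class Pattern:
    name: str
    severity: float      # severity of hotspots this pattern causes (e.g., 0..10)
    occurrence: float    # probability of the pattern appearing in a design (0..1)
    hotspot_prob: float  # likelihood that an occurrence becomes a hotspot (0..1)

def score(p: Pattern) -> float:
    # Hypothetical composite score: severe, frequent, hotspot-prone patterns rank first.
    return p.severity * p.occurrence * p.hotspot_prob

library = [
    Pattern("tip-to-tip",  severity=8.0, occurrence=0.30, hotspot_prob=0.6),
    Pattern("dense-via",   severity=5.0, occurrence=0.70, hotspot_prob=0.2),
    Pattern("jog-on-bend", severity=9.0, occurrence=0.05, hotspot_prob=0.8),
]

# Rank the library; only top-ranked patterns would be sent to litho simulation.
for p in sorted(library, key=score, reverse=True):
    print(f"{p.name:12s} score={score(p):.2f}")
```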
A pattern-based methodology for optimizing stitches is developed, based on identifying stitch topologies and replacing them with pre-characterized fixing solutions in decomposed layouts. A topology-based library of stitches with predetermined fixing solutions is built. A pattern-based engine searches for matching topologies in the decomposed layouts. When a match is found, the engine opportunistically applies the predetermined fixing solution: a replacement is preserved only if it is free of design rule check errors. The methodology is demonstrated on a 20nm layout design that contains over 67 million first-metal-layer stitches. Results show that a small library containing 3 stitch topologies improves stitch area regularity by 4x.
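The opportunistic replacement step might look like the following sketch; the matcher, fix-application, and DRC callbacks are hypothetical stand-ins for the pattern-based engine described above:

```python
def optimize_stitches(layout, library, find_matches, apply_fix, drc_clean):
    """Opportunistically replace matched stitch topologies, keeping a
    replacement only when the local region stays DRC-clean."""
    fixed = 0
    for topology, fix in library.items():
        for match in find_matches(layout, topology):
            candidate = apply_fix(layout, match, fix)    # trial replacement
            if drc_clean(candidate, region=match.bbox):  # check locally
                layout = candidate                       # keep the clean fix
                fixed += 1
            # otherwise the original stitch is left untouched
    return layout, fixed
```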
Starting from the 45nm technology node, systematic defectivity has had a significant impact on device yield loss, and its impact grows with each new technology node. The effort required to achieve patterning maturity with zero yield detractors is also increasing significantly from node to node. Within the manufacturing environment, new in-line wafer inspection methods have been developed to identify systematic device defects, including the process window qualification (PWQ) methodology used to characterize process robustness. Although patterning is characterized with the PWQ methodology, some questions remain: How can we demonstrate that the measured process window is large enough to avoid design-based defects that will impact device yield? Can we monitor the systematic yield loss on nominal wafers? From the device test engineering point of view, systematic yield detractors are expected to be identified by Automatic Test Pattern Generation (ATPG) test-result diagnostics performed after electrical wafer sort (EWS). Test diagnostics can identify failed nets or cells causing systematic yield loss [1],[2]. Converging from failed device nets and cells to a failed manufacturing design pattern is usually based on assumptions that must be confirmed by electrical failure analysis (EFA). However, many EFA investigations are required before the design pattern failures are found, making design pattern failure identification costly in time and resources. In this situation, an opportunity exists to share knowledge between the device test engineering and manufacturing environments to help improve device yield. This paper presents a new yield diagnostics flow dedicated to correlating critical design patterns detected within the manufacturing environment with the observed device yield loss. The results obtained with this new flow on a 28nm technology device are described, with the defects of interest and the device yield impact for each design pattern. The EFA performed to validate the design-pattern-to-yield correlation is also presented, including physical cross sections. Finally, the application of this new flow to systematic design pattern yield monitoring is discussed and compared with classic in-line wafer inspection methods.
Early identification of problematic patterns in real designs is of great value because lithographic simulation tools face significant runtime challenges. To reduce processing time, such a tool selects only a fraction of possible patterns, those with a probable area of failure, at the risk of missing some problematic patterns. In this paper, we introduce a fast method to automatically extract patterns based on their structure and context, using the Voronoi diagram of VLSI design shapes. We first identify possible problematic locations, represented as gauge centers, and then use the derived locations to extract windows and problematic patterns from the design layout. The problematic locations are prioritized by the shape and proximity information of the design polygons. We performed experiments for pattern selection in a portion of a 22nm random logic design layout. The design layout had 38584 design polygons (consisting of 199946 line segments) on layer Mx, and 7079 markers generated by an Optical Rule Checker (ORC) tool. We verified our approach by comparing the coverage of our extracted patterns against the ORC-generated markers.
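As an illustration of proximity-driven gauge selection, the sketch below approximates the segment Voronoi diagram by densely sampling polygon edges into points and using SciPy's point-based Voronoi diagram; the real method operates on the exact Voronoi diagram of the design shapes:

```python
import numpy as np
from scipy.spatial import Voronoi, cKDTree

def gauge_centers(edge_points: np.ndarray, clearance_nm: float) -> np.ndarray:
    """Return Voronoi vertices that sit between closely spaced design edges.

    edge_points: (N, 2) points sampled along polygon edges.
    clearance_nm: flag vertices whose distance to the nearest edge sample is
                  below this threshold (tight spacing -> litho risk)."""
    vor = Voronoi(edge_points)
    tree = cKDTree(edge_points)
    d, _ = tree.query(vor.vertices)        # clearance radius at each Voronoi vertex
    return vor.vertices[d < clearance_nm]  # candidate problematic locations

# Toy usage: two closely spaced rows of edge samples produce candidates in the gap.
pts = np.array([[x, 0.0] for x in range(10)] +
               [[x, 3.0] for x in range(10)], dtype=float)
print(gauge_centers(pts, clearance_nm=2.0))
```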
Multiple patterning (triple and quadruple patterning) is being considered for use on the Middle-Of-Line (MOL) layers at the 10nm technology node and beyond [1]. For robust standard cell design, designers need to improve the inter-cell compatibility for all combinations of cells and cell placements. Multiple patterning colorability checks break the locality of traditional rule checking, so N-wise checks are needed to verify multiple patterning colorability for layout interactions across cell boundaries. In this work, a systematic framework is proposed to evaluate library-level robustness under multiple patterning from two perspectives: illegal cell combinations and full-chip interactions. With efficient N-wise checks, vertical and horizontal boundary checks are explored to predict illegal cell combinations. For full-chip interactions, random benchmarks are generated by cell shifting and tested to evaluate the placement-level effort needed to reduce quadruple patterning to triple patterning for the MOL layer.
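A pairwise (N=2) boundary check can be sketched as a small graph-coloring test; the conflict-edge extraction (`boundary_edges`) is a hypothetical callback, and shape identifiers are instance-tagged so a cell may abut a copy of itself:

```python
from itertools import product

def k_colorable(nodes, edges, k=3):
    """Backtracking test for k-colorability of a small conflict graph."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    colors = {}
    def assign(i):
        if i == len(nodes):
            return True
        n = nodes[i]
        for c in range(k):
            if all(colors.get(m) != c for m in adj[n]):
                colors[n] = c
                if assign(i + 1):
                    return True
                del colors[n]
        return False
    return assign(0)

def illegal_pairs(cells, boundary_edges):
    """Flag abutting cell pairs whose combined boundary graph is not 3-colorable."""
    bad = []
    for a, b in product(cells, repeat=2):
        left = [("L", a, s) for s in cells[a]]    # instance-tag shapes so that a
        right = [("R", b, s) for s in cells[b]]   # cell can legally abut itself
        edges = boundary_edges(a, b)              # conflict edges over tagged shapes
        if not k_colorable(left + right, edges, k=3):
            bad.append((a, b))
    return bad
```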
Self-Aligned Quadruple Patterning (SAQP) is one of the leading candidates for the sub-14nm nodes and beyond. However, compared with triple patterning, producing a feasible standard cell placement has the following problems: (1) when coloring conflicts occur between two adjoining cells, they may not be easy to resolve, since an SAQP layout has stronger coloring constraints; (2) an SAQP layout cannot use stitches to resolve coloring conflicts. In this paper, we present a framework for SAQP-aware standard cell placement that considers these problems. When a standard cell is placed, the proposed method tries to resolve coloring conflicts between two cells by exchanging two of the three colors. If conflicts remain between adjoining cells, dummy space is inserted to keep the coloring constraints of SAQP satisfied. We show examples that confirm the effectiveness of the proposed framework. To the best of our knowledge, this is the first framework for SAQP-aware standard cell placement.
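The color-exchange step might be sketched as follows, assuming a `conflicts` callback that reports same-color boundary shapes at sub-minimum spacing; names are illustrative:

```python
from itertools import combinations

def swap_colors(coloring, c1, c2):
    """Exchange two of the three colors throughout a cell's coloring."""
    return {shape: (c2 if c == c1 else c1 if c == c2 else c)
            for shape, c in coloring.items()}

def place_next_cell(left_coloring, right_coloring, conflicts):
    """conflicts(left, right) returns the same-color boundary pairs at
    sub-minimum spacing; an empty result means the abutment is legal."""
    if not conflicts(left_coloring, right_coloring):
        return right_coloring, 0                    # already legal
    for c1, c2 in combinations(range(3), 2):        # the three 2-color exchanges
        candidate = swap_colors(right_coloring, c1, c2)
        if not conflicts(left_coloring, candidate):
            return candidate, 0                     # conflict resolved by exchange
    return right_coloring, 1                        # fall back: insert dummy space
```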
This work addresses the difficulties in creating a manufacturable M2 layer based on an SADP process for N10/N7 and proposes a couple of solutions. For the N10 design, we opted for a line-staggering approach in which each line-end terminates on a contact. We highlight the challenges of obtaining a reasonable process window, both in simulation and in wafer exposures. The main challenges come from a very complex keep mask, consisting of complicated 2D structures that are very challenging for 193i lithography. Therefore, we propose a solution in which we perform a traditional LELE process on top of a mandrel layer. Towards N7, we show that the line-staggering approach starts to break down, and we propose switching to a line-cut approach: a more lithography-friendly design in which metal lines end at aligned points, enhancing the process window.
As technology continues to shrink below 14nm, triple patterning lithography (TPT) is a worthwhile methodology for printing dense layers such as Metal1. However, it increases the complexity of standard cell design, as it is very difficult to develop a TPT-compliant layout without compromising on area. This emphasizes the importance of an accurate stitch generation methodology to meet the standard cell area requirement defined by the technology shrink factor. In this paper, we present an efficient automatic TPT stitch guidance generation technique for optimized standard cell design. The basic idea is to first identify the conflicting polygons based on the Fix Guidance [1] solution developed by Synopsys. Fix Guidance is a reduced sub-graph containing a minimum set of edges along with the connecting polygons; by eliminating these edges in a design, 3-color conflicts can be resolved. Once the conflicting polygons are identified using this method, they are categorized into four types (Type 1 to 4) [2]. The categorization is based on the number of interactions a polygon has with the coloring links and the triangle loops of the fix guidance. For each type, a criterion for the keep-out region is defined, from which the final stitch guidance locations are generated. This technique presents the user with the possible stitch locations and helps the user select the best one considering both design flexibility (maximum pin access, small area) and process preferences. Based on this technique, a standard cell library for place and route (P&R) can be developed with colorless data and a stitch marker defined by the designer using our proposed method. After P&R, the full chip (block) contains only the colorless data and the standard cell stitch markers. These stitch markers are treated as "must-be-stitch" candidates, so during full-chip decomposition it is not necessary to generate and select stitch markers again for the complete data; the proposed method therefore reduces decomposition time significantly.
LELECUT-type triple patterning lithography is one of the most promising techniques for next-generation lithography. To prevent yield loss caused by overlay error, a LELECUT mask assignment that is tolerant to overlay error is desired. In this paper, we propose a method that obtains such an overlay-tolerant LELECUT assignment. The proposed method uses positive semidefinite relaxation with a randomized rounding technique, and introduces a cost function that takes into account the length of the feature boundaries determined by the cut mask.
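A minimal sketch of the core semidefinite-relaxation-plus-randomized-rounding machinery for a two-group assignment (the paper's LELECUT formulation and boundary-length cost are richer), assuming cvxpy with an SDP-capable solver is installed:

```python
import numpy as np
import cvxpy as cp

def sdp_assign(n, conflict_weights, trials=50, seed=0):
    """Max-cut-style two-group assignment: conflict_weights[(i, j)] rewards
    putting features i and j in different groups (e.g., different LELE masks)."""
    X = cp.Variable((n, n), PSD=True)
    objective = cp.Maximize(sum(w * (1 - X[i, j]) / 2
                                for (i, j), w in conflict_weights.items()))
    cp.Problem(objective, [cp.diag(X) == 1]).solve()
    # Factor X ~ V V^T, then round with random hyperplanes (Goemans-Williamson).
    eigval, eigvec = np.linalg.eigh(X.value)
    V = eigvec * np.sqrt(np.clip(eigval, 0.0, None))
    rng = np.random.default_rng(seed)
    best, best_cut = None, -1.0
    for _ in range(trials):
        signs = np.sign(V @ rng.standard_normal(n))
        cut = sum(w for (i, j), w in conflict_weights.items() if signs[i] != signs[j])
        if cut > best_cut:
            best, best_cut = signs, cut
    return best  # +1 / -1 group label per feature

# Toy usage: three mutually conflicting features cannot all be separated.
print(sdp_assign(3, {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}))
```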
At 7nm and beyond, designers need to support scaling by identifying the optimal patterning schemes for their designs. Moreover, designers can actively help by exploring scaling options that do not necessarily require aggressive pitch scaling. In this paper we illustrate how the MOL scheme and patterning can be optimized to achieve a dense SRAM cell; how optimizing device performance can lead to smaller standard cells; how the metal interconnect stack needs to be adjusted for unidirectional metals; and how a vertical transistor can shift design paradigms. This paper demonstrates that scaling has become a joint design-technology co-optimization effort between process technology and design specialists that extends beyond patterning-enabled dimensional scaling alone.
As the industry pushes to ever more complex illumination schemes to increase resolution for next-generation memory and logic circuits, sub-resolution assist feature (SRAF) placement requirements become increasingly severe. Therefore, device manufacturers are evaluating improvements in SRAF placement algorithms that do not sacrifice main feature (MF) patterning capability. There are several well-known methods to generate SRAFs: Rule-Based Assist Features (RBAF), Model-Based Assist Features (MBAF), and hybrid assist features combining both. RBAF continues to be deployed, even with the availability of MBAF and Inverse Lithography Technology (ILT): certainly at the 3x nm node, and even at the 2x nm nodes and below, RBAF is used because it demands less run time and provides better consistency. Since RBAF is needed now and in the future, what is also needed is a faster method to create the AF rule tables. The current method typically involves making masks and printing wafers that contain several experiments, varying the main feature configurations, AF configurations, dose conditions, and defocus conditions; this is a time-consuming and expensive process. In addition, as the technology node shrinks, wafer process changes and source shape redesigns occur more frequently, escalating the cost of rule table creation. Furthermore, as the demand on process margin escalates, there is a greater need for multiple rule tables, each tailored to a specific set of main-feature configurations. Model Assisted Rule Tables (MART) creates a set of test patterns and evaluates the simulated CD at nominal, defocused, and off-dose conditions. It also uses lithographic simulation to evaluate the likelihood of AF printing. It then analyzes the simulation data to automatically create AF rule tables; the analysis results display the cost of different AF configurations as the space between a pair of main features grows. In summary, the model-based rule table method makes rule-table creation much easier and faster, lowering the barrier to creating more rule tables.
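The rule-table construction idea can be sketched as a per-space search over candidate AF configurations; `simulate` and `candidate_afs` are hypothetical callbacks standing in for the lithographic simulator and AF enumeration:

```python
def build_rule_table(spaces_nm, candidate_afs, simulate, print_penalty=10.0):
    """For each main-feature space, pick the AF configuration with minimum cost."""
    table = {}
    for space in spaces_nm:
        best_cfg, best_cost = None, float("inf")
        for cfg in candidate_afs(space):
            r = simulate(space, cfg)  # dict: CD errors per condition, AF print risk
            cost = (abs(r["cd_err_nominal"])
                    + abs(r["cd_err_defocus"])
                    + abs(r["cd_err_offdose"])
                    + print_penalty * r["af_print_prob"])  # penalize printing SRAFs
            if cost < best_cost:
                best_cfg, best_cost = cfg, cost
        table[space] = best_cfg
    return table
```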
While waiting for EUV lithography to become ready for adoption, we need to create designs compatible both with EUV single exposure and with 193i multiple-split strategies for technology nodes of 7nm and below, in order to keep the scaling trend intact. However, the standard approach of designing standard cells with two-dimensional routing is no longer valid owing to the insufficient resolution of the 193i scanner. Therefore, we propose a standard cell design methodology that exploits purely one-dimensional interconnect.
Advanced process nodes introduce new variability effects due to increased density, new materials, new device structures, and so forth. This creates more, and stronger, Layout Dependent Effects (LDE), especially below 28nm. Effects such as WPE (Well Proximity Effect) and PSE (Poly Spacing Effect) change the carrier mobility and threshold voltage, and therefore make device performance figures such as Vth and Idsat extremely layout dependent. In traditional flows, the impact of these changes can only be simulated after the block has been fully laid out and the design is LVS and DRC clean; this is too late in the design cycle and increases the number of post-layout iterations. We collaborated to develop a method on an advanced process that embeds several LDE sources into an LDE kit. We integrated this LDE kit into a custom analog design environment for LDE analysis at an early design stage. These features allow circuit and layout designers to detect the variations caused by LDE and fix the weak points they cause. In this paper, we present this method and show how it accelerates design convergence of advanced-node custom analog designs by detecting LDE hotspots early on in partially or fully placed layout, reporting the contribution of each LDE component to help identify the root cause of LDE variation, and even providing fixing guidelines on how to modify the layout to reduce the LDE impact.
To break through 1-D IC layout limitations, we develop computationally efficient 2-D layout decomposition and stitching techniques that combine optical and self-aligned multiple patterning (SAMP) processes. A polynomial-time algorithm is developed to decompose the target layout into two components, each containing one or multiple sets of unidirectional features that can be formed by a SAMP+cut/block process. With no need for connecting vias, the final 2-D features are formed by directly stitching the two components together. This novel patterning scheme is considered a hybrid approach: the SAMP processes offer density scaling, while the stitching process creates 2-D design freedom as well as multiple-CD/pitch capability. Its technical advantages include a significant reduction of via steps and avoidance of the interdigitating types of multiple patterning (for density multiplication), improving processing yield. The developed decomposition and synthesis algorithms are tested using 2-D layouts from the NCSU open cell library. Statistical and computational characteristics of these public layout data are investigated and discussed.
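A toy orientation-based split conveys the two-component idea, assuming axis-aligned rectangular wires; the paper's polynomial-time decomposition is far more general:

```python
def decompose(rects):
    """Split wires (x0, y0, x1, y1) into two unidirectional components, each
    suitable for a SAMP + cut/block process."""
    horizontal, vertical = [], []
    for x0, y0, x1, y1 in rects:
        (horizontal if (x1 - x0) >= (y1 - y0) else vertical).append((x0, y0, x1, y1))
    return horizontal, vertical

def stitches(horizontal, vertical):
    """2-D features appear where the two components overlap after recomposition."""
    overlaps = []
    for hx0, hy0, hx1, hy1 in horizontal:
        for vx0, vy0, vx1, vy1 in vertical:
            ox0, oy0 = max(hx0, vx0), max(hy0, vy0)
            ox1, oy1 = min(hx1, vx1), min(hy1, vy1)
            if ox0 < ox1 and oy0 < oy1:
                overlaps.append((ox0, oy0, ox1, oy1))  # stitch region, no via needed
    return overlaps
```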
Design Interaction with Metrology: Joint Session with Conferences 9424 and 9427
In-line CD and overlay metrology specifications are typically established by starting with design rules and making certain assumptions about the error distributions that might be encountered in manufacturing. Lot disposition criteria in photo metrology (rework or pass to etch) are set assuming worst-case conditions for CD and overlay, respectively. For example, poly-to-active overlay specs start with poly endcap design rules, make assumptions about active and poly lot-average and across-lot CDs, and incorporate general knowledge about poly line-end rounding to ensure that leakage current is maintained within specification. There is an opportunity to go beyond generalized guard-band design rules to full-chip, design-specific, model-based exploration of worst-case layout locations. Such an approach can leverage not only the above-mentioned coupling of CD and overlay errors, but can interrogate all layout configurations for both layers to help determine lot-specific, design-specific CD and overlay dispositioning criteria for the fab. Such an approach can also elucidate whether, for a specific design layout, there exist asymmetries in the response to misalignment that might be exploited in manufacturing. This paper investigates an example of two-layer model-based analysis of CD and overlay errors. It is shown, somewhat non-intuitively, that there can be small preferred misalignment asymmetries that should be respected to protect yield. We show this relationship for via-metal overlap. We additionally present a new method of displaying edge placement process-window variability, akin to traditional CD process-window analysis.
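A one-dimensional toy sweep shows how an asymmetric via enclosure makes the response to misalignment direction-dependent; all dimensions are illustrative:

```python
import numpy as np

def overlap(via_w, metal_left, metal_right, misalign):
    """1-D overlap between a via spanning [-via_w/2, via_w/2] and a metal
    line spanning [metal_left, metal_right] shifted by the overlay error."""
    lo = max(-via_w / 2, metal_left + misalign)
    hi = min(via_w / 2, metal_right + misalign)
    return max(0.0, hi - lo)

via_w = 20.0                             # nm
metal_left, metal_right = -24.0, 26.0    # asymmetric enclosure: 14 nm vs 16 nm
for s in np.linspace(-20, 20, 9):
    print(f"overlay {s:+6.1f} nm -> overlap "
          f"{overlap(via_w, metal_left, metal_right, s):5.1f} nm")
# The asymmetric enclosure makes positive and negative misalignment
# non-equivalent: the preferred-direction asymmetry discussed above.
```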
DFM (Design and Litho Optimization): Joint Session with Conferences 9426 and 9427
The continuous scaling of semiconductor devices is quickly outpacing the resolution improvements of lithographic exposure tools and processes. This one-sided progression has pushed optical lithography to its limits, resulting in the use of well-known techniques such as Sub-Resolution Assist Features (SRAFs), Source-Mask Optimization (SMO), and double patterning, to name a few. These techniques, belonging to a larger category of Resolution Enhancement Techniques (RET), have extended the resolution capabilities of optical lithography at the cost of increased mask complexity, and therefore cost. One such technique, Inverse Lithography Technology (ILT), has attracted much attention for its ability to produce the best possible theoretical mask design. ILT treats the mask design process as an inverse problem, where the known transformation from mask to wafer is carried out backwards using a rigorous mathematical approach. One practical problem in the application of ILT is that the resulting contour-like mask shapes must be "Manhattanized" (composed of straight edges and 90-degree corners) to produce a manufacturable mask. This conversion process inherently degrades the mask quality, as it is a departure from the "optimal mask" represented by the continuously curved shapes produced by ILT. However, simpler masks composed of longer straight edges reduce mask cost, since they lower the shot count and save mask writing time during fabrication, creating a conflict between manufacturability and performance for ILT-produced masks [1,2]. In this study, various commonly used metrics are combined into an objective function that produces a single number to quantitatively measure a particular ILT solution's ability to balance mask manufacturability and RET performance. Several metrics that relate to mask manufacturing cost (e.g., mask vertex count, ILT computation runtime) are appropriately weighted against metrics that represent RET capability (e.g., process-variation band, edge placement error) to reflect the desired practical balance. This well-defined scoring system allows direct comparison of several masks with varying degrees of complexity. Using this method, ILT masks produced with increasingly strict mask constraints are compared, and it is demonstrated that using the smallest minimum width for mask shapes does not always produce the optimal solution.
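A minimal sketch of the single-number objective described above; the metric names, normalizations, and weights are illustrative assumptions rather than the paper's calibrated values:

```python
COST_WEIGHTS = {          # manufacturability terms (lower raw value = better)
    "vertex_count": 0.4,
    "runtime_hours": 0.1,
}
RET_WEIGHTS = {           # RET-capability terms (lower raw value = better)
    "pv_band_nm2": 0.3,
    "epe_rms_nm": 0.2,
}

def ilt_score(metrics, baselines):
    """Weighted sum of metrics normalized to a reference mask; lower is better."""
    total = 0.0
    for name, w in {**COST_WEIGHTS, **RET_WEIGHTS}.items():
        total += w * metrics[name] / baselines[name]
    return total

mask_a = {"vertex_count": 9e5, "runtime_hours": 6, "pv_band_nm2": 110, "epe_rms_nm": 1.1}
mask_b = {"vertex_count": 4e5, "runtime_hours": 4, "pv_band_nm2": 135, "epe_rms_nm": 1.4}
base   = {"vertex_count": 6e5, "runtime_hours": 5, "pv_band_nm2": 120, "epe_rms_nm": 1.2}
print(ilt_score(mask_a, base), ilt_score(mask_b, base))  # direct comparison
```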
In the field of model design and selection, there is always a risk that a model is over-fit to the data used to train it. A model is well suited when it describes the physical system and not the stochastic behavior of the particular data collected. K-fold cross validation is a method to check this potential over-fitting by calibrating on k folds of the data, typically between 4 and 10. Model training is a computationally expensive operation, however, and given a wide choice of candidate models, calibrating each one repeatedly becomes prohibitively time consuming. The Akaike information criterion (AIC) is an information-theoretic approach to model selection, based on the maximized log-likelihood for a given model, that needs only a single calibration per model. It is used in this study to demonstrate model ranking and selection among compact resist modelforms that have various numbers and types of terms to describe photoresist behavior. It is shown that AIC corresponds well to k-fold cross validation in selecting the best modelform, and it is further shown that over-fitting is, in most cases, not indicated: even in modelforms with more than 40 fitting parameters, the calibration data set is large enough to support the additional parameters, statistically validating the model complexity.
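For least-squares models with Gaussian errors, AIC can be computed from the residual sum of squares as n·ln(RSS/n) + 2k and compared against k-fold cross validation; the polynomial modelforms below are synthetic stand-ins for compact resist models:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 80)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)  # synthetic data

def fit_rss(xtr, ytr, xte, yte, degree):
    coef = np.polyfit(xtr, ytr, degree)
    return float(np.sum((np.polyval(coef, xte) - yte) ** 2))

def aic(degree):
    n, k = x.size, degree + 1
    rss = fit_rss(x, y, x, y, degree)          # single calibration per model
    return n * np.log(rss / n) + 2 * k

def kfold_mse(degree, folds=5):
    idx = np.arange(x.size)
    scores = []
    for f in range(folds):                     # "folds" calibrations per model
        te = idx % folds == f
        scores.append(fit_rss(x[~te], y[~te], x[te], y[te], degree) / te.sum())
    return float(np.mean(scores))

for d in (1, 3, 5, 9, 15):
    print(f"degree {d:2d}: AIC={aic(d):8.1f}  CV-MSE={kfold_mse(d):.4f}")
```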
Lithography is the technology used to form circuit patterns on a wafer. UV light diffracted by a photomask is focused through a lens to form an optical image on the photoresist; the resist dissolves where the exposure dose exceeds its threshold, so the light and dark regions of the image generate the patterns. As the technology node advances, feature sizes on the photoresist become much smaller, the diffracted UV light is more dispersed on the wafer, and exposing the photoresist becomes more difficult. Exposure source optimization (SO) techniques for optimizing the illumination shape have therefore been studied. Although an exposure source has hundreds of grid-points, previous works deal with them one by one, which consumes too much running time and greatly increases design time. Reducing the number of parameters to be optimized is the key to decreasing source optimization time. In this paper, we propose a variation-resilient, high-speed, cluster-based exposure source optimization algorithm. We focus on the image log slope (ILS) and use it to generate clusters. When the optical image formed by a source shape has a small ILS value at an edge placement error (EPE) evaluation point, dose/focus variation strongly affects the EPE value; when the ILS value is large, dose/focus variation affects the EPE value less. In our algorithm, we cluster grid-points with similar ILS values and reduce the number of parameters to be simultaneously optimized in SO. Our clustering algorithm is composed of two steps: in STEP 1, we cluster grid-points into four groups based on their ILS values at each evaluation point; in STEP 2, we generate super-clusters from the clusters generated in STEP 1. We treat the set of grid-points in each cluster as a single light source element, so the SO problem can be optimized very quickly. Experimental results demonstrate that our algorithm achieves a substantial speed-up over a conventional algorithm while maintaining the EPE values.
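STEP 1 of the clustering can be sketched as a quartile grouping of grid-points by ILS value; the ILS vector below is synthetic, standing in for values obtained from lithographic simulation:

```python
import numpy as np

def cluster_gridpoints(ils, n_groups=4):
    """ils: (n_gridpoints,) ILS contribution of each source grid-point
    (e.g., aggregated over EPE evaluation points). Returns group labels."""
    cuts = np.quantile(ils, np.linspace(0, 1, n_groups + 1)[1:-1])
    return np.digitize(ils, cuts)   # labels 0..n_groups-1

rng = np.random.default_rng(0)
ils = rng.uniform(0.5, 3.0, size=200)   # 200 source grid-points
labels = cluster_gridpoints(ils)
print(np.bincount(labels))              # four clusters -> four SO parameters
# Each cluster's grid-points share one intensity variable during SO,
# shrinking the optimization from 200 parameters to 4.
```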
In this paper, we develop statistical models to investigate SRAM yield performance and circuit variability in the presence of a self-aligned multiple patterning (SAMP) process. It is assumed that SRAM fins are fabricated by a positive-tone (spacer-is-line) self-aligned sextuple patterning (SASP) process which accommodates two types of spacers, while gates are fabricated by a more pitch-relaxed self-aligned quadruple patterning (SAQP) process which allows only one type of spacer. A number of possible inverter and SRAM structures are identified, and the related circuit multi-modality is studied using the developed failure-probability and yield models. It is shown that SRAM circuit yield is significantly impacted by the multi-modality of fins' spatial variations in a SRAM cell. The sensitivity of 6-transistor SRAM read/write failure probability to SASP process variations is calculated, and the specific circuit type with the highest probability of failing in the read/write operation is identified. Our study suggests that the 6-transistor SRAM configuration may not be scalable to 7-nm half pitch, and more robust SRAM circuit designs need to be researched.
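A minimal sketch of a multi-modality yield estimate in this spirit: each cell falls into one of several structural modes with its own failure probability, and array yield is the product over cells. All numbers are invented for illustration:

```python
import numpy as np

def sram_yield(mode_probs, mode_fail_probs, n_cells):
    """Array yield when each cell's failure probability is the mode-weighted
    average over the structural variants induced by the SAMP process."""
    p_fail = float(np.dot(mode_probs, mode_fail_probs))
    return (1.0 - p_fail) ** n_cells

modes = [0.25, 0.50, 0.25]   # hypothetical mix of fin-placement modes in the array
fails = [1e-9, 3e-9, 2e-8]   # per-cell read/write failure probability of each mode
print(f"64 Mb array yield: {sram_yield(modes, fails, n_cells=64 * 2**20):.3f}")
```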
FinFET technology scaling to sub-7nm nodes using 193 immersion scanners is restricted by reduced process margins, and the cost and design complexity of the process are increasing due to the multi-patterning needed to achieve area scaling with a 193i scanner. In this paper, we propose a two-mask Fin-cut design for Fin patterning of a 112 SRAM cell (two Fins for the pull-down device and one Fin for the pull-up and pass-gate devices) using 193i lithography, and compare it with a single EUVL print. We also propose two keep masks for middle-of-line patterning with an increased SRAM cell height using 193i, which achieves the area of a uniform-Fin SRAM cell at the 7nm technology node, whereas EUVL can enable a non-uniform-Fin SRAM cell at reduced area. Due to unidirectional patterning, the margins for VIA0 landing over MOL are drastically reduced at a 42nm gate pitch; to improve these margins, we propose orienting the first metal orthogonal to the gate. This results in improved SRAM performance and technology reliability.
DSA Design for Manufacturability: Joint Session with Conferences 9423, 9426, and 9427
Multi-patterning (MP) is the process of record for many sub-10nm process technologies. The drive to higher densities has required the use of double and triple patterning for several layers, but this increases the cost of new processes, especially for low-volume products in which the mask set is a large percentage of the total cost. For that reason, there has been a strong incentive to develop technologies like Directed Self-Assembly (DSA), EUV, or e-beam direct write to reduce the total number of masks needed in a new technology node. Because of the nature of the technology, DSA cylinder graphoepitaxy only allows single-size holes in a single-patterning approach. However, by integrating DSA and MP into a hybrid DSA-MP process, it is possible to devise decomposition approaches that increase design flexibility, allowing different hole sizes or bar structures by independently changing the process for each patterning step. A simple approach to integrating multi-patterning with DSA is to perform DSA grouping and MP decomposition in sequence, whether grouping-then-decomposition or decomposition-then-grouping; each of the two sequences has its pros and cons. However, this paper describes why these intuitive approaches do not produce results of acceptable quality from the point of view of design compliance, and we highlight the need for custom DSA-aware MP algorithms.
During the yield ramp of semiconductor manufacturing, data is gathered on specific design-related process-window limiters, or yield detractors, through a combination of test structures, failure analysis, and model-based printability simulations. Case by case, this data is translated into design-for-manufacturability (DFM) checks to restrict design usage of problematic constructs. This case-by-case approach is inherently reactive: DFM solutions are created in response to known manufacturing marginalities as they are identified. In this paper, we propose an alternative, yet complementary, approach. Using design-only topological pattern analysis, all possible layout constructs of a particular type appearing in a design are categorized; for example, all possible ways a via forms a connection with the metal above it may be categorized. The frequency of occurrence of each category indicates the importance of that category for yield. Categories may be split into sub-categories to align with specific manufacturing defect mechanisms. Category frequencies can be compared from product to product, and unexpectedly high frequencies can be highlighted for further monitoring. Each category can be weighted for yield impact once manufacturing data is available. This methodology is demonstrated on representative layout designs from the 28 nm node. We fully analyze all possible categories and sub-categories of via enclosure such that 100% of all vias are covered. The frequency of specific categories is compared across multiple designs; the 10 most frequent via enclosure categories cover ≥90% of all vias in all designs. KL divergence is used to compare the frequency distribution of categories between products. Outlier categories with unexpectedly high frequency are found in some designs, indicating the need to monitor such categories for potential impact on yield.
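The product-to-product comparison can be sketched directly from the definition of KL divergence over category-frequency distributions; the counts below are invented:

```python
import numpy as np

def kl_divergence(p_counts, q_counts, eps=1e-12):
    """KL(P || Q) between two category-count vectors, normalized to distributions."""
    p = np.asarray(p_counts, float); p /= p.sum()
    q = np.asarray(q_counts, float); q /= q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

reference = [52000, 31000, 9000, 5000, 3000]   # via-enclosure category counts, product A
candidate = [50000, 28000, 8000, 4500, 9500]   # product B: outlier last category
print(f"KL(B || A) = {kl_divergence(candidate, reference):.4f}")
# A large divergence, driven here by the last category, would trigger monitoring.
```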
Semiconductor manufacturing technologies are becoming increasingly complex with every passing node. Newer technology nodes are pushing the limits of optical lithography and requiring multiple exposures with exotic material stacks for each critical layer. All of this added complexity usually amounts to further restrictions on what can be designed, and designs must be checked against all these restrictions in the verification and sign-off stages. Design rules are intended to capture all the manufacturing limitations such that yield can be maximized for any design adhering to all the rules. Most manufacturing steps employ some sort of model-based simulation that characterizes the behavior of each step. The lithography models play a very big part in the overall yield and design restrictions in patterning. However, lithography models are not practical to run during design creation due to their slow and prohibitive run times. Furthermore, the models are not usually given to foundry customers because of the confidential and sensitive nature of every foundry's processes. The design layout locations where a model flags unacceptable simulated results can instead be used to define pattern rules that can be shared with customers. With advanced technology nodes we see a large growth of pattern-based rules, because pattern matching is very fast and the rules themselves would be very complex to describe in a standard DRC language. The patterns are therefore kept either as pattern layout clips or abstracted into a pattern-like syntax that a pattern matcher can use directly. The patterns themselves can be multi-layered, with "fuzzy" designations such that groups of similar patterns can be found using one description. The pattern matcher is often integrated with a DRC tool so that verification and signoff can be done in one step. The patterns can be layout constructs that are "forbidden", "waived", or simply low-yielding in nature. The patterns can also contain built-in remedies so that fixing happens either automatically or in a guided manner. Building a comprehensive library of patterns is a very difficult task, especially when a new technology node is being developed or the process keeps changing. The main dilemma is not having enough representative layouts to use for model simulation where pattern locations can be marked and extracted. This paper presents an automatic pattern library creation flow that uses a few known yield detractor patterns to systematically expand the pattern library and generate optimized patterns. We also look at the specific fixing hints, in terms of edge movements and additive or subtractive changes, needed during optimization. Optimization is shown for both digital physical implementation and custom design methods.
Under low-k1 lithography processes, lithography hotspot detection and elimination in the physical verification phase have become much more important for reducing process optimization cost and improving manufacturing yield. This paper proposes a highly accurate, low-false-alarm hotspot detection framework. To define an appropriate and simplified layout feature for classification model training, we propose a novel feature-space evaluation index. Furthermore, by applying a robust classifier based on the probability distribution function of layout features, our framework achieves very high accuracy with almost zero false alarms. The experimental results demonstrate the effectiveness of the proposed method: our detector outperforms the other works in the 2012 ICCAD contest in terms of both accuracy and false alarms.
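A probability-distribution-based classifier in this spirit can be sketched with kernel density estimates and a likelihood-ratio test; the two-dimensional layout features and thresholds below are synthetic, not the paper's feature definition:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(2)
hot = rng.normal([0.8, 0.7], 0.05, size=(300, 2)).T     # e.g., density / slope proxy
cold = rng.normal([0.5, 0.4], 0.10, size=(3000, 2)).T   # non-hotspot features

p_hot, p_cold = gaussian_kde(hot), gaussian_kde(cold)

def is_hotspot(feature, prior_ratio=0.1, threshold=1.0):
    """Likelihood-ratio test; prior_ratio encodes how rare hotspots are."""
    ratio = prior_ratio * p_hot(feature) / np.maximum(p_cold(feature), 1e-300)
    return ratio > threshold

print(is_hotspot(np.array([[0.78], [0.72]])))  # near hotspot mode -> likely True
print(is_hotspot(np.array([[0.50], [0.40]])))  # typical layout -> False
```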
Pattern-based design rule checks have emerged as an alternative to traditional rule-based design rule checks in the VLSI verification flow [1]. Typically, the design-process weak points, also referred to as design hotspots, are classified into patterns of fixed size, where the size of the pattern defines the radius of influence of the process. These fixed-size patterns are used to search for and detect process weak points in new designs without running computationally expensive process simulations. However, both the complexity of the pattern and the different kinds of physical processes affect the radii of influence; therefore, there is a need to determine the optimal pattern radius (size) for efficient hotspot detection. The methodology described here uses a combination of pattern classification and pattern search techniques to create a directed graph, referred to as the Pattern Association Tree (PAT). The pattern association tree is then filtered based on the relevance, sensitivity, and context area of each pattern node. The critical patterns are identified by traversing the tree and ranking the patterns. This method has plausible applications in areas such as process characterization, physical design verification, and physical design optimization. Our initial experiments in the area of physical design verification confirm that a pattern deck with the radius optimized for each pattern is significantly more accurate at predicting design hotspots than a conventional deck of fixed-size patterns.
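A pattern association tree and its filter-and-rank traversal might be sketched as follows; the scoring fields and thresholds are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class PatNode:
    name: str
    radius_nm: int
    relevance: float                 # e.g., frequency in real designs
    sensitivity: float               # e.g., simulated hotspot response
    children: list = field(default_factory=list)

def rank_critical(root, min_rel=0.1, min_sens=0.2):
    """DFS: keep nodes passing the filters, rank by relevance * sensitivity."""
    kept, stack = [], [root]
    while stack:
        node = stack.pop()
        if node.relevance >= min_rel and node.sensitivity >= min_sens:
            kept.append(node)
        stack.extend(node.children)
    return sorted(kept, key=lambda n: n.relevance * n.sensitivity, reverse=True)

root = PatNode("tip2tip", 60, 0.9, 0.3,
               [PatNode("tip2tip+via", 90, 0.4, 0.8),
                PatNode("tip2tip+bend", 120, 0.05, 0.9)])  # filtered out: too rare
for n in rank_critical(root):
    print(n.name, n.radius_nm, round(n.relevance * n.sensitivity, 3))
```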
Chemical Mechanical Polishing (CMP) is an essential process for planarizing the wafer surface in semiconductor manufacturing. The CMP process helps produce smaller ICs with more electronic circuits, improving chip speed and performance; it also helps increase throughput and yield, which reduces an IC manufacturer's total production costs. A CMP simulation model helps predict CMP manufacturing hotspots early and minimize CMP and CMP-induced lithography and etch defects [2]. In advanced process nodes, conventional dummy fill insertion for uniform density cannot address all of the CMP short-range, long-range, and multi-layer stacking effects, along with other effects like pad conditioning and slurry selectivity. In this paper, we present a flow for 20nm CMP modeling using Mentor Graphics CMP modeling tools to build a multilayer Cu-CMP model and study hotspots. We present the inputs required for good CMP model calibration, the challenges faced with metrology collection, and techniques to optimize the wafer cost. We showcase the CMP model validation results and the model's application to predicting multilayer topography accumulation effects for hotspot detection. We provide a flow for early detection of CMP hotspots with Calibre CMPAnalyzer to improve Design-for-Manufacturability (DFM) robustness.
DFM rule-based scoring applies manufacturability rule checking and uses the resulting scores to predict the yield entitlement of an IC chip design. Achieving a high DFM score is one of the key requirements for high yield. The DFM scoring methodology is currently limited to DFM recommended rules and their associated failure rates; in addition to these failure mechanisms, however, topography variation from the chemical-mechanical polishing (CMP) step also plays an important role. In this paper, we present an advanced DFM analysis flow that computes a DFM score incorporating topography variation along with recommended-rule scoring in a composite scoring model, to increase correlation with silicon yield.
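A composite score in this spirit might weight recommended-rule violations by their failure rates and add a CMP-topography penalty; all weights, rates, and ranges below are illustrative:

```python
def dfm_score(rule_hits, fail_rates, topo_range_nm, topo_spec_nm=30.0,
              w_rules=0.7, w_topo=0.3):
    """rule_hits[r]: count of recommended-rule violations of type r.
    fail_rates[r]: relative failure rate of that violation type.
    topo_range_nm: simulated post-CMP topography range for the block."""
    rule_penalty = sum(rule_hits[r] * fail_rates[r] for r in rule_hits)
    rule_score = 1.0 / (1.0 + rule_penalty)              # 1.0 = fully compliant
    topo_score = max(0.0, 1.0 - topo_range_nm / topo_spec_nm)
    return w_rules * rule_score + w_topo * topo_score

hits = {"via_singleton": 120, "min_enclosure": 40}
rates = {"via_singleton": 1e-3, "min_enclosure": 5e-3}
print(f"DFM score: {dfm_score(hits, rates, topo_range_nm=12.0):.3f}")
```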
In this paper, we present a compact model to predict pillar-edge-roughness (PER) effects on 3D vertical nanowire MOSFETs using the perturbation method. An analytic solution to the 3D Poisson equation in cylindrical coordinates with a perturbed boundary is obtained to describe the PER effects on the vertical channel potential. The induced variations of drain current, threshold voltage (Vth), and sub-threshold slope (SS) are calculated using the developed model. We also investigate the phase- and frequency-dependent behavior of PER in the nanowire MOSFETs, and find that both the phase and the (angular) frequency of the PER function significantly affect device performance. Our model calculations are compared with TCAD simulations, and good agreement between them is found. We suggest that the metrology community needs to develop measurement methodology to characterize nanowire pillar-edge roughness at the deep nanoscale.
As technology development advances into deep submicron nodes, it is very important not to ignore any systematic effect that can impact CD uniformity and final parametric yield. One important challenge for OPC is choosing the proper etch process correction flow to compensate for design-to-design etch shrink variations. Although model-based etch compensation tools have been commercially available for a few years, rules-based etch compensation tables have been standard practice for several nodes. In this work, we study the limitations of rules-based versus model-based etch compensation. We study a 10nm process and detail why model-based etch process correction can achieve up to 15% improvement in final CD uniformity. We also provide a systematic methodology for identifying the proper etch correction technique for a given etch process and for assessing the potential accuracy gain when switching to model-based etch correction.
As the IC industry moves to advanced nodes, especially below 28nm technology, the printability issue becomes more and more challenging: layouts are more congested, critical feature sizes are smaller, and the manufacturing process window is tighter. Consequently, design-process co-optimization plays an important role in achieving higher yield in a shorter tape-out time. A great effort has to be made to analyze process defects and build checking kits that deliver the manufacturing information, via EDA software, to designers, so they can uncover potential manufacturing issues, quickly identify hotspots, and prioritize fixes according to severity level. This paper presents a unique hotspot pattern analysis flow that SMIC has built for advanced technology to analyze potential yield detractor patterns among millions of patterns from real designs and rank them with severity levels tied to the real fab process. The flow uses Mentor Graphics Calibre® PM (Calibre® Pattern Matching) technology for pattern library creation and pattern clustering; meanwhile, it incorporates Calibre® LFD (Calibre® Litho-Friendly Design) technology for accurate simulation-based lithographic hotspot checking. Pattern building, clustering, scoring, ranking, and fixing are introduced in detail in this paper.
Early in a semiconductor node's process development cycle, the technology definition is locked down using somewhat risky assumptions about what the process can deliver once it matures. In this early phase of the development cycle, detailed design rules start to be codified while the wafer patterning process is still being fine-tuned. As the process moves along the development cycle and wafer processes are dialed in, key yield improvement efforts focus on variability reduction. Design retargeting definitions are tweaked and finalized, and finely tuned etch models are applied to compensate for process bias, accurately capturing the more mature wafer process. The resulting mature patterning process is quite different from the one developed during the early stages of the technology definition. In this paper we describe an approach and flow to drive continuous improvement of the mask solution (OPC and MBSRAF) later in the process development and production readiness cycle. First, we establish the process-window entitlement within the design space by utilizing advanced mask optimization (MO) combined with the baseline process (i.e., model, etch compensation, and design retargeting). Second, gaps to the entitlement are used to identify and target issues with the existing OPC recipe and to drive continuous improvements that close these performance gaps across the critical design rules. We demonstrate this flow on a 20 nm contact layer.
Design Technology Co-Optimization (DTCO) becomes more important with every new technology node. Because of compressed technology development schedules, complex patterning issues can no longer wait to be detected experimentally on test sites. Simulation must be used to discover complex interactions between an iteration of the design rules and a simultaneous iteration of an intended patterning technology. The problem is often further complicated by an incomplete definition of the patterning space. For a technology to be successful, the DTCO process must be efficient and must thoroughly interrogate the legal design space. In this paper we present our view of DTCO, called Design and Patterning Exploration. Three emphasis areas are identified and explained with examples: Technology Definition, Technology Learning, and Technology Refinement. The Design and Patterning Exploration flows are applied to a logic 1.3x metal routing layer. Using these flows, yield-limiting patterns are identified faster through random layout generation, and can be ruled out or tracked using a database of problem patterns. At the same time, a pattern no longer permitted by the rules should not be considered during OPC tuning, so the OPC recipe can be adjusted for better performance on the legal set of pattern constructs. The entire system is dynamic, and users must be able to access related teams' output for a faster, more accurate understanding of design and patterning interactions. In the example discussed, the design rules and OPC recipe are tuned at the same time, leading to faster design rule revisions as well as improved patterning through more customized OPC and RET.
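A simplified sketch of the random-generation and problem-pattern-tracking loop, under heavily simplified assumptions about what a routing "construct" is (the encoding, the pitch, and the failure criterion are all invented for illustration), might be:

import random

PITCH = 64                 # nm, assumed 1.3x metal routing pitch
MIN_LEN, MAX_LEN = 2, 8    # segment length in routing tracks

problem_db = set()         # canonical signatures of known yield-limiting patterns

def random_construct(rng: random.Random) -> tuple:
    """One random two-line construct: (len_a, len_b, tip-to-tip gap in tracks)."""
    return (rng.randint(MIN_LEN, MAX_LEN),
            rng.randint(MIN_LEN, MAX_LEN),
            rng.randint(1, 3))

rng = random.Random(0)
for _ in range(100_000):
    sig = random_construct(rng)
    # A real flow would litho-simulate the construct here; we fake the verdict
    # with an assumed failure mode (long lines at minimum tip-to-tip gap).
    is_hotspot = sig[2] == 1 and min(sig[0], sig[1]) >= 6
    if is_hotspot:
        problem_db.add(sig)

print(f"{len(problem_db)} problem constructs tracked for rule/OPC co-tuning")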
Delivering mask-ready OPC-corrected data to the mask shop on time is critical for a foundry to meet its cycle-time commitment for a new product. With current OPC compute-resource sharing technology, different job scheduling algorithms are possible, such as priority-based resource allocation and fair-share resource allocation. To maximize computer cluster efficiency, minimize data processing cost, and deliver data on schedule, the trade-offs of each scheduling algorithm need to be understood. Using actual production jobs, each scheduling algorithm is tested in a production tape-out environment and judged on its ability to deliver data on schedule, and the trade-offs associated with each method are analyzed. It is now possible to introduce advanced scheduling algorithms into the OPC data processing environment to meet the goals of on-time delivery of mask-ready OPC data while maximizing efficiency and reducing cost.
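A minimal sketch contrasting the two policies the paper compares, with assumed job attributes (real OPC job schedulers track far more state, such as CPU reservations and deadlines), could look like:

from collections import defaultdict

jobs = [
    # (job_id, owner, priority: lower = more urgent, cpu_hours) -- hypothetical
    ("maskA_M1", "tapeout_hot", 0, 1200),
    ("maskB_CT", "tapeout_std", 2, 800),
    ("maskC_M2", "tapeout_std", 1, 1500),
]

def priority_order(jobs):
    """Priority-based allocation: strictly serve the most urgent job first."""
    return [j[0] for j in sorted(jobs, key=lambda j: j[2])]

def fair_share_order(jobs):
    """Fair share: round-robin across owners so no group starves the cluster."""
    queues = defaultdict(list)
    for j in jobs:
        queues[j[1]].append(j[0])
    order, owners = [], sorted(queues)
    while any(queues.values()):
        for o in owners:
            if queues[o]:
                order.append(queues[o].pop(0))
    return order

print("priority:  ", priority_order(jobs))
print("fair share:", fair_share_order(jobs))

Priority scheduling favors hot lots at the cost of starving low-priority jobs, while fair share keeps every owner's queue moving; quantifying that trade-off on real production jobs is the point of the study.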
Traditional physical design verification tools employ a deck of known design rules, each with a pre-defined pass/fail criterion. While passing a design-rule deck is a necessary condition for a VLSI design to be manufacturable, it is not sufficient. Other physical design profiling decks, which attempt to obtain statistical information about the various critical dimensions in a VLSI design, lack a systematic methodology for rule enumeration; these decks are often inadequate, unable to extract all the interlayer and intralayer dimensions in a design that correlate with process yield. The Physical Design Analyzer is a comprehensive design analysis tool built with the objective of exhaustively exploring design-process correlations to increase wafer yield.
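To make the profiling idea concrete, one possible sketch, using synthetic numbers and a plain Pearson correlation (the tool's actual rule-enumeration methodology is not described in the abstract), is:

import statistics

# Illustrative data: a measured dimension per die region vs. observed die yield.
enclosures = [4.1, 3.2, 5.0, 2.8, 4.4, 3.0, 4.9, 2.7]   # nm, minimum via enclosure
die_yield  = [0.92, 0.81, 0.95, 0.74, 0.93, 0.79, 0.96, 0.72]

def pearson(xs, ys):
    """Pearson correlation between a layout dimension and yield, no external libs."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# A strong correlation flags this dimension as yield-relevant and worth profiling.
print(f"corr(enclosure, yield) = {pearson(enclosures, die_yield):.2f}")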
Starting with the sub-2X nm node, the process window becomes smaller and tighter than before, and a pattern-related error budget is required for accurate critical-dimension (CD) control of cell layers. Lithography therefore faces various difficulties, such as anomalous CD distributions, overlay error, and patterning difficulty. The distribution of cell-pattern CDs and overlay management are the most important factors in the DRAM field: experience shows that fatal risk is caused by patterns located in the tail of the distribution, and overlay error induces various defect sources and misalignment issues. Although these elements were known to be important, the defect types of cell patterns could not previously be classified, because there was no way to gather massive CD samples of small patterns within a cell unit block or to compare the layout with the cell patterns using CD-SEM. CD-SEM provides high resolution, but it takes a long time to inspect and extract data because of its small field of view (FOV). The NGR e-beam tool, by contrast, provides high speed with a large FOV and high resolution, and because it supports design-based metrology (DBM) it can measure accurate overlay between the target layout and the cell patterns. Using this massive measured data, we obtain persuasive results by applying various analysis techniques, such as cell CD distribution and defect analysis and pattern-overlay error correction. We introduce how to correct cell patterns using DBM measurements and new analysis methods.
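As a rough illustration of the tail-screening idea on massive DBM-style CD data (synthetic numbers; the real screening limit is process-specific):

import random
import statistics

random.seed(1)
# Synthetic stand-in for massive per-pattern CD measurements from a large-FOV e-beam tool.
cds = [random.gauss(20.0, 0.5) for _ in range(100_000)]  # nm

mu, sigma = statistics.mean(cds), statistics.stdev(cds)
LIMIT = 4.0  # flag beyond ±4σ; an assumed cutoff, not the paper's value

# Patterns in the distribution tail are the fatal-risk candidates to review.
tail = [cd for cd in cds if abs(cd - mu) > LIMIT * sigma]
print(f"mean {mu:.2f} nm, sigma {sigma:.3f} nm, {len(tail)} tail patterns flagged")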
In this paper, a stitch database is built from various stitching structures identified in an open-cell layout library, and the corresponding stitching yield models are developed for hybrid optical and self-aligned multiple patterning (hybrid SAMP). Based on the concept of a probability-of-success (POS) function, we first develop a single-stitching yield model to quantify the effects of overlay errors and cut-hole CD variations. The overhang distance designed into a stitching process (or its mean value μ) is found to be critical to stitching yield and can be optimized using this model. We also investigate the physical significance of several process parameters, such as the half pitch (HP), the standard deviation (σ) of the random overhang distribution, and the cut-hole CD (CL). Our study shows that certain types of stitching yield are sensitive to σ and HP, while in general high yield can be achieved for most of the stitching types we examined. To improve the yield of certain challenging stitching structures, various layout modification strategies are proposed and discussed.
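A minimal sketch of a single-stitching POS estimate, assuming a Gaussian overhang distance with mean μ and standard deviation σ and treating overlay and CD variation as a fixed RSS loss budget (the authors' actual POS function may differ), is:

import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def single_stitch_yield(mu: float, sigma: float,
                        overlay_3s: float, cd_var_3s: float) -> float:
    """P(overhang > losses): stitch succeeds when the effective overhang stays positive."""
    # Assumed loss budget: RSS of the two 3-sigma error sources, taken at 3 sigma.
    loss = 3.0 * math.sqrt((overlay_3s / 3) ** 2 + (cd_var_3s / 3) ** 2)
    # Only the overhang is random here; the losses are treated as a fixed budget.
    return 1.0 - normal_cdf((loss - mu) / sigma)

# Example: 10 nm mean overhang, 2 nm sigma, 4.5 nm overlay and 3 nm CD at 3 sigma.
print(f"POS ~ {single_stitch_yield(10.0, 2.0, 4.5, 3.0):.4f}")

Consistent with the abstract, the model shows the yield rising with the designed mean overhang μ and falling as σ grows, which is what makes μ an optimizable knob.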
This paper presents an automated DFM solution to generate Bit Line Pattern Dummy (BLPD) for memory chips. Dummy shapes are aligned with the memory's functional bit lines to ensure a uniform and reliable memory device. The paper presents a smarter, analysis-based technique for adding dummy fill shapes of different types according to the space available. Experimental results are based on the layout of a memory test chip.
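A hedged sketch of the space-aware selection idea follows; the dummy type names and gap thresholds are hypothetical, not taken from the paper.

from typing import Optional

def choose_dummy(gap_nm: float) -> Optional[str]:
    """Pick a bit-line-aligned dummy type based on the free space in the gap."""
    if gap_nm < 40:
        return None               # too narrow: no fill, keep spacing rules clean
    if gap_nm < 120:
        return "narrow_bar"       # single thin bar aligned to the bit-line pitch
    if gap_nm < 300:
        return "standard_bar"
    return "segmented_array"      # wide gaps get an array matching bit-line density

for gap in (30, 80, 200, 500):
    print(f"gap {gap} nm -> {choose_dummy(gap)}")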