This PDF file contains the front matter associated with SPIE Proceedings Volume 11614, including the Title Page, Copyright information, and Table of Contents.
Introduction to SPIE Advanced Lithography conference 11614: Design-Technology Co-optimization XV.
Semiconductor demand is rapidly expanding beyond the computing and mobile markets, with more products being introduced for automotive, industrial, medical, avionics, and space applications. Chips are increasingly complex, with growing functionality through the integration of more digital, analog/mixed-signal, and RF sub-systems. Technologies continue to scale to ever-shrinking dimensions with novel materials and device architectures to reach new power-performance-area levels. Although these new capabilities enable diversified product opportunities, guaranteeing reliability and quality over long product lifetimes has become increasingly challenging in such applications. This paper provides an overview of reliability and product quality challenges in advanced CMOS nodes comprising FinFET and fully depleted silicon-on-insulator technologies. Following an overview of intrinsic and extrinsic reliability mechanisms, along with design and test methodologies for improving reliability and product quality, it addresses key reliability challenges in fully depleted technologies, such as self-heating, I/O scaling, middle-of-line reliability, dielectric-breakdown monitoring, variation, and stochastic aging. To meet these more stringent requirements in advanced technologies, chip designers and manufacturers must collaboratively optimize process technology, design, and test in an even more cohesive and transparent partnership.
Highly reliable technology and design are vital to the future of the semiconductor industry for three reasons: technology scaling, new materials, and new workloads. First, geometry scaling has exacerbated multiple reliability phenomena, and managing device reliability has itself limited achievable performance and design targets. For example, interconnects are heavily resistive at today's sub-7nm nodes, serving as a key bottleneck in high-performance designs. One of the main reasons is that copper wires require thick barrier liners, which consume useful conductor area, to maintain wire reliability. Worsening back-end-of-line electromigration (EM) at advanced nodes also forces designers to limit the use of high-performance gates, thereby limiting peak design performance.
Second, the rapid increase in the number of new materials introduced to further Moore's-law scaling has required designers to work with devices whose reliability is little known, potentially leading to overly pessimistic guard-banding when designing with these new devices. Understanding the underlying failure mechanisms and quantifying their impact is key to determining the right design practices. The introduction of new wire materials such as cobalt and ruthenium, after almost two decades of copper wires, is one such example, with non-trivial implications for how power delivery is designed and how designs are implemented today.
Lastly, the rapid growth in computing demand in the era of AI/ML has translated into new workloads that stress the underlying devices in unique ways and demand different levels of guarantee; design-for-reliability is imperative for "always-on" applications like high-performance compute and for mission-critical applications such as autonomous driving.
Device-level understanding and faithful modeling of both the physical effects, such as aging and time-dependent dielectric breakdown, and the electrical mechanisms that cause transient errors in a design are paramount. Aging effects at the device level are typically combated by guard-banding at the design level; bias-temperature-instability (BTI) aging and electromigration of wires, which have "healing" capabilities, can be offset by balancing bias states in the design. Effects such as hot-carrier injection (HCI), which damages the drain of the transistor, cannot be compensated for at the design level, and the time-to-failure is modeled in such cases. For transient errors (soft errors) that can corrupt stored data due to particle strikes, novel circuit design techniques are used to reduce their probability; for example, a "popular vote" scheme can be implemented by replicating logic and strategically spacing the replicas apart, although this has a negative impact on design area. Hence it is key to determine which parts of the design are most affected by such faults, which are heavily workload dependent. Additionally, memory blocks, flip-flops, and logic blocks are impacted by such faults in distinct ways, requiring different compensation techniques.
In this talk, a brief overview of the physical and electrical failure mechanisms at advanced nodes will be provided. Popular modeling and design practices for handling the reliability of modern designs will be discussed, and trends will be reviewed, highlighting the importance of design-technology-reliability co-optimization techniques for enabling future designs.
The required reliability of a system depends on its intended market. Consumer electronics have a high tolerance for faults, IBM Enterprise Systems have a near-zero fault requirement, and servers and data centers fall in between the enterprise and consumer markets. To manufacture high-performance computing systems that are highly reliable, IBM uses an end-to-end strategy built around Reliability, Availability, and Serviceability (RAS). Reliability must be built into the system at all levels, from the transistor to the circuit to the complete system. This talk will explore aspects of IBM's RAS strategy, including technology qualification, wafer screening, module screening, and the tradeoffs between performance, reliability, and cost.
The importance of pattern-based defect study has grown with the more complex processes used in advanced semiconductor manufacturing. The pattern is at the heart of the DPTCO (Design Process Technology Co-Optimization) approach. However, the definition of a pattern has been limited by the design rules that can be set up by an individual. Moreover, the huge volume of data points generated by any DRC (Design Rule Check) type of search forces users to sort and filter out most of them and keep only a manageable count. This effectively reduces the sample space of pattern-based learning. In this work we employ a new approach, the PCYM (Pattern Centric Yield Manager), in which the high count of unique patterns and all of their instances in the full-chip design is retained. It is a fundamental pillar of a computational system for semiconductor fabrication in which pattern-centric learning can be deployed to study any related process.
Automated generation of Layout Pattern Catalogs (LPCs) has been enabled by full-chip pattern matching EDA tools capable of searching and classifying both topological and dimensional variations in layout shapes, extracting massive datasets of component patterns from one or more given layouts. This work presents a novel theoretical framework for the systematic analysis of LPCs. Two algebraic structures (lattices and matroids) are introduced, allowing for the complete characterization of all LPC datasets. The technical results go beyond the general mathematical theory of combinatorial pattern spaces, demonstrating a direct path to novel physical design verification algorithms and DFM optimization applications.
Mask synthesis and correction flows are becoming increasingly complex in order to deal with ever-smaller lithography, resist, and etch effects, whose importance grows as feature sizes shrink. Time-to-mask is also a significant factor in production environments, which leads tapeout teams to adopt correction strategies that usually address effects only at the best (nominal) process condition. As a result, users frequently find hotspots, or process failures, when performing a final lithography verification step across multiple process conditions. In many cases, under production pressure to decrease time-to-mask, tapeout teams choose to correct these hotspots in the fastest manner possible. Performing rule-based fixes on the post-correction layout is usually the fastest method available. This paper explores rule-based, post-correction hotspot fixes in a flow that uses pattern matching. Pattern matching is used to cluster the post-correction patterns into similar types, which are then fixed by a different algorithm for each type. Further, pattern matching is used to find all instances of each pattern to mark for fixing, along with any similar patterns that may have been missed by the lithography check or that received asymmetrical correction.
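The cluster-then-fix idea described above can be pictured with a minimal sketch, assuming hotspot clips have already been rasterized into small density grids: the clips are grouped by a simple geometric signature and each group is routed to a placeholder rule-based fix. The clip representation, the k-means grouping, and the fix functions are illustrative assumptions, not the pattern-matching engine used in the paper.

```python
# Illustrative sketch only: cluster post-correction hotspot clips by a simple
# geometric signature, then apply a (placeholder) rule-based fix per cluster.
# Assumes each hotspot clip has been rasterized into a small density grid;
# the fix functions are hypothetical stand-ins for real rule-based edits.
import numpy as np
from sklearn.cluster import KMeans

def clip_signature(density_grid: np.ndarray) -> np.ndarray:
    """Flatten a rasterized clip into a feature vector for matching."""
    return density_grid.astype(float).ravel()

def fix_line_end(clip):      # placeholder: e.g. extend line ends
    return clip
def fix_tip_to_tip(clip):    # placeholder: e.g. increase tip-to-tip space
    return clip
def fix_default(clip):
    return clip

FIXERS = {0: fix_line_end, 1: fix_tip_to_tip}  # cluster id -> fix rule

def fix_hotspots(clips, n_types=2, seed=0):
    feats = np.stack([clip_signature(c) for c in clips])
    labels = KMeans(n_clusters=n_types, random_state=seed, n_init=10).fit_predict(feats)
    return [FIXERS.get(int(lbl), fix_default)(c) for c, lbl in zip(clips, labels)]

# Example: 20 random 16x16 "clips" standing in for rasterized hotspot patterns.
clips = [np.random.rand(16, 16) for _ in range(20)]
fixed = fix_hotspots(clips)
```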
Two-dimensional pattern matching libraries are used to define known hotspots in the design space. These libraries can then be integrated into a physical design router to search for and fix such hotspots before the design is completed and signed off. Searching for patterns similar to a known hotspot involves significant manual effort in pattern match library development. This paper demonstrates an automated and comprehensive approach that profiles the available design space for topological patterns similar to the known hotspot and automatically generates a master pattern library to address the hotspot issue. It presents a semi-supervised learning algorithm for developing a pattern similarity metric used for pattern ranking and clustering.
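As a rough baseline for the similarity ranking described above (not the semi-supervised metric from the paper), the sketch below scores candidate layout clips against a known hotspot clip using cosine similarity on flattened density features and returns the top matches; the feature choice and all names are assumptions.

```python
# Illustrative sketch: rank candidate layout clips by similarity to a known
# hotspot using cosine similarity on flattened density features. The paper
# learns its similarity metric semi-supervisedly; this is only a baseline.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rank_candidates(hotspot_clip, candidate_clips, top_k=10):
    scores = [cosine_similarity(hotspot_clip, c) for c in candidate_clips]
    order = np.argsort(scores)[::-1][:top_k]
    return [(int(i), scores[i]) for i in order]

hotspot = np.random.rand(32, 32)                  # stand-in for a known hotspot clip
candidates = [np.random.rand(32, 32) for _ in range(100)]
print(rank_candidates(hotspot, candidates, top_k=5))
```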
In the semiconductor fabrication process, yield is negatively impacted by defects that appear systematically within specific patterns of the physical layout design. These defective patterns are popularly known as hotspots, and they can arise from various causes. There are several known approaches to hotspot detection. One approach is machine learning (ML), where known hotspot and non-hotspot patterns are used to train a model that is then used to predict new hotspots. The objective in ML approaches is to maximize the hit rate (i.e., find all potential hotspots) and to minimize the false alarm rate (i.e., reduce the overhead of false positives). The model's ability to correctly classify hotspots and non-hotspots depends on the coverage of the training dataset. The real-world challenge in training an ML system to classify hotspots and non-hotspots is the imbalanced nature of the problem, where the known hotspot patterns are always the minority class. Another challenge specific to hotspot classification is the difficulty of correctly classifying non-hotspots that are similar to hotspots. These "hard-to-classify" patterns are ones with a high mask error enhancement factor (MEEF), as small variations in the pattern can flip it between hotspot and non-hotspot. These two challenges make conventional methods of handling imbalanced training datasets inadequate for hotspot detection. This paper presents a flow for a quantified training dataset selection approach that puts extra focus on patterns that are hard to classify due to their close similarity to known hotspots. Improved model accuracy is demonstrated when adopting the quantified sampling approach compared to conventional sampling approaches.
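One way to picture the quantified-sampling idea is to weight training samples so that the minority hotspot class and the hard non-hotspots (those most similar to known hotspots) are drawn more often. The sketch below does this with a weighted sampler in PyTorch; the weighting scheme and similarity scores are illustrative assumptions, not the paper's exact recipe.

```python
# Illustrative sketch: oversample hotspots and "hard" non-hotspots (those that
# look most like known hotspots) when building the training batches. The
# weighting scheme is an assumption for illustration only.
import torch
from torch.utils.data import TensorDataset, DataLoader, WeightedRandomSampler

def make_loader(features, labels, similarity_to_hotspot, batch_size=64):
    """labels: 1 = hotspot, 0 = non-hotspot.
    similarity_to_hotspot: per-sample score in [0, 1] from a pattern matcher."""
    n_pos = max(int(labels.sum()), 1)
    n_neg = max(int((1 - labels).sum()), 1)
    # Balance classes, then boost hard negatives proportionally to similarity.
    weights = torch.where(labels == 1,
                          torch.full_like(similarity_to_hotspot, 1.0 / n_pos),
                          (1.0 + 4.0 * similarity_to_hotspot) / n_neg)
    sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
    return DataLoader(TensorDataset(features, labels), batch_size=batch_size, sampler=sampler)

# Example with random stand-in data: 1000 clips, ~5% hotspots.
feats = torch.rand(1000, 1, 32, 32)
labels = (torch.rand(1000) < 0.05).float()
sim = torch.rand(1000)
loader = make_loader(feats, labels, sim)
```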
As feature resolution and process variations continue to shrink for new nodes of both DUV and EUV lithography, the density and number of devices on advanced semiconductor masks continue to increase rapidly. These advances put significantly increased pressure on the accuracy and efficiency of OPC mask output. To meet manufacturing yield requirements, systematic errors from all sources are important to consider during mask synthesis. In particular, accurately accounting for etch effects within OPC and ILT is becoming more critical. Mask synthesis flows have typically accounted for etch proximity effects using rule-based approaches, and the accuracy limitations of fast etch models have limited widespread adoption of model-based etch mask correction. Several publications and industry presentations have discussed the use of neural networks or other machine learning techniques to improve both accuracy and efficiency in mask synthesis flows. In this paper, we present results of using machine learning in etch models to improve model accuracy without sacrificing TAT. We then demonstrate an ILT-based etch correction method using the machine learning etch model that converges quickly and outputs an ADI target contour to be used as the target for OPC or ILT mask correction.
By blending physical and virtual worlds, Mixed Reality (MR) is unlocking exciting new experiences and generating new paradigms in both social and professional interactions. Powered by continual advances in computer vision, graphical processing power, display technology, and input systems, MR is strengthening interactions between humans, their environment, and computers, enabling endless new opportunities. To enable a seamless and immersive experience, an MR device is a complex system comprising multiple optical, electrical, mechanical, and computational sub-systems. Inputs from various user and environment sensors (e.g., head/eye/gesture tracking, depth scanners) are synthesized and fused with virtual content that is then projected to the user via the display subsystem. Providing a seamlessly integrated experience requires engineering a display around the limitations of both the optical system design and the human visual system. After a brief introduction to Mixed Reality, this talk will provide an overview of a typical MR display system, outline some of the key design parameters and constraints, and give a few examples of navigating the trade-offs involved.
Due to a slowdown in gate pitch scaling linked to fundamental physical limitations, standard cell height reduction is needed to achieve the scaling targets. The complementary FET (CFET), consisting of an NMOS device stacked on a PMOS device, is evaluated for both monolithic and sequential integration. Thanks to double MOL-level access, both CFET options combined with buried power rails reduce the standard cell track height down to 4T, while also reducing routing layer usage within the standard cell. The main advantages of sequential CFET over monolithic CFET are the independent optimization of the top and bottom devices and the possibility of a split-gate implementation, which offers an area gain in complex cells such as flip-flops, at the expense of higher cost and process complexity.
This paper presents a new design architecture for advanced logic SRAM cells using six vertical transistors (with carrier transport along the Z direction), stacked one on top of each other. Virtual fabrication technology was used to identify different process integration schemes to enable the fabrication of this architecture with a competitive XY footprint at an advanced logic node: a unit cell area of 0.0093 um2 was demonstrated in this work. This study illustrates that virtual fabrication can be a key enabling element for technology pathfinding, and that it can be used to identify expected module development challenges prior to tape-out or wafer processing.
In this paper, we describe a framework to enable memory array simulations for Materials to Systems Co-Optimization™ (MSCO™) flows. The methodology is applied to an SRAM array in a projected 3 nm logic FinFET technology node. To form the SRAM array, a "tiling" approach is utilized, in which neighboring cells are created by copying and mirroring the first cell; this process is then repeated to create the rest of the array. Electrical pulses are applied to the word-line and bit-line to activate the read and write operations. We demonstrate 128 × 128 SRAM array simulations and find that the cell farthest from the word-line driver is the most vulnerable.
Advanced CMOS SoCs with more cores and more complex memory hierarchies are hitting the memory wall, especially at the intermediate cache levels (L2, L3). Managing the memory wall thus represents a major challenge in the design of future systems and should include memory technology tuning, macro design, and logic-to-memory interconnect optimization using multi-die packages and different 3D structures. To understand the benefits of 3D interconnects for memory-on-logic partitioning, we analyze four different partitioning options of the intermediate (L2) cache, assuming high-density Cu-Cu hybrid bonding. We observe that partitioning the complete sub-system (memory macros and controller logic) is less beneficial, relative to the 2D reference integration, than memory-macro-only partitioning schemes. Furthermore, the more memory macros are moved off the logic die, the better the gains (up to 40% total wirelength reduction). Such gains come at the expense of a higher 3D pin count, motivating finer 3D pitches. Finally, we demonstrate design enablement of 3D-aware IR-drop analysis for micro- and nano-TSVs with buried power rails for backside power delivery.
Process optimization is a required step during semiconductor technology pathfinding and device evaluation. Virtual process modeling and 3D fabrication tools can be used for diagnostic, predictive, and prescriptive modeling of process windows and to accelerate process integration. These virtual techniques will become especially valuable as novel gate-all-around (GAA) devices are introduced to replace state-of-the-art FinFET technologies. Model calibration is needed to ensure the accuracy of any virtual fabrication model and requires wafer-based metrology data. Optical scatterometry established its value in the FinFET era as an effective inline metrology technique owing to its accuracy, throughput, and non-destructive nature. In this article, we demonstrate how spectra collected from scatterometry targets can be utilized to resolve sub-nanometer feature changes within a virtual fabrication platform. First, FEOL GAA simulations up to the SiGe epitaxial growth step were performed to establish spectral sensitivity to upstream process changes. A virtual fabrication model was subsequently calibrated using spectra, with variations from earlier process steps as model parameters. These variations were accurately pinpointed for unknown spectra via least-squares optimization. Additionally, machine learning methods were leveraged to provide instantaneous feedback during the inference phase. Sub-nanometer accuracy was achieved, enabling wide application in semiconductor technology development. This newly demonstrated capability will be indispensable for GAA commercialization, where 3D metrology and process integration are ongoing challenges.
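The inference step described here, pinpointing process variations from measured spectra, can be sketched as a least-squares fit of a parameterized forward spectrum model to a measurement. The toy simulate_spectrum function below stands in for the calibrated virtual-fabrication and scatterometry model used in the paper.

```python
# Illustrative sketch: recover process parameters (e.g. an etch-depth offset and
# a SiGe-thickness offset) from a measured spectrum by least-squares fitting a
# forward spectrum model. simulate_spectrum() is a toy stand-in for the
# calibrated virtual-fabrication + scatterometry model used in the paper.
import numpy as np
from scipy.optimize import least_squares

wavelengths = np.linspace(250.0, 800.0, 200)  # nm

def simulate_spectrum(params, wl=wavelengths):
    depth_offset, thickness_offset = params      # nm, nm (toy model)
    return 0.5 + 0.01 * depth_offset * np.sin(wl / 60.0) \
               + 0.02 * thickness_offset * np.cos(wl / 45.0)

def infer_process_params(measured, x0=(0.0, 0.0)):
    residual = lambda p: simulate_spectrum(p) - measured
    return least_squares(residual, x0).x

true_params = np.array([0.8, -0.3])              # sub-nanometer offsets
measured = simulate_spectrum(true_params) + np.random.normal(0, 1e-4, wavelengths.size)
print(infer_process_params(measured))            # recovers values close to true_params
```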
We propose a new method, applied to Multi-Project Wafer (MPW) reticles, to reduce thermal-mechanical stress. Wafer bumping processes generate residual thermal-mechanical stress, causing cracking and delamination across wide regions of a wafer. In this paper, we show how to optimize metal layer densities and density gradients across the reticle to minimize these stress effects. We propose a new MPW chip placement flow that places chips on the reticle using minimum density gradients as the placement criterion. The flow also performs inter-die dummy metal fill to optimize densities with respect to each chip's density. We show intentional crack-stop dummy metal rings that the flow generates around each chip; these dummy rings further reduce the propagation of stress cracks. We present results with optimized density gradients and crack-stop rings across the chips on an MPW reticle.
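The density-gradient placement criterion can be illustrated with a short sketch: compute per-window metal density over the reticle, then check that the density difference between adjacent windows stays below a limit. The window grid and the 10% gradient limit below are arbitrary example values, not foundry rules.

```python
# Illustrative sketch: check metal-density gradients between adjacent windows
# of a reticle density map. The window grid and the 10% gradient limit are
# arbitrary example values, not foundry rules.
import numpy as np

def max_density_gradient(density_map: np.ndarray) -> float:
    """density_map[i, j] = metal density (0..1) of window (i, j) on the reticle."""
    dx = np.abs(np.diff(density_map, axis=0))   # difference between vertical neighbors
    dy = np.abs(np.diff(density_map, axis=1))   # difference between horizontal neighbors
    return float(max(dx.max(), dy.max()))

def placement_ok(density_map, gradient_limit=0.10):
    return max_density_gradient(density_map) <= gradient_limit

reticle = np.clip(np.random.normal(0.45, 0.05, size=(20, 20)), 0.0, 1.0)
print(max_density_gradient(reticle), placement_ok(reticle))
```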
A semiconductor device can pass functionality tests in the factory but later fail in the field due to extensive use over time or operation beyond specifications. Various physical and electrical mechanisms can contribute to such failures. One such mechanism is backend time-dependent dielectric breakdown (TDDB), in which the insulating dielectric becomes conductive due to prolonged exposure to an electric field.
Backend TDDB is commonly discussed in foundry design rule manuals, where users may find recommended values for the maximum allowable metal and via usage for a specific reliability requirement. However, these recommendations are rarely turned into checkable rules because of various practical issues. In this paper, we first investigate what these practical issues are. Specifically, we will discuss:
- device usage: the actual mission profile of how long and how hard a chip is operated;
- circuit operation: voltage amplitudes, swing and slew rates of neighboring nets;
- test pattern design: how test patterns are designed and how the data are extrapolated to actual circuit patterns.
We will then show how failure-in-time (FIT) rates for SoCs due to backend TDDB are actually calculated. From the calculations, we will show which IPs typically suffer from this failure mode and the associated design implications for minimizing the risk.
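To make these ingredients concrete, the sketch below combines a mission profile (duty cycle), an exponential voltage-acceleration term, and linear scaling with the stressed parallel-run length into a rough per-net failure-rate estimate. The model form and every constant are illustrative assumptions, not foundry data or the calculation used in the paper.

```python
# Illustrative sketch only: a toy backend-TDDB FIT estimate per net, combining
# mission profile (duty cycle), an exponential voltage-acceleration term, and
# linear scaling with the stressed parallel-run length. All constants are made
# up for illustration; real flows use foundry-calibrated TDDB models.
import math

FIT_PER_UM_AT_REF = 1e-6   # assumed base FIT per um of minimum-space run at Vref
VREF = 0.75                # assumed reference stress voltage (V)
GAMMA = 8.0                # assumed voltage-acceleration slope (1/V)

def net_tddb_fit(parallel_run_um, stress_voltage, duty_cycle):
    """FIT contribution of one aggressor/victim net pair."""
    accel = math.exp(GAMMA * (stress_voltage - VREF))
    return FIT_PER_UM_AT_REF * parallel_run_um * accel * duty_cycle

def soc_tddb_fit(nets):
    """nets: iterable of (parallel_run_um, stress_voltage, duty_cycle)."""
    return sum(net_tddb_fit(*n) for n in nets)

example_nets = [(120.0, 0.75, 0.5), (40.0, 0.90, 1.0), (300.0, 0.75, 0.1)]
print(soc_tddb_fit(example_nets))
```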
Electrical design-for-manufacturability (DFM) checks are developed to quantify layout enhancements and their impact on circuit performance for analog designs. A database containing circuit topologies of analog matched devices is built. Connectivity checks then scan the schematics for topologies from the database. If a matching topology is detected, the matched devices are mapped to the layout for layout matching checks. If layout mismatches are detected, electrical DFM checks are used to quantify the imbalance in terms of parasitic resistance and capacitance. The electrical DFM checks are applied to quantify the impact of routing, fill, and DFM fixing on three 22nm analog design blocks. Fill insertion's contribution to RC change is the greatest, followed by routing and DFM fixing, with maximum changes of 7%, 5%, and less than 1%, respectively. Symmetry-aware layout insertions preserve the matching of electrical parameters, showing zero mismatch. All designs pass the electrical DFM checks, as the results are within the expected design tolerances.
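The imbalance quantification can be illustrated with a small helper that compares the extracted parasitics of two matched nets and reports the percentage mismatch against a tolerance; the input format and the 1% tolerance are assumptions made for illustration.

```python
# Illustrative sketch: quantify parasitic imbalance between two matched nets
# from extracted R and C totals, and flag violations against a tolerance.
# The 1% tolerance is an example value, not a rule from the paper.
def mismatch_pct(a: float, b: float) -> float:
    return 100.0 * abs(a - b) / max((a + b) / 2.0, 1e-18)

def check_matched_pair(net_a, net_b, tol_pct=1.0):
    """net_a, net_b: dicts like {"R": ohms, "C": farads} from parasitic extraction."""
    report = {param: mismatch_pct(net_a[param], net_b[param]) for param in ("R", "C")}
    report["pass"] = all(v <= tol_pct for v in report.values())
    return report

print(check_matched_pair({"R": 105.0, "C": 2.05e-15}, {"R": 103.0, "C": 2.00e-15}))
```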
The thin mask model has conventionally been used in optical lithography simulation. In this model the diffracted waves from the mask are assumed to be the Fourier transform of the mask pattern. In EUV (extreme ultraviolet) lithography the thin mask model is not valid because the absorber thickness is comparable to the mask pattern size, and the Fourier transform is not suitable for calculating the diffracted waves from thick masks. Rigorous electromagnetic simulations such as the finite-difference time-domain method, rigorous coupled wave analysis, and the 3D waveguide model are used to calculate the diffracted waves from EUV masks. However, these simulations are highly time consuming. We reduce the calculation time by adopting a CNN (convolutional neural network). We calculate the far-field diffraction amplitudes from an EUV mask using the 3D waveguide model and divide them into the thin mask amplitudes (the Fourier transform of the mask pattern) and the residual mask 3D amplitudes. The incident angle dependence of the mask 3D amplitude for each diffraction order is fitted using three parameters that represent the on-axis and off-axis mask 3D effects. We train a CNN whose inputs are 2D mask patterns and whose targets are the mask 3D parameters of all diffraction orders. After training, the CNN successfully predicts the mask 3D parameters, and the CNN prediction is 5,000 times faster than the electromagnetic simulation. We extend the transmission cross coefficient formula to include the off-axis mask 3D effects. Our formula is applicable to arbitrary source shapes and defocus, and the eigenvalue decomposition method can be used to accelerate the calculation.
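The CNN regression described above can be sketched in a few lines: the input is a rasterized 2D mask clip and the output is a vector of mask 3D fitting parameters (three per diffraction order). The clip size, number of orders, and layer sizes below are illustrative guesses, not the network from the paper.

```python
# Illustrative sketch of a CNN regressing mask-3D parameters from a 2D mask
# pattern. The clip size (64x64), number of diffraction orders (9), and layer
# sizes are assumptions for illustration only.
import torch
import torch.nn as nn

N_ORDERS = 9          # assumed number of retained diffraction orders
PARAMS_PER_ORDER = 3  # on-axis / off-axis mask-3D fit parameters per order

class Mask3DNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 256), nn.ReLU(),
            nn.Linear(256, N_ORDERS * PARAMS_PER_ORDER),
        )

    def forward(self, mask_clip):          # mask_clip: (batch, 1, 64, 64)
        return self.head(self.features(mask_clip))

# Toy training step on random stand-in data (real targets would come from the
# rigorous 3D waveguide simulations).
model, loss_fn = Mask3DNet(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
masks = torch.rand(8, 1, 64, 64)
targets = torch.rand(8, N_ORDERS * PARAMS_PER_ORDER)
loss = loss_fn(model(masks), targets)
opt.zero_grad(); loss.backward(); opt.step()
```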
We propose a machine-learning-based mechanism to perform OPC that is much more efficient than traditional OPC processes in terms of compute resources. Building a physical model for OPC takes substantial labor and computational time: for example, model calibration requires thousands of cores for up to ten hours, and OPC data preparation needs thousands of cores for a couple of days. We present a computationally cheap way to use learning to produce OPC mask designs from a large amount of lithography target data. Our technique learns from pairs of lithography target data and OPCed masks. The impact of different learning algorithms on the quality and performance of mask prediction has been studied; we have tested multiple learning approaches, such as multilayer perceptrons implemented in PyTorch, on the IBM Cloud. Preliminary evaluation of our technique on a standard contact EUV test site shows accuracy similar to the standard processes while using much less compute power.
We propose the use of machine-learning-based analytics to simplify the OPC (optical proximity correction) model-building process, which demands concurrent optimization of more than 70 parameters as nodes shrink. We first built a deep neural network to predict the RMS error for a given set of model parameters. The network was trained on existing OPC model parameters and the corresponding RMS output of simulations to achieve accurate prediction of the output RMS for a given set of OPC model parameters. A sensitivity-analysis-based methodology for recursive partitioning of the OPC modeling parameters was then employed to reduce the total search space of OPC model simulations. This reduced the number of OPC model iterations performed during model tuning by orders of magnitude.
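The surrogate-plus-sensitivity idea can be visualized as follows: fit a regressor from OPC model parameters to RMS error, estimate per-parameter sensitivities by perturbing each parameter around a nominal point, and restrict further search to the most sensitive parameters. The data, regressor choice, and thresholds in the sketch are illustrative stand-ins.

```python
# Illustrative sketch: surrogate model from OPC model parameters -> RMS error,
# then a perturbation-based sensitivity ranking used to shrink the search space.
# Random data stands in for real calibration runs; thresholds are examples.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_params = 70
X = rng.uniform(0.0, 1.0, size=(2000, n_params))    # past model-parameter trials
y = ((X[:, :5] - 0.5) ** 2).sum(axis=1) + 0.01 * rng.normal(size=2000)  # toy RMS

surrogate = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
surrogate.fit(X, y)

def sensitivities(model, nominal, delta=0.05):
    """|dRMS/dparam| estimated by central differences around a nominal point."""
    sens = np.zeros(nominal.size)
    for i in range(nominal.size):
        hi, lo = nominal.copy(), nominal.copy()
        hi[i] += delta; lo[i] -= delta
        sens[i] = abs(model.predict(hi[None, :])[0] - model.predict(lo[None, :])[0]) / (2 * delta)
    return sens

nominal = np.full(n_params, 0.5)
s = sensitivities(surrogate, nominal)
active = np.argsort(s)[::-1][:10]     # keep only the 10 most sensitive parameters
print("parameters worth tuning first:", sorted(active.tolist()))
```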
Mass production at the 7nm technology node can be achieved with multiple patterning technology combined with 193nm immersion scanners. The application of freeform illumination source shapes is a key enabler for continued shrink using 193nm immersion lithography with 1.35 NA. Source and mask optimization (SMO) is the key resolution enhancement technique (RET) for optimizing a satisfactory freeform source. A design pattern library can be used to recognize, manage, and compare the continuously changing and iterating physical designs. Our proposed methodology improves SMO performance by taking advantage of a post-color design pattern library and a pattern selection method, with process window limiters serving as important guidance for optimizing the SMO parameters.
The opportunity for EUV single patterning of 28nm-pitch metal designs is explored. A bright-field mask combined with a negative tone develop process is used to improve pattern fidelity and the overall process window. imec N3 (foundry N2 equivalent) logic PNR (place and route) designs are used to deliver an optimized pupil through source mask optimization and to evaluate the OPC technology. DFM (design for manufacturing) topics such as dummy metal insertion and design CD retargeting are addressed together with critical design rules (e.g., tip-to-tip) to balance design and patterning performance. Relevant wafer data are shown as proof of the above optimization process.
We provide background on the differences between traditional and machine learning modeling. We then discuss how these differences impact the validation needs of traditional and machine learning OPC compact models. We then provide multiple diverse examples of how machine learning OPC compact models can be appropriately validated, both for modeling-specific production requirements such as model signal/contour accuracy, predictiveness, coverage, and stability, and for general OPC mask synthesis requirements such as OPC/ILT stability, convergence, etc. Finally, we conclude with thoughts on how machine learning modeling methods and their required validation methods are likely to evolve for future technology nodes.
In this work, we demonstrate a first-principles-based methodology that uses atomistic-level simulations to evaluate the promise of different metals for the performance of MOL/BEOL interconnects. The specific metals we focus on are Cu, Ru (both fcc and hcp), Co, and Mo. The conductivity of these metals, including the degradation from grain boundaries extracted from ab initio simulations, is included in a parasitic field solver and subsequently used to extract the interconnect parasitics of standard cells. Lithography considerations are addressed through simulations of patterned, "real" wires. PPA is evaluated through simulations of a 128×128 SRAM memory array, where we find significant improvements in the read and write delay of 20% and 40%, respectively, when M1 is replaced with fcc Ru.
With the continuous growth in IC manufacturing complexity, developing new process nodes has become an ever-increasing challenge. From the initial process node architectural explorations to the initial design rule specifications to early RET development and "risk production" early NPIs (new product introductions), critical decisions with far-reaching performance and yield impact must be made. Applying innovative methods to enable early and broad engineered testing informs better architectural decisions and performance tradeoffs. Methods to identify, root-cause, and categorize known yield detractors and to flag potentially new, unknown risk patterns enable product yield risk mitigation and continuous learning. Accumulated learning from each step, each stage, and each new product drives improved test vehicles, better process optimization, and enhanced PDKs, all leading to more robust designs and ultimately higher performance and improved yield. In this paper, we describe innovative machine learning methods in DFM and DTCO applications to improve test vehicle engineering, inform process development, and accelerate process node yield ramp.
Critical Area Analysis (CAA) is an established DFM tool for assessing the defect-limited yield of a semiconductor design. However, several factors limit its usefulness at advanced technology nodes of 28nm and below. Specifically for metal design layers, retargeting has been a successful measure to improve defect-limited yield: it opportunistically widens lines and spaces them further apart where possible. Since retargeting happens during the tapeout phase, it is not visible to the designer, so a critical area analysis based solely on design shapes underestimates defect-limited yield by a substantial amount. Furthermore, CAA computation time for large designs has grown exponentially as the design grid size has shrunk with each technology node; for a large design, CAA computation can take weeks and consume large computational resources. We have developed a new, fast methodology to compute CAA that takes retargeting into account and thus gives far more realistic estimates of defect-limited yield. Our method takes advantage of the fact that even large designs usually consist of millions of repetitions of similar design blocks that report very similar CAA metrics. By training a machine learning model on representative design snippets, one can build a flow that estimates the CAA of the full chip and runs very fast.
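The snippet-based estimation can be sketched as a simple regression: extract cheap features from representative design snippets, learn a mapping to exactly computed CAA, and sum the predictions over all snippets of the full chip. The features, model, and data below are stand-ins for illustration.

```python
# Illustrative sketch: learn critical area from cheap snippet features (e.g.
# wire density, min-space edge length) on a training set where exact CAA was
# computed, then estimate full-chip CAA by summing predictions over snippets.
# All data here is random stand-in data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
train_features = rng.uniform(size=(5000, 4))           # [density, min_space_len, via_count, ...]
train_caa = 3.0 * train_features[:, 0] + 1.5 * train_features[:, 1] + 0.1 * rng.normal(size=5000)

model = GradientBoostingRegressor().fit(train_features, train_caa)

def fullchip_caa(snippet_features: np.ndarray) -> float:
    """Estimate full-chip critical area (arbitrary units) as the sum over snippets."""
    return float(model.predict(snippet_features).sum())

chip_snippets = rng.uniform(size=(200000, 4))           # features for every snippet of the chip
print(fullchip_caa(chip_snippets))
```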
With the development of process technology nodes, hotspot detection has become a critical step in the integrated circuit physical design flow. Machine-learning-based methods have become competitive candidates for layout hotspot detection thanks to easy training and high speed. Classic methods usually define hotspot detection as a binary classification problem. However, designers want to further divide the hotspot patterns into a series of levels according to their severity in order to identify and fix them. In this paper, we design a multi-class classifier based on a convolutional neural network to detect the various levels of hotspot patterns. Unlike the classic cross-entropy loss, our proposed custom loss function reduces the difference between falsely predicted levels and the corresponding true levels, reducing the adverse effects caused by misclassified samples. Experimental results show that our hotspot detector can correctly classify hotspots of various levels and offers potential advantages for physical designers in fixing hotspots.
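The custom loss described here, penalizing predictions more when the predicted severity level is far from the true level, can be sketched as a distance-weighted cross-entropy; the weighting form below is one plausible choice, not necessarily the exact loss from the paper.

```python
# Illustrative sketch of a severity-aware classification loss: standard
# cross-entropy scaled by how far the predicted level is from the true level,
# so misclassifying level 0 as level 4 costs more than as level 1. The exact
# weighting used in the paper may differ.
import torch
import torch.nn.functional as F

def severity_weighted_ce(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """logits: (batch, n_levels); targets: (batch,) integer severity levels."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    predicted = logits.argmax(dim=1)
    distance = (predicted - targets).abs().float()        # 0 when the level is correct
    return ((1.0 + distance) * ce).mean()

logits = torch.randn(8, 5, requires_grad=True)            # 5 hotspot severity levels
targets = torch.randint(0, 5, (8,))
loss = severity_weighted_ce(logits, targets)
loss.backward()
```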
With more advanced semiconductor technologies, identifying process weak points becomes more complex, as multiple layers need to be taken into consideration. In recent years, traditional rule-based weak point identification has been augmented by pattern matching to pinpoint and fix possible design weak points. Traditionally, pattern definition is done by profiling designs for weak points to capture the patterns of interest for applying opportunistic fixes. Patterns are usually handcrafted by taking process information into account and applying fixes to the design features. Some fail modes that have emerged recently result from very complex multi-layer interactions, and these types of weak points are very difficult to define comprehensively with traditional pattern matching.
Recently, deep learning has undergone rapid development, and tools are now available that can learn from large amounts of process data. We have harnessed this to address the problem of identifying complex weak points with low escape rates. In this paper, we review a deep-learning-based weak point detection flow that uses retargeting/OPC/ORC simulations as training data. With the deep learning approach, the process data is abstracted as an encrypted machine learning model and released to designers as part of the GLOBALFOUNDRIES (GF) DRC+ tool. This tool is shipped with the PDK and can be used to fix the design, mitigating process weak points.
This paper begins with a brief introduction to the deep learning TensorFlow model using a convolutional neural network (CNN), widely used for image detection. We then focus on feature density vector (DSV) generation to extract the layout parameters and labels used for training the model. Experimental analysis is also provided to compare the recall and precision metrics of POR and ML methods in detecting weak points on a via layer at process window conditions. Our case study shows that the ML flow improves the pattern capture rate by 34% over standard hotspot detection methods. In conclusion, we also outline our future work leveraging the ML flow for other weak point detections.
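The density feature vector idea can be pictured as binning each layout clip into a coarse grid per layer and concatenating the per-window densities into one vector; the grid size and the two-layer example below are illustrative assumptions.

```python
# Illustrative sketch of density-vector (DSV-style) feature extraction: bin a
# rasterized layout clip into a coarse grid per layer and concatenate the
# per-window densities. Grid size and the two-layer example are assumptions.
import numpy as np

def density_vector(layer_rasters, grid=8):
    """layer_rasters: list of 2D binary arrays (one per layer), same shape."""
    feats = []
    for raster in layer_rasters:
        h, w = raster.shape
        bh, bw = h // grid, w // grid
        windows = raster[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw)
        feats.append(windows.mean(axis=(1, 3)).ravel())   # density per window
    return np.concatenate(feats)

via_layer = (np.random.rand(128, 128) > 0.9).astype(float)
metal_layer = (np.random.rand(128, 128) > 0.5).astype(float)
print(density_vector([via_layer, metal_layer]).shape)      # (2 * 8 * 8,) = (128,)
```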
In semiconductor manufacturing, intellectual property (IP) cores/blocks play a dominant role in modern chip design. The driving factor for IP usage is the time-to-market benefit delivered through design reuse. Today, IP blocks span the entire range of modules, from standard cells, memories, and I/O devices to CPUs. Chip designers need complex IP blocks because modern levels of integration allow chips to be complete systems on chip (SOCs), not just components of systems. However, as chips become more complex, IP blocks are subject to more interactions from multiple neighboring modules in the chip. Current IP block quality assurance (QA) flows focus mainly on functional verification, performance verification, and design rule checking (DRC). The standard DRC deck checks for minimum and maximum density rules within the IP block. However, when an IP is placed in an SOC, it may encounter complex surrounding scenarios, as when a low-density IP is placed next to a higher-density area. During integrated circuit (IC) manufacturing, the resulting proximity effects may cause failures or electrical targeting mismatches within the IP, due to etch micro-loading and long-range CMP interactions. Designers can only locate these chemical mechanical polishing (CMP) hotspots related to IP placement in the SOC near the end of the design flow, which limits any floorplan changes to fix the hotspots. Standalone IP block QA is therefore insufficient to detect possible layout- or floorplan-induced problems that can affect manufacturing. In this paper, we present a CMP modeling methodology to guard-band IP against the topography variations that can occur after IP placement in the SOC design. We emulate low-, average-, and high-density scenarios surrounding the IP blocks, followed by CMP simulations and hotspot detection using silicon-calibrated CMP models. After simulation, guidelines are provided to fix these CMP hotspots surrounding the IP blocks during early design stages to improve manufacturability and yield. This flow makes IPs robust against CMP hotspots that typically appear after SOC floorplanning.
The goal of this paper is to explore machine learning solutions to improve the run-time of model-based retargeting in the mask synthesis flow. The purpose of retargeting is to re-size non-lithography-friendly designs so that the design geometries are shifted into a more lithography-robust design space. However, current model-based approaches can take significant run-time, so this step is rarely performed in production settings. Different machine learning solutions for resolution enhancement techniques (RETs) have been proposed previously, for instance to model optical proximity correction (OPC) or inverse lithography technology (ILT). In this paper, we compare and expand on some of these solutions. Finally, we discuss experimental results that achieve a nearly 360x run-time improvement while maintaining accuracy similar to traditional retargeting techniques.
For advanced technology nodes the design of standard cell libraries is becoming increasingly challenging. This is because, with shrinking cell heights, the available routing resources in the cells are becoming a major limiting factor, along with the rising complexity of design rules and technology options. The challenge of designing good cells holds both for production libraries, where many cells need to be created, and for DTCO experiments, where many variations of a smaller library need to be made. Both types of libraries require highly optimized cells, either to have high-quality designs or to have accurate PPA assessment during DTCO. Additionally, DTCO no longer meets the needs of today's technological challenges and needs to be extended from Materials to Systems. At Applied Materials we include automated standard cell library generation in our MSCO™ (Materials to Systems Co-Optimization™) flow. To demonstrate the power of automated standard cell library generation, this paper focuses on four experiments to assess the impact of advanced process and design rules and the choice of standard cell architectures, in particular double-height cells. The experiments include different technology and architectural choices, such as the number of tracks and the use of polysilicon and diffusion for routing in between the rows in the cells. The results are compared in terms of their manufacturability and size. Ongoing work includes performance and power analysis on representative designs.
Design Technology Co-Optimization (DTCO) has become a critical toolset in navigating the tradeoffs between design targets and manufacturing constraints. Some methodologies for understanding these tradeoffs include 3D design rule validation, patterning optimization, design vs. manufacturing yield studies and fully-integrated process and electrical performance modeling. In this presentation, we will discuss how fully-integrated “virtual” DTCO can be used to predict and ameliorate potential manufacturing and design issues prior to wafer-based testing. We will provide examples of how virtual DTCO can be used to predict optimal integration and patterning schemes, highlight areas of potential device failure, predict yield limiters, and gain a better understanding of how process variation can impact device performance. Our discussion will focus on process variation and parasitic impacts that are critical at 5nm and beyond, and we will share valuable insights learned from DTCO studies of next generation architectures.
Systematic defects have drawn a lot of focus from the semiconductor industry, especially during technology development and early technology ramp. However, random defects are still dominant once a technology is mature and in high-volume manufacturing. Historically, foundries have run critical area analysis on incoming designs in order to identify the yield-limiting failure modes and estimate the yield loss. However, with growing design complexity at advanced technology nodes, the calculation runtime of critical area has increased from hours to days and even weeks. FinFET technologies also bring their own challenges and new failure modes, such as transistor-related defectivity and inter-layer interactions. Meanwhile, it has become more and more challenging to obtain accurate defect densities by failure mode. In this paper, GlobalFoundries and Cadence describe the motivations that drove their partnership to develop a new generation of critical area analysis with adaptive sampling to reduce runtime while maintaining accuracy, especially when taking into account connectivity and transistor defectivity. After reviewing the principles and challenges of critical area calculation and yield estimation, two new methodologies of yield modeling using critical area analysis are presented to address these challenges. The first methodology avoids the costly and complicated process of defect density calibration. The second fulfills wafer-based yield projection with critical area normalization and machine learning.
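For context, the sketch below shows the Poisson limited-yield form that critical area analysis conventionally feeds, where each failure mode contributes the product of its critical area and defect density to the expected fault count; the mode list and numbers are placeholders, not data from the paper.

```python
# Illustrative sketch: Poisson defect-limited yield from critical area analysis.
# Each failure mode contributes critical_area * defect_density to the mean
# number of faults (lambda); Y = exp(-lambda). Values below are placeholders.
import math

def defect_limited_yield(modes):
    """modes: iterable of (critical_area_cm2, defect_density_per_cm2)."""
    lam = sum(ca * d0 for ca, d0 in modes)
    return math.exp(-lam)

modes = [
    (0.12, 0.05),   # e.g. metal shorts
    (0.08, 0.03),   # e.g. via opens
    (0.20, 0.01),   # e.g. transistor-related defectivity
]
print(defect_limited_yield(modes))   # close to 1.0 for these placeholder inputs
```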