Performing feature extraction in convolutional neural networks for deep-learning tasks is computationally expensive in electronics. Fourier optics allows convolutional filtering to be performed as an element-wise multiplication in the Fourier domain, in analogy to the convolution theorem. Here we experimentally demonstrate convolutional filtering that exploits the massive parallelism (10^6 channels, 8-bit, at 1 kHz) of digital micromirror device technology, thus enabling 250 TMAC/s. An FPGA-PCIe board controls the 'weights' and handles the data I/O, whereas a high-speed camera detects the inverse-Fourier-transformed data (after the second lens). Gen-1 processes with a total delay (including I/O) of ~1 ms, while Gen-2 targets 1-10 ns by leveraging integrated photonics at 10 GHz and changing the front-end I/O to a joint transform correlator (JTC). These processors are suited for image/pattern recognition, super-resolution for geolocalization, and real-time processing in autonomous vehicles or military decision making.
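The underlying optical principle is the convolution theorem: convolution in the spatial domain becomes an element-wise product in the Fourier domain. A minimal NumPy sketch of the same computation the 4f optical system performs (the function name and the circular-convolution setup are illustrative, not part of the demonstrated hardware):

```python
import numpy as np

def fourier_convolve(image, kernel):
    """Convolve via element-wise multiplication in the Fourier domain.
    A 4f system does the same optically: the first lens computes the
    Fourier transform, the micromirror array applies the filter weights,
    and the second lens performs the inverse transform."""
    kpad = np.zeros_like(image, dtype=float)
    kh, kw = kernel.shape
    kpad[:kh, :kw] = kernel
    # Center the kernel at the origin so the result is a circular convolution.
    kpad = np.roll(kpad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kpad)))
```

The optical processor evaluates the element-wise product for all channels in parallel, whereas this FFT-based version pays O(N log N) per frame.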
Extreme ultraviolet (EUV) lithography has been adopted as the next-generation lithography solution for sub-10-nm technology nodes. However, mask blank defects are a major challenge for this technology. In this work, we explore the extended benefits of utilizing pattern deformation for EUV mask defect avoidance. In the first part of the paper, we propose a constraint-programming-based method that can explore pattern shift, small-angle rotation, and deformation for defect avoidance. In the second part, we use this method to quantify the benefit of pattern deformation. For an 8-nm polysilicon layer of an ARM Cortex M0 layout, pattern deformation combined with pattern shift improved mask yield by more than 90 percentage points compared to pattern shift alone for a 40-defect mask.
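As a rough illustration of the search such a method performs, the following brute-force sketch looks for a pattern shift that hides every blank defect under absorber. This is a toy stand-in, not the paper's constraint-programming formulation; `find_safe_shift`, the grid, and the absorber model are all invented for illustration:

```python
from itertools import product

def find_safe_shift(defects, absorber, grid, max_shift):
    """Exhaustively search pattern shifts (a simplified stand-in for the
    constraint-programming search). A shift is 'safe' when every mask-blank
    defect falls under an absorber-covered grid cell, where it cannot
    print. `absorber` is a set of covered (x, y) cells."""
    W, H = grid
    for dx, dy in product(range(-max_shift, max_shift + 1), repeat=2):
        if all(((x + dx) % W, (y + dy) % H) in absorber for x, y in defects):
            return dx, dy
    return None  # no safe shift within the allowed range
```

A real formulation would additionally score partial coverage, rotation, and deformation instead of requiring all defects to be fully hidden.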
Low throughput has been a critical issue in extreme ultraviolet (EUV) patterning due to the difficulty of increasing light source power. This limitation has driven the need for higher-throughput photoresists, which unfortunately come with higher line edge roughness (LER). In this work, the possibility of relaxing LER requirements for metal layers patterned by EUV lithography (EUVL) is studied. Single patterning and litho-etch litho-etch (LELE) patterning with EUVL are considered. To assess the impact of LER on design yield, analytical and simulation-based modeling approaches are developed, which consider the LER-induced metal wire shorts/opens and the enhanced time-dependent dielectric breakdown (TDDB) for metal wires with different geometries. The impact of LER on wire delay is studied using Elmore's delay model.
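For reference, the Elmore model used for the wire-delay study estimates the delay of a distributed RC line as the sum, over all nodes, of each node capacitance times its total upstream resistance. A minimal sketch (function name and values illustrative):

```python
def elmore_delay(resistances, capacitances):
    """Elmore delay of an RC ladder: segment i contributes its node
    capacitance C_i multiplied by the total resistance between the
    driver and node i."""
    delay = 0.0
    upstream_r = 0.0
    for r, c in zip(resistances, capacitances):
        upstream_r += r          # resistance accumulated from the driver
        delay += upstream_r * c  # this node's RC contribution
    return delay
```

LER effectively perturbs the per-segment R and C values, which is how roughness propagates into delay variation in such a model.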
In Additive Layer Manufacturing, the layer-based nature of the process results in error in the final object, termed staircase error. This error can be reduced by using a smaller layer thickness, and hence more layers, to print the object. However, in 3D printing with stereolithography, the print time is mostly determined by the number of layers, since the movement of the laser in the X-Y plane is much faster than the movement of the build platform. Therefore, using finer layer thicknesses can significantly increase the print time.
In this work, we propose a novel adaptive slicing algorithm that balances accuracy and print time. The proposed near-optimal, dynamic programming (DP) based algorithm for adaptive slicing minimizes the number of layers subject to a global volumetric error constraint. Our approach reduces slice count by up to 36% (52%) compared to a state-of-the-art adaptive slicing (uniform slicing) method under the same volumetric error. The results were validated on a Formlabs Form1+ SLA-based printer, where print time improved by up to 32% (53%) for a selection of objects.
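A dynamic program in this spirit can be sketched as follows. The discretization, the per-layer error function `err`, and the candidate thickness set are illustrative assumptions, not the paper's exact formulation:

```python
def adaptive_slice(height, thicknesses, err, budget, step):
    """DP sketch of adaptive slicing: cover [0, height] with the fewest
    layers while the summed staircase (volumetric) error stays within
    `budget`. Heights and the error budget are discretized at resolution
    `step`; err(z, t) returns the volumetric error of a layer [z, z+t)."""
    n = round(height / step)
    b = round(budget / step)
    INF = float("inf")
    # best[z][e] = min layers to reach height z*step using e*step error
    best = [[INF] * (b + 1) for _ in range(n + 1)]
    best[0][0] = 0
    for z in range(n):
        for e in range(b + 1):
            if best[z][e] == INF:
                continue
            for t in thicknesses:
                nz = min(n, z + round(t / step))
                ne = e + round(err(z * step, t) / step)
                if ne <= b and best[z][e] + 1 < best[nz][ne]:
                    best[nz][ne] = best[z][e] + 1
    return min(best[n])  # INF if the budget cannot be met
```

With a tight budget the DP is forced into thin layers; with a loose budget it picks the thickest admissible layers, which is exactly the accuracy/print-time trade-off described above.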
Directed self assembly (DSA) is a very promising patterning technology for the sub-7-nm technology nodes, especially for via/contact layers. In the graphoepitaxy type of DSA, a complementary lithography technique is used to print the guiding templates, where the block copolymer (BCP) phase-separates into regular structures. Accordingly, the design-friendliness of a DSA-based technology is affected by several factors: the complementary lithography technique, the legal guiding templates, the number of masks/exposures used to print the templates, the related design rules, the forbidden patterns (hotspots), and the characteristics of the BCP. Thus, foundries have a huge number of choices to make for a future DSA-based technology, affecting the design-friendliness, and the cost of the technology. We propose a framework for DSA technology path-finding, for via layers, to be used by the foundry as part of design and technology co-optimization. The framework optimally evaluates a DSA-based technology in which an arbitrary lithography technique is used to print the guiding templates, possibly using many masks/exposures, and provides a design-friendliness metric. In addition, if the evaluated technology is not design-friendly, the framework computes the minimum-cost technology change that makes the technology design-friendly. The framework is used to evaluate technologies like DSA+193-nm immersion (193i) lithography, DSA+extreme ultraviolet (EUV), and DSA+193i self-aligned double patterning. For example, one study showed that one mask of EUV in a DSA+EUV technology can replace three masks of 193i in a DSA+193i technology.
Directed Self Assembly (DSA) is a very promising patterning technology for the sub-7nm technology nodes, especially for via/contact layers. In the Graphoepitaxy type of DSA, a complementary lithography technique is used to print the guiding templates, where the Block Copolymer (BCP) phase-separates into regular structures. Accordingly, the design-friendliness of a DSA-based technology is affected by several factors: the complementary lithography technique, the legal guiding templates, the number of masks/exposures used to print the templates, the related design rules, the forbidden patterns (hotspots) and the characteristics of the BCP. Thus, foundries have a huge number of choices to make for a future DSA-based technology, affecting the design-friendliness and the cost of the technology. In this paper, we propose a framework for DSA technology path-finding, for via layers, to be used by the foundry as part of Design and Technology Co-optimization (DTCO). The framework optimally evaluates a DSA-based technology where an arbitrary lithography technique is used to print the guiding templates, possibly using many masks/exposures, and provides a design-friendliness metric. The framework is used to evaluate technologies like DSA+193nm Immersion (193i) Lithography, DSA+Extreme Ultraviolet (EUV) and DSA+193i Self-Aligned Double Patterning. For example, one study showed that one mask of EUV in a DSA+EUV technology can replace three masks of 193i in a DSA+193i technology.
Multi-patterning (MP) is the process of record for many sub-10nm process technologies. The drive to higher densities has required the use of double and triple patterning for several layers; but this increases the cost of the new processes, especially for low-volume products in which the mask set is a large percentage of the total cost. For that reason, there has been a strong incentive to develop technologies like Directed Self Assembly (DSA), EUV, or e-beam direct write to reduce the total number of masks needed in a new technology node. Because of the nature of the technology, DSA cylinder graphoepitaxy only allows single-size holes in a single-patterning approach. However, by integrating DSA and MP into a hybrid DSA-MP process, it is possible to come up with decomposition approaches that increase the design flexibility, allowing different-size holes or bar structures by independently changing the process for every patterning step. A simple approach to integrating multi-patterning with DSA is to perform DSA grouping and MP decomposition in sequence, either grouping-then-decomposition or decomposition-then-grouping; each of the two sequences has its pros and cons. However, this paper describes why these intuitive approaches do not produce results of acceptable quality from the point of view of design compliance, and we highlight the need for custom DSA-aware MP algorithms.
With the use of subwavelength photolithography, some layouts can have low printability and, accordingly, low yield due to the existence of bad patterns even though they pass design rule checks. A reasonable approach is to select some of the candidate bad patterns as forbidden. These are the ones with a high yield impact or low routability impact, and these are to be prohibited in the design phase. The rest of the candidate bad patterns may be fixed in the postroute stage in a best-effort manner. The process developers need to optimize the process to be friendly to the patterns of high routability impact. Hence, an evaluation method is required early in the process to assess the impact of forbidding layout patterns on routability. We propose pattern-driven design rule evaluation (pattern-DRE), which can be used to evaluate the importance of patterns for the routability of the standard cells and, accordingly, select the set of bad patterns to forbid in the design. The framework can also be used to compare restrictive patterning technologies [e.g., litho-etch-litho-etch (LELE), self-aligned double patterning (SADP), self-aligned quadruple patterning (SAQP), self-aligned octuple patterning (SAOP)]. Given a set of design rules and a set of forbidden patterns, pattern-DRE generates a set of virtual standard cells; then it finds the possible routing options for each cell without using any of the forbidden patterns. Finally, it reports the routability metrics. We present a few studies that illustrate the use cases of the framework. The first study compares LELE to SADP by using a set of forbidden patterns that are allowed by LELE but not by SADP. Another study compares LELE to extreme ultraviolet lithography from the routability aspect by prohibiting patterns that have LELE native conflicts. In addition, we present a study that investigates the effect of placing the active area of the transistors close to the P/N interface instead of close to the power rails.
Defect avoidance methods are likely to play a key role in overcoming the challenge of mask blank defectivity in extreme ultraviolet (EUV) lithography. In this work, we propose an innovative EUV mask defect avoidance method. It is the first approach that allows exploring all the degrees of freedom available for defect avoidance (pattern shift, rotation, and mask floorplanning). We model the defect avoidance problem as a global, nonconvex optimization problem and then solve it using a combination of random walk and gradient descent. For an 8-nm polysilicon layer of an ARM Cortex M0 layout, our method achieves a 60-percentage-point better mask yield compared to prior art in defect avoidance for a 40-defect mask. We show that pattern shift is the most significant degree of freedom for improving mask yield. Rotation and mask floorplanning can also help improve mask yield to a certain extent.
Defect avoidance methods are likely to play a key role in overcoming the challenge of mask blank defects in EUV lithography. In this work, we propose a novel EUV mask defect avoidance method. It is the first approach that allows exploring all the degrees of freedom available for defect avoidance (pattern shift, rotation, and mask floorplanning). We model the defect avoidance problem as a global, non-convex optimization problem and then solve it using a combination of random walk and gradient descent. For an 8nm polysilicon layer of an ARM Cortex M0 layout, our method achieves a 60-percentage-point better mask yield compared to prior art in defect avoidance for a 40-defect mask.
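The random-walk-plus-gradient-descent combination can be illustrated on a one-dimensional non-convex function. This routine and its parameters are a toy stand-in for the paper's high-dimensional mask optimization:

```python
import random

def hybrid_minimize(f, grad, x0, steps=200, walk_scale=1.5, lr=0.05, seed=0):
    """Toy 1-D version of the random-walk + gradient-descent combination:
    a random jump around the best point found so far escapes local minima,
    and a few gradient-descent steps refine each candidate."""
    best_x, best_f = x0, f(x0)
    rng = random.Random(seed)
    for _ in range(steps):
        x = best_x + rng.uniform(-walk_scale, walk_scale)  # random-walk jump
        for _ in range(20):                                # local refinement
            x -= lr * grad(x)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f
```

Neither ingredient suffices alone: pure gradient descent stalls at the nearest local minimum, while a pure random walk converges slowly near the optimum.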
With the use of sub-wavelength photolithography, some layouts can have low printability and, accordingly, low yield due to the existence of bad patterns, even though they pass design rule checks. A reasonable approach is to select some of the candidate bad patterns as "forbidden". These are the ones with high yield-impact or low routability-impact, and these are to be prohibited in the design phase. The rest of the candidate bad patterns may be fixed in the post-route stage, in a best-effort manner. The process developers need to optimize the process to be friendly to the patterns of high routability-impact. Hence, an evaluation method is required early in the process, to assess the impact of forbidding layout patterns on routability. In this work, we propose Pattern-driven Design Rule Evaluation (Pattern-DRE), which can be used to evaluate the importance of patterns for the routability of the standard cells and, accordingly, select the set of bad patterns to forbid in the design. The framework can also be used to compare restrictive patterning technologies (e.g., LELE, SADP, SAQP, SAOP). Given a set of design rules and a set of forbidden patterns, Pattern-DRE generates a set of virtual standard cells; then it finds the possible routing options for each cell, without using any of the forbidden patterns. Finally, it reports the routability metrics. We present a few studies that illustrate the use cases of the framework. The first study compares LELE to SADP, by using a set of forbidden patterns that are allowed by LELE but not by SADP.
The second study investigates the area penalty as well as the SADP-compliance that we obtain if we increase the minimum gate-to-Local-Interconnect spacing design rule.
Overlay control is becoming increasingly more important with the scaling of technology. It has become even more critical and more challenging with the move toward multiple-patterning lithography, where overlay translates into CD variability. Design rules and overlay have strong interaction and can have a considerable impact on the design area, yield, and performance. We study this interaction and evaluate the overall design impact of rules, overlay characteristics, and overlay control options. For this purpose, we developed a model for yield loss from overlay that considers overlay residue after correction and the breakdown between field-to-field and within-field overlay; the model is then incorporated into a general design-rule evaluation framework to study the overlay/design interaction. The framework can be employed to optimize design rules and more accurately project overlay-control requirements of the manufacturing process. The framework is used to explore the design impact of litho-etch litho-etch double-patterning rules and poly line-end extension rule defined between poly and active layer for different overlay characteristics (i.e., within-field versus field-to-field overlay) and different overlay models at the 14-nm node. Interesting conclusions can be drawn from our results. For example, one result shows that increasing the minimum mask-overlap length by 1 nm would allow the use of a third-order wafer/sixth-order field-level overlay model instead of a sixth-order wafer/sixth-order field-level model with negligible impact on design.
Overlay control is becoming increasingly more important with the scaling of technology. It has become even more critical and more challenging with the move toward multiple-patterning lithography, where overlay translates into CD variability. Design rules and overlay have strong interaction and can have a considerable impact on the design area, yield, and performance. This paper offers a framework to study this interaction and evaluate the overall design impact of rules, overlay characteristics, and overlay control options. The framework can also be used for designing informed, design-aware overlay metrology and control strategies. In this work, the framework was used to explore the design impact of LELE double-patterning rules and the poly line-end extension rule defined between the poly and active layers for different overlay characteristics (i.e., within-field vs. field-to-field overlay) and different overlay models at the 14nm node. Interesting conclusions can be drawn from our results. For example, one result shows that increasing the minimum mask-overlap length by 1nm would allow the use of a third-order wafer/sixth-order field-level overlay model instead of a sixth-order wafer/sixth-order field-level model with negligible impact on design.
Design rules (DRs) are the primary abstraction between design and manufacturing. The optimization of DRs to achieve the correct tradeoff between scaling and yield is a key step in developing a new technology node. In this work we propose a design-of-experiments based framework to optimize DRs, where layouts are generated for different DR values using compaction. By analyzing the impact of DRs on layout scaling, we propose a novel Boolean minimization based approach to reduce the number of layouts that need to be generated through compaction. This methodology provides an automated approach to analyze several DRs simultaneously and discover area-critical DRs and DR interactions. We apply this methodology to middle-of-line (MOL) and Metal1 layer design rules for a commercial 20nm process. Our methodology results in a 10-105x reduction in the number of layouts that need to be generated through compaction, and demonstrates the impact of MOL and Metal1 DRs on the area of some standard cell layouts.
Double patterning (DP) in a litho-etch-litho-etch (LELE) process is an attractive technique to scale the k1 factor below 0.25. For dense bidirectional layers such as the first metal layer (M1), however, density scaling with LELE suffers from poor tip-to-tip (T2T) and tip-to-side (T2S) spacing. As a result, triple patterning (TP) in a LELELE process has emerged as a strong alternative. Because of the use of a third exposure/etch, LELELE can achieve good T2T and T2S scaling as well as improved pitch scaling over LELE in case further scaling is needed. TP layout decomposition, a.k.a. TP coloring, is much more challenging than DP layout decomposition. One of the biggest complexities of TP decomposition is that a stitch can be between different two-mask combinations (i.e., first/second, first/third, second/third) and, consequently, stitches are color-dependent and candidate stitch locations can be determined only during/after coloring. In this paper, we offer a novel methodology for TP layout decomposition. Rather than simplifying the TP stitching problem by using DP candidate stitches only (as in previous works), the methodology leverages TP stitching capability by considering additional candidate stitch locations to give coloring higher flexibility to resolve decomposition conflicts. To deal with TP coloring complexity, the methodology employs multiple DP coloring steps, which leverages existing infrastructure developed for DP layout decomposition. The method was used to decompose bidirectional M1 and M2 layouts at the 45nm, 32nm, 22nm, and 14nm nodes. For reasonably dense layouts, the method achieves coloring solutions with no conflicts (or a reasonable number of conflicts solvable with manual legalization). For very dense and irregular M1 layouts, however, the method was unable to reach a conflict-free solution and a large number of conflicts was observed. Hence, layout simplifications may be unavoidable to enable TP for the M1 layer. Although we apply the method to TP, it is more general and can be applied to multiple patterning with any number of masks.
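To make the coloring problem concrete, here is a toy greedy mask assignment on a conflict graph. This is a deliberately simplified stand-in for the multiple-DP-coloring methodology above; a greedy pass can leave conflicts that stitching machinery would resolve:

```python
def greedy_color(conflicts, n, masks=3):
    """Greedy multi-patterning coloring sketch: assign each feature the
    lowest mask index not used by an already-colored conflicting
    neighbor. Returns (colors, unresolved), where -1 marks a feature
    that could not be assigned any of the `masks` masks."""
    adj = [[] for _ in range(n)]
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)
    colors = [-1] * n
    unresolved = 0
    for v in range(n):
        used = {colors[u] for u in adj[v] if colors[u] != -1}
        free = [m for m in range(masks) if m not in used]
        if free:
            colors[v] = free[0]
        else:
            unresolved += 1          # a decomposition conflict
    return colors, unresolved
```

Unresolved nodes are exactly where stitch insertion becomes attractive; the color-dependence of TP stitches described above is what makes the real problem much harder than this sketch.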
Techniques for identifying and mitigating effects of process variation on the electrical performance of integrated circuits are described. These results are from multi-discipline, collaborative university-industry research and emphasize anticipating sources of variation upstream, early in the circuit design phase. The lithography physics research includes the design and testing of electronic monitors in silicon at 45 nm and fast-CAD tools to identify systematic variations for entire chip layouts. The device research includes the use of a spacer (sidewall transfer) gate fabrication process to suppress random variability components. The Design-for-Manufacturing research includes double-pattern decomposition in the presence of bimodal CD behavior, process-aware reticle inspection, tool-aware dose trade-off between leakage and speed, and the extension of timing analysis methodology to capture across-process-window effects and electrical process-window
Fabricating defect-free mask blanks remains a major "show-stopper" for adoption of EUV lithography. One promising approach to alleviate this problem is reticle floorplanning with the goal of minimizing the design impact of buried defects. In this work, we propose a simulated annealing based gridded floorplanner for single project reticles that minimizes the design impact of buried defects. Our results show a substantial improvement in mask yield with this approach. For a 40-defect mask, our approach can improve mask yield from 53% to 94%. If additional design information is available, it can be exploited for more accurate yield computation and further improvement in mask yield, up to 99% for a 40-defect mask. These improvements are achieved with a limited area overhead of 0.03% on the exposure field. Defect-aware floorplanning also reduces sensitivity of mask yield to defect dimensions.
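A simulated-annealing floorplanner of this kind can be sketched as follows. The grid model, the cost function, and the cooling schedule are illustrative assumptions rather than the implementation described above:

```python
import math, random

def anneal_floorplan(defects, grid, n_blocks, block, steps=5000, seed=1):
    """Simulated-annealing sketch of defect-aware floorplanning: place
    `n_blocks` non-overlapping `block`-sized rectangles on a grid so as
    few as possible cover a buried defect. Cost = number of blocks
    covering at least one defect."""
    rng = random.Random(seed)
    W, H = grid
    bw, bh = block

    def covers(pos):
        x, y = pos
        return any(x <= dx < x + bw and y <= dy < y + bh for dx, dy in defects)

    def overlaps(p, q):
        return abs(p[0] - q[0]) < bw and abs(p[1] - q[1]) < bh

    def rand_pos():
        return rng.randrange(W - bw + 1), rng.randrange(H - bh + 1)

    # Initial placement: sample until the blocks do not overlap.
    places = []
    while len(places) < n_blocks:
        p = rand_pos()
        if all(not overlaps(p, q) for q in places):
            places.append(p)
    cost = sum(covers(p) for p in places)
    for step in range(steps):
        t = max(1e-3, 1.0 - step / steps)          # linear cooling
        i = rng.randrange(n_blocks)
        p = rand_pos()
        if any(j != i and overlaps(p, places[j]) for j in range(n_blocks)):
            continue                               # keep blocks disjoint
        delta = covers(p) - covers(places[i])
        # Accept improvements always, worsenings with Boltzmann probability.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            places[i] = p
            cost += delta
    return places, cost
```

The early high-temperature phase lets blocks hop over defects freely; the cooling schedule then freezes a low-cost (high mask yield) placement.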
A process window is a collection of values of process parameters that allow a circuit to be printed and to operate under desired specifications. A conventional process window, which is determined through geometrical fidelity, the geometric process window (GPW), does not account for lithography effects on electrical metrics such as delay, static noise margin (SNM), and power. In contrast to the GPW, this paper introduces an electrical process window (EPW), which accounts for electrical specifications. Process parameters are considered within the EPW if the performance (delay, SNM, and leakage power) of the printed circuit is within the desired specifications. Our experimental results show that the area of the EPW is 1.5 to 8× larger than that of the GPW. This implies that even if a layout falls outside geometric tolerance, the electrical performance of the circuit may satisfy the desired specifications. In addition to process window evaluation, we show that the EPW can be enlarged by 10% on average using gate length biasing and Vth push. We also propose approximate methods to evaluate the EPW, which can be used with little or no design information. Our results show that the proposed approximation method can estimate more than 70% of the area of the reference EPW. We also propose a method to extract representative layouts for large designs, which can then be used to evaluate a process window, thereby improving the runtime by 49%.
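The GPW/EPW comparison amounts to counting process-parameter points that pass a geometric versus an electrical acceptance criterion. A toy sketch, in which both models are invented stand-ins (real CD and delay models come from lithography simulation and timing analysis):

```python
def window_areas(params, cd_model, delay_model, cd_spec, delay_spec):
    """Compare a geometric process window (printed CD within tolerance)
    with an electrical process window (delay within spec) over a grid of
    (focus, dose) points. cd_spec = (target, tolerance)."""
    gpw = epw = 0
    for focus, dose in params:
        cd = cd_model(focus, dose)
        if abs(cd - cd_spec[0]) <= cd_spec[1]:   # geometric criterion
            gpw += 1
        if delay_model(cd) <= delay_spec:        # electrical criterion
            epw += 1
    return gpw, epw
```

Any point where the printed CD misses its geometric tolerance but the circuit still meets timing is counted in the EPW only, which is how the EPW ends up larger than the GPW.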
A process window (PW) is a collection of values of process parameters that allow a circuit to be printed and to operate under desired specifications. The conventional process window, determined through geometrical fidelity, the geometric process window (GPW), does not account for lithography effects on electrical metrics such as delay and power. In contrast to the GPW, this paper introduces the electrical process window (EPW), which accounts for electrical specifications. Process parameters are considered within the EPW if the performance (delay and leakage power) of the printed circuit is within the desired specifications. Our experimental results show that the area of the EPW is 1.5-6x larger than that of the GPW. This implies that even if a layout falls outside geometric tolerance, the electrical performance of the circuit may satisfy the desired specifications. In addition to process window evaluation, we show that the EPW can be enlarged by 10% on average using gate length biasing and Vth push. We also propose approximate methods to evaluate the EPW, which can be used in the absence of any design information. Our results show that the proposed approximation method can estimate more than 80% of the area of the reference EPW.
Line-end pullback is a major source of patterning problems in low-k1 lithography. Lithographers have been well served by geometric metrics such as critical dimension (CD) at a gate edge; however, the ever-rising contribution of line-end extension to layout area necessitates reduced pessimism in qualification of line-end patterning. Electrically aware metrics for line-end extension can be helpful in this regard. The device threshold voltage is, with nominal patterning, a weak function of line-end shapes. However, the electrical impact of line-end shapes can increase with overlay errors, since displaced line-end extensions can be enclosed in the transistor channel, and a nonideal line-end shape will manifest as an additional gate CD variation. We propose a super-ellipse parameterization that enables exploration of a large variety of line-end shapes. Based on a gate capacitance model that includes the fringe capacitance due to the line-end extension, we model line-end-dependent incremental currents Ion and Ioff to reflect the inverse narrow width effect. Finally, we calculate Ion and Ioff considering line-end shapes as well as line-end extension length, and we define a new electrical metric for line-end extension, namely the expected change in Ion or Ioff under a given overlay error distribution. Our model accuracy is within 0.47% and 1.28% for Ion and Ioff, respectively, compared to 3-D TCAD simulation in a typical 45-nm process. Using our proposed electrical metric, we are able to quantify the electrical impact of optical proximity correction, lithography, and design rule parameters, and we can quantify trade-offs between cost and electrical characteristics.
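The super-ellipse family |x/a|^n + |y/b|^n = 1 behind the parameterization can be sampled directly; a small sketch (function name illustrative):

```python
def superellipse_halfwidth(y, a, b, n):
    """Half-width of a super-ellipse |x/a|^n + |y/b|^n = 1 at height y.
    The exponent n parameterizes the taper family: n = 2 gives an
    ellipse, and as n grows the shape approaches an ideal rectangular
    line-end."""
    if abs(y) >= b:
        return 0.0
    return a * (1.0 - abs(y / b) ** n) ** (1.0 / n)
```

Sweeping n (and the semi-axes a, b) generates the large variety of taper shapes whose Ion/Ioff impact the metric above evaluates.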
This paper proposes shift-trim double patterning lithography (ST-DPL), a cost-effective method for achieving 2× pitch relaxation with a single photomask (especially at the polysilicon layer). The mask is re-used for the second exposure by applying a translational mask-shift. Extra printed features are then removed using a non-critical trim exposure. The viability of ST-DPL is demonstrated. The proposed method has many advantages with virtually no area overhead (< 0.3% standard-cell area): (1) it cuts mask cost to nearly half that of standard DPL, (2) it reduces overlay errors between the two patterns and can virtually eliminate them in some process implementations, (3) it alleviates the bimodal problem in double patterning, and (4) it enhances the throughput of first-rate scanners. We implement a small 45nm standard-cell library and small benchmark designs with ST-DPL to illustrate its viability.
This paper compares the range of accepted layouts produced by conventional Shape-Driven Optical Proximity Correction (SDOPC) and Electrically Driven Optical Proximity Correction (EDOPC). In SDOPC, correction proceeds until the printed geometry matches the target layout. In EDOPC, current matching is the primary objective [1-6]. Using electrical objectives results in a smaller fragmentation requirement for an equivalent current accuracy compared with SDOPC. The number of candidate OPC solutions accepted by EDOPC is orders of magnitude higher than for SDOPC, leading to potentially much faster convergence. Moreover, due to its additional flexibility, EDOPC is able to correct several layouts which are not correctable by SDOPC with the same
In double patterning lithography (DPL), overlay error between two patterning steps at the same layer translates into CD variability. Since the CD uniformity budget is very tight, overlay control becomes a tough challenge for DPL. In this paper, we electrically evaluate overlay error for BEOL DPL with the goal of studying the relative effects of different overlay sources and the interactions of overlay control with design parameters. Experimental results show the following: (a) overlay electrical impact is not significant in the case of positive-tone DPL (< 3.4% average capacitance variation) and should be the basis for determining the overlay budget requirement; (b) when considering congestion, overlay electrical impact reduces in positive-tone DPL; (c) Design For Manufacturability (DFM) techniques like wire spreading can have a large effect on overlay electrical impact (a 20% increase of spacing can reduce capacitance variation by 22%); (d) translation overlay has the largest electrical impact compared to other overlay sources; and (e) overlay in the y direction (x for horizontal metallization) has negligible electrical impact and, therefore, the preferred routing direction should be taken into account for overlay sampling and alignment strategies.
A major source of patterning problems in low-k1 lithography is line-end pullback. Though geometric metrics such as CD at gate edge have served as good indicators, the ever-rising contribution of line-end extension to layout area necessitates reducing pessimism in qualifying line-end patterning. Electrically-aware metrics for line-ends can be helpful in this regard. In this work, we calculate the Ion and Ioff impact of line-end taper shapes as well as line-end length. The proposed models are verified using TCAD simulation in a typical 65nm process. We observe that the device threshold voltage is a weak function of line-end pullback, and that the electrical impact of the taper can vary with overlay errors. We apply a non-uniform channel length model in addition to the proposed taper-dependent threshold voltage model to calculate ΔIon and ΔIoff. Finally, the electrical metric for line-end printing is defined as the expected change in Ion or Ioff under a given overlay error distribution. We also propose a super-ellipse form to parameterize taper shapes, and then explore a large variety of taper shapes to characterize electrical impact.
With the increased need for low power applications, designers are being forced to employ circuit optimization methods that make tradeoffs between performance and power. In this paper, we propose a novel transistor-level optimization method. Instead of drawing the transistor channel as a perfect rectangle, this method involves reshaping the channel to create an optimized device that is superior in both delay and leakage to the original device. The method exploits the unequal drive and leakage current distributions across the transistor channel to find an optimal non-rectangular shape for the channel. In this work we apply this technique to circuit-level leakage reduction. By replacing every transistor in a circuit with its optimally shaped counterpart, we achieve 5% savings in leakage on average for a set of benchmark circuits, with no delay penalty. This improvement is achieved without any additional circuit optimization iterations, and is well suited to fit into existing design flows.
With continued aggressive process scaling in the subwavelength lithographic regime, resolution enhancement techniques (RETs) such as optical proximity correction (OPC) are an integral part of the design-to-mask flow. OPC adds complex features to the layout, resulting in mask data volume explosion and increased mask costs. Traditionally, the mask flow has suffered from a lack of design information, such that all features (whether critical or noncritical) are treated equally by RET insertion. We develop a novel minimum cost of correction (MinCorr) methodology to determine the level of correction of each layout feature, such that the prescribed parametric yield is attained with minimum RET cost. This flow is implemented with model-based OPC explicitly driven by timing constraints. We apply a mathematical-programming-based slack budgeting algorithm to determine the OPC level for all polysilicon gate geometries. Designs adopting this methodology achieve up to 20% Manufacturing Electron Beam Exposure System (MEBES) data volume reduction and 39% OPC run-time improvement.
In this work we present a predictive model for the edge placement error (EPE) distribution of devices in standard library cells based on lithography simulations of selective test patterns. Poly-silicon linewidth variation in the sub-100nm technology nodes is a major source of transistor performance variation (e.g., Ion and Ioff) and circuit parametric yield. It has been reported that a significant part of the observed variation is systematically impacted by the neighboring layout pattern within optical proximity. Design optimization should account for this variation in order to maximize the performance and manufacturability of chip designs. We focus our analysis on standard library cells. In the past, EPE characterization was done on simple line array structures; however, real circuit contexts are much more complex. Standard library cells offer a nice balance of usability by the designers and modeling complexity. We first construct a set of canonical test structures to perform lithography simulations using various OPC parameters and under various focus and exposure conditions. We then analyze the simulated printed image and capture the layout-dependent characteristics of the EPE distribution. Subsequently, our model estimates the EPE distribution of library cells based on their layout. In contrast to a straightforward simulation of the library cells themselves, this approach is computationally less expensive. In addition, the model can be used to predict the EPE distribution of any library cell and is not limited to those that are simulated. Also, since the model encapsulates the details of lithography, it is easier for designers to integrate into the design flow.
Focus is one of the major sources of linewidth variation. CD variation caused by defocus is largely systematic after the layout is finished. In particular, dense lines "smile" through focus while isolated lines "frown" in typical Bossung plots. This well-defined systematic behavior of focus-dependent CD variation allows us to develop a self-compensating design methodology.
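To make the compensation idea concrete, here is a toy quadratic Bossung model (the curvature magnitude `A` and the defocus numbers are hypothetical, not fitted to any process): dense and isolated stages have opposite-signed through-focus CD curvature, so a path mixing both variants can cancel the defocus-induced CD shift to first order.

```python
# Toy Bossung model: CD deviation grows quadratically with defocus, with
# opposite sign for dense ("smile") and isolated ("frown") lines.
def bossung_shift(defocus_um, curvature):
    """CD deviation from nominal (nm) at a given defocus (um)."""
    return curvature * defocus_um ** 2

A = 40.0  # hypothetical curvature magnitude in nm/um^2 (illustrative only)

def path_cd_shift(defocus_um, n_dense, n_iso):
    """Summed CD deviation along a path with n_dense dense-pitch stages
    (curvature +A) and n_iso isolated stages (curvature -A)."""
    return (n_dense * bossung_shift(defocus_um, +A)
            + n_iso * bossung_shift(defocus_um, -A))

print(path_cd_shift(0.2, n_dense=4, n_iso=0))  # all-dense path: ~ +6.4 nm
print(path_cd_shift(0.2, n_dense=2, n_iso=2))  # balanced path: 0.0 nm
```

A real flow must decide which cells get iso or dense variants under area, timing and leakage constraints, but the cancellation mechanism is just this sign opposition.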
In this work, we propose a novel design methodology that allows explicit compensation of focus-dependent CD variation, either within a cell (self-compensated cells) or across cells in a critical path (self-compensated design). By creating iso and dense variants for each library cell, we can achieve designs that are more robust to focus variation. Optimization with a mixture of iso and dense cell variants is possible both for area and leakage power, with the latter providing an interesting complement to existing leakage reduction techniques such as dual-Vth. We implement both heuristic and Mixed-Integer Linear Programming (MILP) solution methods to address this optimization, and experimentally compare their results. Our results indicate that designing with a self-compensated cell library incurs ~12% area penalty and ~6% leakage increase over original layouts while compensating for focus-dependent CD variation (i.e., the design meets timing constraints across a large range of focus variation). We observe ~27% area penalty and ~7% leakage increase at the worst-case defocus condition using only single-pitch cells. The area penalty of circuits after using either the heuristic or MILP optimization approach is reduced to ~3% while maintaining timing. We also apply our optimizations to leakage, which traditionally shows very large variability due to its exponential relationship with gate CD. We conclude that a mixed iso/dense library combined with a sensitivity-based optimization approach yields much better area/timing/leakage tradeoffs than using a self-compensated cell library alone. Self-compensated design shows an average of 25% leakage reduction at the worst defocus condition for the benchmark designs that we have studied.
Today's design flows sign off performance and power prior to the application of resolution enhancement techniques (RETs). Together with process variations, RETs can lead to substantial differences between post-layout and on-silicon performance and power. Lithography simulation enables estimation of on-silicon feature sizes at different process conditions. However, current lithography simulation tools are completely shape-based and not connected to the design in any way. This prevents designers from estimating on-silicon performance and power, and consequently most chips are designed for pessimistic worst cases. In this paper we present a novel methodology that uses the result of lithography simulation to estimate the performance and power of a design using standard device- and chip-level analysis tools. The key challenge addressed by our methodology is to transform shapes generated by lithography simulation into a form acceptable to standard analysis tools such that electrical properties are preserved. Our approach is sufficiently fast to be run full-chip on all layers of a large design. We observe that while the difference in power and performance estimates at post-layout and on-silicon is small at ideal process conditions, it increases
substantially at non-ideal process conditions. With our RET recipes, linewidths tend to decrease with defocus for most patterns. According to the proposed analyses of layouts litho-simulated at 100nm defocus, leakage increases by up to 68%, setup time improves by up to 14%, and dynamic power reduces by up to 2%.
Current ORC and LRC tools are purely shape-based and not connected to the design in any way. A wafer-shape-based power and performance signoff is desirable for RET validation as well as for "closest-to-silicon" analysis. The printed images generated by lithography simulation are not restricted to simple rectilinear geometries, and there are other sources of such irregularities, such as line edge roughness (LER). For instance, the silicon image of a transistor may not be the perfect rectangle assumed by all current circuit analysis tools. Existing tools and device models cannot handle complicated non-rectilinear geometries.
In this paper, we present a novel technique to model non-uniform, non-rectilinear gates as equivalent perfect-rectangle gates so that they can be analyzed by SPICE-like circuit analysis tools. The effect of threshold voltage variation along the width of the device is shown to be significant and is modeled accurately. Taking this effect into account, we find the current density at every point along the device and integrate it to obtain the total current. The current thus calculated is used to obtain the effective length of the equivalent rectangular device. We show that this method is much more accurate than previously proposed approaches that neglect the location dependence of the threshold voltage.
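The equivalent-length computation can be sketched numerically. In this minimal version, the Vt roll-off constants and the square-law current are invented stand-ins for the paper's accurately modeled, location-dependent threshold voltage: the printed gate contour is sliced along its width, per-slice saturation currents are summed, and a bisection finds the uniform gate length that reproduces the total current.

```python
def vt_of_length(l_nm, vt0=0.30, k=8.0):
    """Toy threshold-voltage roll-off: shorter local length lowers Vt (V).
    The constants are illustrative, not a calibrated device model."""
    return vt0 - k / l_nm

def slice_current(w_nm, l_nm, vgs=1.0):
    """Square-law saturation current (arbitrary units) of one gate slice."""
    return (w_nm / l_nm) * (vgs - vt_of_length(l_nm)) ** 2

def effective_length(slices, vgs=1.0, tol=1e-6):
    """slices: (width, local length) pairs sampled from the printed contour.
    Returns the uniform length giving the same total current at full width."""
    i_total = sum(slice_current(w, l, vgs) for w, l in slices)
    w_total = sum(w for w, _ in slices)
    lo, hi = 1.0, 1000.0              # bracket L_eff in nm; current is
    while hi - lo > tol:              # monotone decreasing in length
        mid = 0.5 * (lo + hi)
        if slice_current(w_total, mid, vgs) > i_total:
            lo = mid                  # too much current: gate must be longer
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A bowed gate whose local length varies from 85nm to 95nm across its width:
print(round(effective_length([(10.0, 85.0), (10.0, 90.0), (10.0, 95.0)]), 1))
```

Because the Vt roll-off makes the short slices carry disproportionate current, the equivalent length comes out slightly below the 90nm average, which is exactly the effect lost by approaches that ignore the location dependence of the threshold voltage.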
Increasing design complexity in sub-90nm designs results in increased mask complexity and cost. Resolution enhancement techniques (RET) such as assist feature addition, phase shifting (attenuated PSM) and aggressive optical proximity correction (OPC) help preserve feature fidelity in silicon but increase mask complexity and cost. The data volume increase that accompanies rising mask complexity is becoming prohibitive for manufacturing. Mask cost is determined by mask write time and mask inspection time, which are directly related to the complexity of features printed on the mask. Aggressive RETs increase complexity by adding assist features and by modifying existing features. Passing design intent to OPC has been identified in several recent works as a solution for reducing mask complexity and cost. The goal of design-aware OPC is to relax OPC tolerances of layout features to minimize mask cost without sacrificing parametric yield. To convey optimal OPC tolerances for manufacturing, design optimization should drive OPC tolerance optimization using models of mask cost for devices and wires, and should be aware of the impact of OPC correction levels on the mask cost and performance of the design. This work introduces mask cost characterization (MCC), which quantifies OPC complexity, measured in terms of the fracture count of the mask, for different OPC tolerances. MCC with different OPC tolerances is a critical step in linking design and manufacturing. In this paper, we present an MCC methodology that provides models of the fracture count of standard cells and wire patterns for use in design optimization. MCC cannot be performed by designers, as they do not have access to foundry OPC recipes and RET tools. To build a fracture count model, we perform OPC and fracturing on a limited set of standard cells and wire configurations with all tolerance combinations. Separately, we identify the characteristics of the layout that impact fracture count.
Based on the fracture count (FC) data from OPC and mask data preparation runs, we build models of FC as a function of OPC tolerances and layout parameters.
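As a sketch of what such a model looks like in its simplest form, the snippet below fits fracture count against OPC tolerance for one layout class with a closed-form least-squares line. The data points are invented for illustration; the paper's FC samples come from actual OPC and fracturing runs and the full model also includes layout parameters.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical measurements: tighter OPC tolerance (nm) -> more fragments.
tol_nm = [2.0, 4.0, 6.0, 8.0]
frac_count = [410, 300, 235, 180]
slope, intercept = fit_line(tol_nm, frac_count)

def predict_fc(tolerance_nm):
    """Predicted fracture count at an intermediate tolerance value."""
    return slope * tolerance_nm + intercept

print(round(predict_fc(5.0)))  # → 281
```

Design optimization can then trade predicted fracture count (a proxy for mask cost) against the parametric-yield impact of relaxing each feature's tolerance.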
Etch dummy features are used in the mask data preparation flow to reduce critical dimension (CD) skew between resist and etch processes and to improve the printability of layouts. However, etch dummy rules conflict with SRAF (Sub-Resolution Assist Feature) insertion, because each of the two techniques requires specific poly-to-assist, assist-to-assist, active-to-etch-dummy and dummy-to-dummy spacings. In this work, we first present a novel SRAF-aware etch dummy insertion method (SAEDM) that optimizes etch dummy placement so that the layout remains conducive to assist-feature insertion after etch dummy features have been inserted. However, placed standard-cell layouts may not have the ideal whitespace distribution to allow for optimal etch dummy and assist-feature insertion. Since the placement of cells can create forbidden-pitch violations, the placer must generate assist-correct and etch-dummy-correct placements. This can be achieved by intelligent whitespace management in the placer. We describe a novel dynamic programming-based technique for etch-dummy
correctness (EtchCorr) which can be combined with SAEDM in the detailed placement of standard-cell designs. Our algorithm is validated on industrial testcases with respect to wafer printability, database complexity and device performance.
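In spirit, the dynamic program can be sketched as below. The forbidden-spacing range, shift bound and placement grid are illustrative assumptions, and the real cost model is richer (poly-to-assist and active-to-etch-dummy spacings, timing), but the row recurrence is the same: cells keep their order, each may shift within a bounded window, and the best predecessor shift is chosen for every candidate shift of the current cell.

```python
FORBIDDEN = (120, 180)   # spacing range (nm) where neither SRAF nor etch
MAX_SHIFT = 100          # dummy fits -- illustrative numbers, not real rules
STEP = 20                # placement grid in nm

def violation(spacing_nm):
    return 1 if FORBIDDEN[0] <= spacing_nm <= FORBIDDEN[1] else 0

def etchcorr_row(x0, widths):
    """x0: original left edges in cell order; widths: cell widths (nm).
    Returns (min #forbidden spacings, right-shift chosen for each cell)."""
    shifts = range(0, MAX_SHIFT + 1, STEP)
    # dp[s] = (cost, shift list) for the best prefix ending with shift s
    dp = {s: (0, [s]) for s in shifts}
    for i in range(1, len(x0)):
        ndp = {}
        for s in shifts:
            best = None
            for p, (cost, hist) in dp.items():
                gap = (x0[i] + s) - (x0[i - 1] + p + widths[i - 1])
                if gap < 0:          # cells must not overlap
                    continue
                cand = (cost + violation(gap), hist + [s])
                if best is None or cand < best:
                    best = cand
            if best is not None:
                ndp[s] = best
        dp = ndp
    return min(dp.values())

# Two 150nm gaps (both forbidden) are fixed by shifting the middle cell:
cost, shifts = etchcorr_row(x0=[0, 400, 800], widths=[250, 250, 250])
print(cost, shifts)
```

The recurrence runs in O(n·k²) time for n cells and k shift steps per cell, which is why a row-by-row dynamic program stays cheap even on large designs.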
It has been demonstrated that the write time for 50keV E-beam masks is a function of layout complexity, including figure count, vertex count and total line edge. This study aims to improve model fitting by utilizing all the variables generated from CATS. A better correlation of R² = 0.99 was achieved by including quadratic and interaction terms. The vertex model was then applied to estimate the write time of various nano-imprint templates; its accuracy is much better than the numbers generated by the E-beam tool software. A 90nm test layout was treated with a mask optimization (MO) algorithm, and a 26% write time reduction was observed through shot count reduction. The advanced features of the new-generation E-beam writing tool, combined with mask layout optimization, allow the same level of mask cost even though the capital cost of the new tool set increased by 25%.
Depth of focus is the major contributor to lithographic process margin. One of the major causes of focus variation is imperfect planarization of fabrication layers. Presently, OPC (optical proximity correction) methods are oblivious to the predictable nature of focus variation arising from wafer topography. As a result, designers suffer from manufacturing yield loss as well as loss of design quality through unnecessary guardbanding. In this work, we propose a novel flow and method to drive OPC with a topography map of the layout generated by CMP simulation. The wafer topography variations result in local defocus, which we explicitly model in our OPC insertion and verification flows. Our experimental validation uses 90nm foundry libraries and industry-strength OPC and scattering bar recipes. We find that the proposed topography-aware OPC can yield up to 90% reduction in edge placement errors with only a small increase in mask cost.
Today's design-manufacturing interface lacks essential mechanisms to link disparate disciplines and tool sets. In this paper, we describe three specific mechanisms for improving OPC quality via interactions within the design-to-manufacturing flow. Our studies of these improvements have yielded promising results.
The quality of a layout has the most direct impact on the manufacturability of a design. Traditionally, layout quality is ensured in the first order by design rules, i.e., if a layout is free of design rule violations, it is a good layout. It is assumed that such a layout will be fabricated to specification. Moreover, a design-rule-clean layout also ensures the electrical performance of the circuit it represents. There are other layout quality measures: for example, the random-defect yield of a layout is modeled by critical area, while systematic-defect yield is sometimes measured by a weighted score of recommended design rules. All the traditional layout quality measures are computed from drawn layout shapes.
With the advent of low-K1 lithography and the increasing variability of process technologies beyond 90nm, nominal layout quality measures need to be revisited. Traditionally, nominal electrical properties such as L-eff and W-eff are extracted from the drawn layout, and the corner cases are estimated with worst-case process conditions. Most of these parameters are layout-pattern dependent. As a matter of fact, they can be systematic through process and can have a large impact on the modeling of circuit parameters.
In this paper, we investigate a through-process layout quality measure, in which we extract through-process electrical parameters from simulated through-process resist contours. We show a mechanism to compute a statistical model that predicts through-process electrical parameters from process parameter variation, and we demonstrate that such computation is practical.
Sub-resolution assist features (SRAFs) provide an essential technique for critical dimension (CD) control and process window enhancement in subwavelength lithography. As focus levels change during manufacturing, CDs at a given "legal" pitch can fail to achieve the manufacturing tolerances required for adequate yield. Furthermore, the adoption of off-axis illumination (OAI) and SRAF techniques to enhance resolution at minimum pitch worsens printability of patterns at other pitches. Our previous work [Gupta et al.] described a novel dynamic programming-based technique for Assist-Feature Correctness (AFCorr) to account for interactions within a cell row. We now extend the AFCorr methodology to handle vertical interactions of field polys between adjacent cell rows in the detailed placement of standard-cell designs. Pattern bridging between field poly geometries becomes a major cause of yield degradation, even though the CD variation of gates determines circuit performance. In this paper, AFCorr is validated across all possible horizontal (H-) and vertical (V-) interactions of polysilicon geometries in the layout. For benchmark designs, the forbidden pitch count between polysilicon shapes of neighboring cells is reduced by 89%-100% in 130nm and 93%-100% in 90nm. Edge placement error (EPE) count is also reduced by 80%-98% in 130nm and 83%-100% in 90nm. AFCorr facilitates additional SRAF insertion by up to 7.4% for 130nm and 7.9% for 90nm. In addition, AFCorr provides substantial improvement in CD control with negligible timing, area, or CPU overhead. The advantages of AFCorr are expected to increase in future technology nodes.
Today's design-manufacturing interfaces have only minimal information exchange. Lack of information on either side leads to under-performance from excessive guardbanding, and to increased mask cost and turnaround time from over-correction. In this work we present techniques that simultaneously utilize design and manufacturing information to improve mask quality and reduce mask cost.
Tight ACLV control has become increasingly difficult due to the diminishing process constant, K1. Focus variation and pitch variation are two major systematic components of ACLV. In this paper, we demonstrate these systematic effects and propose a design flow which exploits them. We demonstrate the systematic ACLV by showing a Bossung plot for a nominal 90nm technology node; the plot is generated by simulation with lithographic parameters closely resembling a production technology node. Traditionally, tight CD control is achieved by sophisticated RET such as OPC, SRAF, AltPSM and, more recently, Dense Template Design. The CD variation is specified in the design manual, and circuit designs ensure functionality by building in enough margin to account for the variability. Even though the systematic components of CD variation are understood, they have always been considered, together with other
random components as being random. This approach has left design performance on the table. We propose a holistic design flow by integrating the technology development process, design process and the
manufacturing process. This holistic approach aims to tame the systematic through-pitch and through-focus CD variation. We quantify the design timing benefit of this approach through circuit design experiments. Results show that timing uncertainty can be reduced by up to 30%. We also discuss other possibilities that are infeasible to carry out in the traditional approach with silos of technology development, design and manufacturing.
One of the most compute-intensive dataprep operations for the 90nm PC level is model-based optical proximity correction (MBOPC). The running time and output data size are growing unacceptably, particularly for ASICs and designs containing large macros built out of library cells (books). The reason for this growth is that the region of interest for MBOPC is approximately 600nm, which means that most library cells “see” interactions with adjacent books in the same row and also in adjacent rows.
In this paper, we investigate the merits of doing cellwise MBOPC. In its simplest form, the approach is to perform dataprep for each cell once per cell definition rather than once per placement. By inspection, this will reduce the computation time and output data size by a factor of P/D, where P is the number of book placements (100s to millions) and D is the number of book definitions.
Our preliminary finding indicates that there is negligible difference between the nominal CD of cellwise-corrected cells and chipwise-corrected cells. We will present our findings in terms of average CD and contact coverage, as well as runtime reduction.
Chemical-mechanical planarization (CMP) and other manufacturing steps in very deep submicron VLSI have varying effects on device and interconnect features, depending on the local layout density. To improve manufacturability and performance predictability, area fill features are inserted into the layout to improve uniformity with respect to density criteria. However, the performance impact of area fill insertion is not considered by any fill method in the literature. In this paper, we first review and develop estimates for the capacitance and timing overhead of area fill insertion. We then give the first formulation of the Performance Impact Limited Fill (PIL-Fill) problem, and describe three practical solution approaches based on Integer Linear Programming (ILP-I and ILP-II) and the Greedy method. We test our methods on two layout test cases obtained from industry. Compared with the normal fill method, our ILP-II method achieves between 25% and 90% reduction in total weighted edge delay impact (roughly, a measure of the sum of node slacks), while maintaining identical quality of layout density control.
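Of the three approaches, the Greedy method is the simplest to sketch. In the toy version below (slot areas and timing costs are invented; the ILP formulations are not reproduced), each candidate fill geometry in a density window carries an estimated weighted-delay impact, and slots are taken in order of increasing cost per unit area until the window's minimum fill area is met.

```python
def pil_fill_greedy(slots, min_area):
    """slots: (area, timing_cost) candidate fill geometries in one window.
    Returns (sorted indices of chosen slots, total timing cost)."""
    # Cheapest timing impact per unit of fill area first.
    order = sorted(range(len(slots)), key=lambda i: slots[i][1] / slots[i][0])
    chosen, area, cost = [], 0.0, 0.0
    for i in order:
        if area >= min_area:
            break
        chosen.append(i)
        area += slots[i][0]
        cost += slots[i][1]
    if area < min_area:
        raise ValueError("window cannot reach its density target")
    return sorted(chosen), cost

# Four candidate slots: (area in um^2, weighted-delay impact, arbitrary units)
slots = [(4.0, 8.0), (4.0, 1.0), (2.0, 0.2), (6.0, 9.0)]
picked, total_cost = pil_fill_greedy(slots, min_area=6.0)
print(picked, total_cost)  # → [1, 2] 1.2
```

A full PIL-Fill formulation also has to respect density upper bounds and interactions across overlapping windows, which is where the ILP approaches come in.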
As minimum feature sizes continue to shrink, patterned features have become significantly smaller than the wavelength of light used in optical lithography. As a result, the requirement for dimensional variation control, especially in critical dimension (CD) 3σ, has become more stringent. To meet these requirements, resolution
enhancement techniques (RET) such as optical proximity correction (OPC) and phase shift mask (PSM) technology are applied. These approaches result in a substantial increase in mask costs and make the cost of ownership (COO) a key parameter in the comparison of lithography technologies. No concept of function is injected into the mask flow; that is, current OPC techniques are oblivious to the design intent. The entire layout is corrected uniformly with the same effort. We propose a novel minimum cost of correction (MinCorr)
methodology to determine the level of correction for each layout feature such that prescribed parametric yield is attained. We highlight potential solutions to the MinCorr problem and give a simple mapping to traditional performance optimization. We conclude with experimental results showing the RET costs that can be saved
while attaining a desired level of parametric yield.
This course explains how layout and circuit design interact with lithography choices. We especially focus on multi-patterning technologies such as LELE double patterning and SADP. We will explore the role of design in lithography technology development as well as in lithographic process control. We will further discuss design enablement of multi-patterning technologies, especially in the context of cell-based digital designs.
SC1187: Understanding Design-Patterning Interactions for EUV and DSA
EUV lithography and DSA have been accepted by the industry as the most promising candidates for enabling dimensional scaling at the N7 technology node and beyond. This tutorial explains how the introduction of these lithography technologies will impact layout and circuit design. The choice of lithography affects physical design and has a significant impact at the system level. This tutorial will focus on the transition from 193i multi-patterning technologies to EUV lithography and DSA. Factors that will determine the enablement of these technologies will be highlighted and possible solutions shared.
Sub-90nm CMOS technologies are giving rise to significant variation in the physical parameters of VLSI designs, which has an adverse impact on their electrical behavior. Most manufacturing-oriented professionals are familiar with the variations in physical parameters. This course will provide attendees with knowledge of how these physical variations impact circuit operation, i.e., electrical behavior. The impact on timing as well as power will be discussed. We will describe the relative impact of these variations on various circuit families, as well as circuit design techniques to mitigate the impact of manufacturing variations. Due to the large magnitude of these variations, it is clear that designing for worst-case behavior leaves significant performance on the table. We will discuss how systematic variation, if known, can be exploited in the current static timing methodology. A statistical timing and design methodology that can help regain some of this performance will also be discussed. With an eye towards the future, we will also explore manufacturing-aware design closure. The course will be illustrated with practical examples throughout.