The history of dummy fill in semiconductor design goes back many generations of technology
development. Beginning with planarization requirements, fill needs have since expanded across
many wafer manufacturing process steps, spanning lithography, etch, deposition, surface
anneal, and device performance with stress analysis. Modern EDA tools have advanced to
place dummy shapes automatically to meet these new requirements, including the placement of
multi-layer cell constructs and multi-layer analysis during placement. New fill requirements
have affected downstream flows such as extraction and timing analysis, physical verification,
and RET flows. Further enhancements to fill tools and flows are under development to meet the
total DFM needs for the next generations of chips.
As technology processes continue to shrink, standard design rule checking (DRC) has become insufficient to guarantee
design manufacturability. DRCPlus is a powerful technique for capturing yield detractors related to complex 2D
situations [1, 2]. DRCPlus is a pattern-based 2D design rule check, beyond traditional width and space DRC, that can identify
problematic 2D configurations which are difficult to manufacture. This paper describes a new approach for applying
DRCPlus in a router, enabling an automated approach to detecting and fixing known lithography hotspots using an
integrated fast 2D pattern matching engine. A simple pass/no-pass criterion associated with each pattern offers designers
guidance on how to fix these problematic patterns. Since it does not rely on compute-intensive simulations, DRCPlus can
be applied to fairly large design blocks and enforced in conjunction with standard DRC in the early stages of the design
flow. By embedding this capability into the router, 2D yield detractors can be identified and fixed by designers in a
push-button manner without losing design connectivity. More robust designs can be achieved and the impact on
parasitics can be easily assessed.
This paper will describe a flow using a fast 2D pattern matching engine integrated into the router in order to enforce
DRCPlus rules. An integrated approach allows for rapid identification of hotspot patterns and, more importantly, allows
for rapid fixing and verification of these hotspots by a tool that understands design intent and constraints. The overall
flow is illustrated in Figure 1. An inexact search pattern is passed to the integrated pattern matcher. The match locations
are filtered by the router through application of a DRC constraint (typically a recommended rule). Matches that fail this
constraint are automatically fixed by the router, with the modified regions incrementally re-checked to ensure no additional DRCPlus violations are introduced.
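The match-filter-fix-recheck loop above can be sketched as follows. This is a minimal illustration, not the actual router API: the names `Match`, `enforce_drcplus`, and the fixer callback are all hypothetical, and the DRC constraint is reduced to a single recommended spacing value.

```python
# Hypothetical sketch of the DRCPlus-in-router loop: pattern-matcher hits are
# filtered by a DRC constraint (here a recommended minimum spacing), failing
# matches are handed to the router's fixer, and fixed regions are re-checked.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Match:
    x: int           # lower-left corner of the matched region
    y: int
    spacing_nm: int  # spacing measured inside the matched 2D configuration

def enforce_drcplus(matches: List[Match],
                    min_spacing_nm: int,
                    fixer: Callable[[Match], Match]) -> List[Match]:
    """Return the matches that remain in violation after fixing."""
    still_failing = []
    for m in matches:
        if m.spacing_nm >= min_spacing_nm:
            continue  # pass/no-pass criterion: this location is acceptable
        fixed = fixer(m)  # router perturbs wires without breaking connectivity
        if fixed.spacing_nm < min_spacing_nm:
            still_failing.append(fixed)  # incremental re-check finds a residual hotspot
    return still_failing
```

A toy fixer such as `lambda m: Match(m.x, m.y, m.spacing_nm + 30)` (rerouting to widen spacing) drives every fixable match to rule; anything left in the returned list needs manual attention.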
As technology processes continue to shrink and aggressive resolution enhancement technologies (RET) and optical
proximity correction (OPC) are applied, standard design rule constraints (DRC) sometimes fail to fully capture the
concept of design manufacturability. DRC Plus augments standard DRC by applying fast 2D pattern matching to design
layout to identify problematic 2D patterns missed by DRC. DRC Plus offers several advantages over other DFM
techniques: it offers a simple pass/no-pass criterion, it is simple to document as part of the design manual, it does not
require compute-intensive simulations, and it does not require highly accurate lithographic models. These advantages
allow DRC Plus to be inserted early in the design flow, and enforced in conjunction with standard DRC.
The creation of DRC Plus rules, however, remains a challenge. Hotspots derived from lithographic simulation may be
used to create DRC Plus rules, but the process of translating a hotspot into a pattern is a difficult and manual effort. In
this paper, we present an algorithmic methodology to identify hot patterns using lithographic simulation rather than
hotspots. First, a complete set of pattern classes, which covers the entire design space of a sample layout, is computed.
These pattern classes, by construction, can be directly used as DRC Plus rules. Next, the manufacturability of each
pattern class is evaluated as a whole. This results in a quantifiable metric for both design impact and manufacturability,
which can be used to select individual pattern classes as DRC Plus rules. Simulation experiments show that hundreds of
rules can be created using this methodology, which is well beyond what is possible by hand. Selective visual inspection
shows that the algorithmically generated rules are quite reasonable. In addition to producing DRC Plus rules, this
methodology also provides a concrete understanding of design style, design variability, and how they affect manufacturability.
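The two-step methodology (enumerate pattern classes covering the design space, then score each class) can be illustrated with a deliberately simplified sketch. Here the layout is a 0/1 bitmap, a pattern class is a distinct fixed-size window, and the manufacturability score is a density stand-in; the paper evaluates each class with lithographic simulation, and all function names here are illustrative.

```python
# Sketch of pattern-class enumeration and scoring for rule generation.
from collections import Counter
from typing import List, Tuple

def pattern_classes(layout: List[List[int]], w: int) -> Counter:
    """Slide a w x w window over a 0/1 layout bitmap; each distinct window
    content is one pattern class, counted by occurrence (its design impact)."""
    rows, cols = len(layout), len(layout[0])
    classes = Counter()
    for i in range(rows - w + 1):
        for j in range(cols - w + 1):
            window = tuple(tuple(layout[i + di][j + dj] for dj in range(w))
                           for di in range(w))
            classes[window] += 1
    return classes

def manufacturability(window) -> float:
    """Stand-in metric: denser windows are treated as harder to print.
    (The actual methodology scores each class via litho simulation.)"""
    filled = sum(sum(row) for row in window)
    return 1.0 - filled / (len(window) ** 2)

def select_rules(classes: Counter, score_cut: float) -> List[Tuple]:
    """Keep the classes whose score falls below the cut as candidate rules."""
    return [p for p in classes if manufacturability(p) < score_cut]
```

Because every window of the sample layout falls into exactly one class, the selected classes can, by construction, be fed straight back to the pattern matcher as rules.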
With continued aggressive process scaling in the subwavelength lithographic regime, resolution enhancement techniques (RETs) such as optical proximity correction (OPC) are an integral part of the design-to-mask flow. OPC adds complex features to the layout, resulting in mask data volume explosion and increased mask costs. Traditionally, the mask flow has suffered from a lack of design information, such that all features (whether critical or noncritical) are treated equally by RET insertion. We develop a novel minimum cost of correction (MinCorr) methodology to determine the level of correction of each layout feature, such that prescribed parametric yield is attained with minimum RET cost. This flow is implemented with model-based OPC explicitly driven by timing constraints. We apply a mathematical-programming-based slack budgeting algorithm to determine the OPC level for all polysilicon gate geometries. Designs adopting this methodology achieve up to 20% Manufacturing Electron Beam Exposure System (MEBES) data volume reduction and 39% OPC run-time improvement.
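The core trade-off in MinCorr can be sketched with a simplified stand-in: each gate receives reduced correction only if its timing slack can absorb the resulting delay penalty. The paper solves this with mathematical-programming-based slack budgeting; the greedy version below, with hypothetical field names, only illustrates the cost-versus-slack decision.

```python
# Simplified sketch of OPC-level assignment under timing slack (not the
# paper's mathematical program; a greedy stand-in with illustrative names).
from dataclasses import dataclass
from typing import List

@dataclass
class Gate:
    slack_ps: float    # timing slack before any OPC downgrade, in ps
    penalty_ps: float  # delay added if the gate gets reduced correction

def assign_opc(gates: List[Gate]) -> List[str]:
    """Give a gate reduced OPC only if its slack can absorb the penalty."""
    levels = []
    for g in gates:
        if g.slack_ps - g.penalty_ps >= 0.0:
            levels.append("reduced")  # non-critical: save mask data volume
        else:
            levels.append("full")     # critical: keep full model-based OPC
    return levels
```

A proper slack-budgeting formulation additionally shares slack along timing paths, since downgrading several gates on one path consumes the same slack budget.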
Design rule constraints (DRC) are the industry workhorse for constraining design to ensure both physical and electrical
manufacturability. However, as technology processes continue to shrink and aggressive resolution enhancement
technologies (RET) and optical proximity correction (OPC) are applied, standard DRC sometimes fails to fully capture
the concept of design manufacturability. Consequently, some DRC-clean layout designs are found to be difficult to
manufacture. Attempts have been made to "patch up" standard DRC with additional rules to identify these specific
problematic cases. However, due to the lack of specificity in DRC, these efforts often meet with mixed success.
Although such a patch typically resolves the issue at hand, quite often the enforcement of one DRC rule causes other
problematic geometries to be generated, as designers attempt to meet all the constraints given to them. In effect,
designers meet the letter of the law, as defined by the DRC implementation code, without understanding the "spirit of the
rule". This leads to more exceptional cases being added to the DRC manual, further increasing its complexity.
DRC Plus adopts a different approach. It augments standard DRC by applying fast 2D pattern matching to design layout
to identify problematic 2D configurations which are difficult to manufacture. The tool then returns specific feedback to
designers on how to resolve these issues. This basic approach offers several advantages over other DFM techniques: it is
enforceable, it offers a simple pass/no-pass criterion, it is simple to document as part of the design manual, it does not
require compute-intensive simulations, and it does not require highly accurate lithographic models that may not be
available during design. These advantages allow DRC Plus to be inserted early in the design flow, and enforced in
conjunction with standard DRC.
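The fast 2D pattern matching at the heart of this approach can be illustrated in miniature. Production matchers work on polygon edges and support inexact patterns; the exact bitmap scan below is only a sketch of the idea, and `find_pattern` is a hypothetical name.

```python
# Minimal sketch of 2D pattern matching: scan a binary layout grid for
# exact occurrences of a known problematic configuration.
from typing import List, Tuple

def find_pattern(layout: List[List[int]],
                 pattern: List[List[int]]) -> List[Tuple[int, int]]:
    """Return the (row, col) of every exact occurrence of pattern in layout."""
    ph, pw = len(pattern), len(pattern[0])
    hits = []
    for i in range(len(layout) - ph + 1):
        for j in range(len(layout[0]) - pw + 1):
            if all(layout[i + di][j + dj] == pattern[di][dj]
                   for di in range(ph) for dj in range(pw)):
                hits.append((i, j))
    return hits
```

Each hit is a candidate hotspot location; the pass/no-pass criterion is then applied at exactly those locations rather than across the whole layout, which is what keeps the check cheap enough to run alongside standard DRC.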
A methodology for layout verification and optimization based on flexible design rules is provided. This methodology
is based on image-parameter-determined flexible design rules (FDRs), in contrast with restrictive design
rules (RDRs), and enables fine-grained optimization of designs in the yield-performance space. Conventional
design rules are developed based on experimental data obtained from design, fabrication and measurements of a
set of test structures. They are generated at an early stage of process development and used as guidelines for later
IC layouts. These design rules (DRs) serve to guarantee a high functional yield of the fabricated design. Since
small areas are preferred in integrated circuit design for the higher speed and lower cost they bring, most
design rules focus on minimum resolvable dimensions.
For current and upcoming technology nodes (90, 65, 45 nm and beyond) one of the fundamental enablers of Moore's Law is the use of Resolution Enhancement Techniques (RET) in optical lithography. While RETs allow for continuing reduction in integrated circuits’ critical dimensions (CD), layout distortions are introduced as an undesired consequence due to proximity effects. Complex and costly Optical Proximity Correction (OPC) is then deployed to compensate for CD variations and loss of pattern fidelity, in an effort to improve yield. This, together with other sources for CD variations, causes the actual on-silicon chip performance to be quite different from sign-off expectations.
In current design optimization methodologies, process variation modeling, aimed at providing guardbands for performance analysis, is based on "worst-case scenarios" (corner cases) and yields overly pessimistic simulation results, which make meeting design targets unnecessarily difficult. Assumptions of CD distributions in Monte Carlo simulations, and statistical timing analysis in general, can be made more rigorous by considering realistic systematic and random contributions to the overall process variation.
A novel methodology is presented in this paper for extracting residual OPC errors from a placed and routed full chip layout and for deriving actual (i.e., calibrated to silicon) CD values, to be used in timing analysis and speed path characterization. The implementation of this automated flow is achieved through a combination of tagging critical gates, post-OPC layout back-annotation, and selective extraction from the global circuit netlist. This approach improves upon traditional design flow practices where ideal (i.e., drawn) CD values are employed, which leads to poor performance predictability of the as-fabricated design.
With this more accurate timing analysis, we are able to highlight the necessity of a post-OPC verification embedded design flow by showing substantial differences in the silicon-based timing simulations, both in terms of a significant reordering of speed path criticality and a 36.4% increase in worst-case slack. Extensions of this methodology to multi-layer extraction and timing characterization are also proposed. The paper concludes by showing how the methodology implemented in this flow also provides a general design for manufacturability (DFM) tool template. In particular, by passing design intent to process/OPC engineers, selective OPC can be applied to improve CD variation control based on gates' functions such as critical gates and matching transistors. Furthermore, back-annotated process-based data can be used during early stages of circuit design verification and optimization, driving tradeoffs when significant variability is unavoidable.
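The back-annotation step (tag critical gates, then replace their drawn CDs with post-OPC extracted CDs before timing analysis) can be sketched as follows. This is an illustrative stand-in, not the actual flow's implementation; the function names, the linear delay model, and its coefficient are all assumptions.

```python
# Hypothetical sketch of post-OPC CD back-annotation for timing analysis.
from typing import Dict, Set

def back_annotate(drawn_cd_nm: Dict[str, float],
                  extracted_cd_nm: Dict[str, float],
                  tagged: Set[str]) -> Dict[str, float]:
    """Tagged critical gates get silicon-calibrated (post-OPC) CD values;
    all other gates keep their ideal drawn CDs."""
    return {gate: extracted_cd_nm.get(gate, cd) if gate in tagged else cd
            for gate, cd in drawn_cd_nm.items()}

def cd_to_delay(cd_nm: float, k_ps_per_nm: float = 0.8) -> float:
    """Toy first-order model: gate delay scales linearly with channel CD.
    (A real flow would re-run SPICE or STA on the selectively extracted netlist.)"""
    return k_ps_per_nm * cd_nm
```

Re-timing the annotated netlist with calibrated CDs is what exposes the speed-path reordering and slack shifts reported above, since residual OPC errors perturb exactly the gates that sign-off assumed were at drawn dimensions.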
As minimum feature sizes continue to shrink, patterned features have become significantly smaller than the wavelength of light used in optical lithography. As a result, the requirement for dimensional variation control, especially in critical dimension (CD) 3σ, has become more stringent. To meet these requirements, resolution
enhancement techniques (RET) such as optical proximity correction (OPC) and phase shift mask (PSM) technology are applied. These approaches result in a substantial increase in mask costs and make the cost of ownership (COO) a key parameter in the comparison of lithography technologies. No concept of function is injected into the mask flow; that is, current OPC techniques are oblivious to the design intent. The entire layout is corrected uniformly with the same effort. We propose a novel minimum cost of correction (MinCorr)
methodology to determine the level of correction for each layout feature such that prescribed parametric yield is attained. We highlight potential solutions to the MinCorr problem and give a simple mapping to traditional performance optimization. We conclude with experimental results showing the RET costs that can be saved
while attaining a desired level of parametric yield.