At the early stages of new technology development, Ground Rule (GR) calculations are performed assuming that design targets are met and that all process variations fall within certain process assumptions. As a technology matures, however, it becomes expensive and time consuming to verify these assumptions for all designs allowed by the rules, leaving a loophole in the GRs whenever the assumptions are not met by a design. The issue is even worse for dense 2D designs such as SRAM, where the design target and wafer image differ substantially because of corners, jogs, line ends, etc., and where process variations are much larger. As a result, SRAM designs almost never follow a standard GR approach but instead use their own unique rules; in fact, the rules differ significantly even between SRAM designs of slightly different style. Process Variation (PV) contours, on the other hand, capture process variations in far greater detail, yet their usage is very limited. One reason is that off-nominal conditions carry larger risk from a rule point of view but occur with lower probability, which creates a dilemma. In this talk we propose a method to incorporate PV contours into GR calculation, in which each PV contour enters a Monte-Carlo calculation weighted by its own probability. We apply the method to SRAM layout optimization as an example. This work was performed at the IBM Semiconductor Research Center, Albany NY 12203
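The probability-weighted use of PV contours can be illustrated with a minimal sketch. All names and numbers below are hypothetical: each simulated PV contour (e.g., a dose/focus corner) is paired with its occurrence probability and a measured spacing, and the Monte-Carlo fail rate is the probability-weighted fraction of contours that violate the rule limit. This is an illustration of the general idea, not the authors' implementation.

```python
import random

# Hypothetical (probability, spacing_nm) pairs, one per simulated PV contour.
# Probabilities come from the process distribution; spacings from lithography
# simulation at each condition. All numbers are illustrative.
pv_contours = [
    (0.60, 24.0),  # nominal condition
    (0.15, 21.5),  # +focus corner
    (0.15, 22.0),  # -focus corner
    (0.05, 19.0),  # +dose corner
    (0.05, 18.5),  # -dose corner
]

SPACING_LIMIT_NM = 20.0  # hypothetical ground-rule minimum spacing

def fail_rate_mc(contours, limit, n_samples=100_000, seed=0):
    """Monte-Carlo estimate: sample contours by probability, count violations."""
    rng = random.Random(seed)
    weights = [p for p, _ in contours]
    fails = 0
    for _ in range(n_samples):
        (_, spacing), = rng.choices(contours, weights=weights, k=1)
        if spacing < limit:
            fails += 1
    return fails / n_samples

print(f"estimated fail rate: {fail_rate_mc(pv_contours, SPACING_LIMIT_NM):.3f}")
```

Weighting each contour by its probability is what resolves the dilemma noted above: high-risk off-nominal contours still enter the calculation, but only in proportion to how often they actually occur.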
Proc. SPIE. 10148, Design-Process-Technology Co-optimization for Manufacturability XI
KEYWORDS: Semiconductors, Lithography, Logic, Detection and tracking algorithms, Data modeling, Manufacturing, Computer simulations, 3D modeling, Monte Carlo methods, Tolerancing, Statistical modeling, Performance modeling, Systems modeling, Process engineering, Design for manufacturability
We demonstrate a tool which can function as an interface between VLSI designers and process-technology engineers
throughout the Design-Technology Co-optimization (DTCO) process. This tool uses a Monte Carlo algorithm on the
output of lithography simulations to model the frequency of fail mechanisms on wafer. Fail mechanisms are defined
according to process integration flow: by Boolean operations and measurements between original and derived shapes.
Another feature of this design rule optimization methodology is the use of a Markov-Chain-based algorithm to perform a
sensitivity analysis, the output of which may be used by process engineers to target key process-induced variabilities for
improvement. This tool is used to analyze multiple Middle-Of-Line fail mechanisms in a 10nm inverter design and identify
key process assumptions that will most strongly affect the yield of the structures. This tool and the underlying algorithm
are also shown to be scalable to arbitrarily complex geometries in three dimensions, a characteristic that is
becoming more important with the introduction of novel patterning technologies and more complex 3-D on-wafer structures.
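The fail-frequency and sensitivity ideas in this abstract can be sketched as follows. This is a simplified, hypothetical illustration: each Process Assumption (PA) is modeled as a Gaussian edge-placement variation, a fail occurs when the combined variation consumes the nominal gap, and sensitivity is estimated here by tightening one PA at a time (a plain finite-difference stand-in, not the Markov-Chain-based algorithm the abstract describes). All names and numbers are assumptions.

```python
import random

NOMINAL_GAP_NM = 10.0  # illustrative nominal gap consumed by variations
# Hypothetical 1-sigma process assumptions, in nm
PAS = {"overlay": 3.0, "cd_contact": 2.0, "cd_gate": 1.5}

def fail_freq(pas, n=50_000, seed=1):
    """Monte-Carlo fail frequency: sample each PA, fail if the gap closes."""
    rng = random.Random(seed)
    fails = 0
    for _ in range(n):
        loss = sum(abs(rng.gauss(0.0, s)) for s in pas.values())
        if loss > NOMINAL_GAP_NM:
            fails += 1
    return fails / n

base = fail_freq(PAS)
for name in PAS:
    tightened = dict(PAS, **{name: PAS[name] * 0.9})  # 10% tighter PA
    print(f"{name}: fail freq {base:.4f} -> {fail_freq(tightened):.4f}")
```

The per-PA comparison plays the role of the sensitivity analysis: the PA whose tightening reduces the fail frequency most is the one process engineers would target for improvement.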
Design rules are created by considering a wafer fail mechanism with the relevant design levels under various design cases, and their values are set to cover the worst-case scenario. Because of this simplification and generalization, design rules hinder, rather than help, dense device scaling; SRAM designs, for example, always need extensive ground rule waivers. Furthermore, dense designs often involve a "design arc", a collection of design rules whose sum equals a critical pitch defined by the technology. Within a design arc, a single rule change can trigger a chain reaction of other rule violations. In this talk we present a methodology using Layout Based Monte-Carlo Simulation (LBMCS) with integrated multiple ground rule checks. We apply this methodology to an SRAM word line contact, and the result is a layout with balanced wafer fail risks based on Process Assumptions (PAs). This work was performed at the IBM Microelectronics Div, Semiconductor Research and Development Center, Hopewell Junction, NY 12533
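The "design arc" constraint and the goal of balanced fail risk can be sketched with a toy two-rule arc. Everything here is an assumption for illustration: two spacing rules share a fixed critical pitch, each rule has its own Gaussian fail mechanism, and we scan the split point to equalize (minimize the worse of) the two normal-tail fail risks. The abstract's LBMCS operates on real layouts with many rules; this only shows the balancing principle.

```python
import math

CRITICAL_PITCH_NM = 40.0     # illustrative: rule_a + rule_b must equal this
SIGMA_A, SIGMA_B = 2.5, 1.8  # assumed process variation per mechanism, nm

def tail_prob(margin, sigma):
    """P(variation > margin) for a zero-mean normal fail mechanism."""
    return 0.5 * math.erfc(margin / (sigma * math.sqrt(2.0)))

# Scan the split point in 0.1 nm steps; pick the split whose worse fail
# risk is smallest -- i.e., the risk-balanced allocation of the arc.
best = min(
    (max(tail_prob(a, SIGMA_A), tail_prob(CRITICAL_PITCH_NM - a, SIGMA_B)), a)
    for a in [x * 0.1 for x in range(1, 400)]
)
worst_risk, rule_a = best
print(f"rule_a = {rule_a:.1f} nm, rule_b = {CRITICAL_PITCH_NM - rule_a:.1f} nm, "
      f"balanced risk = {worst_risk:.2e}")
```

The balanced split allocates margin in proportion to each mechanism's sigma, which is why the noisier mechanism receives the larger share of the pitch.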
Challenges in block levels arising from the dilemma between cost control and under-layer effects have been addressed in several papers, and different approaches to the issue have been proposed. Among the known approaches, developable BARC and under-layer-aware modeling are the most promising. In this paper, however, we discuss and explain the limitations and inefficiency of both methods. In addition, as more block levels employ an etching step, we also discuss the under-layer-dependent etch behavior seen in some block levels. All of these pose great challenges for block level process development. We discuss possible solutions and improvements, including developable BARC (dBARC) thickness optimization for specific under-layers, and simplified model-based corrections for litho and etch. This work was performed at the IBM Microelectronics Div, Semiconductor Research and Development Center, Hopewell Junction, NY 12533
The ability to incorporate the effect of patterned underlayers in a 3-dimensional physical resist model that
truly mimics the process on real wafers could be used to formulate robust ground rules for design. As an
example, we show block level simulations in which the resist critical dimension is determined by the
presence of STI (shallow trench isolation) and/or the patterned gate level underneath and their relative
spacing, as confirmed on wafer. We demonstrate how the results of such a study can be used to create
ground rules that truly depend on the interaction between the current-layer resist and the patterned
layers underneath. We have also developed a new way to visualize lithographic process variations in 3-D
space that can prove very helpful in ground rule development and process optimization. Such a
visualization capability in the dataprep flow, used to flag issues or disposition critical structures,
increases speed and efficiency in the mask tapeout process.
Implant level photolithography processes are becoming more challenging with each node due to ever-decreasing
CD and resist edge placement requirements, and the technical challenge is exacerbated
by the business need to develop and maintain low-cost processes. Optical Proximity Correction
(OPC) using models built from plain silicon substrate data cannot accommodate
the various real device/design scenarios because of substrate pattern effects. In this paper, we present a
systematic study of substrate effects (RX/STI) on implant level lithography CD printing. We also
explain the CD variation mechanism and validate it by simulation using a well-calibrated physical resist
model. Based on the results, we propose an approach to generate substrate-aware OPC rules that
correct for such substrate effects.
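One simple way substrate-aware OPC rules could take shape is as a per-context bias table. The sketch below is purely illustrative, not the paper's method: hypothetical measured CD deltas for different substrate contexts (RX, STI, and an edge region) are converted into mask-edge bias rules, snapped to an assumed mask grid, to be applied on top of the plain-Si OPC correction.

```python
# Hypothetical measured_cd - target_cd, in nm, by substrate context.
# Names and numbers are illustrative assumptions, not wafer data.
cd_delta_by_context = {
    "plain_si": 0.0,
    "over_rx": 1.8,     # resist prints wider over active (RX)
    "over_sti": -1.2,   # resist prints narrower over STI oxide
    "rx_sti_edge": 0.9,
}

def substrate_bias_rules(deltas, quantum=0.5):
    """Convert CD deltas into per-context mask-edge bias rules.

    The bias opposes the measured delta; each edge moves by half the CD
    change, snapped to an assumed mask manufacturing grid (quantum, nm).
    """
    rules = {}
    for ctx, delta in deltas.items():
        edge_bias = -delta / 2.0
        rules[ctx] = round(edge_bias / quantum) * quantum  # snap to grid
    return rules

print(substrate_bias_rules(cd_delta_by_context))
```

A rule table of this kind keeps the correction cheap (in keeping with the low-cost constraint of implant levels) since it avoids a full substrate-aware model-based OPC run.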
To reduce cost, implant levels usually use masks fabricated with older generation mask tools, such
as laser writers, which are known to introduce significant mask errors. In fact, for the same implant
photolithography process, Optical Proximity Correction (OPC) models have to be developed
separately for the negative and positive mask tones to account for the resulting differences from the
mask making process. Ideally, however, a single physical resist model should predict the resist
performance under both mask polarities. In this study, we show
our attempt to de-convolute mask error from the Correct Positive (CP) and Correct Negative (CN)
tone CD data collected from bare Si wafer and derive a single resist model. Moreover, we also
present the predictability of this resist model over a patterned substrate by comparing simulated
CD/profiles against wafer data of various features.
In this paper, we report large scale three-dimensional photoresist model calibration and validation
results for critical layer models that span 32 nm, 28 nm and 22 nm technology nodes. Although
methods for calibrating physical photoresist models have been reported previously, we are unaware
of any that leverage data sets typically used for building empirical mask shape correction models.
A method to calibrate and verify physical resist models that uses contour model calibration data sets
in conjunction with scanning electron microscope profiles and atomic force microscope profiles is
discussed. In addition, we explore ways in which three-dimensional physical resist models can be
used to complement and extend pattern hot-spot detection in a mask shape validation flow.
The aggressive pursuit of ever-decreasing minimum feature sizes in modern integrated circuits has led to various challenges in nanofabrication. Finer feature sizes are highly desirable in microelectronics and other applications for higher performance; however, critical dimensions at the sub-wavelength scale are difficult to achieve with traditional optical lithography techniques because of the optical diffraction limit. We developed several techniques to overcome this diffraction limit while simultaneously achieving massive, parallel patterning. One method exploits optical near-field enhancement between spheres and the substrate under laser irradiation to obtain nano-features; nonlinear absorption of the enhanced optical field between the spheres and the substrate is believed to be the primary mechanism of feature creation. We also utilized near-field enhancement around nanoridges and nanotips under pulsed laser irradiation to produce nanoscale line or dot patterns on gold thin films deposited on glass substrates. We demonstrated that photolithography can be extended to sub-wavelength resolution for patterning any substrate by exciting surface plasmons on both a metallic mask and a shield layer covering the substrate. Finally, we used a laser-assisted photothermal imprinting method to directly nanopattern a carbon nanofiber-reinforced polyethylene nanocomposite.