As the critical dimensions (CDs) of etch profiles continue to decrease, precise control of plasma etch processing becomes increasingly important. Achieving this control requires optimizing etch recipes, which is time-consuming and expensive, as an extensive number of experiments must be performed. Here we present a method for predicting process windows that achieve target CDs for high aspect ratio trenches using model-based experimental design. A reduced-order model of the physics and chemistry of the etch is used to identify the best experiments to perform to calibrate the model. The calibrated model is then used to efficiently explore the process parameter space and identify the largest ranges of process parameters that achieve the desired ranges of CDs. The methodology is demonstrated in practice on a three-step trench etch through three layers of material: spin-on-glass, spin-on-carbon, and silicon. It is found that this physics-model-based method requires less than half as many experiments to identify the optimal etch recipe as full-factorial design of experiments.
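The process-window search described above can be sketched in a few lines: given a calibrated model mapping a process parameter to a predicted CD, scan a grid and report the widest contiguous parameter interval whose CDs stay within spec. The linear surrogate model and all numbers below are hypothetical stand-ins, not the paper's reduced-order model.

```python
import numpy as np

def widest_window(param_grid, cd_model, cd_min, cd_max):
    """Return the widest contiguous parameter interval whose predicted
    CDs stay inside [cd_min, cd_max]."""
    ok = [cd_min <= cd_model(p) <= cd_max for p in param_grid]
    best, best_width, start = (None, None), -1.0, None
    for i, flag in enumerate(ok + [False]):   # sentinel closes the last run
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            width = param_grid[i - 1] - param_grid[start]
            if width > best_width:
                best_width, best = width, (param_grid[start], param_grid[i - 1])
            start = None
    return best

# Hypothetical surrogate: CD (nm) grows linearly with etch time (s).
cd = lambda t: 20.0 + 0.5 * t
window = widest_window(np.linspace(0.0, 100.0, 201), cd, 30.0, 40.0)
```

For this toy surrogate, CDs of 30–40 nm correspond to etch times of 20–40 s, so the returned window is (20.0, 40.0). A real search would scan several parameters jointly, but the run-length logic is the same per axis.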
We present a model-based experimental design methodology for accelerating 3D etch optimization, demonstrated on 3D NAND structures. The design and optimization of etch recipes for such 3D structures face significant challenges, requiring costly and time-consuming experiments to achieve the required tolerances. 3D NAND memory devices additionally require accurate nanofabrication of high aspect ratio trenches and isolation slits, which are challenging to manufacture reliably within specifications. Our model efficiently captures the relevant physical and chemical processes, which allows it to be calibrated using a limited number of experimental samples, and it can reproduce realistic 3D etches of multilayer materials, including bowing, necking, and tapering. Since our GPU-powered simulations run in a matter of minutes, the relevant process parameter space can be explored extensively in a short amount of time. The calibrated physics-based model can be used to train adaptive machine-learning-based heuristics that enable near-instant queries, for example for data visualization and analytics. With this approach, we show a rapid methodology for locating optimal windows in the process parameter space for etching 3D structures. Optimality metrics under consideration include both conformance to specified tolerances and robustness against process parameter variations. These techniques can reduce cost and time to market for complex multilayer three-dimensional device designs and improve semiconductor device yields.
Uniformity of critical dimensions (CDs) across a wafer is an increasing challenge as both CDs and tolerances shrink. Plasma etch uniformity is achieved in part through reactor design and in part through the operating conditions or process recipe of the reactor. The identification of a recipe for a specific etch process is time-consuming and expensive, requiring extensive experiments and metrology. Here we present two modules in SandBox Studio™, SB-Bayesian and SB-NeuralNet, to accelerate the prediction and optimization of etch recipes for across-wafer uniformity. A model of etch rates across the wafer is created that accounts for injector locations, gas flow rates and distribution, and plasma powers. Synthetic experiments on etching line-space patterns on 300 mm wafers are performed, and the CDs and their variations are computed at several hundred site locations. SB-Bayesian requires far fewer experiments for calibration and achieves an excellent qualitative match with the experimental data. SB-NeuralNet achieves levels of accuracy comparable to SB-Bayesian at predicting average CDs and uniformity, but it does not perform as well at predicting trends across the wafer. It is shown that neural nets require a prohibitive amount of experimental data to successfully predict wafer patterns. SB-Bayesian and SB-NeuralNet were used to create detailed process maps across the parameter space of interest to identify optimal recipes that achieve the required CDs and tolerances. Both modules can predict optimal recipe conditions for achieving identified target CD and uniformity metrics. Using these tools, etch recipes for across-wafer uniformity are rapidly optimized at lower cost.
A methodology is presented to virtually predict etch profiles on flexible substrates across multi-dimensional process spaces using a minimal number of calibration experiments. Simulations and predictions of the physics and chemical kinetics of plasma etch on flexible substrates are performed using the commercial software SandBox Studio™. The evolution of a trench profile is computed using surface kinetics models and the level set method. Local etch rates include visibility effects, to account for partial shielding of the etch as the pattern develops, and the effects of redeposition. The results of the experiments are then used to update the calibrated model parameters. If the process objectives (e.g., sidewall angle, trench critical dimensions, and across-web uniformity) are not achieved, then a new set of experiments is suggested by the methodology. The process is repeated until the optimal process conditions are identified. The methodology is validated by experiments on etching line-space patterns of polysilicon films on polymer substrates. Results with reactive ion etching with either CF4 or HBr are shown, and the optimal etch recipes (power, etch time, and gas flow rates) are determined. It is found that this coupled simulation-experiment approach is much more efficient than full factorial experimental design at predicting process outcomes. The methodology presented requires 66% fewer experiments, reducing the cost of development by a factor of three.
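The level set idea used above can be illustrated in one dimension: a signed field phi tracks the material interface, and the front advances at the local etch rate by solving phi_t + v |phi_x| = 0 with an upwind difference. This is a minimal sketch with a constant etch rate and no visibility or redeposition terms; the grid, rate, and time step are hypothetical.

```python
import numpy as np

# Minimal 1D level-set etch sketch (illustrative only, not the paper's solver):
# phi's zero contour is the etch front, moving toward larger depth at rate v.
nx, dx = 200, 1.0                 # grid points, spacing (nm)
v, dt = 2.0, 0.25                 # etch rate (nm/s), CFL-safe time step (s)
x = np.arange(nx) * dx            # depth coordinate
phi = x - 50.0                    # front initially at 50 nm depth

for _ in range(100):              # advance 25 s of etching
    # backward difference is the upwind choice for a front moving in +x
    dphi = np.empty_like(phi)
    dphi[1:] = (phi[1:] - phi[:-1]) / dx
    dphi[0] = dphi[1]
    phi = phi - v * dt * np.abs(dphi)

# recover the front position from the zero crossing of phi
depth = x[np.argmin(np.abs(phi))]
```

With v = 2 nm/s for 25 s the front moves 50 nm, from 50 nm to 100 nm depth. In 2D or 3D the same update runs on a grid field, with v varying point-to-point from the surface kinetics and visibility models.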
A two-dimensional, cellular automata model for atomic layer etching (ALE) is presented and used to predict the etch rate and the evolution of the roughness of various surfaces as a function of the efficiencies, or probabilities, of the adsorption and removal steps in the ALE process. The atoms of the material to be etched are initially placed in a two-dimensional array several layers thick. The etch follows the two-step process of ALE. First, the initial reaction step (e.g., Cl reacting with Si) is assumed to occur at 100% efficiency, activating the exposed surface atoms; that is, all exposed atoms react with the etching gas. The second reaction step (e.g., Ar ion bombardment or sputtering) occurs with efficiencies that are assumed to vary depending on the exposure of the surface atoms relative to their neighbors and on the strength of bombardment. For sufficiently high bombardment or sputtering, atoms below the activated surface atoms can also be removed, which gives etch rates greater than one layer per ALE cycle. The bounds on the efficiencies of the second removal step are extracted from experimental measurements and fully detailed molecular dynamics simulations from the literature. A trade-off is observed between etch rate and surface roughness as the Ar ion bombardment is increased.
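The two-step cycle described above can be sketched as a small lattice simulation. This simplified version removes only topmost atoms (it omits the sub-surface removal that yields rates above one layer per cycle), and the lattice size and removal probability are hypothetical.

```python
import random
random.seed(0)

W, H = 50, 20                           # lattice width and depth (atoms)
lattice = [[1] * W for _ in range(H)]   # 1 = atom present, 0 = removed

def top_surface(col):
    """Row index of the exposed atom in a column (None if the column is empty)."""
    for r in range(H):
        if lattice[r][col]:
            return r
    return None

def ale_cycle(p_remove):
    """One ALE cycle: the activation step is taken as 100% efficient, so every
    exposed surface atom is a removal candidate; the bombardment step then
    removes each candidate with probability p_remove."""
    for c in range(W):
        r = top_surface(c)
        if r is not None and random.random() < p_remove:
            lattice[r][c] = 0

n_cycles = 10
for _ in range(n_cycles):
    ale_cycle(0.8)

etched = W * H - sum(map(sum, lattice))
rate_per_cycle = etched / (n_cycles * W)   # layers removed per cycle
```

With a removal probability of 0.8, the mean etch rate converges to about 0.8 layers per cycle, while the column-to-column scatter in removal events is what generates surface roughness in the full model.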
Predicting the etch and deposition profiles created using plasma processes is challenging due to the complexity of plasma discharges and plasma-surface interactions. Volume-averaged global models allow for efficient prediction of important processing parameters and provide a means to quickly determine the effect of a variety of process inputs on the plasma discharge. However, global models are limited by the simplifying assumptions used to describe the chemical reaction network. Here a database of 128 reactions is compiled, with their corresponding rate constants collected from 24 sources, for an Ar/CF4 plasma using the platform RODEo (Recipe Optimization for Deposition and Etching). Six different reaction sets, employing anywhere from 12 to all 128 reactions, were tested to evaluate the impact of the reaction database on particle species densities and electron temperature. Because many of the reactions used in our database had conflicting rate constants as reported in the literature, we also present a method to deal with those uncertainties when constructing the model, which includes weighting each reaction rate and filtering outliers. By analyzing the link between a reaction's rate constant and its impact on the predicted plasma densities and electron temperatures, we determine the conditions at which a reaction is deemed necessary to the plasma model. The results of this study provide a foundation for determining the minimal set of reactions that must be included in the reaction set of the plasma model.
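One simple way to reconcile conflicting literature rate constants, in the spirit of the weighting-and-filtering step above, is a median-absolute-deviation outlier screen followed by averaging the survivors. This is a generic stand-in, not RODEo's actual scheme, and the reported values below are hypothetical.

```python
import statistics

def consensus_rate(rates, n_mads=3.0):
    """Combine conflicting literature rate constants: discard values more
    than n_mads median-absolute-deviations from the median, then average
    the survivors."""
    med = statistics.median(rates)
    mad = statistics.median([abs(r - med) for r in rates])
    if mad == 0:
        return med
    kept = [r for r in rates if abs(r - med) <= n_mads * mad]
    return sum(kept) / len(kept)

# Hypothetical reported values for one reaction (cm^3/s); 9.0e-9 is an outlier.
reported = [1.1e-10, 1.0e-10, 1.3e-10, 9.0e-9, 1.2e-10]
k = consensus_rate(reported)
```

Here the screen drops the value two orders of magnitude above the cluster and returns the mean of the remaining four, 1.15e-10 cm^3/s. The MAD is preferred over the standard deviation because the outlier itself would inflate a standard-deviation threshold.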
The design and optimization of highly nonlinear and complex processes like plasma etching is challenging and time-consuming. Significant effort has been devoted to creating plasma profile simulators to facilitate the development of etch recipes. Nevertheless, these simulators are often difficult to use in practice due to the large number of unknown parameters in the plasma discharge and surface kinetics of the etch material, the dependency of the etch rate on the evolving front profile, and the disparate length scales of the system. Here, we expand on the development of a previously published, data-informed, Bayesian approach embodied in the platform RODEo (Recipe Optimization for Deposition and Etching). RODEo is used to predict etch rates and etch profiles over a range of powers, pressures, gas flow rates, and gas mixing ratios of a CF4/Ar gas chemistry. Three examples are shown: (1) etch rate predictions of an unknown material "X" using simulated experiments for a CF4/Ar chemistry, (2) etch rate predictions of SiO2 in a Plasma-Therm 790 RIE reactor for a CF4/Ar chemistry, and (3) profile prediction using level set methods.
Next generation semiconductor technologies like high density memory storage require precise 2D and 3D nanopatterns. Plasma etching processes are essential to achieving the nanoscale precision required for these structures. Current plasma process development methods rely primarily on iterative trial and error or factorial design of experiment (DOE) to define the plasma process space. Here we evaluate the efficacy of the software tool Recipe Optimization for Deposition and Etching (RODEo) against standard industry methods at determining the process parameters of a high density O2 plasma system with three case studies. In the first case study, we demonstrate that RODEo is able to predict etch rates more accurately than a regression model based on a full factorial design while using 40% fewer experiments. In the second case study, we demonstrate that RODEo performs significantly better than a full factorial DOE at identifying optimal process conditions to maximize anisotropy. In the third case study we experimentally show how RODEo maximizes etch rates while using half the experiments of a full factorial DOE method. With enhanced process predictions and more accurate maps of the process space, RODEo reduces the number of experiments required to develop and optimize plasma processes.
A fast and inexpensive scheme for etch rate prediction using flexible continuum models and Bayesian statistics is demonstrated. Bulk etch rates of MgO are predicted using a steady-state model with volume-averaged plasma parameters and classical Langmuir surface kinetics. Plasma particle and surface kinetics are modeled within a global plasma framework using single-component Metropolis-Hastings methods and limited data. The accuracy of these predictions is evaluated with synthetic and experimental etch rate data for magnesium oxide in an ICP-RIE system. This approach is compared with, and found superior to, factorial models generated from JMP, a software package frequently employed for recipe creation and optimization.
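The calibration step in this Bayesian workflow can be sketched with a single-component Metropolis-Hastings random walk fitting one rate parameter to noisy etch-rate data. The linear rate model, the noise level, and all numbers are hypothetical placeholders for the global-model likelihood.

```python
import math
import random
random.seed(1)

# Synthetic etch-rate data: rate = k * power + noise (hypothetical model).
k_true, sigma = 0.05, 0.5
powers = [100, 200, 300, 400]
data = [k_true * p + random.gauss(0, sigma) for p in powers]

def log_post(k):
    """Gaussian log-likelihood with a flat prior on k > 0."""
    if k <= 0:
        return -math.inf
    return -sum((d - k * p) ** 2 for d, p in zip(data, powers)) / (2 * sigma**2)

# Single-component Metropolis-Hastings random walk over k.
k, samples = 0.01, []
for _ in range(5000):
    prop = k + random.gauss(0, 0.005)
    # accept with probability min(1, posterior ratio)
    if math.log(random.random()) < log_post(prop) - log_post(k):
        k = prop
    samples.append(k)

k_est = sum(samples[1000:]) / len(samples[1000:])   # posterior mean after burn-in
```

The chain starts well away from the truth, drifts to the high-posterior region during burn-in, and the posterior mean recovers k close to 0.05. With several parameters, "single-component" means updating one parameter at a time with this same accept/reject rule.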
Polymer shrinkage from curing in nanoimprint lithography (NIL) strongly affects the ultimate shapes of two- and three-dimensional structures produced following etching. We computationally study the curing step in the NIL process and predict the shape changes caused by polymer shrinkage. The shape changes are predicted for crosses, diamonds with sharp and rounded tips, and multitiered structures that are applicable for multibit memory devices and dual damascene processes. The shape changes from curing are shown to be governed by the shrinkage coefficient of the polymer resist, its Poisson’s ratio, and the geometric aspect ratios of the shapes. Finite element simulations demonstrate that shape change due to polymer densification is equal to the average volumetric contraction of the resist material, but shrinkage is not isotropic and vertical displacement dominates. The thickness of the residual layer does not impact the final profile of the imprinted shapes considered. Further analysis shows that diamonds with sharp tips stay sharp while the tips of rounded diamonds get sharper. Additionally, shape changes for multitiered structures are not uniformly distributed among the tiers. Using etch simulations, we demonstrate the significant impact of polymer shrinkage on the final feature profile.
Nanosculpting, the fabrication of two- and three-dimensional shapes at the nanoscale, enables applications in
photonics, metamaterials, multi-bit magnetic memory, and bio-nanoparticles. A promising high resolution and high
throughput method for nanosculpting is nanoimprint lithography (NIL). A key requirement to achieving
manufacturing viability of nanosculptures in NIL is maintaining image fidelity through each step of the imprinting
process. In particular, polymer densification during UV curing can distort the imprinted image. Here we study the
shape changes introduced by polymer densification and develop a forward method for predicting changes in
nanoscale geometries from UV curing. We show that shape changes by polymer densification are governed by the
Poisson’s ratio, the shrinkage coefficient of the polymer resist, and the geometric aspect ratios of the nanosculpted
shape. We also show that the size of the residual layer does not impact the final profile of the imprinted shape.
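A back-of-the-envelope version of the densification effect can be computed from elasticity. The formula below is the standard constrained-film result, used here as an assumption for illustration rather than taken from the paper's finite element analysis: a resist layer bonded to a rigid substrate cannot shrink laterally, so an isotropic stress-free linear strain e0 appears as an amplified vertical strain e0*(1+nu)/(1-nu).

```python
def constrained_vertical_strain(e0, nu):
    """Vertical strain of a laterally constrained film whose stress-free
    (unconstrained) isotropic linear shrinkage strain is e0; nu is
    Poisson's ratio. Standard thin-film elasticity estimate."""
    return e0 * (1 + nu) / (1 - nu)

vol_shrink = 0.09            # 9% volumetric shrinkage (hypothetical resist)
e0 = vol_shrink / 3          # equivalent isotropic linear strain, small-strain limit
ez = constrained_vertical_strain(e0, nu=0.3)
```

For nu = 0.3 the vertical strain (about 5.6%) is nearly double the free linear strain (3%), consistent with the observation above that shrinkage is not isotropic and vertical displacement dominates.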
A lattice-type Monte Carlo–based mesoscale model and simulation of the lithography process have been adapted to study the insoluble particle generation that arises from statistically improbable events. These events occur when there is a connected pathway of soluble material that envelops a volume of insoluble material due to fluctuations in the deprotection profile. The simulation shows that development erodes the insoluble material into the developer stream and produces a cavity on the line edge that can be far larger than a single polymer molecule. The insoluble particles can coalesce to form aggregates that deposit on the wafer surface. The effect of the resist formulation, exposure, postexposure bake, and development variables on particle generation was analyzed in both low- and high-frequency domains. It is suggested that different mechanisms are dominant for the formation of line-edge roughness (LER) at different frequencies. The simulations were used to assess the commonly proposed measures to reduce LER such as the use of low molecular weight polymers, addition of quenchers, varying acid diffusion length, etc. The simulation can be used to help set process variables to minimize the extent of particle generation and LER.
A lattice-type Monte Carlo based mesoscale model and simulation of the lithography process has been described
previously. The model includes the spin coating, post apply bake, exposure, post exposure bake and development
steps. This simulation has been adapted to study the insoluble particle generation that arises from statistically
improbable events. These events occur when there is a connected pathway of soluble material that envelops a volume of
insoluble material due to fluctuations in the deprotection profile that occur during the post exposure bake.
Development erodes the insoluble material into the developer stream as an insoluble particle. This process may produce
a cavity on the line edge that can be far larger than a single polymer molecule. The insoluble particles generated may
coalesce in developer to form large aggregates of insoluble material that ultimately deposit on the wafer surface and the
tooling. The recent modifications made in mesoscale models for the PEB and dissolution steps, which have enabled this
study are briefly described. An algorithm that was used for particle detection in the current study is also discussed. The
effect of the resist formulation and the different lithographic steps, namely, exposure, post exposure bake and
development, on the extent of particle generation is analyzed. These simulations can be used to set process variables to
minimize the extent of particle generation.
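A particle-detection pass of the kind referred to above can be sketched as a connected-component search on the deprotection lattice: flood-fill the insoluble material from the bulk resist, and any insoluble cluster left unreached has been enveloped by soluble material and would be eroded into the developer stream as a particle. The 2D grid and its geometry are hypothetical.

```python
from collections import deque

# 1 = insoluble (protected) cell, 0 = soluble cell; bottom row is bulk resist.
grid = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 0],   # detached insoluble cluster -> one particle
    [0, 0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0, 0],
    [1, 1, 1, 1, 1, 1],
]

def count_particles(grid):
    """Count insoluble clusters not connected to the bulk (bottom row)."""
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    # flood-fill the bulk from the bottom row (4-connectivity)
    q = deque((h - 1, c) for c in range(w) if grid[h - 1][c])
    for r, c in q:
        seen[r][c] = True
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] and not seen[nr][nc]:
                seen[nr][nc] = True
                q.append((nr, nc))
    # every remaining insoluble cluster is a detached particle
    particles = 0
    for r in range(h):
        for c in range(w):
            if grid[r][c] and not seen[r][c]:
                particles += 1
                stack = [(r, c)]
                seen[r][c] = True
                while stack:
                    rr, cc = stack.pop()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = rr + dr, cc + dc
                        if 0 <= nr < h and 0 <= nc < w and grid[nr][nc] and not seen[nr][nc]:
                            seen[nr][nc] = True
                            stack.append((nr, nc))
    return particles

n_particles = count_particles(grid)
```

In the lattice model the same search runs in 3D after each dissolution step; the size of each detached cluster gives the particle volume, which can greatly exceed a single polymer molecule.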
The current scale of feature sizes in the microelectronics industry has reached the point where molecular level
interactions affect process fidelity and produce excursions from the continuum world like line edge roughness (LER).
Here we present a 3D molecular level model based on the adaptation of the critical ionization (CI) theory using a
fundamental interaction energy approach. The model asserts that it is the favorable interaction between the ionized part
of the polymer and the developer solution which renders the polymer soluble. Dynamic Monte Carlo methods were used
in the current model to study the polymer dissolution phenomenon. The surface ionization was captured by employing an
electric double layer at the interface, and polymer motion was simulated using the Metropolis algorithm. The
approximated interaction parameters, for different species in the system, were obtained experimentally and used to
calibrate the simulated dissolution rate response to polymer molecular weight and developer concentration. The
predicted response is in good agreement with experimental dissolution rate data. The simulation results support the
premise of the CI theory and provide insight into the CI model from a new perspective. This model may provide a
means to study the contribution of development to LER and other related defects based on molecular level interactions
between distinct components in the polymer and the developer.
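The core solubility criterion of critical ionization theory can be sketched directly: a chain dissolves once the fraction of its ionized (deprotected) monomers reaches a critical fraction, so dissolution statistics follow from per-monomer ionization probabilities. The chain length, ionization probability, and critical fraction below are hypothetical, and this omits the double-layer and Metropolis dynamics of the full model.

```python
import random
random.seed(2)

def is_soluble(chain, f_crit=0.6):
    """Critical-ionization test: a chain dissolves when the fraction of
    ionized monomers (1s) reaches the critical fraction f_crit."""
    return sum(chain) / len(chain) >= f_crit

# Hypothetical ensemble: 30-mer chains, per-monomer ionization probability 0.7.
chains = [[int(random.random() < 0.7) for _ in range(30)] for _ in range(1000)]
dissolved_fraction = sum(is_soluble(c) for c in chains) / len(chains)
```

Even with a mean ionization (0.7) above the critical fraction (0.6), a noticeable minority of short chains fall below threshold by chance; that statistical spread is one route by which molecular weight influences dissolution rate and, ultimately, LER.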
In order to quickly and cheaply test candidate fluids and coatings for immersion lithography, we have devised a fluid handling scheme that we call drag-a-drop. We have constructed a prototype tool to test materials using this fluid scheme and conducted several experiments with it. From these tests, we have determined that a hydrophobic topcoat with low contact angle hysteresis on the substrate increases the maximum stable scanning velocity by at least a factor of 2 over a standard 193 nm photoresist. We observed that instabilities on the receding contact line are unaffected by the lens height, but instability on the advancing contact line sets in when the lens height is low. We also examined the drag-a-drop technique for possible use in laser mask writing, and found that by means of a hydrophobic topcoat, the lens can be completely removed from the substrate while keeping the immersion droplet affixed to the lens.
Step and Flash Imprint Lithography (SFIL) is a photolithography process in which the photoresist is dispensed onto the wafer in its liquid monomer form and then imprinted and cured into a desired pattern instead of being patterned using traditional optical systems. The mask used in the SFIL process is a template of the desired features that is made using electron beam writing. Several variable-sized drops of monomer are dispensed onto the wafer for imprinting. The base layer thickness at the end of the imprinting process is typically about 50 nm, with an approximate imprint area of one square inch. This disparity in length scales allows simulation of the fluid movement through the template-wafer channel by solving governing fluid equations that are simplified by lubrication theory. Capillary forces are also an important factor governing fluid movement; a dimensionless number known as the capillary number is used to describe these forces. This paper presents a simulation to model the flow and coalescence of the multiple fluid drops and the effect the number of drops dispensed has on final imprint time. The imprint time is shown to decrease with the use of increasing numbers of drops or with the use of an applied force on the template. Appropriate filling of features in the template is an important issue in SFIL, so a mechanism for handling the interface movement into features using a modified boundary condition is outlined and examples are given. Fluid spreading outside of the mask edge is also an issue that is addressed by results from this study. The simulation is thus a useful predictive tool providing insight on the effect multiple drop configurations and applied force have on imprint time, as well as providing a means for predicting feature filling.
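The benefit of dispensing many drops can be estimated from the classical Stefan squeeze-film result of lubrication theory, used here as a hedged back-of-the-envelope estimate rather than the paper's full multi-drop simulation: for a circular drop under constant load, the time to thin from gap h0 to hf is t = (3*pi*mu*R^4)/(4F) * (1/hf^2 - 1/h0^2). All numbers below are hypothetical.

```python
import math

def imprint_time(mu, R, F, h0, hf):
    """Stefan squeeze-film time for one circular drop: viscosity mu (Pa s),
    drop radius R (m), load F (N), thinning from gap h0 to hf (m).
    Illustrative lubrication-theory estimate only."""
    return 3 * math.pi * mu * R**4 / (4 * F) * (1 / hf**2 - 1 / h0**2)

# One drop vs. the same total area split into N drops, each carrying F/N
# over radius R/sqrt(N) (hypothetical numbers).
mu, R, F, h0, hf = 0.05, 0.0127, 10.0, 1e-6, 50e-9
t1 = imprint_time(mu, R, F, h0, hf)
N = 16
tN = imprint_time(mu, R / math.sqrt(N), F / N, h0, hf)
speedup = t1 / tN
```

Because the time scales as R^4/F, splitting the fluid into N drops reduces the fill time by a factor of N in this estimate (16x here), which is consistent with the abstract's finding that imprint time decreases with more drops or a larger applied force.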