With conventional methods, improving optical proximity correction (OPC) runtime and accuracy together is challenging: gains in accuracy often have limited impact or require longer runtimes, while gains in runtime often come at the cost of overall correction quality. The OPC industry has been developing and applying machine-learning (ML) methods to address both issues together, such as the Newron® machine learning family of products, which provides both faster ML-based correction and more accurate resist models. Benchmark testing shows that ML-based correction prediction can yield runtime improvements of 30% or more without sacrificing pattern fidelity, and that an ML resist model can deliver simulation accuracy 15% better than a conventional lithography model. This paper discusses the conversion flow from a baseline OPC recipe to an ML-accelerated recipe and presents results of a study that applies this technique to a sub-5 nm EUV test case, as well as results of a study that leverages an ML resist model to improve OPC accuracy.
In the last two years, the semiconductor industry has recognized the critical importance of verification for optical proximity correction (OPC) and reticle/resolution enhancement technology (RET). Consequently, RET verification usage has increased and improved dramatically. These changes are due to the arrival of new verification tools, new companies, new requirements and new awareness by product groups of the necessity of RET verification. Now, as the 65nm device generation comes into full production and the 45nm generation starts full development, companies have the tools and experience (i.e., long lists of previous errors to avoid) needed to perform a detailed analysis of what is required for 45nm and 65nm RET verification. In previous work we performed a theoretical analysis of OPC and RET verification requirements for the 65nm and 45nm device generations and drew conclusions for the ideal verification strategy. In this paper, we extend that work to include actual observed verification issues and experimental results. We analyze the historical experimental issues with regard to cause, impact and optimum verification detection strategy. The results of this experimental analysis are compared to the theoretical results, with differences and agreement noted. Finally, we use the theoretical and experimental results to propose an optimized RET verification strategy that meets the user requirements of 45nm development and the differing requirements of 65nm volume production.
Despite the complexity of AAPSM patterning using the complementary PSM approach with respect to OPC, mask making, fab logistics, etc., the technique remains a valuable solution for special products that require a low-CD-dispersion printing process. For current and next-generation process technologies (90-65nm ground rules), the most common alternating-mask solution of a single trench etch, with or without undercut, is becoming more difficult to manufacture. Especially challenging is controlling the aspect ratio of quartz-etched trenches as a function of density, so as to assure the correct phase angle and sidewall for dense and isolated structures over all phase-shifted geometries. To solve this problem, a modified mask architecture is proposed, called the Transparent Etch Stop Layer (TESL) phase shift mask. In TESL, a transparent (etch stop) layer is deposited on the quartz substrate, followed by the deposition of a quartz layer whose thickness corresponds to the required phase angle at the exposure wavelength. A chromium layer is then deposited on top. The patterning of this mask is quite similar to the single-trench variant; the difference is that an overetch can now be applied for the phase definition, enabled by the high etch selectivity of quartz to the etch stop material. As a result, the phase depth and sidewall angle should be better controlled for both dense and isolated structures. In this paper we will discuss the results of printing tests performed using TESL masks, especially with respect to the litho process window, and we will compare these with the single-trench undercut approach. Simulation results are presented with respect to shifter sidewall profile and TESL thickness in order to optimize image imbalance. Throughout the study we will correlate simulations and measurements to the after-MBOPC CD values for the shifter structures.
These results will allow us to determine if the TESL AAPSM approach can be a more effective alternative to the single trench undercut approach.
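As background for the phase-depth control discussed in this abstract, the relation between quartz depth and phase angle is the standard thin-film phase relation; the numerical values below are illustrative, not taken from the paper. For a 180° phase shift,

$$ d_{180^\circ} = \frac{\lambda}{2\,(n_{\mathrm{quartz}} - 1)} $$

so for ArF exposure ($\lambda = 193\,\mathrm{nm}$, $n_{\mathrm{quartz}} \approx 1.56$) the target depth is $d \approx 193 / (2 \times 0.56) \approx 172\,\mathrm{nm}$. In the TESL architecture this depth is set by the thickness of the deposited quartz layer rather than by a timed etch, which is why an overetch landing on the etch-stop layer can tighten phase-depth control.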
As lithography and other patterning processes become more complex and more non-linear with each generation, the task of defining physical design rules necessarily increases in complexity as well. The goal of the physical design rules is to define the boundary between the physical layout structures which will yield well and those which will not. This is essentially a rule-based pre-silicon guarantee of layout correctness. However, the rapid increase in design rule complexity has created logistical problems for both the design and process functions. Therefore, similar to the semiconductor industry's transition from rule-based to model-based optical proximity correction (OPC) due to increased patterning complexity, there are evident opportunities for improving physical design restrictions by implementing model-based physical design methods. In this paper we analyze the possible need and applications for model-based physical design restrictions (MBPDR). We first analyze the traditional design rule evolution, development and usage methodologies of semiconductor manufacturers. Next we discuss examples of specific design rule challenges requiring new solution methods in the patterning regime of low-k1 lithography and highly complex RET. We then evaluate possible working strategies for MBPDR in the process development and product design flows, including examples of recent model-based pre-silicon verification techniques. Finally we summarize with a proposed flow and key considerations for MBPDR implementation.
GDSII file size is poorly correlated with the computer runtime and memory required to perform RET processing: small files can occasionally take many hours to process, while large files can run very quickly. The ability to accurately predict resource requirements for RET processing is essential to optimizing RET automation. In this paper, we examine GDSII complexity metrics in an effort to find a method for predicting RET processing resource requirements.
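As an illustrative sketch of the kind of predictor such complexity metrics might feed (the metric names and weights below are hypothetical, not taken from this study), a simple linear model over layout statistics might look like:

```python
# Hypothetical sketch: predict relative RET processing cost as a
# weighted sum of layout complexity metrics. The metrics and weights
# are invented for illustration; real weights would be fit by
# regression against observed runtimes on a benchmark suite.

def complexity_score(metrics, weights):
    """Weighted sum of layout complexity metrics."""
    return sum(weights[k] * metrics[k] for k in weights)

# Example statistics one might extract from a GDSII file.
layout = {
    "polygon_count": 2_500_000,    # total BOUNDARY records
    "vertex_count": 18_000_000,    # total polygon vertices
    "cell_count": 12_000,          # structures in the hierarchy
    "max_hierarchy_depth": 9,      # deepest SREF/AREF nesting
}

# Hypothetical weights (units chosen so the score is dimensionless).
weights = {
    "polygon_count": 1.0e-6,
    "vertex_count": 2.0e-7,
    "cell_count": 5.0e-4,
    "max_hierarchy_depth": 0.5,
}

print(f"predicted relative cost: {complexity_score(layout, weights):.2f}")
# -> predicted relative cost: 16.60
```

Note that such a score deliberately ignores raw file size: a small but deeply hierarchical file can score higher than a large, flat one, which matches the observation that file size alone is a poor predictor.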
The cost of developing and deploying optical proximity correction (OPC) technology has become a non-negligible part of the total lithography cost of ownership (CoO). In this paper, we present our efforts to reduce costs associated with OPC in the development phase for the 90nm node, and production phase for the 130nm node.
The patterning of logic layouts for the 65nm and subsequent device generations will require the implementation of new capabilities in process control, optical proximity correction (OPC), resolution enhancement technique (RET) complexity, and lithography-design interactions. Many of the methods used to implement and verify these complex interactions can be described as design for manufacturability (DFM) techniques. In this paper we review a wide range of existing non-lithographic and lithographic DFM techniques in the semiconductor industry. We also analyze existing product designs for DFM technique implementation potential and propose new design methods for improving lithographic capability.
The OASIS format was designed as a replacement for the GDSII stream format. Previous papers have reported that OASIS files can be 5-20X smaller than comparable GDSII files. This paper examines the storage capabilities of OASIS, as well as its other benefits, in more detail. The primary focus of this study is on OASIS integers, deltas, and point-lists, and on the format's explicit support for rectangles and squares. We also show how the two OASIS integer types and four delta types can be implemented using a single core procedure.
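To illustrate the "single core procedure" idea, the sketch below decodes OASIS-style integers following the published byte layout (unsigned-integers as little-endian base-128 varints with a continuation bit in bit 7; signed-integers reusing the same core with the sign carried in bit 0). This is a simplified sketch, not the paper's implementation; the delta types are likewise thin wrappers over the signed decoder.

```python
# Sketch of OASIS integer decoding. One core routine (read_unsigned)
# serves both integer types; read_signed is a thin wrapper, and the
# four delta types build on read_signed in the same way.

def read_unsigned(data, pos):
    """Core procedure: decode an OASIS unsigned-integer at `pos`.
    Each byte holds 7 value bits; bit 7 set means more bytes follow.
    Returns (value, new_pos)."""
    value, shift = 0, 0
    while True:
        byte = data[pos]
        pos += 1
        value |= (byte & 0x7F) << shift
        shift += 7
        if not (byte & 0x80):   # continuation bit clear: last byte
            return value, pos

def read_signed(data, pos):
    """OASIS signed-integer: magnitude shifted left one bit,
    sign flag stored in bit 0 of the first byte."""
    raw, pos = read_unsigned(data, pos)
    magnitude = raw >> 1
    return (-magnitude if raw & 1 else magnitude), pos

# 300 = 0b1_0010_1100 -> encoded as bytes 0xAC 0x02
value, _ = read_unsigned(bytes([0xAC, 0x02]), 0)
print(value)          # 300
print(read_signed(bytes([0x0B]), 0)[0])  # -5  (raw 11 = 5<<1 | sign)
```

Because every numeric field in the format reduces to these two calls, a reader needs only one carefully optimized inner loop, which is one source of the compactness and simplicity the abstract highlights.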
The past few years have seen an explosion in the application of software techniques to improve lithographic printing. Techniques such as optical proximity correction (OPC) and phase shift masks (PSM) increase resolution and CD control by distorting the mask pattern data from the original designed pattern. These software techniques are becoming increasingly complicated and non-intuitive, and the rate of complexity increase appears to be accelerating. The benefit of these techniques in improving CD control and lowering cost of ownership (COO) must be balanced against the effort required to implement them and the additional problems they create.
One severe problem for users of immature and complex software tools and methodologies is quality control, as it ultimately becomes a COO problem. Software quality can be defined very simply as the ability of an application to meet detailed customer requirements. Software quality practice can be defined as adherence to proven methods for planning, developing, testing and maintaining software. Although software quality for lithographic resolution enhancement is extremely important, the understanding and recognition of good software development practices among lithographers is generally poor. We therefore start by reviewing the essential terms and concepts of software quality that impact lithography and COO. We then propose methods by which semiconductor process and design engineers can estimate and compare the quality of the software tools and vendors they are evaluating or using. We include examples from advanced process technology resolution enhancement work that highlight the need for high-quality software practices, and show how to avoid many problems. Note that, although several of the authors have worked in software application development, our analysis here is essentially a black-box analysis: the black box is the software development organization of an RET software supplier, and our access to the actual developers within these organizations is very limited. Insofar as we comment on the internal workings of these development organizations, we rely on our interactions with the applications engineers and other technical specialists who provide our interface to them.
In recent years, mask data preparation (MDP) has been complicated by a number of factors, including the introduction of resolution enhancement technologies such as optical proximity correction (OPC) and phase shift masks. These complications have not only led to significant increases in file sizes and computer runtimes, but have also created an urgent need for data management tools: MDP automation. Current practices rely on point solutions to specific problems, such as OPC; use outdated, proprietary, non-standard, informal or inefficient data formats; and just barely manage portions of the data flow via low-level scripting. Without automation, MDP requires human intervention, which leads to longer cycle times and more errors. Without adequate data interchange formats, automation cannot succeed. This paper examines MDP processes and data formats, and suggests opportunities for improvement. Within the context of existing data formats, we examine the effect of inadequate (e.g., proprietary) data formats on MDP flow. We also examine the closest thing to an open, formal, standard data format, GDSII, and suggest improvements and even a replacement based on the Extensible Markup Language (XML).
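To make the XML idea concrete, here is a hypothetical sketch of how a single GDSII BOUNDARY record might be expressed in an XML-based interchange format. The element and attribute names are invented for illustration and do not represent any proposed standard; the point is that such a format is self-describing and manipulable with ordinary tools.

```python
# Hypothetical sketch: expressing a GDSII BOUNDARY record in XML.
# Element/attribute names ("boundary", "layer", "datatype", "xy")
# are invented for illustration, not a proposed standard.

import xml.etree.ElementTree as ET

# A rectangle on layer 10, datatype 0, with an explicit closing vertex
# as in GDSII BOUNDARY records.
boundary = ET.Element("boundary", layer="10", datatype="0")
xy = ET.SubElement(boundary, "xy")
xy.text = "0,0 1000,0 1000,500 0,500 0,0"

print(ET.tostring(boundary, encoding="unicode"))
```

The trade-off, of course, is verbosity: a text encoding like this is far larger than binary GDSII, so a practical XML-based flow would lean on compression or reserve XML for the job-control and metadata layers of MDP rather than bulk geometry.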