A new phase-shift mask (PSM) with high transmittance is developed to overcome patterning process limits in ArF immersion lithography. We optimized the mask structure, materials, and film thicknesses for the patterning process. A new phase-shifter material applied to the HT-PSM exhibits higher transmittance at the ArF wavelength, and its film is thinner than that of the conventional 6% phase-shifter (MoSiON). A new blank structure using a MoSi shading layer with double Cr hardmasks (HM) is developed and proposed for the HT-PSM process. The double-HM blank stack enables the HT-PSM to adopt a thin-PR process for resolution enhancement in the mask process. The first Cr layer, on the MoSi, serves as a HM for etching the MoSi shading layer, as an adhesion layer for the PR process, and as a capping layer protecting the blind area during MoSi and phase-shifter etching. The second Cr layer, between the MoSi and the phase-shifter, acts both as an etch stopper for the MoSi etch and as a HM for etching the phase-shifter. However, the double-HM process has some problems, such as first-Cr loss during the second-Cr etch and complex process steps. To solve the Cr-loss issue, we evaluated various Cr layers with different etch rates and compositions; based on these evaluations, we optimized the thicknesses and compositions of the two Cr layers and the corresponding etching conditions. Lithography simulations demonstrate that the new HT-PSM improves the NILS of the aerial image. As a result, initial wafer-exposure experiments using the HT-PSM show 13-32% improvement in LCDU compared to the conventional 6% PSM, owing to its higher NILS.
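The direction of the NILS improvement can be illustrated with a toy model. The sketch below assumes a 1D line/space pattern under coherent illumination, with only the 0th and ±1st diffraction orders captured by the lens; the model, the pitch, and the evaluation point are illustrative assumptions, not the simulation setup used in this work.

```python
import numpy as np

def nils_for_transmittance(T, pitch=1.0, duty=0.5, n=4096):
    """Toy NILS estimate for a 1D line/space attenuated PSM.

    Coherent, scalar model keeping only the 0th and +/-1st
    diffraction orders; T is the intensity transmittance of the
    180-degree phase shifter.
    """
    x = np.linspace(0.0, pitch, n, endpoint=False)
    # Mask amplitude: +1 in the clear space, -sqrt(T) under the shifter
    m = np.where(x < duty * pitch, 1.0, -np.sqrt(T))
    M = np.fft.fft(m) / n
    # Band-limited image amplitude reconstructed from orders -1, 0, +1
    a = (M[0]
         + M[1] * np.exp(2j * np.pi * x / pitch)
         + M[-1] * np.exp(-2j * np.pi * x / pitch))
    I = np.abs(a) ** 2
    # Image log-slope (ILS) at the nominal feature edge x = duty * pitch
    i_edge = int(np.argmin(np.abs(x - duty * pitch)))
    ils = abs(np.gradient(I, x)[i_edge]) / I[i_edge]
    return ils * duty * pitch  # NILS = ILS * CD
```

In this model a higher-transmittance shifter yields a higher NILS, consistent with the trend reported above, though the absolute numbers carry no physical calibration.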
Sub-resolution assist features (SRAFs) are now the main option for enabling low-k1 photolithography. The technical challenges of the 45nm node, along with the insurmountable difficulties in EUV lithography, have driven semiconductor mask makers into the low-k1 lithography era under the pressure of ever-shrinking feature sizes. Extending lithography toward lower k1 places strong demands on resolution enhancement techniques (RET) and on better exposure tools. However, current mask-making equipment and technologies are facing their limits. In particular, at smaller feature sizes the critical dimension (CD) linearity of both main cell patterns and SRAFs on a mask deviates from the ideal, and each deviates differently: the smaller the CD, the larger the discrepancy becomes.
Several technologies can address this, such as hard-mask processes and negative-resist processes. One of them is assist-feature correction, which can be applied to achieve better CD control: to compensate for the CD-linearity deviation, a new correction algorithm for SRAFs is applied in the data-processing flow. In this paper, we describe the implementation of our study in detail and present results on a full 65nm node with experimental data.
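As a sketch of the correction idea (not the algorithm from this work), one simple form of linearity compensation inverts a measured designed-vs-printed CD curve and biases each feature to the size that prints on target. The linearity table below is invented for illustration.

```python
import numpy as np

# Hypothetical (designed CD, measured CD) pairs in nm from a linearity
# test mask; small features print undersized, and increasingly so.
LINEARITY = [(40, 31), (60, 54), (80, 76), (100, 97), (120, 118)]

def corrected_cd(target_nm):
    """Return the CD to draw on the mask so the printed CD hits target.

    Inverts the measured linearity curve by interpolating the designed
    CD as a function of the measured CD.
    """
    designed, measured = map(np.array, zip(*sorted(LINEARITY)))
    return float(np.interp(target_nm, measured, designed))
```

For example, hitting a 40nm printed CD requires drawing the feature larger than 40nm, since the table says 40nm-drawn features print at only 31nm.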
As the critical dimension (CD) becomes smaller, various resolution enhancement techniques (RET) are widely adopted. In developing sub-100nm devices, the complexity of optical proximity correction (OPC) increases severely, and OPC is extended to non-critical layers. The transformation of designed pattern data by the OPC operation adds complexity, which causes runtime overhead in subsequent steps such as mask data preparation (MDP) and collapses the existing design hierarchy. Therefore, many mask shops exploit distributed computing to reduce MDP runtime rather than exploiting the design hierarchy. Distributed computing uses a cluster of computers connected to a local network. However, two factors limit the benefit of distributed computing in MDP. First, running every MDP job sequentially with the maximum number of available CPUs is inefficient compared to parallel MDP job execution, owing to the characteristics of the input data. Second, the runtime gain per added CPU is insufficient because the scalability of fracturing tools is limited. In this paper, we discuss an optimal load-balancing environment that increases the utilization of a distributed computing system by assigning an appropriate number of CPUs to each input design, and we describe distributed-processing (DP) parameter optimization to obtain maximum throughput in MDP job processing.
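The load-balancing idea can be sketched as a simple allocation heuristic: give each fracturing job CPUs in proportion to its input size, capped at the point where the tool stops scaling. The job sizes, the scalability cap, and the pool size below are illustrative assumptions, not measurements from this work.

```python
def assign_cpus(job_sizes_gb, total_cpus, max_useful_cpus=16):
    """Assign a CPU count to each MDP job in one batch.

    Larger input data gets proportionally more CPUs, but no job gets
    more than max_useful_cpus, beyond which the fracturing tool's
    limited scalability wastes additional CPUs.  CPUs left unassigned
    stay in the pool for the next batch of jobs.
    """
    total_size = sum(job_sizes_gb)
    allocation = []
    for size in job_sizes_gb:
        share = total_cpus * size / total_size
        allocation.append(max(1, min(max_useful_cpus, round(share))))
    return allocation
```

For example, `assign_cpus([2, 8, 30], 32)` caps the 30GB job at 16 CPUs while the two smaller jobs run in parallel on part of the remainder. A production scheduler would also reconcile rounding against the pool size; this sketch only shows the proportional-with-cap idea.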