There are different approaches to alignment sampling optimization. To determine which approach is optimal, OPAL run-to-run simulations<sup>1</sup> must be executed using the results of the different sampling optimizations. This leads to a two-step approach: first, an iterative sampling optimization algorithm produces an optimal overlay model; then, a run-to-run simulation is performed to verify the impact on overlay performance.<p> </p>In this study, we investigate the behavior of four different approaches to alignment sampling optimization on four different layers and analyze which approach is most suitable for which layer.
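The two-step flow described above can be sketched in code. This is a minimal illustration under stated assumptions: the greedy backward-elimination heuristic, the linear alignment model, and the EWMA feedback loop are our own simplifications, not the OPAL implementation or the actual sampling algorithms compared in the study.

```python
import numpy as np

def model_residual(xy, values, sample):
    """Fit a simple linear alignment model on the sampled marks and
    return the RMS residual evaluated over ALL candidate marks."""
    A = np.column_stack([np.ones(len(xy)), xy[:, 0], xy[:, 1]])
    coef, *_ = np.linalg.lstsq(A[sample], values[sample], rcond=None)
    return float(np.sqrt(np.mean((values - A @ coef) ** 2)))

def greedy_sampling(xy, values, n_keep):
    """Step 1 (sketch): backward elimination -- repeatedly drop the mark
    whose removal degrades the full-wafer model residual the least."""
    keep = list(range(len(xy)))
    while len(keep) > n_keep:
        best = min(keep, key=lambda m: model_residual(
            xy, values, [i for i in keep if i != m]))
        keep.remove(best)
    return keep

def r2r_simulation(lot_errors, lam=0.7):
    """Step 2 (sketch): EWMA run-to-run feedback -- each lot is exposed
    with the current correction; the measured residual updates it."""
    corr, residuals = 0.0, []
    for e in lot_errors:
        residuals.append(e - corr)
        corr += lam * residuals[-1]
    return residuals
```

In this sketch, the sampling scheme selected in step 1 would feed the per-lot model residuals into the run-to-run loop of step 2, mirroring the optimize-then-verify sequence of the study.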
It has previously been shown that higher order intra-field alignment data modeling and correction has the potential to improve overlay performance by correcting reticle heating and lens heating effects intra-wafer and wafer-to-wafer.<sup>1</sup> However, challenges were also identified that need further investigation. As the alignment measurement is done on a coordinate system with absolute positions, the modeled iHOPC values might be high. A suitable method needs to be developed to distinguish between tool-to-tool offsets, process influences and layer-to-layer tool stack effects. In this paper we take the next step and evaluate the overlay improvement potential of using intra-field alignment data in an overlay feed-forward simulation. An overlay run-to-run simulation is then performed to estimate the optimization potential. To simulate higher order intra-field overlay, dense alignment data is needed. Facing the challenge of reducing the number of measured marks without losing relevant information, an intra-field alignment mark sampling optimization is done to find the best compromise between throughput and overlay accuracy.
In leading-edge lithography, overlay is usually controlled by feedback based on measurements of overlay targets located between the dies. These measurements are done directly after developing the wafer. However, it is well known that the measurement on the overlay marks does not always represent the actual device overlay correctly. This can be due to several factors, including mask writing errors, target-to-device differences and non-litho processing effects, for instance from the etch process.<sup>1</sup><p> </p>To quantify these differences, overlay measurements are regularly done after the final etch process. These post-etch overlay measurements can be performed using the same overlay targets used in post-litho overlay measurement, or different targets. Alternatively, they can be in-device measurements using electron beam measurement tools (for instance CD-SEM). The difference between the standard post-litho measurement and the post-etch measurement is known as the litho-etch overlay bias.<p> </p>This study focuses on the feasibility of run-to-run (R2R) feedback based on post-etch overlay measurement instead of post-lithography R2R feedback correction. It is known that post-litho processes have strong non-linear influences on the in-device overlay signature and, hence, on the final overlay budget. A post-etch based R2R correction is able to mitigate such influences.<sup>2</sup><p> </p>This paper addresses several questions and challenges related to post-etch overlay measurement with respect to R2R feedback control. The behavior of the overlay targets in the scribe line is compared to the overlay behavior of device structures. The influence of different measurement methodologies (optical image-based overlay vs. electron microscope overlay measurement) is evaluated, and scribe-line standard overlay targets are also measured with the electron microscope. In addition, the influence of the intra-field location of the targets on device-to-target shifts is evaluated.
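The litho-etch overlay bias is a per-target difference, which can be illustrated in a few lines. The function names and the mean/3σ summary convention below are our own assumptions for the sketch, not the study's metrology software.

```python
import numpy as np

def litho_etch_bias(post_litho, post_etch):
    """Per-target litho-etch overlay bias: the (dx, dy) difference between
    the post-etch and the standard post-litho overlay measurement."""
    return np.asarray(post_etch, float) - np.asarray(post_litho, float)

def summarize(bias):
    """Common overlay summary statistics per axis: mean and 3*sigma."""
    return bias.mean(axis=0), 3.0 * bias.std(axis=0, ddof=1)
```

For example, with post-litho and post-etch measurements in nanometers as `(n_targets, 2)` arrays, `summarize(litho_etch_bias(pl, pe))` returns the per-axis mean shift and 3σ spread of the bias across targets.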
Advanced processing methods like multiple patterning necessitate improved intra-layer uniformity and balancing monitoring for overlay and CD. To achieve those requirements without major throughput impact, a new advanced measurement mark is introduced. Based on an optical measurement, this mark delivers CD and overlay results for a specified layer at once. In the experiments conducted in the front-end-of-line (FEOL) process area, a mark selection is done and the measurement capability of this mark design is verified. The gathered results are used to determine lithography-to-etch biases and intra-wafer signatures for CD and overlay. Furthermore, possible use cases such as dose correction recipe creation and process signature monitoring are discussed.
Before each wafer exposure, the photolithography scanner's alignment system measures alignment marks to correct for placement errors and wafer deformation. To minimize the throughput impact, the number of alignment measurements is limited. Usually, wafer alignment does not correct for intra-field effects; however, after calibration of lens and reticle heating, residual heating effects remain. A set of wafers is exposed with special reticles containing many alignment marks, enabling intra-field alignment. Reticles with a dense alignment layout have been used, with different predefined intra-field biases. In addition, overlay simulations are performed with dedicated higher order intra-field overlay models to compensate for wafer-to-wafer and across-wafer heating.
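A generic polynomial least-squares fit can serve as a stand-in for such a higher order intra-field overlay model. This is a sketch under stated assumptions: the actual scanner correction models are proprietary, and the third-order monomial basis used here is our own choice for illustration.

```python
import numpy as np

def poly_terms(x, y, order=3):
    """Design matrix of all monomials x**i * y**j with i + j <= order."""
    cols = [x**i * y**j
            for i in range(order + 1)
            for j in range(order + 1 - i)]
    return np.column_stack(cols)

def fit_intrafield(fx, fy, dx, order=3):
    """Least-squares fit of an intra-field displacement component dx on
    field coordinates (fx, fy); returns coefficients and residuals."""
    A = poly_terms(fx, fy, order)
    coef, *_ = np.linalg.lstsq(A, dx, rcond=None)
    return coef, dx - A @ coef
```

Fitting such a model per wafer (or per lot) would give the wafer-to-wafer and across-wafer heating signatures mentioned above as trends in the fitted coefficients, while the residuals indicate what the model cannot correct.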
Monitoring the long-term performance of projection optics in lithographic exposure systems is becoming increasingly important, especially at 193 nm wavelength. Various effects influence the quality and long-term stability of a projection lens system. Using the well-known and established blazed phase-grating method, it is possible to identify lens degradation before it becomes a significant detractor in a manufacturing process. A two-beam interferometer formed by a blazed grating reticle is used to measure the aberration values. This works for all DUV tools and therefore allows a comparison of tools from different suppliers. The test can be run after regular preventive maintenance or as a daily monitor check, in order to evaluate lens aberrations over time. By storing the results, a tool-specific database can easily be built. In this paper, we show aberration data over time and the possibility to increase tool performance and stability.