Multi-patterning lithography for future technology nodes in logic and memory is driving the allowed on-product overlay error in a DUV and EUV matched-machine operation down to values of 2 nm and below. The ASML ORION alignment sensor provides an effective way to deal with process impact on alignment marks. In addition, optimized higher-order wafer alignment models combined with overlay-metrology-based feedforward correction schemes are deployed to control the process-induced overlay variability from wafer to wafer and lot to lot. Furthermore, machine-learning algorithms based on hybrid metrology inputs strengthen the control capabilities for high-volume manufacturing. The increase in the number of process layers in semiconductor devices results in an increase in the control complexity of the total overlay and alignment control strategy. This complexity requires a holistic solution approach that addresses total overlay optimization from process design to process setup and process control in high-volume manufacturing. We find the optimum combination of feedforward and feedback control by having feedback deal with the constant and predictable parts of overlay and having scanner wafer alignment cover the wafer-to-wafer variable part of overlay. In this paper we present investigation results using more wavelengths for wafer alignment and show the benefits in wavelength selection and recipe optimization. We investigate the wafer-to-wafer variable content of two experimental cases and show that a sample scheme of about 60 marks is well capable of estimating the model parameters describing the grid. Finally, we show initial results of using level sensor metrology data as a hybrid input to the derivation of the exposure grid.
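As an illustration of the sampling argument, the sketch below is a minimal example of our own (not the production alignment model): it fits a third-order polynomial wafer-grid model to roughly 60 simulated mark displacements with an ordinary least-squares solve. With about ten parameters per direction, a 60-mark scheme leaves ample averaging against mark-level noise; the coordinates, noise levels, and function names are illustrative assumptions.

# Minimal sketch (assumed third-order polynomial grid model, synthetic data):
# fit per-direction model parameters from ~60 alignment marks, then evaluate
# the fitted grid anywhere on the wafer.
import numpy as np

def poly_basis(x, y, order=3):
    # Monomials x**i * y**j with i + j <= order (10 terms for order 3).
    return np.column_stack([x**i * y**j
                            for i in range(order + 1)
                            for j in range(order + 1 - i)])

def fit_grid_model(xm, ym, dx, dy, order=3):
    A = poly_basis(xm, ym, order)
    cx, *_ = np.linalg.lstsq(A, dx, rcond=None)   # x-displacement parameters
    cy, *_ = np.linalg.lstsq(A, dy, rcond=None)   # y-displacement parameters
    return cx, cy

def evaluate_grid_model(cx, cy, x, y, order=3):
    A = poly_basis(x, y, order)
    return A @ cx, A @ cy

# ~60 marks spread over a 300 mm wafer (coordinates in mm, displacements in nm).
rng = np.random.default_rng(0)
xm, ym = rng.uniform(-140, 140, 60), rng.uniform(-140, 140, 60)
dx = 1e-3 * xm + rng.normal(0, 0.3, 60)
dy = -5e-4 * ym + rng.normal(0, 0.3, 60)
cx, cy = fit_grid_model(xm, ym, dx, dy)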
For the past several years there has been a push in the industry to drive innovation by pairing different types of metrology to keep up with the challenging requirements of overlay, focus, and CD in multi-patterning processes. Holistic metrology is an example of this: instead of using a single metrology method, we pair the various available metrology methods to enrich the overall information content. With advancements in deep learning algorithms we can better utilize existing infrastructure to extract information from metrology pairings that has traditionally gone unused, resulting in a cost-effective solution. In computational alignment metrology we pair leveling data with alignment and wafer quality data to generate a dense alignment vector map. In the first step, wafer leveling metrology from the lithographic apparatus is deconvolved into individual contributors. Selecting the deconvolved signatures with the greatest influence on alignment metrology, we train our dense input metrology against our targeted alignment metrology using a deep feedforward network. With the trained weights and biases of the deep feedforward network and input from a new lot of wafers, we can then compute a dense alignment vector map. When a third-order HOWA model is fit to the original 32 marks and again to the dense estimate derived from the same 32 marks paired with leveling, the model fit to the dense estimate outperforms the HOWA fit to the original 32 marks alone. Finally, by fitting an advanced alignment model that optimizes spatial frequency content between our enhanced alignment and the corresponding overlay metrology, we can realize additional performance improvements in wafer-to-wafer overlay.
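A minimal sketch of the training and inference steps just described, using a generic multilayer perceptron (scikit-learn's MLPRegressor standing in for the deep feedforward network; the feature construction, array shapes, and hidden-layer sizes are our own assumptions): leveling-derived features at the 32 mark locations are regressed against the measured alignment vectors, after which the trained network is evaluated at densely sampled locations where leveling features exist but alignment marks do not.

# Illustrative only: generic MLP in place of the deep feedforward network;
# features and shapes are assumptions, with synthetic data for runnability.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Training set: leveling-derived signatures at the 32 mark locations
# (32 x n_features) and the measured alignment vectors (dx, dy) per mark.
X_marks = rng.normal(size=(32, 8))
y_marks = rng.normal(scale=0.5, size=(32, 2))

net = MLPRegressor(hidden_layer_sizes=(32, 32), activation="relu",
                   max_iter=5000, random_state=0)
net.fit(X_marks, y_marks)

# Inference: leveling features are available densely across the wafer,
# so the trained weights and biases yield a dense alignment vector map.
X_dense = rng.normal(size=(5000, 8))
dense_alignment_map = net.predict(X_dense)   # (5000, 2) estimated (dx, dy)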
All wafers moving through a microchip nanofabrication process pass through a lithographic apparatus for most, if not all, layers. With a lithographic apparatus providing a massive amount of data per wafer, this paper outlines how physics-based models can be used to refine UVLS (ultraviolet level sensor) metrology into four unique inputs for use in a deep learning network. Owing to the multi-dimensional cross-correlation of our deep learning network, we then show that training to a sparse overlay layout with dense inputs results in a hyper-dense overlay signature. On a testing dataset blind to the training, we show that the predictive computational overlay metrology can capture an R² of up to 0.81 of the signature in overlay Y. As a real-world application, we outline how our predictive computational overlay metrology can be used to designate which wafer combinations coming from the TWINSCAN system should have overlay measured with a YieldStar system for possible use with APC (advanced process control).
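A short sketch of how such a predictor might be scored and used for dispositioning (the statistic and the number of flagged wafers are our own assumptions, not criteria from the study): the coefficient of determination is computed on wafers withheld from training, and the wafer combinations whose predicted maps deviate most from the lot mean are nominated for YieldStar measurement.

# Illustrative scoring/dispositioning sketch; names and the flag count
# are assumptions.
import numpy as np
from sklearn.metrics import r2_score

def score_and_flag(pred_dy, meas_dy, n_flag=3):
    # pred_dy, meas_dy: (n_wafers, n_points) overlay-Y maps in nm.
    r2 = r2_score(meas_dy.ravel(), pred_dy.ravel())
    # Rank wafers by how far their predicted map sits from the lot mean map.
    lot_mean = pred_dy.mean(axis=0)
    deviation = np.sqrt(((pred_dy - lot_mean) ** 2).mean(axis=1))
    flagged = np.argsort(deviation)[-n_flag:]   # candidates for YieldStar measurement
    return r2, flagged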
With photolithography as the fundamental patterning step in the modern nanofabrication process, every wafer within a semiconductor fab passes through a lithographic apparatus multiple times. With more than 20,000 sensors producing more than 700 GB of data per day across multiple subsystems, the combination of a light source and a lithographic apparatus provides a massive amount of information for data analytics. This paper outlines how adaptive analytics, i.e. data analysis tools and techniques that extend insight into data traditionally considered unmanageably large, can be used to detect small process-dependent wafer-to-wafer changes in overlay from data collected before the wafer is exposed.
Multi-patterning lithography at the 10-nm and 7-nm nodes is driving the allowed overlay error down to extremely low values. Advanced high-order overlay correction schemes are needed to control the process variability. Additionally, the increase in the number of split layers results in an exponential increase in the metrology complexity of the total overlay and alignment tree. At the same time, the process stack includes more hard-mask steps and becomes more and more complex, with the consequence that the setup and verification of the overlay metrology recipe become more critical. All of the above require a holistic approach that addresses total overlay optimization from process design to process setup and control in volume manufacturing. In this paper we will present the holistic overlay control flow designed for the 10-nm and 7-nm nodes and illustrate the achievable ultimate overlay performance for a logic and a DRAM use case. As Figure 1 illustrates, we will explain the details of the steps in the holistic flow. Overlay accuracy is the driver for target design and metrology tool optimization, such as wavelength and polarization. We will show that it is essential to include processing effects like etching and CMP, which can result in a physical asymmetry of the bottom grating of diffraction-based overlay targets. We will introduce a new method to create a reference overlay map, based on metrology data acquired at multiple wavelengths and polarization settings. A similar approach is developed for the wafer alignment step. The overlay fingerprint correction using linear or high-order correction per exposure (CPE) has a large number of parameters; it is critical to balance the metrology noise against the correction model and the related metrology sampling scheme. A similar approach is needed for the wafer alignment step. Both for overlay control and for alignment we have developed methods that make efficient use of the metrology time available for metrology integrated in the litho-cluster. These methods include a novel set of models that efficiently describe different process fingerprints. We will explain the methods and show the benefits for logic and DRAM use cases.
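To make the parameter-count concern concrete, the following sketch (a generic illustration, not the models developed in the paper) fits a standard six-parameter linear intrafield model per exposure field; since a wafer carries on the order of a hundred fields, the total number of CPE parameters grows quickly and has to be balanced against metrology noise and the sampling scheme.

# Generic per-field linear correction fit (illustrative; sign conventions
# and coordinate units are assumptions).
import numpy as np

def fit_field_linear(xf, yf, dx, dy):
    # xf, yf: intrafield mark coordinates in one field [mm];
    # dx, dy: measured overlay at those marks [nm].
    ones = np.ones_like(xf)
    Ax = np.column_stack([ones, xf, yf])   # dx = tx + mx*xf + rx*yf
    Ay = np.column_stack([ones, xf, yf])   # dy = ty + ry*xf + my*yf
    px, *_ = np.linalg.lstsq(Ax, dx, rcond=None)
    py, *_ = np.linalg.lstsq(Ay, dy, rcond=None)
    return px, py   # 6 parameters for this field

# With ~100 fields per wafer this is already ~600 parameters, so each field
# needs enough measured marks to average out metrology noise.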
While semiconductor manufacturing moves toward the 7-nm node for logic and the 15-nm node for memory, an increased emphasis has been placed on reducing the influence that known contributors have on the on-product overlay budget. With a machine learning technique known as function approximation, we use a neural network to gain insight into how known contributors, such as those collected with scanner metrology, influence the on-product overlay budget. The result is a sufficiently trained function that can approximate overlay for all wafers exposed with the lithography system. As a real-world application, inline metrology can be used to measure overlay for a few wafers while the trained function approximates overlay vector maps for the entire lot of wafers. With approximated overlay vector maps for all wafers coming off the track, a process engineer can redirect wafers or lots with overlay signatures outside the standard population to offline metrology for excursion validation. With this added flexibility, engineers get more opportunities to catch wafers that need to be reworked, resulting in improved yield. The quality of the corrections derived from measured overlay metrology feedback can be improved by using the approximated overlay to trigger which wafers should or should not be measured inline. Development and integration engineers can use the approximated overlay to gain insight into lots and wafers used for design-of-experiments (DOE) troubleshooting. In this paper we will present the results of a case study that follows the machine learning function approximation approach to data analysis, with production overlay measured on an inline metrology system at SK hynix.
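As a sketch of the excursion-screening step (our own simplified illustration; the per-wafer statistic and population limit are assumptions rather than the criteria used in the case study): a |mean| + 3σ value is computed from each approximated overlay vector map, and wafers that fall outside the lot population are routed to metrology for validation.

# Illustrative excursion screen on approximated overlay maps; the statistic
# and threshold are assumptions.
import numpy as np

def wafer_m3s(overlay_map):
    # overlay_map: (n_points, 2) approximated (dx, dy) in nm.
    mag = np.linalg.norm(overlay_map, axis=1)
    return abs(mag.mean()) + 3.0 * mag.std()

def flag_excursions(lot_maps, n_sigma=3.0):
    # lot_maps: list of per-wafer approximated overlay maps for one lot.
    stats = np.array([wafer_m3s(m) for m in lot_maps])
    limit = stats.mean() + n_sigma * stats.std()
    return np.where(stats > limit)[0]   # wafer indices to send to offline metrology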
While semiconductor manufacturing is moving towards the 14-nm node using immersion lithography, the overlay requirements are tightened to below 5 nm. Next to improvements in the immersion scanner platform, enhancements in overlay optimization and process control are needed to enable these low overlay numbers. Whereas conventional overlay control methods address wafer and lot variation autonomously with pre-exposure wafer alignment metrology and post-exposure overlay metrology, we see a need to reduce these variations by correlating more of the TWINSCAN system's sensor data directly to the post-exposure YieldStar metrology in time. In this paper we will present the results of a study on applying a real-time control algorithm based on machine learning technology. Machine learning methods use context and TWINSCAN system sensor data, paired with post-exposure YieldStar metrology, to recognize generic behavior and train the control system to anticipate it. Specific to this study, the data comprise immersion scanner context, sensor data, and on-wafer measured overlay data. By making the link between the scanner data and the wafer data we are able to establish a real-time relationship. The result is an inline controller that accounts for small changes in scanner hardware performance over time while picking up subtle lot-to-lot and wafer-to-wafer deviations introduced by wafer processing.
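A minimal sketch of the pairing-in-time idea (generic online regression, not the actual control algorithm; the feature set, target, and model choice are assumptions): each time a lot's YieldStar measurements become available they are paired with the scanner sensor and context data for that lot and used to update the model incrementally, so that the prediction applied to the next lot tracks slow scanner drift as well as lot-to-lot processing deviations.

# Generic online-learning sketch; not the production controller.
import numpy as np
from sklearn.linear_model import SGDRegressor

model = SGDRegressor(learning_rate="constant", eta0=1e-3, random_state=0)

def on_new_lot(scanner_features, measured_param):
    # scanner_features: (n_wafers, n_features) sensor/context data for the lot;
    # measured_param: (n_wafers,) one overlay-model parameter from YieldStar.
    model.partial_fit(scanner_features, measured_param)   # incremental update

def predict_for_lot(scanner_features):
    # Requires at least one prior on_new_lot() call so the model is fitted.
    return model.predict(scanner_features)                # feedforward estimate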
As our ability to scale lithographic dimensions via reduction of actinic wavelength and increase of numerical
aperture (NA) comes to an end, we need to find alternative methods of increasing pattern density. Double-patterning techniques have attracted widespread interest for enabling further scaling of semiconductor devices. We have developed DE2 (develop/etch/develop/etch) and DETO (Double-Expose-Track-Optimized) methods for producing pitch-split patterns capable of supporting 16- and 11-nm node semiconductor devices. The IBM Alliance has established a DETO
baseline in collaboration with KT, TEL, ASML and JSR to evaluate commercially available resist-on-resist systems. In
this paper we will describe our automated engine for characterizing defectivity, line width and overlay performance for
our DETO process.
Double patterning is considered the most viable option for 32- and 22-nm complementary metal-oxide semiconductor (CMOS) node development and has seen a surge of interest due to the remaining challenges of next-generation lithography systems. Most double patterning approaches previously described require intermediate processing steps (e.g., hard mask etching, resist freezing, spacer material deposition, etc.). These additional steps can add significantly to the cost of producing the double pattern. Alternative litho-only double patterning processes are investigated to achieve a composite image without the need for intermediate processing steps. A comparative study between positive–negative (TArF-P6239+N3007) and positive–positive tone (TArF-P6239+PP002) imaging is described. In brief, the positive–positive tone approach is found to be a superior solution due to a variety of considerations.
In this paper, we describe the integration of EUV lithography into a standard semiconductor manufacturing flow to
produce demonstration devices. 45 nm logic test chips with functional transistors were fabricated using EUV lithography
to pattern the first interconnect level (metal 1).
This device fabrication exercise required the development of rule-based 'OPC' to correct for flare and mask shadowing
effects. These corrections were applied to the fabrication of a full-field mask. The resulting mask and the 0.25-NA full-field
EUV scanner were found to provide more than adequate performance for this 45 nm logic node demonstration. The
CD uniformity across the field and through a lot of wafers was 6.6% (3σ) and the measured overlay on the test-chip
(product) wafers was well below 20 nm (mean + 3σ). A resist process was developed and performed well at a sensitivity
of 3.8 mJ/cm2, providing ample process latitude and etch selectivity for pattern transfer. The etch recipes provided good
CD control, profiles and end-point discrimination, allowing for good electrical connection to the underlying levels, as
evidenced by electrical test results.
Many transistors connected with Cu-metal lines defined using EUV lithography were tested electrically and found to
have characteristics very similar to 45 nm node transistors fabricated using more traditional methods.
The introduction of lithographic systems with NA=1.35 has enabled the extension of optical lithography to 45 nm and
below. At the same time, despite the larger NA, k1-factors have dropped to 0.3 and below. Defining the appropriate
strategies for these high-end lithographic processes requires the integration and co-optimization of the design, mask and
imaging parameters. This requires an in-depth understanding of the relevant parameters for imaging performance during
high volume manufacturing.
Besides the Critical Dimension Uniformity (CDU) budget for the baseline lithographic system, it is crucial to realize that
system performance may vary over time in volume manufacturing.
In this paper the CDU budget will be restated, with all the well-known contributors, and extended with some new terms,
such as volume manufacturing effects.
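A common way to state such a budget (our assumption of the usual form, not necessarily the exact formulation used here) is a root-sum-of-squares combination of independent contributors, with the volume-manufacturing terms entering as additional components:

\[
\mathrm{CDU}_{\mathrm{total}} = \sqrt{\sum_i \mathrm{CDU}_i^{\,2}}
= \sqrt{\mathrm{CDU}_{\mathrm{mask}}^{2} + \mathrm{CDU}_{\mathrm{dose}}^{2} + \mathrm{CDU}_{\mathrm{focus}}^{2} + \cdots + \mathrm{CDU}_{\mathrm{HVM}}^{2}},
\]

where the last term collects the time-dependent volume-manufacturing effects.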
Experimental low-k1 results will be shown from NA=1.35 lithographic tools and compared to model-based predictions
under realistic volume manufacturing circumstances.
The combination of extreme NA and low k1 makes it necessary to introduce computational lithography for scanner
optimization. The potential of using LithoCruiser™ and Tachyon™ for optimizing the scanner source and OPC will be described. The use of fast scanner correction mechanisms to compensate for reticle, track, and etch fingerprints and variations will also be discussed.
With the continuous shrink of feature sizes, the pitch on the mask comes closer to the wavelength of light. It has been recognized that in this case polarization effects of the mask become much more pronounced and deviations of the diffraction efficiencies from the well-known Kirchhoff approach can no longer be neglected. It is not only the diffraction efficiencies that become polarization dependent; the phases of the diffracted orders also tend to deviate from Kirchhoff theory when calculated rigorously. This also happens for large structures, where these phase deviations can mimic polarization-dependent wavefront aberrations, which in the case of polarized illumination can lead to non-negligible focus shifts that depend on the orientation and size of the features. This orientation dependence results in a polarization-induced astigmatism offset, which can be of the same order of magnitude as, or even larger than, polarization effects stemming from the lens itself. Hence, for correctly predicting polarization-induced astigmatism offsets, one has to consider lens and mask effects at the same time. In this paper we present a comprehensive study of polarization-induced phase effects of topographic masks and develop a simple theoretical model that accurately describes the observed effects.
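One minimal way to quantify the effect (an illustrative relation of our own, not the model derived in the paper) is to translate a rigorously computed phase deviation of the first diffracted orders into an equivalent best-focus shift. For a grating of pitch $p$ at normal incidence, the first orders propagate at $\sin\theta = \lambda/p$, and a defocus $\Delta z$ adds a relative phase $\tfrac{2\pi}{\lambda}\,\Delta z\,(1-\cos\theta)$ between the zeroth and first orders, so a phase deviation $\Delta\phi$ corresponds to

\[
\Delta z \approx \frac{\lambda\,\Delta\phi}{2\pi\,(1-\cos\theta)}, \qquad \sin\theta = \frac{\lambda}{p},
\]

and the difference between horizontal and vertical features under polarized illumination, $\Delta z_{H} - \Delta z_{V}$, appears as an astigmatism-like offset.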
The continuous implementation of novel technological advances in optical lithography is pushing the technology to ever
smaller feature sizes. For instance, it is now well recognized that the 45 nm node will be executed using state-of-the-art ArF (193 nm) hyper-NA immersion lithography. Nevertheless, a substantial effort will be necessary to make imaging
enhancement techniques like hyper-NA immersion technology, polarized illumination or sophisticated illumination
modes routinely available for production environments.
In order to support these trends, more stringent demands need to be placed on the lithographic optics. Although this
holds for both the illumination unit and the projection lens, this paper will focus on the latter module. Today, projection
lens aberrations are well controlled and their lithographic impact is understood. With the advent of imaging enhancement
techniques such as hyper-NA immersion lithography and the implementation of polarized illumination, a clear
description and control of the state of polarization throughout the complete optical system is required.
Before polarization was used to enhance imaging, the imaging properties at each field position of the lens could be fully characterized by two pupil maps: a phase map and a transmission map. For polarized imaging, these two maps are replaced by a 2x2 complex Jones matrix for each point in the pupil. Although such a pupil of Jones matrices (in short: a Jones pupil) allows for a full and accurate description of the physical imaging, it lacks transparency with respect to direct visualization and lithographic imaging relevance. In this paper we present a comprehensive method to decompose the Jones pupils into quantities that have a clear physical interpretation, and we study the relevance of these quantities for the imaging properties of lithography lenses.
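One decomposition along these lines (written here in a generic, commonly used form; the exact parametrization in the paper may differ) separates a scalar part, carrying the conventional transmission and wavefront maps, from the polarization-dependent part via a Pauli-matrix expansion of the Jones pupil:

\[
J(f,g) \;=\; a_0(f,g)\,\sigma_0 + a_1(f,g)\,\sigma_1 + a_2(f,g)\,\sigma_2 + a_3(f,g)\,\sigma_3,
\]

where $\sigma_0$ is the identity and $\sigma_{1,2,3}$ are the Pauli matrices. The modulus and argument of $a_0$ reduce to the conventional transmission and phase maps, while the ratios $a_{1,2,3}/a_0$ describe the polarization-dependent (diattenuation- and retardance-like) content at each pupil point $(f,g)$.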