In most leading-edge technologies, the first layers are usually the most critical. For some technologies, however, the most critical layers lie in the middle of line (MOL). In such cases, less advanced exposure tools are used for the first layers. Because these tools are less stable, their overlay variation must be compensated on the advanced tools used for the later layers. Wafer-to-wafer variation is typically corrected by wafer alignment, but standard wafer alignment does not correct intra-field variations. Because of the instability of the older tools, additional marks were therefore measured on the advanced tools to compensate intra-field variation. This reduces the wafer-to-wafer variation but causes throughput loss. The sampling plans were therefore optimized to reduce the number of intra-field marks by 50%, which was verified by run-to-run simulations and experiments.
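The trade-off behind the 50% mark reduction can be illustrated with a minimal sketch. All numbers here are hypothetical: a 7x7 intra-field mark grid, a smooth low-order overlay signature plus mark noise, and a simple least-squares intra-field model. The point is that when the model order is low relative to the mark count, halving the sampling plan barely degrades the fitted correction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 7x7 intra-field mark grid in normalized field coordinates.
gx, gy = np.meshgrid(np.linspace(-1, 1, 7), np.linspace(-1, 1, 7))
x, y = gx.ravel(), gy.ravel()

def design(x, y):
    """Simple intra-field model basis: offset, x, x*y, x**3 (illustrative only)."""
    return np.column_stack([np.ones_like(x), x, x * y, x**3])

# Simulated overlay error: smooth signature (nm) plus 0.1 nm mark noise.
true_coeff = np.array([1.0, 0.5, -0.3, 0.2])
overlay = design(x, y) @ true_coeff + rng.normal(0.0, 0.1, x.size)

def fit_and_residual(idx):
    """Fit on the sampled marks only; report residual std over all marks."""
    coeff, *_ = np.linalg.lstsq(design(x[idx], y[idx]), overlay[idx], rcond=None)
    return np.std(overlay - design(x, y) @ coeff)

full = np.arange(x.size)
half = full[::2]  # 50% reduced sampling plan
print(f"residual, full sampling: {fit_and_residual(full):.3f} nm")
print(f"residual, 50% sampling : {fit_and_residual(half):.3f} nm")
```

Both residuals land near the 0.1 nm mark-noise floor, i.e. the reduced plan loses little modeling accuracy while halving the measurement time. In practice the sampling optimization is done with run-to-run simulations rather than a single-field toy model like this.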
In leading-edge lithography, field-by-field corrections, also known as corrections per exposure, are well established. Many manufacturers combine the traditional higher-order wafer and intra-field polynomial corrections with linear field-by-field corrections. Non-linear wafer deformations, however, are usually strongest at the wafer edge, so dedicated high-order field-by-field corrections are the ultimate method to mitigate their effects. Determining the appropriate amount of high-order field-by-field correction is not trivial: at the wafer edge, exposure fields are often incomplete, so they contain fewer overlay marks, distributed less regularly than in fields that lie completely on the wafer. Even with dense measurements it is therefore challenging to model these fields with a high-order model without overcorrecting. Moreover, dense measurements consume considerable metrology capacity, so they can typically only be performed at low frequency. Alternatively, smart field-by-field modeling algorithms are available that compute higher-order effects from reduced sampling plans. In this paper, we study different algorithmic approaches to optimize modeling algorithms for both dense and sparse (reduced) sampling plans, and we compare the impact of varying the frequency of dense sampling with the performance of different modeling algorithms on sparse sampling.
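Why incomplete edge fields resist plain high-order fitting can be seen from the conditioning of the design matrix. The sketch below is a hypothetical illustration, not the paper's method: a generic third-order polynomial intra-field basis, a regular mark grid for a complete field, and a handful of clustered mark positions standing in for an incomplete edge field. The condition number bounds how strongly mark noise is amplified into the fitted corrections, which is exactly the overcorrection risk described above:

```python
import numpy as np

def design(x, y):
    """Third-order 2D polynomial basis (10 terms) for intra-field modeling."""
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2,
                            x**3, x**2 * y, x * y**2, y**3])

# Complete field: regular 5x5 overlay-mark grid in normalized coordinates.
gx, gy = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
full = design(gx.ravel(), gy.ravel())

# Hypothetical incomplete edge field: few marks, clustered on one side.
ex = np.array([-1.0, -0.8, -1.0, -0.6, -0.9, -0.7])
ey = np.array([-1.0, -0.5, 0.2, 0.6, 1.0, -0.2])
edge = design(ex, ey)

# A large condition number means small mark-noise perturbations produce
# large swings in the fitted high-order correction (overcorrection).
print(f"condition number, complete field: {np.linalg.cond(full):.1e}")
print(f"condition number, edge field:     {np.linalg.cond(edge):.1e}")
```

The edge-field design matrix is orders of magnitude worse conditioned than the complete-field one, which is why smart modeling algorithms constrain or regularize the high-order terms there instead of fitting them freely.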