DFM guidelines commonly require revisions to design data, with corrections inserted at various points in the design data flow. At times this requires rather drastic modifications to the data, both during the layer derivation or DRC phase and especially within the RET phase, for example during OPC. Such data transformations introduce numerous polygon geometry changes, which can substantially increase shot count and geometry complexity and ultimately complicate conversion to mask writer machine formats. In the resulting complex data, notches may be present that do not significantly contribute to the final manufacturing result but do contribute to the complexity of the surrounding geometry, and are therefore undesirable.
Additionally, there are cases in which the overall figure count can be reduced with minimal impact on the quality of the corrected data if notches are detected and corrected. In other cases, data quality can be improved if specific valley notches are filled in or peak notches are cut out. Such cases generally must satisfy specific geometrical restrictions to be valid candidates for notch correction.
Traditional notch detection has been performed on rectilinear (Manhattan-style) data and only in axis-parallel directions. These traditional approaches employ dimensional measurement algorithms that measure edge distances along the outside of polygons. Because they are adaptations of such measurement algorithms, they are ill suited to generalized detection of notches with unusual shapes and orientations.
This paper covers a novel algorithm developed for the CATS MRCC tool that finds both valley and peak notches that are candidates for removal. The algorithm is generalized and invariant to data rotation, so it can find notches in data rotated at any angle. It includes parameters to control the dimensions of detected notches, as well as algorithm tolerances and data reach.
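While the CATS MRCC internals are not reproduced here, the minimal sketch below illustrates one way such a rotation-invariant candidate check could work: a valley-notch bottom is treated as a short edge whose two endpoints are both concave vertices, with `max_width` and `max_depth` standing in for the dimension controls (the function name and thresholds are illustrative, not the tool's actual parameters).

```python
import math

def valley_notch_candidates(polygon, max_width, max_depth):
    """Flag edges that look like valley-notch bottoms: short edges whose two
    endpoints are both concave (reflex) vertices of a counter-clockwise polygon.
    The test uses only edge vectors, so it is independent of data rotation.
    `polygon` is a list of (x, y) vertices in counter-clockwise order."""
    n = len(polygon)

    def edge(i):
        # vector from vertex i to vertex i + 1
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        return (x1 - x0, y1 - y0)

    def concave(i):
        # negative cross product => reflex (concave) vertex on a CCW polygon
        (ax, ay), (bx, by) = edge((i - 1) % n), edge(i)
        return ax * by - ay * bx < 0

    candidates = []
    for i in range(n):
        j = (i + 1) % n
        if math.hypot(*edge(i)) > max_width:
            continue                      # notch opening too wide
        if concave(i) and concave(j):
            # crude depth estimate: the shorter of the two flanking edges
            depth = min(math.hypot(*edge((i - 1) % n)), math.hypot(*edge(j)))
            if depth <= max_depth:
                candidates.append(i)      # index of the notch-bottom edge
    return candidates
```

A peak notch is the mirror case, a short protruding edge whose endpoints are both convex; the production algorithm additionally applies the tolerance and data-reach controls mentioned above.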
In previous work, an approach was detailed using CATS MRCC-RPM, where new pattern matching functionality was used to find locations on a jobdeck that are suitable for mark placements and, ultimately, metrology tool measurement locations. These locations are found by first creating pattern definitions. The defined patterns are passed to the CATS MRCC-RPM match algorithm, which in turn outputs all locations that match the description.
In that previous work, the pattern definitions, also known as mark templates, had several limitations. For example, each template could hold only one mark, placed at its center, and had to be symmetrical. This was considered severely limiting and not production worthy for advanced mask manufacturing.
This paper builds on that previous work in several ways, extending its possibilities and providing mask makers with virtually unlimited options for extending metrology automation. Mark templates are expanded to hold multiple marks, even of different types, at different offsets from the template center. Each template can be symmetrical or asymmetrical, yet all of its marks can still be correctly placed by taking advantage of match orientation information during the Classification step. Placement of mark types beyond the basic ones, such as Arbitrary Area, is also explored.
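As an illustration of how match orientation information can drive placement of multiple marks per template, the sketch below (not the CATS implementation; the mark types, offsets, and mirror convention are assumed for the example) rotates and reflects each template-relative offset into jobdeck coordinates.

```python
import math

# Hypothetical mark definitions: (mark_type, dx, dy) offsets in template coordinates.
TEMPLATE_MARKS = [("CD_X", 0.0, 0.0), ("CD_Y", 1.5, -0.5), ("ARB_AREA", -2.0, 2.0)]

def place_marks(match_x, match_y, angle_deg, mirrored, marks=TEMPLATE_MARKS):
    """Transform template-relative mark offsets into absolute coordinates for one
    pattern match, using the match's reported orientation. `mirrored` denotes a
    reflection about the template's x axis applied before the rotation."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    placed = []
    for mark_type, dx, dy in marks:
        if mirrored:
            dy = -dy
        # rotate the offset, then translate to the match location
        x = match_x + dx * cos_a - dy * sin_a
        y = match_y + dx * sin_a + dy * cos_a
        placed.append((mark_type, x, y))
    return placed

# Example: a match found at (100.0, 200.0), rotated 90 degrees, not mirrored.
for mark in place_marks(100.0, 200.0, 90.0, mirrored=False):
    print(mark)
```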
Lastly, the classification step is an enhancement process that thoroughly manages the use of chip and mark information. It makes use of the output of JD MRC (jobdeck MRC), which executes RPM on jobdecks on a per-chip basis in order to reduce redundant chip processing, rather than exhaustively searching every chip placement.
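As a rough sketch of the per-chip idea (the data structures and the `find_matches` callback are hypothetical stand-ins for the real tool interfaces), matching can be run once per unique chip and the chip-local hits then mapped onto every placement of that chip:

```python
def classify_jobdeck(chips, placements, template, find_matches):
    """Per-chip matching sketch. `chips` maps chip name -> chip geometry,
    `placements` is a list of (chip_name, to_jobdeck) pairs where `to_jobdeck`
    converts chip-local coordinates to jobdeck coordinates, and
    `find_matches(geometry, template)` returns chip-local (x, y, orientation) hits.
    Matching runs once per unique chip; hits are reused for every placement."""
    hits_by_chip = {name: find_matches(geom, template) for name, geom in chips.items()}
    placed = []
    for chip_name, to_jobdeck in placements:
        for x, y, orient in hits_by_chip[chip_name]:
            placed.append(to_jobdeck(x, y, orient))
    return placed
```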
There are VSB (variable shaped beam) mask writers in production today that require all-angle edges in the mask data to be approximated by staircases (stacked slits) oriented in either a horizontal or vertical direction. The approximation uses 0, 45, or 90 degree edges, depending on the fracture setup and the angle of the edges in the original data. To gauge the effectiveness of the algorithm that fractures the original design to the VSB format, several methods can be employed to analyze the differences between the fractured data in the VSB-format output and the original data before fracture. This difference is commonly referred to as skew error.
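For orientation, the sketch below shows a purely horizontal/vertical staircase approximation of a single all-angle edge under an assumed maximum step height; it is illustrative only, not the CATS fracture algorithm, and ignores 45 degree edges and writer-specific constraints.

```python
import math

def staircase(x0, y0, x1, y1, max_step):
    """Approximate the all-angle edge (x0, y0)-(x1, y1) with an axis-parallel
    staircase whose vertical risers are at most `max_step` tall."""
    n = max(1, math.ceil(abs(y1 - y0) / max_step))   # number of stair steps
    pts = [(x0, y0)]
    for i in range(1, n + 1):
        x = x0 + (x1 - x0) * i / n
        y = y0 + (y1 - y0) * i / n
        pts.append((pts[-1][0], y))   # vertical riser
        pts.append((x, y))            # horizontal tread
    return pts

# Example: approximate a shallow all-angle edge with 0.25 um steps.
print(staircase(0.0, 0.0, 10.0, 5.0, 0.25))
```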
This paper explores various methods that can be used and examines each in detail. The goal is to highlight the differences and the effectiveness of each method in order to provide mask makers with the information needed to make the decisions best suited to their MDP-to-Lithography process flow.
The first method explored is an XOR operation followed by a double geometric biasing, aka Underover Sizing. The second method explored is an XOR operation followed by a cut out of the differences starting from the unfractured polygon edges, aka Path Implode. The third method analyzed is an XOR operation followed by measurement of the differences in the direction orthogonal to the unfractured edges, aka Difference Measurement. The fourth and last method analyzed is a mix of the previous two: an XOR operation, followed by measurement of the differences in the direction orthogonal to the unfractured edges, followed by a cut out of the measurement results, aka Measurement.
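As an illustration of the first method, the sketch below uses shapely as a stand-in geometry engine: the XOR of the unfractured and fractured polygons is undersized and then oversized by an assumed tolerance `tol`, so slivers narrower than roughly twice the tolerance disappear and only out-of-tolerance skew regions remain. The polygons and tolerance are illustrative, not CATS code.

```python
from shapely.geometry import Polygon

def skew_regions(original, fractured, tol):
    """Underover Sizing sketch: XOR the unfractured and fractured polygons,
    then undersize and oversize the result by `tol`. Slivers narrower than
    about 2 * tol (ordinary staircase residue) disappear; whatever survives
    marks skew error larger than the tolerance."""
    diff = original.symmetric_difference(fractured)        # raw XOR differences
    return diff.buffer(-tol, join_style=2).buffer(tol, join_style=2)

# Example: a triangle versus a coarse two-step staircase approximation of it.
original = Polygon([(0, 0), (10, 0), (10, 10)])
fractured = Polygon([(0, 0), (10, 0), (10, 10), (5, 10), (5, 5), (0, 5)])
print(skew_regions(original, fractured, tol=0.1).area)
```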
Mask data preparation (MDP) typically involves multiple flows, sometimes consisting of many steps, to ensure that the data is properly written on the mask. This may include multiple inputs, transformations (scaling, orientation, etc.), and processing (layer extraction, sizing, Boolean operations, data filtering). Many MDP techniques currently in practice require multiple passes through the input data and/or multiple file I/O steps to achieve these goals. This paper details an approach that processes the data efficiently, resulting in minimal I/O and greatly improved turnaround times (TAT). The approach takes advanced processing algorithms and adapts them to produce an efficient and reliable data flow. In tandem with this processing flow, an internal jobdeck mapping approach, transparent to the user, allows an essentially unlimited number of pattern inputs to be handled in a single pass, resulting in increased flexibility and ease of use.

Transformations and processing operations are critical to MDP. Transformations such as scaling, reverse tone, and orientation, along with processing such as sizing, Boolean operations, and data filtering, are key parts of this. These techniques are often employed in sequence and/or in parallel in a complex functional chain. While transformations are typically done "up front" when the data is input, processing is less straightforward, involving multiple reads and writes to handle the more intricate functionality and also the collection of input patterns that may be required to produce the data that comprises a single mask.

The approach detailed in this paper consists of two complementary techniques: efficient MDP flow and jobdeck mapping. Efficient MDP flow is achieved by pipelining the output of each step to the input of the subsequent step. Rather than writing the output of a particular processing step to file and then reading it into the following step, the pipelining or chaining of the steps results in an efficient flow with minimal file I/O.

The efficient MDP flow is enhanced by a technique called jobdeck mapping, which allows in essence an unlimited number of pattern inputs by taking each transformed pattern and including it in an input jobdeck. Making use of established jobdeck handling capabilities, the user-selected input pattern/transformation combinations are mapped to an input jobdeck which is processed by the advanced flow, allowing great flexibility and user control of the process.
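As a rough illustration of the pipelining idea (not the CATS implementation; `parse_pattern_file`, `inputs`, and `mask_window` are hypothetical placeholders, and the geometry objects are assumed to behave like shapely shapes), each step can be written as a generator that consumes the previous step's output, so data streams through the chain with only a single read and a single write touching disk.

```python
# Minimal pipelining sketch: each processing step is a generator that consumes
# the previous step's output, so geometry streams through the whole chain with
# no intermediate files.

def read_patterns(paths):
    for path in paths:
        # hypothetical reader; stands in for parsing an input pattern file
        yield from parse_pattern_file(path)

def size(shapes, bias):
    for shape in shapes:
        yield shape.buffer(bias)               # sizing (bias) step

def clip_to(shapes, window):
    for shape in shapes:
        clipped = shape.intersection(window)   # Boolean (AND) step
        if not clipped.is_empty:
            yield clipped

def write_output(shapes, path):
    with open(path, "w") as out:
        for shape in shapes:                   # only the final result touches disk
            out.write(shape.wkt + "\n")

# Chained flow: read -> size -> clip -> write, in a single pass.
# write_output(clip_to(size(read_patterns(inputs), 0.01), mask_window), "out.wkt")
```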
One step in MDP is the process of marking CD features via the jobdeck. These marks are usually further translated into specially formatted files used by optical metrology tools or CD-SEMs. Various practices are currently in use to accomplish the marking process, e.g., by eye with a point-and-click GUI, by script using a list of known coordinates, or by searching for a coordinate within a very limited neighborhood of a suspect coordinate. However, all of these methods suffer from shortcomings: they require extensive user intervention, not all or not enough marking places are found, or coordinates that are supposed to be known are slightly off and cause mark placement scripts to fail.

This paper details an approach using CATS MRCC-RPM, where new pattern matching functionality is used to find locations suitable for mark placements. The location coordinates thus found are then passed to well-known mark placing functionality to place the marks.