Data volume and average data preparation time continue to trend upward with newer technology nodes. In the past decade, with file sizes measured in terabytes and network bandwidth requirements exceeding 40 GB/s, mask synthesis operations have expanded their cluster capacity to thousands and even tens of thousands of CPU cores. Efficient, scalable, and flexible management of this expensive, high-performance, distributed computing system is required at every stage of geometry processing, from layout polishing through Optical Proximity Correction (OPC), Mask Process Correction (MPC), and Mask Data Preparation (MDP), to consistently meet tape-out cycle time goals. The MDP step, being the final stage in the flow, has to write all of the pattern data into one or more disk files. This extremely I/O-intensive step remains a significant portion of the processing time and poses a major scalability challenge for the software. A comprehensive solution should deliver high scalability for large jobs and low overhead for small jobs, the ideal behavior in a typical production environment. In this paper we discuss methods to address the former requirement, emphasizing the efficient use of high-performance distributed file systems while minimizing the less scalable disk I/O operations. We also discuss dynamic resource management and efficient job scheduling to address the latter requirement. Finally, we demonstrate the use of a cluster management system to create a comprehensive data processing environment suitable for large-scale data processing requirements.
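As a rough illustration of the scalable-output idea, the sketch below shows workers writing pre-assigned, non-overlapping byte ranges of a shared output file with positional I/O, so no single writer process serializes the disk traffic on a distributed file system. The file name and chunk layout are hypothetical, and os.pwrite assumes a POSIX system; this is a minimal sketch, not the paper's implementation.

```python
import os
from multiprocessing import Pool

OUTPUT = "pattern.oasis"  # hypothetical output file name

def write_chunk(args):
    offset, data = args
    # Positional write: no shared file position, so workers do not contend
    # on a seek pointer or a single writer process.
    fd = os.open(OUTPUT, os.O_WRONLY)
    try:
        os.pwrite(fd, data, offset)
    finally:
        os.close(fd)

def parallel_write(chunks):
    """chunks: list of (offset, bytes) covering disjoint, ascending ranges."""
    last_off, last_data = chunks[-1]
    with open(OUTPUT, "wb") as f:  # pre-size the file once up front
        f.truncate(last_off + len(last_data))
    with Pool() as pool:
        pool.map(write_chunk, chunks)

if __name__ == "__main__":
    parallel_write([(0, b"header"), (6, b"cell data "), (16, b"trailer")])
```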
Designs output by Optical Proximity Correction (OPC) tools contain a large number of jog edges. Jogs are small edges that OPC tools introduce to divide an input design edge into segments, giving each segment the freedom to move independently. Such segmentation is important for achieving correct, uniform results across the critical dimensions of a feature. Traditionally, Mask Process Correction (MPC) tools, which operate on OPC output, choose not to move these jog edges (a practice known as jog freeze). The main reason is that jog edges are so small that moving them does not significantly improve mask quality. For newer design nodes, however, increasing OPC complexity produces primary segments similar in size to jog edges, so freezing the jogs may no longer be viable: it could mean that a significant portion of design edges is frozen. In this paper, we propose methods for moving jog edges and examine the impact on overall mask quality. Shot count of the mask data post-fracture is an important Quality of Results (QoR) metric for Vector Shaped Beam (VSB) mask writer tools. One of the main advantages of the flexibility to move jog edges is improved mask data shot count. This paper discusses the shot count improvement method within the MPC tool and shows its impact on the other quality metrics.
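For illustration, a minimal sketch of jog identification under the usual definition, an edge shorter than some threshold, might look as follows; the JOG_MAX value and the function names are hypothetical, not the MPC tool's actual API.

```python
JOG_MAX = 8.0  # nm; hypothetical threshold below which an edge counts as a jog

def edge_lengths(polygon):
    """Yield (index, length) for each edge of a closed rectilinear polygon,
    given as a list of (x, y) vertices in nm."""
    n = len(polygon)
    for i in range(n):
        (x0, y0), (x1, y1) = polygon[i], polygon[(i + 1) % n]
        yield i, abs(x1 - x0) + abs(y1 - y0)  # axis-aligned, so this is the length

def movable_edges(polygon, freeze_jogs=True):
    """Indices of edges the corrector may move; with freeze_jogs=False every
    edge, jogs included, becomes a correction candidate."""
    return [i for i, length in edge_lengths(polygon)
            if not (freeze_jogs and length < JOG_MAX)]
```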
According to the 2013 SEMATECH Mask Industry Survey,[i] roughly half of all photomasks are produced using laser mask pattern generator ("LMPG") lithography. LMPG lithography can be used for all layers at mature technology nodes, and for many non-critical and semi-critical masks at advanced nodes. The extensive use of multi-patterning at the 14-nm node significantly increases the number of critical mask layers, and the transition in wafer lithography from positive tone resist to negative tone resist at the 14-nm design node enables the switch from advanced binary masks back to attenuated phase shifting masks that require second level writes to remove unwanted chrome. LMPG lithography is typically used for second level writes due to its high productivity, absence of charging effects, and versatile non-actinic alignment capability. As multi-patterning use expands from double to triple patterning and beyond, the number of LMPG second level writes increases correspondingly. The desire to reserve the limited capacity of advanced electron beam writers for use when essential is another factor driving the demand for LMPG lithography.
The increasing demand for cost-effective productivity has kept most of the laser mask writers ever manufactured running in production, sometimes long past their projected lifespan, and new writers continue to be built based on hardware developed some years ago.[ii] The data path is a case in point. While state-of-the-art when first introduced, hardware-based data path systems are difficult to modify or extend with new features to meet the changing requirements of the market. As data volumes increase, design styles change, and new uses are found for laser writers, it is useful to consider a replacement for this critical subsystem.
The availability of low-cost, high-performance, distributed computer systems combined with highly scalable EDA software lends itself well to creating an advanced data path system. EDA software in routine production today scales well to hundreds or even thousands of CPU cores, offering the potential for virtually unlimited capacity. Features available in EDA software such as sizing, scaling, tone reversal, OPC, MPC, rasterization, and others are easily adapted to the requirements of a data path system.
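As a toy illustration of how such features compose into a software data path, the sketch below chains sizing, scaling, rasterization, and tone reversal over axis-aligned rectangles. All function names, the rectangle representation, and the binary (rather than gray-level) raster are assumptions for exposition, not any vendor's API.

```python
def size(rects, bias):
    # Bias every rectangle (x0, y0, x1, y1), in nm, outward (positive) or inward.
    return [(x0 - bias, y0 - bias, x1 + bias, y1 + bias)
            for x0, y0, x1, y1 in rects]

def scale(rects, mag):
    return [tuple(v * mag for v in r) for r in rects]

def rasterize(rects, extent, px):
    # Binary rasterization onto an (extent/px)-square pixel grid; real writers
    # produce gray-level pixels, which this toy version omits.
    n = int(extent // px)
    grid = [[0] * n for _ in range(n)]
    for x0, y0, x1, y1 in rects:
        for j in range(max(0, int(y0 // px)), min(n, -int(-y1 // px))):
            for i in range(max(0, int(x0 // px)), min(n, -int(-x1 // px))):
                grid[j][i] = 1
    return grid

def reverse_tone(grid):
    return [[1 - v for v in row] for row in grid]

def data_path(rects, bias, mag, extent, px, clear_field=False):
    grid = rasterize(scale(size(rects, bias), mag), extent, px)
    return reverse_tone(grid) if clear_field else grid
```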
This paper presents the motivation, requirements, design, and performance of an advanced, scalable software data path system suitable to support multi-beam laser mask lithography.
Mask manufacturing using e-beam at the 32 nm process node and below is failing to meet CD uniformity and CD linearity requirements due to the inherent systematic errors in the e-beam process. MPC-GC (Mask Process Correction through Geometric Correction) is one technique, which moves the edges of input shapes inward or outward to compensate for the systematic errors. Since geometric correction is done under constraints, there is always further scope to improve the intensity profile of the mask layout and achieve better fidelity. In this paper, we discuss an MPC flow that further enhances the fidelity of the patterned shapes on the mask by adding dose correction on top of the geometric correction.
We have developed the NxMPC-DC tool as part of the NxMDP[1] tool suite to achieve the above-mentioned objective. If the input layout data is not already fractured, NxMPC-DC uses NxFracture[2] to carry out fracturing and then assigns modulated dose values to the shots. NxMPC-DC takes the same mask process model as the one used for NxMPC-GC. Hence, in the proposed flow, the fidelity of the simulated contour can only improve beyond the MPC-GC corrected data, as there is no conflict between the mask process models used for the geometric and dose-based corrections.
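A minimal sketch of the dose-assignment idea, assuming a caller-supplied model evaluation routine, is shown below; the threshold, dose bounds, and the simulate_edge_intensity signature are illustrative placeholders, not NxMPC-DC's actual interface.

```python
THRESHOLD = 0.5                 # normalized develop threshold (illustrative)
DOSE_MIN, DOSE_MAX = 0.8, 1.2   # allowed modulation range (illustrative)

def assign_doses(shots, simulate_edge_intensity, iterations=5):
    """Iteratively modulate per-shot doses toward the process threshold.
    simulate_edge_intensity(shot, doses) evaluates the mask process model
    at the shot's controlling edge site under the current dose map."""
    doses = {shot: 1.0 for shot in shots}
    for _ in range(iterations):
        for shot in shots:
            inten = max(simulate_edge_intensity(shot, doses), 1e-9)
            # Raise the dose where the simulated edge intensity falls short of
            # the threshold, lower it where it overshoots; clamp to the range.
            doses[shot] = min(DOSE_MAX, max(DOSE_MIN,
                                            doses[shot] * THRESHOLD / inten))
    return doses
```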
In this paper, we present the idea of (in-place) substitution of the fracture solution for badly or non-uniformly fractured instances of polygons by a better fracture solution. Polygons can be categorized as badly or non-uniformly fractured based on the values of various quality metrics, such as the number of generated trapezoids, the number of slivers, and the uniformity of the fracturing. The inferior quality of a fracture solution may be due to sub-optimal fracturing. This In-Place Optimization (IPO) strategy proposes a solution wherein, rather than carrying out a complete re-fracturing of the mask data, the QoR of the fractured data is improved "in place" by applying patches to the hotspots of badly or non-uniformly fractured polygons.
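One plausible way such metrics could be combined into a pass/fail classification is sketched below; the sliver limit and thresholds are illustrative knobs of the kind a user might supply externally, not values from the paper.

```python
SLIVER_MAX = 20.0  # nm; trapezoids narrower than this count as slivers (illustrative)

def fracture_qor(traps):
    """traps: list of (width, height) bounding dimensions, in nm, of the
    trapezoids produced for one polygon."""
    dims = sorted(min(w, h) for w, h in traps)
    return {
        "trapezoids": len(traps),
        "slivers": sum(1 for d in dims if d < SLIVER_MAX),
        # Uniformity: smallest over largest minimum dimension (1.0 = uniform).
        "uniformity": dims[0] / dims[-1] if dims and dims[-1] else 0.0,
    }

def is_badly_fractured(qor, max_traps=50, max_slivers=0, min_uniformity=0.25):
    return (qor["trapezoids"] > max_traps
            or qor["slivers"] > max_slivers
            or qor["uniformity"] < min_uniformity)
```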
The proposed IPO scheme is flexible enough to classify the QoR of a polygon's fracture solution using externally defined parameters or formulae. In effect, the users responsible for mask MDP can categorize the quality of a fracture solution as good or bad by defining criteria external to the tool. The IPO scheme allows internal substitution, where a better fracture solution for a given polygon is found within the same fracture data at some other instance of the polygon, or external substitution, where a better fracture solution is generated using a third-party fracturing tool or the same fracturing tool with different inputs. Since this IPO technique modifies the fractured mask data, a built-in validation scheme is mandatory and is discussed in detail.
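A hedged sketch of internal substitution with the mandatory validation gate might look as follows; score() and covers_exactly() stand in for the externally defined QoR criteria and the geometry-equivalence check, and are assumptions rather than the tool's API.

```python
def substitute_in_place(polygon, fractures, score, covers_exactly):
    """fractures: the per-instance fracture solutions of one repeated polygon;
    score(): lower is better; covers_exactly(): geometry equivalence check."""
    best = min(fractures, key=score)
    # Mandatory validation gate: a patch is accepted only if the substituted
    # trapezoids reproduce the original polygon geometry exactly.
    if not covers_exactly(polygon, best):
        return fractures
    return [best if score(frac) > score(best) else frac for frac in fractures]
```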