We base our considerations on our previous analyses of microlithography costs, the semiconductor industry, and the needs of the design/equipment/process infrastructure. We identify and describe two major investment trends that appear promising for the semiconductor industry: first, investment in true Design-For-Manufacturing platforms and integration; and second, investment in Next Generation Computing (atom- and molecule-based, bio- and DNA-based; some of it combined with the silicon platform and technology, such as neuroelectronic engineering).
The technology acceleration of the ITRS Roadmap has many implications for both the semiconductor supplier community and the manufacturers. This work examines the impact of technology acceleration on the manufacturers and the resultant impact on suppliers. It begins with an overview of the forces in the industry that are driving the acceleration. From an analysis of the drive behind the acceleration, the impact on total production is developed. This acceleration results in more functionality per unit area in a shorter time frame. Assuming constant growth in the number of devices, the technology acceleration reduces the requirements for manufacturing capacity increases, a fact that has a direct impact on the supplier community. An additional factor, time to market, is then introduced. An analysis of the impact of "winning" the time-to-market race provides insight into a key industry driver. This work provides an improved understanding of the market forces that drive the semiconductor industry.
The technology acceleration of the ITRS Roadmap has many implications for both the semiconductor supplier community and the manufacturers. This work examines the impact of technology acceleration on the suppliers, the manufacturers of tools, materials, and masks. From an industry perspective, the development and product life cycle are examined with respect to the resources required and the return on investment. Historical information is available regarding the length of time required to develop a manufacturing-worthy product. Resource requirement estimates are available, so it is possible to develop an investment curve for product development. Similarly, estimates of total product sales provide the basis of the investment-recovery scenario. From these evaluations, the industry return on investment can be projected. It is possible to evaluate the impact of changes in technology on suppliers as the industry moved from 248nm to 193nm.
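The investment-recovery reasoning above can be sketched as a toy calculation. The function and all figures below are hypothetical illustrations, not numbers from the study; they only show how a shortened sales window (technology acceleration) erodes a supplier's projected return on a fixed development investment.

```python
# Hypothetical sketch of a supplier return-on-investment projection.
# All figures are illustrative assumptions; none come from the study itself.

def supplier_roi(dev_years, dev_cost_per_year, sales_years, revenue_per_year):
    """Ratio of cumulative product revenue to cumulative development spend."""
    investment = dev_years * dev_cost_per_year
    recovered = sales_years * revenue_per_year
    return recovered / investment

# Technology acceleration shortens the product life cycle (the sales window)
# while the development investment stays fixed:
normal = supplier_roi(dev_years=3, dev_cost_per_year=50e6,
                      sales_years=6, revenue_per_year=40e6)
accelerated = supplier_roi(3, 50e6, 4, 40e6)
```

With these assumed numbers, cutting the sales window from six years to four drops the projected return from 1.6x to roughly 1.07x of the development investment, which is the kind of supplier impact the abstract describes.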
Simple microeconomic models that directly link yield learning to profitability in semiconductor manufacturing have been rare or non-existent. In this work, we review such a model and provide links to inspection capability and cost. Using a small number of input parameters, we explain current yield management practices in 200mm factories. The model is then used to extrapolate requirements for 300mm factories, including the impact of technology transitions to 130nm design rules and below. We show that the dramatic increase in value per wafer at the 300mm transition becomes a driver for increasing metrology and inspection capability and sampling. These analyses correlate well with actual factory data and often identify millions of dollars in potential cost savings. We demonstrate this using the example of grating-based overlay metrology for the 65nm node.
Process window control enables accelerated design-rule shrinks for both logic and memory manufacturers, but simple microeconomic models that directly link the effects of process window control to maximum profitability are rare. In this work, we derive these links using a simplified model for the maximum rate of profit generated by the semiconductor manufacturing process. We show that the ability of process window control to achieve these economic objectives may be limited by variability in the larger manufacturing context, including measurement delays and process variation at the lot, wafer, x-wafer, x-field, and x-chip levels. We conclude that x-wafer and x-field CD control strategies will be critical enablers of density, performance and optimum profitability at the 90 and 65nm technology nodes. These analyses correlate well with actual factory data and often identify millions of dollars in potential incremental revenue and cost savings. As an example, we show that a scatterometry-based CD Process Window Monitor is an economically justified, enabling technology for the 65nm node.
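A minimal sketch of the kind of simplified profit-rate model the two abstracts above describe, with purely illustrative parameter names and numbers (none are from the papers): weekly profit is revenue from good dies minus wafer-processing and metrology cost, so each yield point gained can justify additional inspection or monitoring spend.

```python
# Minimal sketch of a simplified profit-rate model; all parameters and
# numbers are illustrative assumptions, not values from the papers.

def profit_rate(wafer_starts_per_week, dies_per_wafer, yield_fraction,
                revenue_per_good_die, cost_per_wafer, metrology_cost_per_wafer):
    """Weekly profit: revenue from good dies minus wafer and metrology cost."""
    revenue = (wafer_starts_per_week * dies_per_wafer
               * yield_fraction * revenue_per_good_die)
    cost = wafer_starts_per_week * (cost_per_wafer + metrology_cost_per_wafer)
    return revenue - cost

# More dies per (300 mm) wafer make each yield point worth more, so extra
# inspection spend that buys back yield can pay for itself:
base = profit_rate(5000, 300, 0.85, 20.0, 3000.0, 50.0)
improved = profit_rate(5000, 300, 0.86, 20.0, 3000.0, 75.0)
```

Under these assumptions, one extra yield point bought with an extra $25/wafer of metrology still raises weekly profit by about $175k, illustrating why value per wafer drives inspection capability and sampling.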
Effective design reuse in electronic products has the potential to provide very large cost savings, substantial time-to-market reduction, and extra sources of revenue. Unfortunately, critical reuse opportunities are often missed because, although they provide clear value to the corporation, they may not benefit the business performance of an internal organization. It is therefore crucial to provide tools to help reuse partners participate in a reuse transaction when the transaction provides value to the corporation as a whole. Value-based Reuse Management (VRM) addresses this challenge by (a) ensuring that all parties can quickly assess the business performance impact of a reuse opportunity, and (b) encouraging high-value reuse opportunities by supplying value-based rewards to potential parties. In this paper we introduce the Value-Based Reuse Management approach and we describe key results on electronic designs that demonstrate its advantages. Our results indicate that Value-Based Reuse Management has the potential to significantly increase the success probability of high-value electronic design reuse.
As minimum feature size shrinks below 100 nm, all cost components of photomasks (the material, the writing process, the develop/etch process, and the inspection) are skyrocketing. That increase, which impacts the return on investment of new product R&D, can be mitigated by improving mask first-pass yield or by synchronizing technology and device requirements with mask shop capabilities. This work focuses on the optimal utilization and tradeoffs of existing reticle technology to ensure the desired device and circuit parameters. We first examine the increase in mask cost against the total manufacturing cost, evaluate mask cost by layer, and identify opportunities to reduce it without compromising product requirements. We then show how integrated simulation (optical combined with electrical) helps estimate the impact of the mask CD budget on transistor drive and leakage current, thereby helping justify the need for tight mask CD control. For cell-level simulation, one would extract the FET channel shape from the simulated aerial images to obtain parametric data as a function of the OPC options at the assumed mask grade and exposure conditions. For chip-level simulation, one would derive the statistical distribution of device parameters at the assumed mask grade; parametric yield is then estimated using Monte Carlo analysis to verify the impact of CD variation of a MOSFET channel across the reticle field. Overall, many of the challenges of sub-100 nm reticle manufacturing that result in high cost can be addressed by simulation. Integration of the simulation tools into the design flow would itself become a challenge for computing power and CAD procedures.
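The chip-level Monte Carlo flow described above can be sketched as follows. The CD-to-leakage model, the distribution parameters, and the pass limit are all assumptions chosen for illustration, not the paper's models.

```python
import math
import random

# Hypothetical Monte Carlo sketch of chip-level parametric-yield estimation:
# draw gate CDs from a distribution set by the assumed mask grade, map CD to
# leakage with an assumed exponential model, and count dies meeting spec.

def parametric_yield(n_dies, cd_nominal_nm, cd_sigma_nm, leakage_limit, seed=0):
    rng = random.Random(seed)
    passing = 0
    for _ in range(n_dies):
        cd = rng.gauss(cd_nominal_nm, cd_sigma_nm)
        # Assumed model: leakage grows exponentially as the channel CD
        # shrinks below nominal (shorter channel -> leakier device).
        leakage = math.exp((cd_nominal_nm - cd) / 5.0)
        if leakage <= leakage_limit:
            passing += 1
    return passing / n_dies

# A tighter mask CD budget (smaller across-field sigma) raises parametric yield:
loose = parametric_yield(10_000, cd_nominal_nm=90.0, cd_sigma_nm=6.0, leakage_limit=2.0)
tight = parametric_yield(10_000, cd_nominal_nm=90.0, cd_sigma_nm=3.0, leakage_limit=2.0)
```

The comparison between the two runs is the economic point: the tighter (more expensive) mask grade buys a measurable increase in parametric yield, which is what would justify its cost.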
The technology acceleration of the ITRS Roadmap has many implications for both the semiconductor supplier community and the manufacturers. INTERNATIONAL SEMATECH has re-evaluated the projected cost of advanced technology masks. Building on the methodology developed in 1996 for mask costs, this work provided a critical review of mask yields and of factors relating to the manufacture of photolithography masks. The impact of the yields provided insight into the learning curve for leading-edge mask manufacturing. The projected mask set cost was surprising, and the ability to provide first- and second-year cost estimates provided additional information on technology introduction. From this information, the impact of technology acceleration can be added to the projected yields to evaluate the impact on mask costs.
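The link between mask yield and delivered mask cost can be illustrated with a toy calculation (all numbers are hypothetical assumptions, not SEMATECH estimates): if every scrapped mask must be rebuilt, the effective cost of one good mask scales inversely with mask yield, which is why first-year, immature-yield masks cost far more than second-year masks further down the learning curve.

```python
# Toy illustration of yield-driven mask cost; figures are assumptions,
# not SEMATECH estimates.

def effective_mask_cost(process_cost_per_attempt, mask_yield):
    """Expected cost of one good mask when failed attempts are rebuilt."""
    return process_cost_per_attempt / mask_yield

year1 = effective_mask_cost(100_000, 0.40)  # immature first-year yield
year2 = effective_mask_cost(100_000, 0.65)  # after learning-curve gains
```

Under these assumptions the first-year mask costs roughly 60% more than the second-year mask purely from the yield difference, before any change in write or inspection cost.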
With the advent of Reticle Enhancement Technologies (RET) such as Optical Proximity Correction (OPC) and Phase Shift Masks (PSM) required to manufacture semiconductors in the sub-wavelength era, the cost of photomask tooling has skyrocketed. On the leading edge of technology, mask set prices often exceed $1 million. This shifts an enormous burden back to designers and Electronic Design Automation (EDA) software vendors to create perfect designs at a time when the number of transistors per chip is measured in the hundreds of millions, and gigachips are on the drawing boards.
Moore's Law has driven technology to incredible feats. The prime beneficiaries of the technology - memory and microprocessor (MPU) manufacturers - can continue to fit the model because wafer volumes (and chip prices in the MPU case) render tooling costs relatively insignificant. However, Application-Specific IC (ASIC) manufacturers and most foundry clients average very small wafer per reticle ratios causing a dramatic and potentially insupportable rise in the cost of manufacturing.
Multi-Project Wafers (MPWs) are a way to share the cost of tooling and silicon by putting more than one chip on each reticle. Lacking any unexpected breakthroughs in simulation, verification, or mask technology to reduce the cost of prototyping, more efficient use of reticle space becomes a viable and increasingly attractive choice. It is worthwhile, therefore, to discuss the economics of prototyping in the sub-wavelength era and the increasing advantages of the MPW, shared-silicon approach.
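The shared-silicon economics can be sketched with a toy model (all costs hypothetical): N projects split one mask set and one wafer lot, so each project's prototype cost falls roughly as 1/N plus a fixed integration overhead.

```python
# Toy sketch of multi-project-wafer (MPW) cost sharing; all figures are
# hypothetical, for illustration only.

def prototype_cost_per_project(mask_set_cost, wafer_lot_cost, n_projects,
                               integration_overhead=0.0):
    """Per-project cost when n_projects share one reticle and wafer lot."""
    shared = (mask_set_cost + wafer_lot_cost) / n_projects
    return shared + integration_overhead

# A $1M mask set dominates a solo prototype; eight projects sharing it pay
# a fraction each, even after an assumed integration fee:
solo = prototype_cost_per_project(1_000_000, 100_000, 1)
shared = prototype_cost_per_project(1_000_000, 100_000, 8,
                                    integration_overhead=25_000)
```

The 1/N scaling is why MPWs become more attractive precisely as RET drives mask set prices past the $1 million mark cited above.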
However, putting together a collection of different-sized chips during tapeout can be challenging and time-consuming. Design compatibility, reticle field optimization, and frame generation have traditionally been the biggest worries, but with the advent of dummy-fill for planarization and RET for resolution, another layer of complexity has been added. MPW automation software is quite advanced today, but the size of the task dictates careful consideration of the
Mask quality is a prime concern to the Intel Mask Operation (IMO) and the Intel wafer fabrication customers. Extreme care is taken to inspect and repair all defects before shipment. Given that the classification and repair of defects detected by inspection systems is labor-intensive, the procedure is prone to human error. Furthermore, since operators manually disposition hundreds of defects each day, it is virtually impossible to eliminate all misclassifications. Due to diffraction effects, not all defects resolve on a wafer; hence, a defect that an operator may classify as 'real' may in fact be 'lithographically insignificant'. Conversely, an operator may miss a defect that prints, causing a serious reduction in product yield. The DIVAS (Defect, Inspection, Viewing, Archiving and Simulation) system, described previously, was developed to address these manual classification issues. This paper outlines the fully automated system deployed in a production environment.
As minimum feature sizes continue to shrink, patterned features have become significantly smaller than the wavelength of light used in optical lithography. As a result, the requirement for dimensional variation control, especially in critical dimension (CD) 3σ, has become more stringent. To meet these requirements, resolution enhancement techniques (RET) such as optical proximity correction (OPC) and phase shift mask (PSM) technology are applied. These approaches result in a substantial increase in mask costs and make the cost of ownership (COO) a key parameter in the comparison of lithography technologies. No concept of function is injected into the mask flow; that is, current OPC techniques are oblivious to the design intent, and the entire layout is corrected uniformly with the same effort. We propose a novel minimum cost of correction (MinCorr) methodology to determine the level of correction for each layout feature such that a prescribed parametric yield is attained. We highlight potential solutions to the MinCorr problem and give a simple mapping to traditional performance optimization. We conclude with experimental results showing the RET costs that can be saved while attaining a desired level of parametric yield.
The impact of grid-placed contacts on application-specific integrated circuit (ASIC) performance is studied. Although snapping contacts to grid adds restrictions during layout design, smaller circuit area can be achieved by careful selection of the grid pitch, raising the lower limit of transistor width, applying double exposure, and shrinking the minimum contact pitch enabled by more effective application of resolution enhancement technologies. The technique is demonstrated on the contact level of 250-nm standard cells with the minimum contact pitch shrunk by 10%. The area change of 84 cells ranges from -20% to 25% with a median decrease of 5%. The areas of two circuits, a finite-impulse-response (FIR) filter and an add-compare-select (ACS) unit in the Viterbi decoder, decrease by 4% and 2% respectively. Delay and power consumption are also estimated to decrease with area.
It is suggested that the high cost of mask sets for 90nm-and-below technologies may restrict the application of those technologies to a handful of high-volume chips. Most of the cost of mask production results from the increased time to write and inspect (including defect disposition) a mask, due to the large files that are created prior to mask writing. Stringent mask specifications needed for low-k-factor imaging drive protracted and costly yield-learning curves for a mask maker. The costs of the different steps in the flow from design tape-out to final wafer test are analyzed, and it is shown that limiting the reticle field size on critical layers could reduce net costs. The net die cost is lower as long as the number of processed wafers stays below a cutoff number. Costs can be further decreased by reducing the overall "figure count" (and hence writing time) for an ASIC chip by restricting the amount of OPC done on critical layers.
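The cutoff-wafer-count argument can be sketched numerically (all figures are hypothetical assumptions, not values from the analysis): a smaller reticle field lowers the fixed mask-set cost but raises per-wafer cost because more fields must be exposed per wafer, so the smaller field is cheaper only below a break-even wafer count.

```python
# Toy sketch of the reticle-field-size tradeoff; figures are illustrative
# assumptions, not values from the analysis.

def net_cost(mask_set_cost, cost_per_wafer, n_wafers):
    """Total cost: fixed mask set plus per-wafer processing."""
    return mask_set_cost + cost_per_wafer * n_wafers

def cutoff_wafers(mask_savings, extra_cost_per_wafer):
    """Wafer count at which the smaller field stops being cheaper."""
    return mask_savings / extra_cost_per_wafer

# Assumed: smaller field saves $400k on masks but adds $200/wafer in exposure.
cutoff = cutoff_wafers(mask_savings=400_000, extra_cost_per_wafer=200.0)
cheaper_below = net_cost(600_000, 3200.0, 1000) < net_cost(1_000_000, 3000.0, 1000)
cheaper_above = net_cost(600_000, 3200.0, 5000) < net_cost(1_000_000, 3000.0, 5000)
```

With these numbers the break-even point is 2000 wafers: below it the limited field wins on net die cost, above it the full field does, matching the cutoff-number behavior described above.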
A number of techniques are used for resolution enhancement in leading-edge lithography. As feature dimensions shrink, these resolution enhancement techniques (RETs) become more aggressive, causing huge increases in data volume, complexity, and write time. The results of these techniques are verified using methods such as SEM measurements of resist or etched structures on the wafer. These RETs tend to either over- or under-compensate, via the suggested corrections or enhancements, with respect to actual device operation. In addition, the systematic and random metrology errors inherent in wafer-level top-down SEM measurements become more significant as feature sizes shrink and tolerances become tighter. These errors further cloud the decision as to which RET is most suitable and necessary. To overcome these problems, we have designed an electrical test vehicle that targets the geometries most prevalent in the cells for a given technology. Electrical test (E-test) structures are then varied around these geometries, covering the design-rule space. Device parameters are measured over this design space for various RETs. This method reconciles the accuracy and effectiveness of RET models using electrical device parameters and uses them to choose the RET that results in the lowest NRE while meeting all electrical requirements.
The past few years have seen an explosion in the application of software techniques to improve lithographic printing. Techniques such as optical proximity correction (OPC) and phase shift masks (PSM) increase resolution and CD control by distorting the mask pattern data from the original designed pattern. These software techniques are becoming increasingly complicated and non-intuitive, and the rate of complexity increase appears to be accelerating. The benefits of these techniques in improving CD control and lowering cost of ownership (COO) are balanced against the effort required to implement them and the additional problems they create.
One severe problem for users of immature and complex software tools and methodologies is quality control, as it ultimately becomes a COO problem. Software quality can be defined very simply as the ability of an application to meet detailed customer requirements. Software quality practice can be defined as the adherence to proven methods for planning, developing, testing, and maintaining software. Although software quality for lithographic resolution enhancement is extremely important, the understanding and recognition of good software development practices among lithographers is generally poor. We therefore start by reviewing the essential terms and concepts of software quality that impact lithography and COO. We then propose methods by which semiconductor process and design engineers can estimate and compare the quality of the software tools and vendors they are evaluating or using. We include examples from advanced process technology resolution enhancement work that highlight the need for high-quality software practices, and show how to avoid many problems. Note that, although several of the authors have worked in software application development, our analysis here is essentially a black-box analysis: the black box is the software development organization of an RET software supplier. Our access to the actual developers within these organizations is very limited, so for our comments on the internal workings of these development organizations we rely on our interactions with the applications engineers and other technical specialists who provide our interface to them.
The management of critical materials in a high-technology manufacturing facility is crucial to obtaining consistently high production yield. This is especially true in an industry like semiconductors, where the success of the product is so dependent on the integrity of the critical production materials. Bar code systems, the traditional management tools, are voluntary, defeatable, and do not continuously monitor materials while in use. The significant costs associated with mismanagement of chemicals can be captured with a customized model, resulting in highly favorable ROIs for the NOWTrak RFID chemical management system. This system transmits reliable chemical data about each individual container and generates information that can be used to increase wafer production efficiency and yield. The future of the RFID system will expand beyond the benefits of chemical management and into dynamic IC process management.
Chip size as a function of field fill on wafer layouts, and its effect on throughput, has been well understood as a loss of both opportunity and cost of operation (COO), as a function of depreciated capital expense. The resultant effects on consumable replacement time, expense, and budgeting have not been as clear-cut. This paper will outline the consequences that field fill has with respect to increased laser and litho-tool optic-train consumable usage, as well as the availability detractors involved in replacing these components. Resulting losses due to increased cost of operation and additional consumable spending and usage will be explored.