S-Genius is a new universal scatterometry platform that gathers the LTM-CNRS know-how on rigorous electromagnetic computation and several inverse-problem solvers. The platform is built to be a user-friendly, lightweight, fast, accurate, user-oriented scatterometry tool, able to fit any ellipsometric measurement and any type of pattern. It combines a set of inverse-problem solvers (an adapted Levenberg-Marquardt optimization, Kriging, and neural-network solutions) that greatly improve the reliability and speed of the solution determination. Furthermore, since the model solution is mainly sensitive to material optical properties, S-Genius may be coupled with an innovative determination of material refractive indices. This paper focuses on the modified Levenberg-Marquardt optimization, one of the indirect solvers developed by the author in parallel with the S-Genius software itself. This modified Levenberg-Marquardt optimization is a Newton-type algorithm whose damping parameter is adapted to the definition domains of the optimized parameters.
Currently, S-Genius is technically ready for scientific collaboration. It is Python-powered, multi-platform (Windows/Linux/macOS), and multi-core; it handles 2D features (infinite along the direction perpendicular to the plane of incidence), conical mountings, and 3D features; it accepts input data from virtually any ellipsometer (angle- or wavelength-resolved) or reflectometer; and it is widely used in our laboratory, for instance for resist trimming studies, etched-feature characterization (such as complex stacks), and nano-imprint lithography measurements. The work on the Kriging solver, the neural-network solver, and the determination of material refractive indices is done, or nearly so, by other LTM members and is about to be integrated into the S-Genius platform.
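The abstract does not spell out the damping strategy in detail; as a rough illustration of the idea, the sketch below implements a minimal Levenberg-Marquardt iteration in which each trial step is projected back into the parameters' definition domains. The function names and the toy exponential model are purely illustrative, not the S-Genius implementation.

```python
import numpy as np

def levenberg_marquardt(residuals, jacobian, p0, bounds, n_iter=50, lam0=1e-3):
    """Minimal Levenberg-Marquardt with box constraints (illustrative sketch;
    the actual S-Genius damping scheme is not published here)."""
    p = np.asarray(p0, dtype=float)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    lam = lam0
    for _ in range(n_iter):
        r = residuals(p)
        J = jacobian(p)
        A = J.T @ J
        g = J.T @ r
        # Damped normal equations: (J^T J + lam * diag(J^T J)) dp = -J^T r
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -g)
        p_new = np.clip(p + step, lo, hi)  # project onto the definition domain
        if np.sum(residuals(p_new) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.5      # accept: relax toward Gauss-Newton
        else:
            lam *= 2.0                     # reject: increase damping
    return p

# Toy usage: fit y = a * exp(-b * x) with a in [0, 5] and b in [0, 2]
x = np.linspace(0, 4, 40)
y = 2.5 * np.exp(-1.3 * x)
res = lambda p: p[0] * np.exp(-p[1] * x) - y
jac = lambda p: np.column_stack([np.exp(-p[1] * x),
                                 -p[0] * x * np.exp(-p[1] * x)])
p_fit = levenberg_marquardt(res, jac, [1.0, 0.5], ([0, 0], [5, 2]))
```

The damping parameter interpolates between a Gauss-Newton step (small `lam`) and a short gradient-descent step (large `lam`), while the clipping keeps every iterate inside the parameter domain.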
A roadmap extending far beyond the current 22nm CMOS node has been presented several times. This roadmap includes the use of a highly regular layout style which can be decomposed into "lines and cuts." The "lines" can be done with existing optical immersion lithography and pitch division with self-aligned spacers. The "cuts" can be done with either multiple exposures using immersion lithography, or a hybrid solution using either EUV or direct-write e-beam [4]. The choice for "cuts" will be driven by the availability of cost-effective, manufacturing-ready equipment.
Optical lithography improvements have enabled scaling far beyond what was expected; for example, soft x-rays (now known as EUV) appeared in the semiconductor roadmap as early as 1994, since optical lithography was not expected to resolve sub-100nm features. However, steady improvements and innovations such as excimer laser sources and immersion photolithography have allowed some manufacturers to build 22nm CMOS SOCs with single-exposure optical lithography.
With the transition from random complex 2D shapes to regular 1D patterns at 28nm, the "lines and cuts" approach can extend CMOS logic to at least the 7nm node. Spacer double patterning for lines combined with optical cut patterning is expected to be used down to the 14nm node. In this study, we extend the scaling to 18nm half-pitch, approximately the 10-11nm node, using spacer pitch division and complementary e-beam lithography.
For practical reasons, e-beam lithography is also used to expose the "mandrel" patterns that support the spacers. However, in production it might be cost-effective to replace this step with a standard 193nm exposure and apply the spacer technique twice to divide the pitch by 3 or 4.
The Metal-1 "cut" pattern is designed for a reasonably complex logic function with ~100k gates of combinatorial logic and flip-flops. Since the final conductor is defined by a Damascene process, the "cut" patterns become islands of resist blocking hard-mask trenches. The shapes are often small and positioned on a dense grid, making this layer the most critical one. This is why direct-write e-beam patterning, possibly using massively parallel beams, is well suited for the task. In this study, we show that a conventional shaped-beam system can already pattern the 11nm-node Metal-1 layer with reasonable overlay margin.
The combination of design style, optical lithography plus pitch-division, and e-beam lithography appears to provide a
scaling path far into the future.
Lithography today faces many challenges in meeting the ITRS roadmap. 193nm lithography is still the only existing industrial option for high-volume production at the 22nm node. Nevertheless, to achieve such resolution, double exposure is mandatory for critical-level patterning. EUV lithography is still challenged by the availability of a high-power source and by mask defectivity, and it suffers from a high cost of ownership; its introduction is still not foreseen.
In parallel to these mask-based technologies, maskless lithography is making regular and significant progress in both potential and maturity. The massively parallel e-beam solution appears to be a genuine candidate for high-volume manufacturing. Several industrial projects are under development: one in the US, the KLA REBL project, and two in Europe, driven by IMS Nanofabrication (Austria) and MAPPER (The Netherlands).
Among the developments needed to secure the take-off of multi-beam technology, the availability of a fast and robust data-treatment solution will be one of the major challenges. Within this data preparation flow, advanced proximity effect corrections must be implemented to address the 16nm node and below. This paper details this correction process and compares correction strategies in terms of robustness and accuracy. It is based on results obtained using a MAPPER tool within the IMAGINE program driven by CEA-LETI in Grenoble, France. All proximity effect corrections and the dithering step were performed using the Inscale® software platform from Aselta Nanographics. One important advantage of Inscale® is its ability to combine model-based dose and geometry adjustment to accurately pattern critical features. The paper focuses on the advantage of combining these two corrections at the 16nm node instead of using geometry corrections alone. Thanks to the simulation capability of Inscale®, pattern fidelity and correction robustness are evaluated and compared across correction strategies. This work is carried out on the most critical layers of 16nm integrated-circuit layouts, namely contact and metal 1. Finally, the aim of this paper is to demonstrate that a complete data preparation flow, including advanced proximity effect corrections, simulation, and verification capabilities, is available for maskless lithography at the 16nm node and below through the direct-write version of Inscale®. This data preparation platform is already in use in several laboratories for direct-write processes.
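Inscale®'s models are proprietary, but model-based dose correction in e-beam lithography can be illustrated generically: the deposited energy is modeled as the dose map convolved with a double-Gaussian point spread function (forward-scattering range alpha, backscattering range beta, backscatter ratio eta), and the dose is adjusted iteratively until the simulated exposure matches the target inside the pattern. Everything below (the PSF parameters, the fixed-point update, the toy 16nm line) is an illustrative sketch, not the Aselta algorithm.

```python
import numpy as np

def double_gaussian_psf(size, alpha, beta, eta, pixel):
    """Classic double-Gaussian proximity PSF, normalized to unit sum."""
    ax = (np.arange(size) - size // 2) * pixel
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    psf = (np.exp(-r2 / alpha**2) / (np.pi * alpha**2)
           + eta * np.exp(-r2 / beta**2) / (np.pi * beta**2))
    return psf / psf.sum()

def exposed(dose, psf):
    # Circular FFT convolution stands in for the real open-boundary one
    return np.real(np.fft.ifft2(np.fft.fft2(dose)
                                * np.fft.fft2(np.fft.ifftshift(psf))))

def correct_dose(target, psf, n_iter=30):
    """Fixed-point dose correction: raise the dose wherever the simulated
    exposure falls short of the target inside the pattern."""
    dose = target.astype(float).copy()
    for _ in range(n_iter):
        e = exposed(dose, psf)
        inside = target > 0
        dose[inside] *= target[inside] / np.maximum(e[inside], 1e-9)
    return dose

# Toy layout: an isolated 16nm line in a 256x256 field with 2nm pixels
layout = np.zeros((256, 256))
layout[:, 124:132] = 1.0
psf = double_gaussian_psf(256, alpha=4.0, beta=60.0, eta=0.6, pixel=2.0)
dose = correct_dose(layout, psf)
```

The corrected dose ends up highest at the line edges, where the exposure loses the most energy to scattering; in a full flow this dose modulation is combined with geometry (edge-bias) adjustment, as the abstract describes.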
Proc. SPIE. 7970, Alternative Lithographic Technologies III
KEYWORDS: Electron beam lithography, Point spread functions, Modulation, Calibration, Printing, Monte Carlo methods, Optical proximity correction, Line edge roughness, Semiconducting wafers, Projection lithography
It is now obvious that the path to denser ICs has become hazardous, since 193nm scanners have been operating beyond their resolution limit. While the tools that could provide photolithographers with some relief are not in production yet, good progress has fortunately been made in developing alternative lithography techniques. Among them, massively parallel maskless lithography stands out as a serious candidate, since it can achieve the required resolution at the right cost of ownership, provided the targeted throughput performance is reached. This paper focuses on the latter technique and, more precisely, reports on part of the development work performed at CEA-LETI using the MAPPER technology within the open multi-partner program IMAGINE.
Data preparation is certainly not the easiest part of the technology. Layouts are essentially turned into huge bitmap streams containing the information to be sent to the thousands of parallel beams working together to print the patterns correctly. Addressing the specific low-energy case, we studied several ways of performing this step, involving geometrical correction with and without dose modulation. The results were analysed in terms of the achieved design-to-wafer fidelity and the robustness of the patterns with respect to process variations and shot noise.
The intention of the paper is therefore to give a status of where e-beam proximity correction (EBPC) performance stands today using the current MAPPER alpha tool. It also provides some insights into how corrections will be performed on the HVM tool.
In this paper, an ill-posed inverse ellipsometric problem for thin-film characterization is studied. The aim is to determine the thickness, refractive index, and extinction coefficient of homogeneous films deposited on a substrate without assuming any a priori knowledge of the dispersion law. Different methods are implemented for the benchmark. The first method treats the spectroscopic ellipsometer as a collection of single-wavelength ellipsometers coupled only via the film thickness. The second improves on the first and uses Tikhonov regularization to smooth the parameter curve; cross-validation is used to determine the best regularization coefficient. The third method consists of a library search, whose aim is to choose the best combination of parameters from a pre-computed library. To improve accuracy, we also used multi-angle and multi-thickness measurements combined with the Tikhonov regularization method; this complementary approach is also part of the benchmark. The same polymer resist material is used as the thin film under test, with two different thicknesses and three angles of measurement. The paper discloses the results obtained with these different methods and provides elements for choosing the most efficient strategy.
In-line process control in microelectronics manufacturing requires real-time and non-invasive monitoring techniques. Among the different metrology techniques, scatterometry, based on the analysis of ellipsometric signatures (i.e., Stokes coefficients vs. wavelength) of the light scattered by a patterned structure, is well adapted. Traditionally, the problem of defining the shape and computing the signature is addressed with modal methods and is called the direct problem. Conversely, the inverse problem consists of recovering the grating shape from an experimental signature, and it cannot be solved as easily. Different classes of algorithms have been introduced (evolutionary, simplex, etc.) to address this problem, but library searching seems to be the most attractive technique for industry. This technique has many advantages, which are presented in this article. However, its main limitation in a real-time context comes from the short data acquisition time across wavelengths: the resulting lack of data can cause the method to fail, with several database patterns matching the experimental data.
In this article, a technique for real-time reconstruction of grating-shape variations using dynamic scatterometry is presented. The different tools used for this reconstruction, such as the Modal Method by Fourier Expansion, a regularization technique, and specific software and hardware architectures, are then introduced. Results from dynamic experiments illustrate the approach.
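A library search of the kind described above can be sketched in a few lines: precompute signatures on a parameter grid with the direct model, then return the grid point whose signature is closest (in least squares) to the measurement. The closed-form `signature` function below is a hypothetical stand-in for a rigorous modal computation, chosen only so the example runs.

```python
import numpy as np

wavelengths = np.linspace(400e-9, 800e-9, 50)

def signature(cd, height):
    # Hypothetical stand-in for the direct problem (a modal method in practice)
    return np.cos(2 * np.pi * height / wavelengths) * np.exp(-cd / 1e-7)

# Build the library over a (CD, height) grid
cds = np.linspace(20e-9, 80e-9, 31)
heights = np.linspace(50e-9, 150e-9, 41)
library = np.array([[signature(c, h) for h in heights] for c in cds])

def library_search(measured):
    """Return the grid parameters whose signature best matches the data."""
    err = np.sum((library - measured) ** 2, axis=-1)
    i, j = np.unravel_index(np.argmin(err), err.shape)
    return cds[i], heights[j]

# Simulated noisy measurement of a 48nm-wide, 104nm-tall grating
rng = np.random.default_rng(1)
meas = signature(48e-9, 104e-9) + 0.01 * rng.standard_normal(50)
cd_est, h_est = library_search(meas)
```

The failure mode discussed in the abstract is visible in this picture: with too few wavelengths, the error surface `err` develops several near-equal minima, and the returned grid point becomes ambiguous.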