Shot noise: a 100-year history, with applications to lithography
Abstract
The term “shot effect” (Schroteffekt) was coined in 1918 when Walter Schottky studied electrical noise in vacuum tubes. Earlier still, the foundations of shot noise theory go back to Einstein, who in 1905 explained the photoelectric effect as caused by discrete “particles” of light and Brownian motion as caused by discrete particles of matter. When the numbers of particles that affect observable outcomes are large, shot noise effects (variability in number as a fraction of the mean number) become small, and the continuum approximation (energy and matter are continuous) becomes accurate. For most of the history of semiconductor lithography, the continuum approximation has served well. But at small dimensional scales, where the number of discrete particles or events is small, the counting statistics of shot noise can dominate. The 100-year history of shot noise in science and engineering is today playing a role in our understanding of shot noise in lithography.

“And for this reason it is the more right for you to give heed to these bodies, which you see jostling in the sun’s rays, because such jostlings hint that there are movements of the matter too beneath them, secret and unseen. For you will see many particles there stirred by unseen blows change their course and turn back, driven backwards on their path, now this way, now that, in every direction everywhere.”

      Lucretius, On the Nature of Things, first-century BCE1

1. Introduction

One hundred years ago, a new form of noise was discovered when Walter Schottky2 studied the output of vacuum tubes run at low currents. He called this noise the “shot effect” (Schroteffekt).3 Earlier still, the foundations of shot noise theory go back to Einstein, who in 1905 gave convincing arguments that both light and matter are fundamentally discrete in nature.4,5 Einstein’s photon theory of light was inspired by Max Planck’s quantization of energy, and his argument for the atomic theory of matter provided an explanation for Brownian motion. Today, we recognize that shot noise plays a role in any system where the number of relevant discrete events is small enough that the counting statistics of those events must be considered.

In lithography, events such as photon absorption within a photoresist, chemical conversion of a light-sensitive component in that resist, and the chemical changes that make a molecule soluble in developer are all discrete events that experience shot noise. Consider a volume of photoresist and ask, “How many events occurred within that volume to change the solubility of that portion of resist?” If the volume of resist is sufficiently large that the number of events is large, the variability of that number relative to its mean (the shot noise) will be small, or even negligible. In this regime, it is safe to make the continuum approximation, where the discrete nature of light and matter is ignored. But if the volume of resist of interest is small enough, shot noise will dominate its behavior. As lithographic dimensions have shrunk from 25 μm in the early days of semiconductor manufacturing to 25 nm and below today, we have transitioned from a regime where shot noise can be completely ignored to one where it dominates errors in patterning.

This paper will review the history of shot noise, beginning almost 200 years ago with Brownian motion. We will then look at how the ideas of shot noise have been applied in lithography for semiconductor manufacturing over the last 40 years.6

2. Brownian Motion

Brownian motion is named for Robert Brown (1773 to 1858), a well-known botanist of his day. In 1827, while studying pollen suspended in water under a microscope, he observed a random, quick motion of the pollen particles that at first he attributed to the “vital force,” an ill-defined concept of life that was in vogue at the time. Brown was not the first to notice such movements, but he was the first to study them extensively. He quickly realized that inorganic particles (and in fact, any properly sized particles) would also exhibit the random motion, going so far as to test ground particles from the Egyptian Sphinx (courtesy of the British Museum, where he worked). He systematically ruled out possible causes for the motion, such as living creatures, gradients and convection currents, evaporation, and vibrations. In the end, he left its cause a mystery. In 1827, he self-published a pamphlet describing his observations (Fig. 1), the contents of which were republished in a scientific journal the next year.7

Fig. 1

The front page of Robert Brown’s self-published pamphlet describing his observations and experiments on what came to be called Brownian motion.


The mystery of the cause of Brownian motion simmered for over 75 years. Hand-waving explanations, including chemical or electrical attractions, were quickly proposed, but each was dispelled in turn through careful experiments. What was left was the theory of molecular motion: energetic water molecules constantly colliding with the suspended particle randomly push it in various directions. However, the atomic/molecular theory of matter was still controversial at the time, and many were unconvinced by this proposed cause of Brownian motion.

In 1859, James Clerk Maxwell described the distribution of velocities of idealized gas particles statistically, the first statistical law of physics. Ludwig Boltzmann expanded on this result in 1871 into what we now call the Maxwell–Boltzmann distribution of particle velocities. The success of this theory in predicting and explaining the properties of gases prompted many to apply the same concepts to explain Brownian motion. If the water molecules could be described by the same distribution of velocities, perhaps the motion of the suspended particle could be explained and predicted as well. However, attempts to both predict and measure the velocity of the Brownian particles proved a failure. By the close of the 19th century, many saw Brownian motion as a possible proof for the existence of atoms, but that proof remained lacking.

It is hard to imagine today that in 1900 the atomic/molecular theory of matter was still very much in dispute. Fueled by the positivist school of philosophy, which eschewed explanations invoking entities that could not be directly observed with the senses, many scientists considered atoms a convenient fiction, but not real. The statistics of ideal gases seemed ideally suited to explain Brownian motion, but the connection remained elusive. In 1905, Albert Einstein finished his doctoral dissertation, “A new determination of molecular dimensions” (Fig. 2), later published in Annalen der Physik.8 In the dissertation and a second publication that year,5 Einstein applied Maxwell–Boltzmann statistics to the problem of Brownian motion. But rather than mimic the Maxwell–Boltzmann theory and predict the velocity of the Brownian particle (an exercise that would later be shown to be impossible), Einstein focused on the mean position of the particle over time. In doing so, he provided the first theoretical derivation of the diffusion equation and showed that a Brownian particle’s position should follow Gaussian statistics.

Fig. 2

The front page of Albert Einstein’s 1905 PhD dissertation.


Fig. 3

Illustrations from Jean Perrin’s work on measuring Brownian motion.9


Fig. 4

Illustrations from Schottky’s 1918 paper where he examined the role of shot noise in a simple cathode tube and one connected to an RLC circuit.2


Einstein’s theoretical achievement was quickly appreciated as providing the proof needed for the molecular theory of matter, but only if the predictions of his theory could be experimentally verified. The experimental verification came a few years later through the careful and detailed work of the French scientist Jean Baptiste Perrin.9 Perrin devised several tests of the molecular theory of Brownian motion and began by creating particles of uniform diameter and density, which he measured. Since the particles were denser than water, a suspension of these particles in water could test the molecular theory of Brownian motion by looking at how they settled. In the absence of molecular collisions, the particles should all settle to the bottom under the force of gravity. But molecular collisions would force some particles to stay suspended, creating an exponential concentration gradient in the vertical direction in the same way as barometric pressure varies with altitude. By measuring the concentration gradient (counting particles using a Zeiss microscope in water immersion mode, see Fig. 3), he was able to verify the predicted exponential distribution of particle counts. Further, by separately measuring quantities such as particle size and density, Perrin was able to calculate the one unknown quantity in his theoretical equation: Avogadro’s number. His result of 7.05×10²³ was off by only 17% from the value we know today and was the most accurate measurement of Avogadro’s number to date. Perrin also directly verified Einstein’s predictions about Brownian motion by measuring the particle position over time (Fig. 3).

Perrin’s work earned him the Nobel Prize in Physics in 1926 and firmly established the atomic theory of matter: matter is not a continuum but instead is made up of discrete particles. But the study of Brownian motion was not complete. In 1908, Paul Langevin10,11 developed a stochastic differential equation for Brownian motion, adding an important mathematical tool to its study. In 1930, Uhlenbeck and Ornstein12 added friction to the motion of the particle, creating a more accurate model for Brownian motion (a model which remains useful today in describing photoresist stochastic behavior). The most complete model for Brownian motion was developed by the mathematician Norbert Wiener. Known as the Wiener process, it is a continuous-time stochastic process in which the distance a particle travels in a time interval Δt is Gaussian with mean 0 and variance proportional to Δt, with independent increments over nonoverlapping time intervals. The use of this model required the development of stochastic random variables and their mathematical treatment. For example, the position of a particle in a Wiener process is a stochastic random variable that has no defined derivative (that is, undefined velocity). Wiener developed time-series analysis techniques for such variables13,14 and a stochastic version of the Fourier transform.15,16 He also showed that the power spectral density of a stochastic random variable is the Fourier transform of its autocorrelation function. The study of Brownian motion took another interesting turn when Benoit Mandelbrot applied the new concept of fractals to its description.17 Even today, Brownian motion remains an active area of research, with many interesting applications.
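
As a concrete illustration of these statistical properties (not part of the original analysis), the short Python sketch below simulates a one-dimensional Wiener process and checks that the mean-square displacement grows linearly in time, consistent with Einstein’s ⟨x²⟩ = 2Dt result for one-dimensional diffusion. The step count, time step, and diffusivity are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def wiener_path(n_steps=1000, dt=1e-3, diffusivity=1.0):
    """One 1-D Wiener path: independent Gaussian increments with variance 2*D*dt each."""
    increments = rng.normal(0.0, np.sqrt(2.0 * diffusivity * dt), size=n_steps)
    return np.concatenate(([0.0], np.cumsum(increments)))

# End-of-path displacements for many independent walkers (total time t = 1000 * 1e-3 = 1)
final_positions = np.array([wiener_path()[-1] for _ in range(10_000)])

# The mean-square displacement should be close to 2*D*t = 2.0
print("simulated <x^2> =", final_positions.var(), " expected =", 2.0)
```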

3. Discrete Light and Discrete Current: The Photon and the Electron

The year 1905 was special not just for Einstein’s theoretical proof of the existence of atoms, but also for the concept of the photon: light is not a continuum of energy but instead is made up of photons, small units of energy that interact with matter discretely. Einstein examined the photoelectric effect, where light shining on a metal plate in a vacuum can release electrons. A surprising experimental result by Philipp Lenard in 1902 was that while the number of electrons emitted depended on the intensity of the light, the energy of the emitted electrons did not. And if the frequency of the light was below a certain threshold, no electrons would be emitted regardless of the light’s intensity. This behavior was completely anomalous under the paradigm of continuous light energy. Einstein was able to make sense of it by applying Max Planck’s concept of the energy quantum: light is made up of discrete “particles,” each with an energy fixed by the frequency of the light. The intensity of the light is then determined by the number of photons.

The twin ideas of discrete matter and discrete energy events produced a remarkable paradigm shift from the continuum ideas prevalent in the 19th century. Even the atom was shown to be made of more elementary particles. In 1897, Joseph J. Thomson showed that cathode rays were made of negatively charged particles (later named electrons) and measured their charge-to-mass ratio. In a series of experiments from 1908 to 1911, Ernest Rutherford, Hans Geiger, and Ernest Marsden showed that atoms are composed of a central positively charged nucleus surrounded by electrons.

The theoretical and experimental ideas of matter and energy as made up of discrete particles inspired mathematicians to delve into appropriate mathematical tools for their description, especially random variables with a Poisson distribution. For example, in 1909, Campbell18 derived the moments (mean and variance) of a sum of Poisson processes, motivated by the latest developments in physics: “The trend of modern theory is everywhere to replace by discontinuity the continuity which was the basis of science in the last century.”

4. Walter Schottky and the “Shot Effect”

It took some time for the full implications of energy and matter (and their interaction) being composed of discrete, stochastic particles or events to become apparent. In 1918, Walter Schottky2 was studying the behavior of vacuum tubes, such as a vacuum diode, under low-current conditions (Fig. 4). The measured current was noisy, a not unexpected result. But unlike other types of noise, this noise had a frequency spectrum that was constant over all frequencies and was not temperature dependent. Schottky called this noise the “Schroteffekt,” the shot effect.2,3 In coining this phrase, Schottky said, “The expression ‘shot’ points, as it does in common language use, to the occurrence of a large number of homogeneous elementary particles.”3

At the time, other forms of electrical noise had well-known frequency and temperature dependences that differed greatly from Schottky’s new shot effect. Johnson noise, thermal noise of current flowing through a conductor or resistor, varied greatly with temperature. Flicker noise had a 1/f^α frequency dependence. But the shot effect was white noise, with a flat frequency response over most of the frequency range. As Schottky said, “Because of the atomic structure of electricity the electrical transition is represented not as a continuously flowing current but as a hail of charge quanta which would cause current fluctuations even for a very regular temporal distribution.”3

According to Partridge,19 noted lexicographer of slang, the word “noise” derives from nausea, from “the noise made by an ancient shipful of passengers groaning and vomiting in bad weather.” The Oxford English Dictionary defines noise as “random fluctuations that obscure or do not contain meaningful data or other information.” Over time, the phrase “shot effect” became “shot noise.” It is used to describe any random uncertainty in a physical quantity caused by the counting statistics of the discrete events that underlie the phenomenon. The most common statistical distribution appropriate for shot noise is the Poisson distribution, characterized by a variance of the distribution equal to its mean.

As an example of Poisson counting statistics, consider the concept of concentration, the average number of molecules per unit volume.20 Let C be the average number of molecules per unit volume, and dV a volume small enough so that at most one molecule may be found in it (thus requiring that the concentration be fairly dilute, so that the position of one molecule is independent of the position of other molecules). The probability of finding a molecule in that volume is just CdV. For some larger volume V, the probability of finding exactly n molecules in that volume, P(n), will be given by a binomial distribution. But for any reasonably large volume (CV>1), this binomial distribution will also be well approximated by a Poisson distribution

Eq. (1)

$P(n) = \frac{(CV)^n}{n!}\,e^{-CV}$.
The average number of molecules in the volume will be CV, and the variance will also be CV. The relative uncertainty in the number of molecules in a certain volume will be

Eq. (2)

$\frac{\sigma_n}{n} = \frac{1}{\sqrt{n}} = \frac{1}{\sqrt{CV}}$.

Thus, if the mean number of particles in the process is large, the relative uncertainty in that number is small and the impact of shot noise is also small. But as the mean number of particles becomes small (for example, by examining a very small volume of material so that the expected number of particles in that volume is small), the relative uncertainty in that number becomes great.

For a given concentration, the relative importance of shot noise is determined by the size of the volume of interest. As will be described next, the scaling of lithography to small dimensions has meant that the volume of interest shrinks and the importance of shot noise in lithography processes continues to grow.
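
A few lines of Python (with an assumed, purely illustrative concentration) make this consequence of Eq. (2) concrete: at a fixed concentration, shrinking the volume of interest from a 100-nm cube to a 2-nm cube takes the relative uncertainty from a fraction of a percent to tens of percent.

```python
import math

C = 0.5  # assumed concentration: molecules per nm^3 (illustrative only)

for edge_nm in (100.0, 10.0, 2.0):
    V = edge_nm ** 3            # volume of interest, nm^3
    mean_n = C * V              # mean molecule count; also the Poisson variance
    rel_uncertainty = 1.0 / math.sqrt(mean_n)   # Eq. (2): sigma_n / n = 1/sqrt(CV)
    print(f"{edge_nm:5.0f}-nm cube: mean n = {mean_n:10.0f}, sigma/n = {rel_uncertainty:.4f}")
```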

5. Shot Noise in Lithography

Today, shot noise is widely recognized as a serious problem when printing near the resolution limit of extreme ultraviolet (EUV) lithography.21,22 But the concept of shot noise as a limiter to lithographic performance is not new to EUV. In the 1970s, x-ray lithography was considered a potential successor to optical lithography. As with EUV today, x-ray light sources were not as bright as desired, so researchers wondered how low the exposure dose could be made before print quality became limited by shot noise. By 1976, Spiller, Feder, and coworkers at IBM were exploring the trade-off between resolution and resist sensitivity (what we now think of as a part of the resolution–linewidth roughness–sensitivity trade-off).23,24 Describing photoresist as a detector,

“Theoretically the final limit for the sensitivity of any detector is determined by the shot noise of the absorbed photons… resists with lower resolution can have higher sensitivities than resists with high resolution.”25

They went on to show that the shot-noise-limited resist sensitivity must scale as 1/R³, where R is the resolution.26 They also showed that the minimum incident dose E_inc required to print a feature of dimension δ using photons of energy hν in a resist with absorption coefficient α will be

Eq. (3)

$E_{inc} = \frac{\bar{n}\,h\nu}{\alpha\,\delta^3}$,
where n̄ is the mean number of absorbed photons in that volume required to avoid detrimental shot noise effects. The implications of this scaling are unpleasant if one is limited by available light: if the feature size is reduced by a factor of 0.7, the dose required to print it must rise by a factor of 3. A similar scaling law for electron beam lithography was also derived in 1976.27
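
The sketch below evaluates Eq. (3) for a few feature sizes using EUV-like but purely illustrative parameter values (92-eV photons, an assumed absorption coefficient, and an assumed required photon count n̄); it is meant only to show the 1/δ³ scaling, not to reproduce any published dose numbers.

```python
EV_TO_MJ = 1.602e-19 * 1e3   # 1 eV expressed in millijoules

def incident_dose_mj_per_cm2(delta_nm, n_bar=100.0, photon_ev=92.0, alpha_per_um=5.0):
    """Eq. (3): E_inc = n_bar * h*nu / (alpha * delta^3), converted to mJ/cm^2.

    All parameter values are illustrative assumptions, not values from the paper.
    """
    h_nu_mj = photon_ev * EV_TO_MJ          # photon energy, mJ
    alpha_per_nm = alpha_per_um * 1e-3      # absorption coefficient, 1/nm
    dose_mj_per_nm2 = n_bar * h_nu_mj / (alpha_per_nm * delta_nm ** 3)
    return dose_mj_per_nm2 * 1e14           # 1 cm^2 = 1e14 nm^2

for delta in (32.0, 22.0, 16.0):
    print(f"delta = {delta:4.1f} nm -> E_inc ~ {incident_dose_mj_per_cm2(delta):5.1f} mJ/cm^2")

# Shrinking the feature by 0.7x multiplies the required dose by 1/0.7^3, or about 2.9x.
```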

Further insights into the stochastic limits of x-ray lithography were provided by a number of researchers who tried to turn the Spiller and Feder scaling relationship into hard numbers as to what dose would be required for which feature size. Smith28 derived a simple relationship between the lithographic feature edge error (Δx), the image intensity slope at the edge, the photon shot noise σ_photon given by Poisson statistics, and what he called the “development uncertainty band” δ_N related to photoresist contrast

Eq. (4)

$\Delta x = \frac{\delta_N + \sigma_{photon}}{\text{image slope}}$.
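
As a rough numerical illustration of Eq. (4), with assumed values and assuming both uncertainty bands are expressed in the same relative-exposure units whose per-nanometer change defines the image slope:

```python
delta_N = 0.02       # assumed development uncertainty band (relative exposure units)
sigma_photon = 0.03  # assumed photon shot-noise band (relative exposure units)
image_slope = 0.02   # assumed image slope at the feature edge (relative exposure per nm)

edge_error_nm = (delta_N + sigma_photon) / image_slope   # Eq. (4)
print(f"edge error = {edge_error_nm:.1f} nm")             # (0.02 + 0.03) / 0.02 = 2.5
```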

Neureuther and Willson,29 in a remarkable 1988 paper, explored the impact of the photoresist in far more detail. They recognized that stochastics must be considered in two ways:

First the occurrence of a large random defect at any one of some 10¹¹ sites on chip could produce a catastrophic failure and second, intra- and interfeature variations in linewidths at any of several hundred sites could cause unacceptable performance degradation or reliability failure.

Neureuther and Willson described two feature-size scaling regimes. In addition to the 1/R³ dose scaling of Spiller and Feder, they described a “fixed resist aspect ratio” scaling where absorption is made to increase as 1/R so that the required dose scales as 1/R². They derived a model for the dose required to avoid various types of stochastic defects (insoluble bits of resist at different locations on a device), though their dose limit was “based on the occurrence of a defect so infrequent that constructing an experiment to observe this effect would be extremely difficult.”30

The need to consider shot noise effects in EUV lithography was appreciated as early as 1994.31,32 Most of this early work sought to understand the effects of shot noise and other stochastic factors on the roughness of photoresist edges. Scheckler et al.31 extended the stochastic defect model of Neureuther and Willson to chemically amplified resists and predicted, for a given resist, that a 54-mJ/cm² dose would be required to print 70-nm features. In 2001, O’Brien and Mason33 explored a different approach to predicting stochastic defects in contacts. By counting the total number of photons absorbed inside a contact hole, they considered shot noise as simply an exposure dose error. For a contact with a given exposure latitude, the probability that shot noise will produce a contact so undersized that it will not print can be determined from the Poisson distribution of photon counts. When a device has billions (or hundreds of billions) of contacts, even low-probability contact failures can produce a high probability of device failure. In their model, the required dose is inversely proportional to the square of the exposure latitude and proportional to the square of the required “number of sigmas” of the probability distribution. For example, a five-sigma failure rate is about 1 failure per 3.5 million contacts, a six-sigma failure rate is about 1 failure per billion contacts, and a seven-sigma failure rate is about 1 failure per trillion contacts. Thus, the required dose not only scales with feature size, but also with the number of features found on a device.
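
The sigma-to-failure-rate numbers quoted above, and their consequence for a device with very many contacts, can be checked with a few lines of Python using the one-sided Gaussian tail probability (the 10¹¹-contact device size is an illustrative assumption):

```python
import math

def tail_prob(n_sigma):
    """One-sided Gaussian tail probability P(Z > n_sigma)."""
    return 0.5 * math.erfc(n_sigma / math.sqrt(2.0))

n_contacts = 1e11   # assumed number of contacts on the device

for n_sigma in (5, 6, 7):
    p_fail = tail_prob(n_sigma)              # probability that one contact fails
    expected_failures = n_contacts * p_fail  # mean number of failing contacts
    p_device = 1.0 - math.exp(-expected_failures)  # chance of at least one failure
    print(f"{n_sigma} sigma: 1 failure per {1 / p_fail:.2g} contacts, "
          f"expected failures = {expected_failures:.3g}, P(>=1 failure) = {p_device:.3g}")
```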

More recently, the reduction in exposure latitude for undersized contacts has been added to O’Brien and Mason’s model, predicting a much higher probability of contact hole failure.21 These models, while calibrated with experimental data, must still be extrapolated very far along the tail of a probability distribution.22 Even if the critical dimension distribution of a million contacts were measured, it would still be very hard to accurately predict the expected number of failures for 100 billion contacts.

One certain consequence of the statistics of stochastic effects will be the need to use higher-than-desired exposure doses in manufacturing. Early targets for EUV dose to size were 5 mJ/cm². The throughput specifications for ASML’s NXE:3100 assumed a 10-mJ/cm² dose and those for the NXE:3300 assumed a 15-mJ/cm² dose.34 Today, a 20-mJ/cm² dose is widely assumed when making throughput calculations, though a workable resist with such a dose to size is more aspirational than realistic. The realities of high-yield manufacturing with EUV will require much higher doses, and these doses must increase every time the feature size is reduced.

The exact stochastic limits of EUV lithography remain uncertain. From a theoretical perspective, the volume V that should be used in a Poisson calculation of shot noise has not been exactly pinned to a measurable quantity, though a recent proposal to relate this volume to the total resist blur, as manifested in a roughness correlation length, has some potential.35 From a stochastic defect perspective, further work is required to either measure or predict killer stochastic defects in one out of 10¹¹ features. Still, progress in these areas could provide answers in the very near future.

6. Conclusions

One hundred years ago a new term, and a new concept, entered the lexicon of science: shot noise. This idea was a natural consequence of the changing view of the microscopic world from the continuous to the discrete. For large-scale phenomena, where the number of particles or events in a volume of interest is large, the relative variation about the mean is small, so a continuum view of the world is quite accurate. But as the scale of interest shrinks, the importance of shot noise grows quickly.

Shot noise has been an understood part of lithographic phenomena for over 40 years. But models that can predict the lithographic impact of stochastic phenomena with sufficient accuracy are still lacking. Each new generation of lithographic process, with its smaller dimensions, suffers from increased stochastic variations. So for each new generation of lithographic process, there is a fear that stochastic variation will lead to unacceptable yield loss and a hard limit to progress, and a hope that this limit will occur at least one more generation in the future.

References

1. Titus Lucretius Carus, On the Nature of Things, Oxford Press, London (1910).

2. W. Schottky, “Über spontane Stromschwankungen in verschiedenen Elektrizitätsleitern,” Ann. Phys. 362, 541–567 (1918). https://doi.org/10.1002/andp.19183622304

3. W. Schottky, “On spontaneous current fluctuations in various electrical conductors,” J. Micro/Nanolith. MEMS MOEMS 17(4), 041001 (2018). https://doi.org/10.1117/1.JMM.17.4.041001

4. A. Einstein, “Concerning an heuristic point of view toward the emission and transformation of light,” Ann. Phys. 17, 132–148 (1905). https://doi.org/10.1002/(ISSN)1521-3889

5. A. Einstein, “Über die von der molekularkinetischen Theorie der Wärme geforderte Bewegung von in ruhenden Flüssigkeiten suspendierten Teilchen (On the movement of small particles suspended in stationary liquids required by the molecular-kinetic theory of heat),” Ann. Phys. (Ser. 4) 17, 549–560 (1905).

6. C. A. Mack, “Shot noise: a 100 year history, with applications to lithography,” Proc. SPIE 10583, 1058315 (2018). https://doi.org/10.1117/12.2305949

7. R. Brown, “A brief account of microscopical observations on the particles contained in the pollen of plants and the general existence of active molecules in organic and inorganic bodies,” Edinburgh New Philos. J. 5, 358–371 (1828).

8. A. Einstein, “Eine neue Bestimmung der Moleküldimensionen (A new determination of molecular dimensions),” Ann. Phys. (Ser. 4) 19, 289–306 (1906).

9. J. Perrin, “Brownian motion and molecular reality,” Annales de Chimie et de Physique (1909).

10. P. Langevin, “Sur la théorie du mouvement brownien,” Comptes-rendus de l’Académie des sciences 146, 530–533 (1908).

11. D. S. Lemons and A. Gythiel, “Paul Langevin’s 1908 paper ‘On the theory of Brownian Motion’ [Sur la theorie du mouvement brownien, C. R. Acad. Sci. (Paris) 146, 530–533 (1908)],” Am. J. Phys. 65(11), 1079–1081 (1997). https://doi.org/10.1119/1.18725

12. G. E. Uhlenbeck and L. S. Ornstein, “On the theory of the Brownian motion,” Phys. Rev. 36(5), 823–841 (1930). https://doi.org/10.1103/PhysRev.36.823

13. N. Wiener, Time Series, MIT Press, Cambridge, Massachusetts (1949).

14. N. Wiener, Nonlinear Problems in Random Theory, John Wiley & Sons, New York (1958).

15. N. Wiener, “Generalized harmonic analysis,” Acta Math. 55, 117–258 (1930). https://doi.org/10.1007/BF02546511

16. N. Wiener, The Fourier Integral and Certain of its Applications, Cambridge University Press, Cambridge (1933).

17. B. Mandelbrot, The Fractal Geometry of Nature, W. H. Freeman and Company, New York (1982).

18. N. Campbell, “The study of discontinuous phenomena,” Proc. Cambridge Philos. Soc. 15, 117–136 (1909).

19. E. Partridge, Origins: A Short Etymological Dictionary of Modern English, p. 429, Macmillan Company, New York (1958).

20. C. Mack, Fundamental Principles of Optical Lithography, p. 239, John Wiley & Sons, London (2007).

21. R. L. Bristol and M. E. Krysak, “Lithographic stochastics: beyond 3σ,” J. Micro/Nanolith. MEMS MOEMS 16(2), 023505 (2017). https://doi.org/10.1117/1.JMM.16.2.023505

22. P. De Bisschop, “Stochastic effects in EUV lithography: random, local CD variability, and printing failures,” J. Micro/Nanolith. MEMS MOEMS 16(4), 041013 (2017). https://doi.org/10.1117/1.JMM.16.4.041013

23. G. M. Gallatin, “Resist blur and line edge roughness,” Proc. SPIE 5754, 38–52 (2005). https://doi.org/10.1117/12.607233

24. G. M. Gallatin, P. Naulleau and R. Brainard, “Fundamental limits to EUV photoresist,” Proc. SPIE 6519, 651911 (2007). https://doi.org/10.1117/12.712346

25. E. Spiller, R. Feder and J. Topalian, “X-Ray lithography and X-ray microscopy,” Physikalische Blätter 32(12), 564–571 (1976). https://doi.org/10.1002/phbl.v32.12

26. E. Spiller and R. Feder, X-ray Optics, Vol. 22, p. 52, Springer, Berlin (1977).

27. I. E. Sutherland, C. A. Mead and T. E. Everhart, “Basic limitations in microcircuit fabrication technology” (1976). https://www.rand.org/pubs/reports/R1956.html

28. H. I. Smith, “A statistical analysis of ultraviolet, x-ray and charged-particle lithographies,” J. Vac. Sci. Technol. B 4(1), 148–153 (1986). https://doi.org/10.1116/1.583367

29. A. R. Neureuther and C. Willson, “Reduction in x-ray lithography shot noise exposure limit by dissolution phenomena,” J. Vac. Sci. Technol. B 6(1), 167–173 (1988). https://doi.org/10.1116/1.584037

30. D. Seligson, H. Ito and C. Willson, “The impact of high-sensitivity resist materials on x-ray lithography,” J. Vac. Sci. Technol. B 6(6), 2268–2273 (1988). https://doi.org/10.1116/1.584068

31. E. W. Scheckler et al., “Resist pattern fluctuation limits in extreme-ultraviolet lithography,” J. Vac. Sci. Technol. B 12(4), 2361–2371 (1994). https://doi.org/10.1116/1.587765

32. J. M. Hutchinson, “The shot noise impact on resist roughness in EUV lithography,” Proc. SPIE 3331, 531–535 (1998). https://doi.org/10.1117/12.309612

33. S. C. O’Brien and M. E. Mason, “Exposure latitude requirements for high yield with photon flux-limited laser sources,” Proc. SPIE 4346, 534–543 (2001). https://doi.org/10.1117/12.435780

34. C. Wagner et al., “EUV into production with ASML’s NXE platform,” Proc. SPIE 7636, 76361H (2010). https://doi.org/10.1117/12.845700

35. C. A. Mack, “Reducing roughness in extreme ultraviolet lithography,” Proc. SPIE 10450, 104500P (2017). https://doi.org/10.1117/12.2281605

Biography

Chris A. Mack developed the lithography simulator PROLITH and founded and ran FINLE Technologies for ten years. He then served as a vice president of Lithography Technology for KLA-Tencor for five years, until 2005. He is a fellow of SPIE and IEEE and is an adjunct faculty member at the University of Texas at Austin. In 2012, he became editor-in-chief of the Journal of Micro/Nanolithography, MEMS, and MOEMS. In 2017, he cofounded Fractilia, where he is chief technical officer.

© 2018 Society of Photo-Optical Instrumentation Engineers (SPIE)
Chris A. Mack "Shot noise: a 100-year history, with applications to lithography," Journal of Micro/Nanolithography, MEMS, and MOEMS 17(4), 041002 (11 July 2018). https://doi.org/10.1117/1.JMM.17.4.041002
Received: 9 May 2018; Accepted: 8 June 2018; Published: 11 July 2018
KEYWORDS

Particles, Lithography, Stochastic processes, Molecules, Failure analysis, Extreme ultraviolet lithography, Photoresist materials