
By cascading thermionic cathode amplifiers, it has become possible in recent years to detect and measure alternating currents of extremely small amplitude. Many technical problems have thereby received sudden help, but a new area also seems to have opened up for the researcher; the amplifier circuits may well have the same significance for electrical studies as the microscope has in optics. Since no clear limit for the achievable amplification has so far been demonstrated, one could hope, by sufficient shielding, interference-free setup, etc., of the amplifier circuit, to advance to the infinitely small; the dream of “hearing the grass grow” would once again be quite within human reach. The intention of the following text is to prove certain impassable limits for the amplification using hot-cathode and gas-discharge tubes. The first insurmountable obstacle is, oddly, set by the size of the elementary quantum of electricity. The thermal motion of electricity forms a further limit, which seems in most cases to lie higher than the former. However, we shall first present the investigation of this phenomenon, as the simpler and more familiar one, before turning to our main investigation.

Part 1: The Thermal Effect

Consider a metallic conductor with distributed self-inductance and capacitance, e.g., a wire coil. Such a structure is known to be capable of natural oscillations, i.e., various oscillation processes exist that are characterized by a current $J$, a self-inductance $L$, a quantity of electricity $e$, and a capacitance $C$, and for which the distributed energy is independent of all other events. Furthermore, the relationship $$E=\frac{L}{2}{J}^{2}+\frac{{e}^{2}}{2C}$$ applies.
In the case of a simple electrical oscillation circuit, consisting of self-inductance and capacitance, the quantities $J$, $e$, $C$, and $L$ have the familiar simple meaning; for more complicated natural oscillations, there remains at least a formal analogy. Now let our electrical structure be withdrawn from all external influences; let all self-oscillations have faded away through resistive damping. Thermal equilibrium is then reached; the entire available thermal energy is distributed according to the well-known laws over the distributed energies of the system. Based on a remark by Einstein, the average energy of an electrical natural oscillation has then evidently not sunk completely to zero but possesses the value $kT$, where $k$ is Boltzmann's constant and $T$ is the absolute temperature, provided that the partial energy has the above form and the characteristic oscillation is not too rapid. This theorem holds completely independently of any assumption about the mechanism of motion of the electricity in our structure; it would also be valid if electricity were a continuously distributed fluid. If we now apply this theorem to the receiving circuit of an amplifier arrangement and assume, for example, that the receiving circuit possesses a natural oscillation in the audible range, then we find that, with sufficient amplification, a continuous humming should be present in a telephone connected at its end, even after elimination of all external disturbances, which drowns out all weaker signals and thereby makes their reception impossible. The following consideration explains the “purity” of the oscillations caused by the thermal motion. A slow electrical natural oscillation can be compared with the Brownian motion of a larger particle tightly held by elastic forces in a gas.
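To put the $kT$ theorem in concrete numbers, the following sketch evaluates the thermal energy and the corresponding RMS current and voltage of a simple $LC$ natural oscillation by equipartition. The circuit values $L$ and $C$ are illustrative assumptions that do not appear in the text, and the modern value of Boltzmann's constant is used:

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K (modern value)
T = 300.0         # room temperature, K

# Equipartition: each quadratic term (L/2)J^2 and e^2/(2C) carries kT/2
# on average, so the whole natural oscillation carries kT.
E_thermal = k * T

# Illustrative (assumed) circuit values, not from the paper:
L = 1e-3  # henry
C = 1e-9  # farad

J_rms = math.sqrt(k * T / L)  # RMS thermal current, from (L/2)<J^2> = kT/2
V_rms = math.sqrt(k * T / C)  # RMS thermal voltage, from (C/2)<V^2> = kT/2

print(f"E_thermal = {E_thermal:.2e} J")
print(f"J_rms = {J_rms:.2e} A, V_rms = {V_rms:.2e} V")
```

For such a coil-and-capacitor circuit, the thermal agitation amounts to currents on the order of nanoamperes and voltages on the order of microvolts, which illustrates why only strong amplification makes the humming noticeable.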
The gas molecules exert two different actions upon this particle: on the one hand, any existing motion is damped by a frictional force proportional to the velocity; on the other hand, the irregular collisions of the gas molecules ensure that on average as much energy is added to the particle as it loses through damping. In this way, the average energy $kT$ is maintained. For an electrical oscillation, the electrical resistance plays the role of friction; the irregular collisions of gas molecules are replaced by irregular impulses, which the charged particles experience within the conductor due to energy exchange with the remaining system. Since the energy of the natural oscillation consists, grouped under certain viewpoints, of the alternating energies of the electrical elementary particles, the energy of the oscillation is increased concurrently with the energy influx to these elementary particles; to maintain thermal equilibrium, the energy influx on average has to be equal in magnitude to the energy loss of the oscillation due to damping. From this, it follows that the damping constant must be a measure of the purity of the oscillations that are maintained by the thermal motion. A completely undamped oscillation would endure continuously without energy influx and would, therefore, experience fluctuations neither in phase nor in amplitude; on the other hand, in the case of a strongly damped oscillation, a large part of the initially existing energy is lost within one period. Any additional energy is supplied in a completely irregular manner, so that after only one period the amplitude and phase of the oscillation may have changed completely.
For audible frequencies, we hear a completely pure tone in the telephone at the end of a circuit in the first case but only an irregular noise in the second case (assuming that the remaining elements of the amplifier structure possess the properties required to convey the sound spectrum reasonably faithfully). The electrical resistance is, therefore, of great influence on the “kind” of oscillation; it influences not the intensity but the purity. Eddy-current and hysteresis losses count just as Ohmic resistance does; accordingly, there must exist a thermal excitation of the oscillation by induction and by magnetic materials in close proximity, the pursuit of which would perhaps be of interest. The conclusion that the thermal intensity of an electrical resonance is independent of the resistance appears paradoxical in the case where the resistance becomes infinite, i.e., when the conductor becomes an insulator. In this case, one of the assumptions of the $kT$ law no longer applies: to transport the electrical particles, a large work of separation against molecular forces has to be performed, unlike in a conductor; the electrical energy of the resonance does not superpose itself in an undisturbed way on the remaining energies of the system but can only grow simultaneously with other energies. Therefore, the requirement of the $kT$ law becomes inapplicable. The same happens when natural oscillations of such small dimensions are considered that only a small number of elementary particles can contribute to them; the motion of these elementary particles is then simultaneously relevant for different resonances, and the oscillations are not sufficiently coherent to superpose themselves in an undisturbed manner.
In the pursuit of this train of thought, it is at any rate interesting that, when counting the total number of degrees of freedom of a system, we need to consider, in addition to the elastic degrees of freedom of Debye, a new series of “electrical” degrees of freedom; the total number of degrees of freedom can of course only be as large as the number of independent particle coordinates, and for poor conductors as well as for the higher resonances, the electrical and elastic degrees of freedom will transition into each other. However, this lies far away from all technically accessible frequencies. Apart from the electrical resistance, no material properties whatsoever are relevant for thermal oscillations of electricity, and the resistance determines not the energy but exclusively the “purity” of the oscillation. The computation of the average thermal energy ${E}_{T}$ of an electrical resonance is thus very simple. It is Eq. (1)$${E}_{T}=kT=1.37\cdot{10}^{-16}\,T\ \mathrm{erg},$$ or, expressed in technical energy units (Joule), $${E}_{T}=1.37\cdot{10}^{-23}\,T\ \mathrm{J}.$$ For room temperature, this amounts to $\sim 4\times {10}^{-21}$ technical energy units. What this value represents in an amplifier circuit is not readily apparent. The given quantity in all amplifier problems is not an initial “energy” but instead an initial “power”; to be more exact: the maximum power that can be transferred into the amplifier circuit through suitable matching is given by external conditions. We therefore have to pose the question in the following way: what is the power necessary to maintain continuously, in the receiver circuit, an energy of equal magnitude to the thermal energy of a resonance? This power ${P}_{T}$ then evidently represents the limit for the initial power which can still be amplified without the thermal motion blurring and drowning out the signal. We answer the question for an oscillation circuit consisting of a self-inductance $L$ and a capacitance $C$, in which the attenuation is caused by an Ohmic resistance $R$.
The average energy in such a resonant circuit is equal to $L\overline{{J}^{2}}$, that is to say, twice as large as the average magnetic energy, which is equal to $\frac{L}{2}\overline{{J}^{2}}$. The average power consumption by comparison is equal to $R\overline{{J}^{2}}$. So we have $${P}_{T}=\frac{R}{L}{E}_{T}.$$ If we introduce the “decay time” $\vartheta$, defined as the time in which the amplitude of an oscillation decays to the fraction $1/e$, $$\vartheta =\frac{2L}{R},$$ we can establish Eq. (2)$${P}_{T}=\frac{2{E}_{T}}{\vartheta}=\frac{2kT}{\vartheta}\ \mathrm{erg}/\mathrm{s}=\frac{2.7\cdot{10}^{-16}\,T}{\vartheta}\ \mathrm{erg}/\mathrm{s}=\frac{2.7\cdot{10}^{-23}\,T}{\vartheta}\ \mathrm{W}.$$ The decay times $\vartheta$ range between ${10}^{-3}$ and ${10}^{-4}\ \mathrm{s}$ for audible and wireless frequencies. For $\vartheta ={10}^{-3}\ \mathrm{s}$ and room temperature, we find $${P}_{T}\approx {10}^{-17}\ \mathrm{W}.$$ A power of ${10}^{-7}\ \mathrm{W}$ is easily audible in a telephone at suitable frequencies, and a $\mathrm{10,000}\times$ linear amplification, i.e., a ${10}^{8}$-fold power amplification, is nothing extraordinary, so that an initial power of ${10}^{-15}\ \mathrm{W}$ can still easily be observed. From this, we recognize that a thermal motion of electricity which corresponds to an initial power of about ${10}^{-17}\ \mathrm{W}$ already lies relatively close to the limit of observability. Here, it is important to note that for a given power amplification “the thermal motion appears noisy all the sooner, the smaller the decay time $\vartheta$, i.e., the larger the damping of the input circuit.” This result is noteworthy in contrast to another one which we will derive in the second part.
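As a quick numerical check of Eq. (2), one can recompute $P_T$ for the decay times quoted in the text. This sketch uses the modern value of $k$ and reproduces the order of magnitude $10^{-17}$ W cited above:

```python
k = 1.380649e-23  # Boltzmann constant, J/K (modern value; the paper uses 1.37e-16 erg/K)
T = 293.0         # room temperature, K

for theta in (1e-3, 1e-4):   # decay times quoted in the text, s
    P_T = 2 * k * T / theta  # Eq. (2), in watts
    print(f"theta = {theta:.0e} s  ->  P_T = {P_T:.1e} W")
```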
If we contemplate an external excitation of the oscillation circuit by signals, then the calculated relation of power and energy refers only to an excitation at the resonant frequency of the oscillation circuit. By well-known arguments, the energy consumption in the oscillation circuit has then to be set equal to the initial power. In other cases, there exist more complicated relationships between maximum initial power, energy consumption in the oscillation circuit, and amplification, which cannot be discussed here. As far as the input circuit is relevant for the amplification, the conditions will still be most favorable in the region of resonance; so we can state that, in general, for an arbitrary kind of excitation, we have a chance of obtaining larger final amplitudes only when the maximum available initial power exceeds the magnitude of ${P}_{T}$ described above, on the order of ${10}^{-17}\ \mathrm{W}$. The following considerations will show that an undisturbed amplification can be further limited by another phenomenon.

Part 2: The “Shot Effect”

The principle that the thermal energy of an electrical resonance is equal to $kT$ applies to all systems that are in thermal equilibrium. Connecting two metals with each other, the occurrence of potential differences due to contact, or the insertion of an electrolyte between two metal electrodes of the same kind changes nothing in this law. Indeed, even the presence of electromotive forces, which cause a continuous or time-varying current to flow, will not change the character of the thermal oscillations as long as the free path length of the electrical particles is sufficiently small to let approximate thermal equilibrium take place everywhere.
Since it is well known that in good conductors thermal equilibrium is present almost everywhere even for strong currents, the electrical thermal oscillations, with an energy that corresponds to a mean temperature of the conductor, will simply superpose themselves on the stronger currents. Only in cases where the motion of the conduction electrons deviates strongly from the thermal motion would we expect a different energy of the electrical natural oscillations. Such cases can, for example, occur for very poor conductors, where the electric field strength, without causing too much heat generation, could become so large that over the mean free path of an electrical particle an electrical field energy would be acquired which exceeds the mean thermal energy. This is possible particularly in cases where the potential drop is localized in single thin layers or interfaces, e.g., where a good conductor is embedded in a poorly conducting material, or for a loose contact of identical or disparate conductors. All these are complicated cases, in which the calculation of the electrical fluctuation energy is possible only on the basis of new assumptions. We meet simple conditions again only at the end of the following series of steps: namely, once the mean free path has become so large that every electrical particle traverses the entire applied field without resistance and collision. Exactly this extreme case is present for certain discharge tubes that are used for amplification; high-vacuum hot-cathode tubes as well as residual-gas tubes work with mean free paths which are large compared to the distance between the electrodes. As examples of such discharge processes, we mention the following: 1. Diluted gas with radiative ionization. The ion source $S$ sends x-rays or light of short wavelength into the diluted discharge region between anode $A$ and cathode $C$.
For sufficiently low pressure, the generated ions discharge themselves without collision or recombination at the anode and cathode; each single ionization event corresponds here to the transition from $A$ to $C$ of a quantity of electricity which is equal to the charge of the ion produced during this ionization (Fig. 1). 2. Diluted gas with corpuscular ionization. Instead of a radiation source, one can also think of a substance that emits particle rays of high velocity as an ionizing source, e.g., a radioactive substance (Fig. 2). The necessary velocity can also be imparted to the ionizing particles through a special field; e.g., the radiation source $S$ can be a hot cathode from which, through an accelerating field applied from ${A}^{\prime}$, electrons are driven through an ionizing region (Fig. 3). Into this group falls, finally, also the well-known hot-cathode amplifier tube with grid electrode, as long as it still contains some residual gas (Fig. 4). The hot cathode in this case can be viewed as an ionizing source; the field of the anode that reaches through the grid $G$ gives the exiting electrons sufficiently high velocity that the gas between $G$ and $A$ is ionized. Of the generated ions, the negative ones migrate to the anode, while the positive ones migrate mostly to the grid. The current flowing from $G$ is a pure gas-ionization current if the grid is, as is a common operating condition for amplifier tubes, sufficiently negative against the hot cathode $S$ to preclude the arrival of electrons. 3. Electron discharge in complete vacuum. The case in which electrons transition from a glowing cathode $C$ to the anode $A$ through a discharge region that is completely devoid of gas represents the simplest and purest example of a discharge event in which the electrical particles traverse the entire field freely. The transitioning elementary charge is the well-known negative electric quantum $=4.69\times {10}^{-10}$ electrostatic units.
In high-vacuum amplifier tubes, such a discharge process takes place between cathode and anode (Fig. 5). In all the enumerated cases, a kind of current fluctuation occurs within the discharge tube different from the one generated through thermal motion. Because of the atomic structure of electricity, the electrical transition is represented not as a continuously flowing current but as a hail of charge quanta, which would cause current fluctuations even for a very regular temporal distribution; the frequency of these current fluctuations would be given by the number of particles that transition per second. The transition surely does not take place regularly; rather, sometimes more, sometimes fewer charge quanta strike in successive time intervals $\tau$, so that we can observe for each arbitrary time period $\tau$ an alternating-current component of the discharge current. In all above-mentioned cases, there is a simple, obvious assumption that permits us to calculate the current fluctuations for an arbitrary period $\tau$. One only has to assume “that the transition of a charge quantum between cathode and anode represents an elementary process whose occurrence has no temporal correlation with the transition of any other charged particle.” That this assumption is true for cases 1 and 2 is readily evident; even for the escape of electrons from a hot cathode, this assumption can hardly be avoided, in particular because the mean distance between two almost simultaneously exiting electrons must be very many atomic diameters for normal current intensities. In all the considered cases, the probability laws for a completely disordered temporal distribution of homogeneous elementary events can therefore be applied to the transition of electricity. To begin with, we calculate in the simplest way a magnitude whose electrical significance, however, is not readily apparent: the mean square fluctuation for a given period $\tau$.
Let ${i}_{0}$ be the temporal mean value of the discharge current over a very long time and ${i}_{\tau}$ the mean value of the current during a certain envisaged time interval $\tau$. Let further $${j}_{\tau}={i}_{\tau}-{i}_{0}$$ be the deviation of the current ${i}_{\tau}$ from the temporal mean. Then the value to be calculated is the mean square value $\overline{{j}_{\tau}^{2}}$. If $N$ is the mean number of elementary discharges in one second, then the mean number of elementary discharges during an interval $\tau$ is equal to $N\tau$. But in a particular time interval $\tau$, a number of transitions ${n}_{\tau}$ generally takes place which is different from $N\cdot\tau$. For the mean square of the deviation ${\mathfrak{n}}_{\tau}={n}_{\tau}-N\tau$, in case that during the time interval $\tau$ a large number of elementary events happen, the well-known fluctuation law $$\overline{{\mathfrak{n}}_{\tau}^{2}}=N\tau$$ holds according to our assumptions. If $e$ designates the quantity of electricity that passes between anode and cathode in each elementary process, then we have $${j}_{\tau}=\frac{e\,{\mathfrak{n}}_{\tau}}{\tau};$$ the result is, therefore,$$\overline{{j}_{\tau}^{2}}=\frac{{e}^{2}}{{\tau}^{2}}\overline{{\mathfrak{n}}_{\tau}^{2}}=\frac{{e}^{2}}{{\tau}^{2}}\cdot N\tau ,$$ or, because of ${i}_{0}=eN$, Eq. (3)$$\overline{{j}_{\tau}^{2}}=\frac{e\,{i}_{0}}{\tau},$$ or in a different form Eq. (4)$$\overline{{j}_{\tau}^{2}}=\frac{{i}_{0}^{2}}{N\tau}.$$ The fundamental properties of the effect can already be recognized from these equations. If we define an amplitude $a$ of the mean time-varying current by $$a=\sqrt{2\overline{{j}_{\tau}^{2}}},$$ then the following theorems are valid: 1. The amplitude of the mean time-varying current that is created through the “shot effect” (this designation was chosen by considering its origin; the expression “shot” points, as it does in common language use, to the occurrence of a large number of identical elementary particles) is, for a given period and for a given mean discharge current, proportional to the square root of the elementary charge that is transported from one electrode to the other in each elementary process.
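Eq. (3) can be illustrated with a small Monte Carlo sketch: if the elementary events are temporally independent, the count in a window $\tau$ is Poisson distributed, and the simulated mean square of $j_\tau$ should reproduce $e\,i_0/\tau$. All numerical values below are arbitrary illustrative choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1.0e4     # mean number of elementary events per second (arbitrary choice)
e = 1.0       # charge transported per event (arbitrary units)
tau = 1.0e-2  # observation window, s
i0 = e * N    # mean current

# Independent elementary events => the count n_tau in a window tau is Poisson.
n = rng.poisson(N * tau, size=200_000)
j = e * (n - N * tau) / tau          # deviation current j_tau
mean_sq = (j ** 2).mean()

print(mean_sq, e * i0 / tau)         # Eq. (3): the two values should agree
```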
Contrary to the thermal effect, this effect would, therefore, completely disappear if electricity presented itself in arbitrarily small quanta. The absolute magnitude of the electrical elementary quantum determines the magnitude of the effect; conversely, a measurement of the effect would allow a conclusion about the magnitude of the elementary electrical quantum. If, in a gas, doubly charged ions were produced in each elementary process, the amplitude of the time-varying current would increase by a factor of $\sqrt{2}$ compared to the effect with singly charged ions. We would expect a particularly strong effect in cases where the ionization of a gas particle or the escape of an electron acts as the triggering cause of a momentarily strong ionization event; the effective charge magnitude would then be the entire quantity of electricity that is transported from one electrode to the other by this triggered event. It cannot be excluded that for an ordinary glow discharge such complex discharge events have to be understood as the independent elementary events. Based on Eq. (4), Theorem 1 corresponds to the following statement: 2. For a given mean discharge current, the amplitude of the time-varying current generated through the shot effect is inversely proportional to the square root of the number of independent elementary events per second which generate the current transition. Further, we read from Eq. (3) the following theorem: 3. For a given elementary charge, the amplitude of the shot-effect time-varying current of a specific period grows proportionally to the square root of the average discharge current.
An increase in the current does not blur the effect, as one might initially have assumed, but lets it emerge ever more strongly; only the “ratio” of the amplitude of the time-varying current to the mean continuous current decreases with the current strength, namely according to the equation $$\frac{a}{{i}_{0}}=\sqrt{\frac{2e}{{i}_{0}\tau}}.$$ 4. The amplitude of the time-varying current of the shot effect is different depending on the period on which it is based. The smaller the period, the larger the time-varying current; the amplitude grows inversely proportionally to the square root of the period. This relationship is valid only as long as a significant number of elementary events happen within the period; using Eq. (3), one recognizes the limit of this condition where the calculated time-varying current reaches the order of magnitude of the continuous current. The importance of Theorem 4 is limited at high frequencies by the fact that observation and measurement of high frequencies yield only mean values over longer time intervals. We will return to this point at the end of this paper.

Action of the Shot Effect on an Oscillation Circuit

As mentioned earlier, the simple Eqs. (3) and (4) are hardly suitable for an assessment of the absolute magnitude of the effect, since the exact physical meaning of the mean square of the current fluctuation is as hard to specify as a direct method of measuring this magnitude. Therefore, we shall investigate more closely a special case with physically completely defined conditions, namely, the excitation of a matched damped oscillation circuit by the shot effect. We can then compare the resulting oscillatory energy directly with the energy of the thermal oscillations as well as with a signal energy supplied to the oscillation circuit, thereby enabling us to judge the significance of the effect.
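As a concrete illustration of the magnitudes in Theorems 1 to 4, the following sketch evaluates the shot-effect amplitude $a=\sqrt{2e i_0/\tau}$ and the ratio $a/i_0$ for modern SI values: the elementary charge in coulombs, an assumed discharge current of 1 mA, and an assumed period of 1 ms (none of these numbers are from the text):

```python
import math

e_charge = 1.602e-19  # elementary charge, C (SI; the paper quotes electrostatic units)
i0 = 1.0e-3           # assumed mean discharge current, A
tau = 1.0e-3          # assumed period, s

a = math.sqrt(2 * e_charge * i0 / tau)  # shot-effect amplitude, from Eq. (3)
ratio = a / i0                          # relative fluctuation, ~ 1/sqrt(i0)

print(f"a = {a:.2e} A, a/i0 = {ratio:.2e}")
```

The fluctuation is less than a part per million of the mean current here, which is why a strongly damped, i.e., broadband, receiving circuit is needed before the effect becomes noticeable.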
An inductive coupling of the discharge current into the external oscillation circuit is hardly worth considering, due to the low inductance of the discharge path. The most important case is one where an oscillation circuit and a discharge tube are connected in the same circuit; corresponding to the high resistances of discharge tubes, the oscillation circuit is arranged relative to the tube in such a way that it responds to the tube's voltage fluctuations, i.e., corresponding to the connection in Fig. 6. We refrain for now from considering the existence of multiple natural oscillations in an oscillation circuit characterized by $L$, $R$, and $C$. The task to be solved then has close resemblance to one that was treated by M. Planck in the older radiation theory: the average energy of a damped oscillator must be calculated under the influence of irregular impulses, where, however, the “spectral distribution” of the exciting oscillation is determined not by the heat radiation law but through the “error law.” We treat one by one the following subtasks: 1. Calculation of the energy of an oscillation circuit under the influence of periodic currents in the discharge tube. If $J$ and $i$ denote the currents in the induction coil and in the discharge tube, respectively, in the direction indicated by arrows in Fig.
6, then $$\frac{{\mathrm{d}}^{2}J}{\mathrm{d}{t}^{2}}+\frac{R}{L}\frac{\mathrm{d}J}{\mathrm{d}t}+\frac{1}{LC}J=\frac{1}{LC}i,$$ or, with the introduction of the natural frequency of the oscillation circuit, $${\omega}_{0}^{2}=\frac{1}{LC},$$ and the damping constant $$\varrho =\frac{R}{L},$$ $$\frac{{\mathrm{d}}^{2}J}{\mathrm{d}{t}^{2}}+\varrho \frac{\mathrm{d}J}{\mathrm{d}t}+{\omega}_{0}^{2}J={\omega}_{0}^{2}i.$$ If $i$ is a periodic function, $${i}_{k}={C}_{k}\,\mathrm{sin}(\omega t+{\phi}_{k}),$$ then a stationary solution results, $${J}_{k}={a}_{k}\,\mathrm{sin}(\omega t+{\psi}_{k}),$$ where $${a}_{k}={C}_{k}{\omega}_{0}^{2}\sqrt{\frac{1}{{({\omega}^{2}-{\omega}_{0}^{2})}^{2}+{\varrho}^{2}{\omega}^{2}}},$$ or, with $$\frac{\omega}{{\omega}_{0}}=x,\phantom{\rule[0.0ex]{2.0em}{0.0ex}}\frac{\varrho}{{\omega}_{0}}=r,$$ $${a}_{k}=\frac{{C}_{k}}{\sqrt{{(1-{x}^{2})}^{2}+{r}^{2}{x}^{2}}}.$$ Since the average energy of the oscillation is ${E}_{k}=\frac{L{a}_{k}^{2}}{2}$, we have $${E}_{k}=\frac{L{C}_{k}^{2}}{2}\cdot\frac{1}{{(1-{x}^{2})}^{2}+{r}^{2}{x}^{2}}.$$ 2. Calculation of the average energy of the oscillation circuit under the effect of many different periodic currents in the discharge tube (corresponding to a Fourier decomposition). We now consider the actual current flowing in the discharge tube, with the current’s fluctuations due to the shot effect between time $t=0$ and the large time $t=\mathfrak{T}$ expanded in a Fourier series $$i=\sum _{k=0}^{\infty}{i}_{k}=\sum _{k=0}^{\infty}{C}_{k}\,\mathrm{sin}({\omega}_{k}t+{\phi}_{k}),$$ where $${\omega}_{k}=\frac{2\pi k}{\mathfrak{T}}.$$ The average value of the energy $\overline{{E}_{S}}$ in the oscillation circuit due to the shot effect is then given by Eq. (6)$$\overline{{E}_{S}}=\sum {E}_{k}=L\sum _{k=0}^{\infty}\frac{{C}_{k}^{2}}{2}\cdot\frac{1}{{(1-{x}^{2})}^{2}+{r}^{2}{x}^{2}}.$$ Here, $x={\omega}_{k}/{\omega}_{0}$ depends on the ordinal number $k$ as well. 3. Evaluation of the average expressions on the basis of the assumption of “independent elementary events.” In evaluating Eq. (6) on the basis of the properties of the shot effect, we make the simplifying assumption that the current $i$ is not influenced by the voltage fluctuations which the electrical oscillation existing in the circuit $L$, $R$, $C$ produces on the electrodes of the discharge tube.
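Returning for a moment to subtask 1: the stationary amplitude formula for $a_k$ can be verified numerically by integrating the differential equation directly. The sketch below uses arbitrary illustrative parameters (not from the text) and a standard fourth-order Runge-Kutta scheme, then compares the simulated steady-state amplitude with the formula:

```python
import math

# Assumed illustrative parameters (not from the paper):
omega0 = 2 * math.pi   # natural frequency, rad/s
rho = 0.5              # damping constant, 1/s
omega = 5.0            # drive frequency, rad/s
Ck = 1.0               # drive amplitude

# Integrate  J'' + rho*J' + omega0^2*J = omega0^2*Ck*sin(omega*t)
# with a simple RK4 scheme and read off the stationary amplitude.
def deriv(t, J, V):
    return V, omega0**2 * (Ck * math.sin(omega * t) - J) - rho * V

dt, t, J, V = 1e-3, 0.0, 0.0, 0.0
amp = 0.0
for _ in range(60000):
    k1 = deriv(t, J, V)
    k2 = deriv(t + dt/2, J + dt/2*k1[0], V + dt/2*k1[1])
    k3 = deriv(t + dt/2, J + dt/2*k2[0], V + dt/2*k2[1])
    k4 = deriv(t + dt, J + dt*k3[0], V + dt*k3[1])
    J += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    V += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    t += dt
    if t > 50.0:                 # transients (decay time 2/rho = 4 s) long gone
        amp = max(amp, abs(J))

a_theory = Ck * omega0**2 / math.sqrt((omega**2 - omega0**2)**2 + rho**2 * omega**2)
print(amp, a_theory)   # the two amplitudes should agree closely
```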
The fluctuations of the current $i$ are then exclusively given by the random fluctuations of the number of elementary events. For each partial oscillation, we have $${C}_{k}=\frac{2}{\mathfrak{T}}\underset{0}{\overset{\mathfrak{T}}{\int}}i\cdot\mathrm{sin}({\omega}_{k}t+{\phi}_{k})\mathrm{d}t,$$ or, if we set $i={i}_{0}+j$, $${C}_{k}=\frac{2}{\mathfrak{T}}\underset{0}{\overset{\mathfrak{T}}{\int}}j\,\mathrm{sin}({\omega}_{k}t+{\phi}_{k})\mathrm{d}t,$$ since we have $$\underset{0}{\overset{\mathfrak{T}}{\int}}{i}_{0}\,\mathrm{sin}({\omega}_{k}t+{\phi}_{k})\mathrm{d}t=0.$$ The value ${C}_{k}$ will fluctuate rapidly and irregularly with the ordinal number $k$ because of the irregular character of the shot effect. For sufficiently large $\mathfrak{T}$, an arbitrarily large number of different values $k$ and coefficients ${C}_{k}$ will belong to a small region between $x$ and $x+\mathrm{\Delta}x$. It is therefore permitted to determine the average value of ${C}_{k}^{2}$ within a certain region without taking into account the variability of $x$. Let ${k}_{1}$ and ${k}_{2}$ be numbers that differ by many units but correspond to only a small variation in $x$ relative to 1. We then have to form the following expression: Eq. (7)$$\sum _{k={k}_{1}}^{{k}_{2}}{C}_{k}^{2}=\frac{4}{{\mathfrak{T}}^{2}}\sum _{k={k}_{1}}^{{k}_{2}}{(\underset{0}{\overset{\mathfrak{T}}{\int}}j\cdot\mathrm{sin}({\omega}_{k}t+{\phi}_{k})\mathrm{d}t)}^{2}.$$ Each squared term can be written as a double integral: Eq.
(8)$$\underset{t=0}{\overset{\mathfrak{T}}{\int}}\underset{{t}^{\prime}=0}{\overset{\mathfrak{T}}{\int}}j{j}^{\prime}\,\mathrm{sin}({\omega}_{k}t+{\phi}_{k})\mathrm{sin}({\omega}_{k}{t}^{\prime}+{\phi}_{k})\mathrm{d}t\,\mathrm{d}{t}^{\prime}.$$ We now notice furthermore that the value of $\overline{{C}_{k}^{2}}$ needs to be formed only for frequencies ${\omega}_{k}$ that are not a large multiple of the characteristic frequency ${\omega}_{0}$ of the oscillation circuit. For Eq. (6) shows that the frequencies corresponding to a very high $k$ value contribute vanishingly little to the average energy $\overline{{E}_{S}}$. If we further assume that for all frequencies on the order of magnitude of ${\omega}_{0}$ and below, very many elementary processes occur within a period ${\tau}_{k}$, i.e., many elementary processes fall within the characteristic period ${\tau}_{0}$ of the oscillation circuit, then we may also assume that in such a fraction of a period in which the value of $\mathrm{sin}({\omega}_{k}t+{\phi}_{k})$ changes very little, still many elementary processes happen. We can then introduce without much error, instead of the time elements $\mathrm{d}t$ and $\mathrm{d}{t}^{\prime}$, time elements $\mathrm{\Delta}t$ and $\mathrm{\Delta}{t}^{\prime}$ so large that the number ${n}_{\mathrm{\Delta}t}$ of the elementary charges that traverse within the time $\mathrm{\Delta}t$ is still large; likewise for $\mathrm{\Delta}{t}^{\prime}$. This fact justifies a posteriori the introduction of a current $j$ for the effect under consideration. Were we to consider small time intervals, then a current $j=\infty$ or $j=0$ would have to be assumed, depending on whether a charge traverses at that instant or not. Equation (8) then becomes a double sum: Eq.
(9)$$\sum _{t=0}^{\mathfrak{T}}\sum _{{t}^{\prime}=0}^{\mathfrak{T}}j\mathrm{\Delta}t\cdot{j}^{\prime}\mathrm{\Delta}{t}^{\prime}\cdot\mathrm{sin}({\omega}_{k}t+{\phi}_{k})\,\mathrm{sin}({\omega}_{k}{t}^{\prime}+{\phi}_{k}).$$ We now decompose the expression [Eq. (9)] into two parts. The first part shall contain all terms of the sum in which $t={t}^{\prime}$, $j={j}^{\prime}$, and $\mathrm{\Delta}t=\mathrm{\Delta}{t}^{\prime}$. We then obtain the simple sum Eq. (10)$$\sum _{t=0}^{\mathfrak{T}}{(j\mathrm{\Delta}t)}^{2}\,{\mathrm{sin}}^{2}({\omega}_{k}t+{\phi}_{k}).$$ Now, under our assumptions, there exist many periods within the time $\mathfrak{T}$. Within the sum, many terms occur in which $\mathrm{sin}({\omega}_{k}t+{\phi}_{k})$ has the same value. Consequently, it is permitted to multiply each value of the sine with the average value of ${(j\mathrm{\Delta}t)}^{2}$. But this average value does not depend on $t$: since $j\mathrm{\Delta}t=e\cdot{\mathfrak{n}}_{\mathrm{\Delta}t}$ and since, according to the fluctuation law already invoked, we have $$\overline{{\mathfrak{n}}_{\mathrm{\Delta}t}^{2}}=N\mathrm{\Delta}t,$$ we obtain $$\overline{{(j\mathrm{\Delta}t)}^{2}}={e}^{2}\overline{{\mathfrak{n}}_{\mathrm{\Delta}t}^{2}}={e}^{2}N\mathrm{\Delta}t=e{i}_{0}\mathrm{\Delta}t.$$ Hence, Eq. (10) transitions to $$e{i}_{0}\sum _{t=0}^{\mathfrak{T}}{\mathrm{sin}}^{2}({\omega}_{k}t+{\phi}_{k})\mathrm{\Delta}t=e{i}_{0}\cdot\frac{\mathfrak{T}}{2}.$$ This expression does not depend on the ordinal number $k$; after insertion into Eq. (7), we obtain for the value of $\sum _{k={k}_{1}}^{{k}_{2}}{C}_{k}^{2}$, as far as the first part of the double sum in Eq. (9) is concerned, the expression Eq. (11)$$({k}_{2}-{k}_{1})\cdot\frac{2}{\mathfrak{T}}\cdot e{i}_{0}.$$ We now show that the other, so far not considered part of the double sum in Eq. (9) contributes to Eq. (7) a negligibly small contribution compared to the one in Eq. (11). We can limit ourselves here to order-of-magnitude considerations.
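The value just obtained for the diagonal part, together with the claim that the cross terms contribute nothing on average, can be checked by simulating the shot process directly and computing the Fourier coefficients $C_k$ as defined above: the mean of $C_k^2$ over many ordinal numbers should approach $\frac{2}{\mathfrak{T}}\,e i_0$. The numerical values are arbitrary illustrative choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(42)

T_big = 1.0     # total observation time (the paper's fraktur T), s
N = 5000.0      # mean number of elementary events per second (assumed)
e = 1.0         # charge per elementary event (arbitrary units)
i0 = e * N      # mean current

ks = np.arange(1, 1001)           # ordinal numbers k1..k2
omega = 2 * np.pi * ks / T_big    # omega_k = 2 pi k / T

# One realization of the shot process: Poisson number of events,
# uniformly distributed transition times t_m in [0, T].
n_events = rng.poisson(N * T_big)
t = rng.random(n_events) * T_big

# C_k = (2 e / T) * sum_m sin(omega_k t_m)   (phi_k taken as 0)
Ck = (2 * e / T_big) * np.sin(np.outer(omega, t)).sum(axis=1)

mean_Ck2 = (Ck ** 2).mean()
print(mean_Ck2, 2 * e * i0 / T_big)   # the two values should agree statistically
```

Each individual $C_k^2$ scatters strongly around the mean, exactly as argued in the text; only the average over many ordinal numbers is stable.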
Because all values $j$ and $j'$ that do not refer to the same interval $\Delta t$ are independent of each other and can be as often positive as negative, the rest of the sum in Eq. (9) consists of summands that can equally often be positive or negative. Multiplication by the sine functions, which can likewise equally often be positive or negative, does not change this fact; the order of magnitude of the sine product is 1, and the number of summands to be considered is of the order $\frac{\mathfrak{T}}{\Delta t}\cdot\frac{\mathfrak{T}}{\Delta t'}$. Now it is well known that a sum of $p$ terms of order of magnitude $a$, each equally likely to be positive or negative, is of the order $\sqrt{p}\cdot a$, and this expression can in turn be positive or negative. Since the product $j\,\Delta t\cdot j'\,\Delta t'$ is of the order of magnitude $e\,i_0\,\Delta t$ (taking $\Delta t'=\Delta t$), we obtain as an order of magnitude for the second part of the double sum in Eq. (9) the expression$$\sqrt{\frac{\mathfrak{T}}{\Delta t}\cdot\frac{\mathfrak{T}}{\Delta t'}}\cdot e\,i_0\,\Delta t=\mathfrak{T}\,e\,i_0.$$ We see that this residual term is, for each single $k$ value, of the same order of magnitude as the term for equal times ($t=t'$) that we calculated above. The residual term can be as often positive as negative; the value of $C_k^2$ thus fluctuates, for each single $k$, to either side about a value of the order of its average value. But if many terms $C_k^2$ are now combined, then, according to the aforementioned law, the second part of the double sum contributes only an amount of the order $\sqrt{k_2-k_1}\cdot\frac{2}{\mathfrak{T}}\,e\,i_0$. As soon as $k_2-k_1$ is chosen sufficiently large, this part disappears compared with the part that is represented by Eq. (11), and we obtain Eq.
(12)$$\sum_{k=k_1}^{k_2} C_k^2=(k_2-k_1)\cdot\frac{2}{\mathfrak{T}}\cdot e\,i_0.$$Thus the average value of $C_k^2$ is independent of $k$, provided that a sufficiently large number of terms is combined; it is $$\overline{C_k^2}=\frac{\sum_{k=k_1}^{k_2} C_k^2}{k_2-k_1}=\frac{2}{\mathfrak{T}}\cdot e\cdot i_0.$$ We now apply this result in Eq. (6) for the energy $\overline{E_S}$. It becomes Eq. (13)$$\overline{E_S}=L\,e\,i_0\cdot\frac{1}{\mathfrak{T}}\sum_{k=0}^{\infty}\frac{1}{(1-x_k^2)^2+r^2 x_k^2}.$$Here$$\omega_k=\frac{2\pi k}{\mathfrak{T}}\quad\text{and}\quad x_k=\frac{\omega_k}{\omega_0},$$ and because, by our previous statements, an increment of $k$ by one unit corresponds to only a very small change in $x$, we can set $$1=\frac{\mathfrak{T}}{2\pi}\cdot(\omega_k-\omega_{k-1})=\frac{\mathfrak{T}}{2\pi}\cdot\omega_0\,\mathrm{d}x,$$ and instead of a summation over all $k$ values an integration over $x$ has to be carried out. Therefore, we have $$\frac{1}{\mathfrak{T}}\sum_{k=0}^{\infty}\frac{1}{(1-x^2)^2+r^2x^2}=\frac{\omega_0}{2\pi}{\int}_{0}^{\infty}\frac{\mathrm{d}x}{(1-x^2)^2+r^2x^2},$$ or, after evaluation of the definite integral, $$\frac{1}{\mathfrak{T}}\sum_{k=0}^{\infty}\frac{1}{(1-x^2)^2+r^2x^2}=\frac{\omega_0}{r^2}=\frac{\omega_0^3}{\varrho^2}.$$ So we finally have Eq. (14)$$\overline{E_S}=L\,e\,i_0\cdot\frac{\omega_0^3}{\varrho^2}.$$ If we introduce the characteristic period $\tau=\frac{2\pi}{\omega_0}$ and the decay time $\vartheta=\frac{2}{\varrho}$, then we have, in the technically clearest grouping, Eq.
(14′)$$\overline{E_S}=e\cdot{\left(\frac{\pi\vartheta}{\tau}\right)}^{2}\cdot\omega_0 L\cdot i_0.$$5. The current fluctuations in a discharge tube that are caused by the shot effect act upon a parallel-connected resonator of characteristic period $\tau$ and arbitrary damping as if a purely sinusoidal alternating current of period $\tau$ and a corresponding effective amplitude were present in the discharge tube. With this, we confirm the admissibility of the previous approximate calculation in an important case; the exact effective amplitude of the alternating current that is to be substituted differs in this case from the one determined in Eq. (3) by only a factor $\sqrt{\pi}$. Concerning the nature (the purity) of the fluctuations caused by the shot effect in the oscillation circuit $L$, $R$, and $C$, it is apparently exactly the same as for the thermal fluctuations; the purity of the oscillation is again determined by the ratio of decay time $\vartheta$ to period $\tau$. This holds as long as a large number of elementary processes happen during the time $\tau$; in this—probably the only measurable—case, the origin of the oscillations is not detectable, only the magnitude of the effect is different. How far the law of large numbers remains valid for technical currents and frequencies may be illustrated by the fact that for a direct current of ${10}^{-12}$ A, about 6000 elementary charges still transfer within a millisecond. How large is the oscillatory energy caused by the shot effect in our oscillation circuit in comparison with the thermal oscillatory energy $kT$? Under the assumption that the charge transferred during an elementary process has its smallest value, $\epsilon=4.69\times{10}^{-10}$ electrostatic units, we obtain, in technical units ($\epsilon=1.56\times{10}^{-19}\ \mathrm{C}$),$$\overline{E_S}=1.56\times{10}^{-19}\cdot{\left(\frac{\pi\vartheta}{\tau}\right)}^{2}\cdot\omega_0 L\,i_0\ \mathrm{J}.$$ In amplifier technology, common values are $\frac{\vartheta}{\tau}=3$ through 30 and $\omega_0 L\cdot i_0={10}^{-3}$ through 1 V.
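Eq. (14′) together with the quoted technical parameter ranges can be checked by a short calculation (a sketch, not part of the original; Boltzmann's constant and a room temperature of 293 K are assumed values not stated at this point in the text):

```python
import math

e = 1.56e-19    # elementary charge in coulombs (value from the text)
kB = 1.38e-23   # Boltzmann constant in J/K (assumed)
T = 293.0       # room temperature in kelvin (assumed)

def E_S(theta_over_tau, w0_L_i0):
    """Average shot-effect energy of Eq. (14'); w0_L_i0 = omega_0*L*i_0 in volts."""
    return e * (math.pi * theta_over_tau) ** 2 * w0_L_i0

low = E_S(3, 1e-3)    # lower end of the quoted technical range
high = E_S(30, 1.0)   # upper end of the quoted technical range

# comparison with the thermal energy kT of one characteristic oscillation
ratio_low = low / (kB * T)
ratio_high = high / (kB * T)
```

The two ends of the range land near $10^{-20}$ and $10^{-15}$ J, i.e., a few times up to a few hundred thousand times $kT$, in agreement with the figures quoted in the following paragraph.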
Accordingly, the range of $\overline{E_S}$ moves between roughly ${10}^{-20}$ and ${10}^{-15}$ technical energy units; the energy of a simple oscillation circuit that is caused by the shot effect is thus from about twice up to 200,000 times as large as the energy of a characteristic oscillation that is caused by thermal motion at room temperature. Just as we did in Part 1 for the thermal motion, we can now assign to the average energy $\overline{E_S}$ a power $P_S$ that would be necessary to maintain this oscillatory energy in the circuit $L$, $R$, and $C$ through excitation from outside (e.g., inductive). Corresponding to the above-introduced assumption that the voltage fluctuations at the capacitor cause no changes of the current in the tube, we can neglect here an energy consumption by the tube, and only the energy that is consumed in the loss resistance of the oscillation circuit must be continuously supplied. As before, we obtain $$P_S=\frac{2}{\vartheta}\cdot\overline{E_S},$$ and therefore, from Eq. (14′), Eq. (15)$$P_S=\frac{2\pi^2\vartheta}{\tau^2}\cdot e\,\omega_0 L\,i_0.$$If the oscillation circuit under consideration is the receiving circuit of an amplifier configuration, then $P_S$ can be compared with the maximum signal power that a given weak signal can transmit to the receiving circuit, and the statement can be formulated that available signal powers smaller than $P_S$ are covered up by the oscillations caused by the shot effect. $P_S$ in turn provides a metric for the smallest recordable receiver energies; indeed, this is the decisive calculation, since the shot effect gives rise to larger noise than the thermal effect, as we have just determined. From Eq. (15) we derive the following statements:
The dependence of $P_S$ on $i_0$ and $e$ was already discussed earlier. If $e$ is the elementary charge of an electron and $\vartheta$ is of the order of 1 ms, then for discharge tubes with electrons and singly charged ions of large free path length, the power supplied to the input circuit is in the range between ${10}^{-17}$ and ${10}^{-12}\ \mathrm{W}$, taking as a basis the above-mentioned limits for $\overline{E_S}$. Since ${10}^{-15}\ \mathrm{W}$ is a power that can easily be made observable through amplification, it is clear that the shot effect must make itself unpleasantly noticeable in amplifier technology; surely, it must have been observed frequently without its cause being recognized. Of theoretical interest is the quantitative determination of the elementary charge from the magnitude of the current fluctuations that are caused by the shot effect. Here, it will be expedient not to use the noise that arises in the amplifier itself, but instead to connect in parallel with the oscillation circuit a special discharge tube, for example, a hot-cathode tube with a high discharge current $i_0$, and then to amplify the fluctuations that arise in the oscillation circuit. Experiments of this sort are planned to be conducted soon in the K-Laboratory of the Wernerwerk of Siemens & Halske. Finally, to round off this idea, we shall briefly point out two generalizing considerations. If, instead of a single $L$, $R$, and $C$ oscillatory system, “systems with many different characteristic oscillations” are considered, then we can calculate the energy of each single characteristic oscillation that is caused by the shot effect as if the other characteristic oscillations were not present.
Each of these characteristic oscillations then corresponds to a certain energy consumption in the oscillating structure, so that the energy and, therefore, the energy consumption grow with the frequency. If we sum over all oscillations, up to periods that roughly correspond to the average time span between two elementary discharges, this can yield a quite significant amount of energy that is constantly used up in the oscillating structure and there converted into heat. The battery that lies between the electrodes of the discharge tube is to be viewed as the energy source, just as in the entire energy balance of the amplifier tube; through this effect, a part of the energy consumption is simply transferred from the surface of the electrodes to the interior of the oscillation circuit. The tube therefore acts, to a certain degree, as an oscillation generator, solely because of the atomistic structure of electricity. A second generalization consists in the consideration of the “feedback of voltage fluctuations” onto the magnitude of the current fluctuations in the tube. It is sufficient to consider again the case of a simple oscillation circuit and to point out that the oscillation energy caused by the shot effect does not, as it would appear from Eq. (14′), grow to infinity for a given $\tau$ and growing $\vartheta$ and $L$, but rather reaches a maximum value that depends only on the properties of the tube and on the expression $e\,i_0/\tau$. One objection, which we do not want to pass over lest the line of thought developed here appear incomplete, can be raised against the calculation of the “smallest observable signal energy.” We assumed that a signal is not clearly observable when it gives rise to voltage fluctuations in the input circuit of an amplifier structure that are not larger than those caused by the constant noise, e.g., by the shot effect.
However, this is true only for the case where systems are present in the output circuit of the structure that respond to the same frequency as the input circuit. In wireless technology, for example, this condition is not met; the final periods are significantly longer than the characteristic period of the input circuit. For this case, the dangers of high frequencies do not arise to the extent that would appear from Eq. (15). For a clarification of these conditions, we consider the extreme case of very long final periods. If the oscillations in the input circuit are sufficiently amplified and then measured, e.g., with a hot-wire ammeter, or rectified and then measured with a galvanometer, then it is clear that the difference in the constant deflection with and without constant signal excitation is of the same magnitude whether or not constant noise is present. Only the zero point of the indicator would be shifted, and one could believe it possible to eliminate the influence of the constant oscillations in the input circuit through a compensation method. Indeed, this points to a way of eliminating the noise caused by the thermal and shot effects to a large extent, and of making significantly smaller signal energies detectable than those given by Eqs. (2) and (15). But this option, too, has its limits. It lies in the nature of the completely irregular character of the shot effect that fluctuations of medium intensity occur even over larger periods; the pointer of a measuring instrument would, therefore, not be completely at rest but would execute fluctuations with its own period, which would make the readout of significantly smaller permanent effects impossible. A minimum value for the undisturbed recordable energy must exist even in this case.
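The residual pointer fluctuations invoked here shrink only like the inverse square root of the number of independent noise samples that the instrument averages. A minimal numerical sketch (the sample counts and the unit-variance noise model are illustrative assumptions standing in for the input-circuit noise):

```python
import random
import math

random.seed(1)

def residual_fluctuation(n_samples, runs=400):
    """RMS run-to-run scatter of an n-sample average of unit-variance noise."""
    means = [sum(random.gauss(0.0, 1.0) for _ in range(n_samples)) / n_samples
             for _ in range(runs)]
    m = sum(means) / runs
    return math.sqrt(sum((v - m) ** 2 for v in means) / runs)

# averaging over a 10x longer period (P -> 10 P) should reduce the residual
# pointer fluctuation by about sqrt(10)
short = residual_fluctuation(100)    # shorter averaging period
long_ = residual_fluctuation(1000)   # tenfold averaging period
ratio = short / long_
```

The observed ratio clusters around $\sqrt{10}\approx 3.16$, mirroring the $\sqrt{\tau/P}$ improvement derived in the following paragraph.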
The assumption that the output instrument averages the energies in the input circuit over a period $P$ yields, by an approximate calculation, that the smallest still undisturbed recordable power differs from the one expressed in Eq. (15) by a factor $\sqrt{\tau/P}$. Here, $\tau$ again represents the characteristic period of the input circuit, and it is assumed that $P$ is large compared to the decay time of the input circuit. For long final periods (on the order of 10 s), the observable energy is thereby lowered by a factor of 100 to 10,000 for signals in the range from audible frequencies up to wireless frequencies.

Summary
Charlottenburg, 26 June 1918 (Received 1 July 1918)

Editors’ Comments

The editors thank Chris Mack for urging the translation of this significant article to commemorate the 100th anniversary of its original publication; they also thank Timothy Brunner for helpful comments. Finally, we would like to thank Peter Gregory of Wiley and Eric Pepper of SPIE for enabling the publication of this translation.
