Applications that use single-photon sources and detectors cover an expanding range of topics, as demonstrated by the contents of these proceedings. They can be considered as falling under three overlapping categories. The first, which is primarily the domain of single-photon detectors, is concerned with detection at the lowest of light levels, where the discrete nature of light is a phenomenon that has to be accommodated. Examples of these are sensing applications, such as low ambient light sensing and surveillance, medical imaging, and astronomy. The second is where the quantized energy structure of matter is the important factor. Examples of these are photoelectric and thermal detection, single-photon generation, and the majority of spectroscopic applications, which may range from solid-state physics to a multiplicity of biological applications. The final category covers applications where the quantum nature of light is the key factor. This is relevant in the fields of quantum information processing and quantum metrology, where industrial technologies based on the production, manipulation, and detection of single photons are emerging, and this is the field that is driving the need for new metrology. Within this category lies the application of nonclassical correlations, such as entanglement. Quantum key distribution (QKD) and quantum random number generators (QRNGs) are two of the most commercially advanced technologies and among the first to directly harness the peculiar laws of quantum physics. Figure 1 shows some of the main application areas.
This review is concerned with the traceable methods that are available for reliable characterization of single-photon detectors and sources. Traceable means that the result of a measurement, no matter where it is made, can be related to a national or international measurement standard and that this relationship is documented. The concept of traceability is important because it makes possible the comparison of the accuracy of measurements worldwide according to a standardized procedure for estimating measurement uncertainty.1 The chosen system of units is the International System of Units (SI). SI is not static but evolves to match the world’s increasingly demanding requirements for measurements at all levels of precision and in all areas of science and technology. However, any changes in the SI system are designed to ensure that any step-changes in units are minimized. Hence, traceability to the SI ensures comparison of measurements not only worldwide, but also between one year and another.
Current optical power scales are based on measurements in the 0.1- to 1-mW regime, which are suitable for traditional requirements.2 The rapid development of single-photon sources and detectors, and the growth of associated technologies, such as QKD and quantum computing, require measurements in the single- and few-photon regime. The international acceptance of these new quantum-based technologies requires improved traceability and reliability of measurements at the few-photon level. This has led to a proposal to expand the formulation of the candela, the SI base unit of luminous intensity, to include a definition based on photon number. The reader is directed to Zwinkels et al.3 for further details.
A true single-photon source (SPS) will emit individual photons at periodic intervals. To guarantee its operation, the core of the source will be a solitary quantum emitter, such as an atomic particle, molecule, or quantum dot. Deterministic emission is triggered by a periodic electronic or optical excitation of the source. An ideal source will exhibit highly efficient polarized emission into a well-defined spatial optical mode; there will be negligible temporal jitter of the photon emission with respect to the clock signal triggering the emission.
Various measurements exist to quantify the operating parameters of a practical SPS. The degree of second-order coherence $g^{(2)}(0)$ determines how antibunched the emitted photons are; for an ideal source, $g^{(2)}(0) = 0$. This parameter is a measure of the probability of multiphoton emission events, which is greatly reduced in comparison to a coherent light source. The coherence time $T_2$ (measured via the coherence length $L_c$) and the source emitter's lifetime $T_1$ are important for evaluating the quantity $T_2/2T_1$. For an ideal source, no dephasing of the emitted photons exists and the ratio is $T_2/2T_1 = 1$, meaning that the photons are wholly indistinguishable. This can be quantified further through the observation of two-photon quantum interference with perfect visibility.
In the context of quantum-photonic technologies based on entanglement, the antibunched and indistinguishable nature of photons is paramount. However, real (imperfect) sources will deviate from the ideal case, and the emission will not be perfectly antibunched or indistinguishable or free of jitter. The parameters described above can be used as metrics to determine the utility of a practical SPS in such applications.
Single-photon detectors, also referred to as photon counters, operate in either gated or nongated (i.e., continuously gated) modes, and are only able to detect an incoming optical pulse during these gates. Single-photon detectors can be characterized by the following properties:
• photon-number resolution (the ability to distinguish the number of photons in each detected pulse)
• detection efficiency, $\eta$ (the probability that a photon incident at the optical input of the detector within a detection gate will be detected and produce an output signal)
• dark count probability (the probability that a detector registers a detection event within a detection gate in the absence of incident photons)
• after-pulse probability (the probability that a detector registers a false detection event in the absence of illumination, conditional on a true photon detection event in a preceding detection gate)
• dead-time (the time interval after a detection event when the detector is unable to provide a response to an incoming photon)
• recovery time (the shortest time duration after a photon detection event for the detection efficiency to return to its steady-state value)
• jitter (the temporal variation in the output signal produced by the detector upon registering an event)
• linearity of response (the property that the detector response is unchanged as the number of incident photons per pulse varies over a specified range)
• maximum exposure level (the photon flux above which the detector may undergo a temporary or permanent change in characteristics).
These properties may be wavelength and temperature dependent, as well as varying across the spatial dimensions of the detector.4 A property that is important in QKD is photon counter indistinguishability, i.e., the extent to which the output voltage pulses of different detectors can be distinguished in the time domain.
An ideal detector would be photon-number resolving for all $n$, where $n$ is the number of photons in a pulse, as well as having unity detection efficiency and zero dark-count and after-pulse probabilities, with no dead time or recovery time. Various single-photon detection technologies exist, and a particular technology may only exhibit a subset of the characteristics listed above.
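The properties listed above can be combined into a simple click model. As an illustrative sketch (the function name and parameters are our own, not drawn from any standard library), the probability that a non-photon-number-resolving detector with detection efficiency eta and per-gate dark-count probability p_dark fires on a pulse of n photons can be written as:

```python
def p_click(n, eta, p_dark):
    """Probability that a click/no-click detector fires in a gate
    containing n photons.

    Each photon is assumed to be detected independently with probability
    eta; a dark count can also trigger the detector, so a 'no click'
    requires that no photon is detected AND no dark count occurs.
    """
    return 1.0 - (1.0 - p_dark) * (1.0 - eta) ** n
```

Setting eta = 1 and p_dark = 0 recovers the ideal threshold detector, which clicks if and only if n ≥ 1; real devices additionally exhibit the after-pulsing, dead-time, and recovery-time effects listed above, which this one-gate model ignores.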
Single-Photon Sources and Detectors
The simplest approximation to an SPS would take thermal radiation (from a hot filament or discharge lamp) and attenuate it down to the photon-counting level. While this form of light may be relevant in ambient low-light sensing applications, the most common SPS is an attenuated laser, where the photons in the attenuated output are distributed within time intervals or pulses [continuous wave (CW) or pulsed radiation, respectively] according to Poissonian statistics. Thermal light also gives rise to a distribution of photon number states, and both types of light can have fewer (zero) or more than one photon per time interval/pulse.
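For reference, the two photon-number distributions mentioned above can be computed directly. A minimal sketch (function names are illustrative):

```python
import math

def poisson_pn(n, mu):
    """P(n) for a coherent (attenuated laser) pulse with mean photon number mu."""
    return math.exp(-mu) * mu ** n / math.factorial(n)

def thermal_pn(n, mu):
    """P(n) for single-mode thermal light (Bose-Einstein distribution)."""
    return mu ** n / (1.0 + mu) ** (n + 1)
```

For an attenuated laser with mean photon number mu = 0.1, roughly 90% of pulses are empty and about 0.5% contain more than one photon, which is why such sources only approximate a true SPS.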
Practical SPSs can be divided into two classes. The first comprises heralded SPSs, which are based on correlated pairs of photons produced in a nonlinear medium. The second class uses a single quantum emitter, for example, as in demonstrations with trapped atoms and ions, as well as with molecules, color centers, and quantum dots in a solid-state host.
The drive to develop true, deterministic SPSs was provided by the realization that such states, together with interference and detection, could be used to achieve quantum computation and other photonic quantum technologies,5–7 including applications to metrology.8–11 Deterministic sources would provide scalability (i.e., being able to manipulate large numbers of photons using finite resources), which is considered to be problematic for heralded photons created in spontaneous nonlinear processes. Reviews covering the topic in more detail have been published.12–18
Heralded single photons
Correlated pairs of photons produced in a three- or four-wave nonlinear process can be used as a source of single photons. Detection of one of the created photons heralds the existence of its single-photon twin. The creation process is random in time, and multiphoton events can still arise. Until true deterministic sources are realized, these probabilistic processes are the pragmatic option for producing single- and entangled-photon states and exploring the potential offered by entanglement and nonclassical correlations.
Spontaneous parametric downconversion (SPDC) is a three-wave mixing process where a photon in a medium (usually a crystal) with $\chi^{(2)}$ nonlinearity can be downconverted into two photons under the constraint of energy and momentum conservation, commonly referred to as phase matching. SPDC can produce correlations19,20 (in the pair of downconverted particles) in time, energy, momentum, polarization, and angular momentum. In addition to producing heralded single photons, SPDC can also be used to produce pairs of photons that are entangled in one or more of their observables.21,22 Periodic poling has been applied to crystals in order to achieve phase-matching conditions not otherwise possible,23 and has been applied to waveguide structures24,25 where the downconverted photons are constrained to the single (or few) spatial mode(s) of the waveguide. We highlight a few implementations—entanglement-based QKD over 144 km through free space,26 a 1.55-μm source using a fast shutter to suppress background counts,27 and multiplexing of four sources to provide a high output rate while suppressing multiphoton states.28
Spontaneous four-wave mixing (SFWM) is a $\chi^{(3)}$ nonlinear process, where two pump photons create the pair of downconverted photons. This process is weaker than the $\chi^{(2)}$ process, so a longer interaction path is required. Initial work focused on silica optical fiber,29,30 and recent efforts have applied the SFWM process to on-chip waveguides in silica.31–33 In silica, a particular problem to overcome is the Raman background.
Atomic particles can be trapped and cooled almost to rest, thus providing a reproducible single quantum emitter. A single trapped ion will exhibit antibunching,34 and such systems have been used successfully in demonstrating probabilistic atom-photon entanglement,35,36 atom-atom entanglement,37 and quantum teleportation between remote atomic particles.38 A necessary requirement for this work is that the photons from separate sources exhibit quantum interference.39 Cavity quantum electrodynamics offers the route to a deterministic source; the ion is confined in an optical cavity which is near resonant and strongly coupled to one of two transitions in a Λ-configuration of the atomic-level structure.40 The technique of stimulated Raman scattering via adiabatic passage can generate single photons deterministically with high efficiency, although at limited rate. An analogous approach using single atoms confined in an optical dipole trap has yielded deterministic sources,41 which exhibit quantum interference.42 These atom-cavity systems have been used as nodes43,44 in the demonstration of an elementary quantum network.45
Using advanced micro- and nanofabrication techniques, III-V semiconductor heterostructures can be engineered to create SPSs.14,17 For example, InAs quantum dots can be grown within a structure based on GaAs; these can be excited by optical46 or electrical47 means. The Purcell effect can be used to enhance the rate and directionality of the emission by embedding the quantum dot in a pillar microcavity;48,49 layers of a semiconductor with an alternating refractive index form Bragg mirrors, and the structure is etched to a micron-scale diameter to minimize the cavity’s mode volume and maximize enhancement. Optically pumped structures can generate indistinguishable photons,46 which have been used to demonstrate entanglement50 and teleportation.51 Optically pumped semiconductor sources have also been used to generate polarization-entangled photons.52,53 Indistinguishability of single photons emitted by an electrically pumped source has also been demonstrated.54 Quantum dot emission is inhomogeneous; this requires that emission from separate sources is tuned to enable photon indistinguishability. As an alternative to pillar microcavities, locating quantum dots in a photonic crystal cavity has also been investigated.55 Cryogenic temperatures are necessary for all of these semiconductor approaches.
Color center defects in diamond is another approach to generating single photons,15 the nitrogen-vacancy (NV) defect56 being the most widely studied. Recent work has shown coupling of individual NV centers to a microring resonator57 and a photonic crystal cavity58 in single-crystal diamond, as well as coupling of a single NV center to a fiber microcavity.59 These approaches will be used in quantum information applications in conjunction with the internal energy levels of the NV defect. Other diamond defects can also be used for generating single photons, one example being that due to chromium.60
Molecules have also been investigated.61 Their rich vibrational structure leads to spectral broadening at ambient temperatures. At very low temperatures, the transition connecting the ground vibrational states of the ground and excited electronic states is a very narrow line, the zero-phonon line (ZPL). ZPL line-widths are often lifetime-limited at low temperatures. Photostability is a serious issue.62 Recent work has demonstrated quantum interference from separate molecules63 and improved the spontaneous emission rate.64
Several types of single-photon detectors have been developed, and a brief overview of the most commonly used types is given below. For detailed reviews, see Hadfield,69 Eisaman et al.,16 and Migdall et al.18
Non-photon-number resolving detectors
The photomultiplier tube (PMT) was the first established photon-counting technology70 and consists of a vacuum tube with a light-absorbing photocathode from which electrons are liberated through the photoelectric effect. The few-electron photocurrent is multiplied by a cascade of secondary electron emissions from a series of electrodes (dynodes), each positively biased with respect to the previous one, in order to obtain a macroscopic current pulse. Different photocathode materials can be chosen in order to optimize the spectral response, ranging from UV to telecom wavelengths. An evolution of the PMT idea is the microchannel plate PMT, where glass capillaries are fused in parallel and coated with secondary electron-emitting materials to obtain a single continuous biased dynode71 with an improved temporal resolution with respect to the original PMT (tens of picoseconds versus hundreds of picoseconds).
Single-photon avalanche photodiodes (SPADs) operating in Geiger mode are the most common and commercially successful solution for single-photon counting, having replaced PMTs in many applications. SPADs are based on an avalanche photodiode structure reverse-biased above the breakdown voltage (known as Geiger-mode operation), so that electron-hole pairs generated by photon absorption are multiplied in an avalanche gain process. To control this effect, the avalanche must be stopped and the device reset by a passive or active quenching circuit.72–75 Silicon-based SPADs achieve single-photon sensitivity in the VIS-NIR, with low dark counts and timing jitter reduced to tens of picoseconds.
SPADs for the 1.3- and 1.55-μm telecommunication bands use lower-band-gap semiconductor materials, such as Ge and InGaAs/InP.76–80 These devices suffer from dark count rates that are orders of magnitude higher than that for their Si counterparts and are typically operated in gated Geiger mode, although free-running operation has recently been achieved.81,82 Much of the ongoing effort to improve InGaAs SPAD performance is targeted at the commercial development of fiber-based QKD systems.83 Novel biasing and gating schemes, employing a dc bias just below the avalanche breakdown voltage on top of which a high-speed periodic low-amplitude bias signal is added,84,85 enable increased device clock rates,86 a feature that is particularly important for QKD.87
Frequency upconversion schemes are used to convert photons in the 1.3- and 1.55-μm telecommunication bands to a shorter wavelength that can be detected by Si-SPADs, which typically have a higher detection efficiency than infrared SPADs.88–91 Upconversion exploits sum-frequency generation in nonlinear optical media, where a weak signal (the single-/few-photon state) is combined with a strong pump to produce an output at the sum of the frequencies of the two incoming signals. Despite what seems a very simple and elegant solution, there are several technical challenges and drawbacks to frequency upconversion detectors related to the stability of, and fluorescence and other optical losses within, the nonlinear medium.
Superconducting nanowire single-photon detectors (SNSPDs) exhibit low dark count rates, short recovery times, and low timing jitter. The detection element is a nanowire of superconducting material in a meander structure. The superconducting wire is biased just below its critical current, and a localized resistive hot-spot is created when a photon strikes the wire, triggering the voltage pulse that signifies the detection of the photon.92–94 In contrast to SPADs, which operate at room temperature or at temperatures achievable using thermoelectric cooling, these devices operate at a few kelvin. The first devices used NbN nanowires, but recent improvements in detection efficiency have been achieved by exploiting optical cavities95 and amorphous W-Si nanowires.96
Photon-number resolving detectors
PMTs, SPADs, and, more recently, SNSPDs are available commercially and are reasonably straightforward to use, but they do not have photon-number resolving (PNR) ability. In contrast, detectors with some form of PNR ability are often research prototypes.
One approach is to spatially or temporally multiplex non-PNR detectors.69,97–101 In addition to single-chip spatial multiplexing of SNSPDs,102 there have been several efforts to fabricate SPAD arrays on a single chip—one example is a silicon photomultiplier device consisting of an array of SPAD pixels that are read in parallel.103 Other examples are arrays where each SPAD of the array is integrated directly with quenching circuitry and millimeter-scale SPAD arrays.104–108
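The counting statistics of such multiplexed schemes can be sketched analytically. Assuming an ideal, lossless splitter that routes each of n photons independently and uniformly to one of N non-PNR elements (an idealization; real devices add loss and cross-talk), the probability that exactly k elements click follows from an inclusion–exclusion count of the occupied elements:

```python
from math import comb

def p_k_clicks(k, n, N):
    """Probability that exactly k of N multiplexed, lossless, non-PNR
    detector elements click when n photons are split uniformly among them.

    The inner sum counts the surjections of n photons onto a chosen set
    of k elements (inclusion-exclusion); dividing by N**n converts the
    count into a probability.
    """
    if k > min(n, N):
        return 0.0
    surjections = sum((-1) ** j * comb(k, j) * (k - j) ** n for j in range(k + 1))
    return comb(N, k) * surjections / N ** n
```

Because two photons can land on the same element, k systematically underestimates n; this saturation is what limits the photon-number fidelity of multiplexed detectors.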
In addition to these extrinsic PNR detectors based on multiplexed non-PNR detectors, there are laboratory-prototype intrinsic PNR detectors, e.g., visible-light photon counters (VLPCs) and superconducting transition-edge sensors (TESs).
VLPCs are low-temperature, semiconductor-based, high-efficiency PNR detectors.109–111 In these devices, the absorption of a photon produces an electron–hole pair that, in the low-voltage gain region, initiates a multiplication process with noise close to the theoretical minimum. This gives rise to a signal that is proportional to the photon number, and the approach has proven successful for counting up to five photons.
Superconducting TESs provide almost ideal detection efficiency and intrinsic PNR ability.112–114 A major limitation is that they must be operated in the tens of millikelvin regime since they are essentially very sensitive bolometers. The sensor is a superconducting film maintained at the superconducting transition; any change in temperature causes a change in resistance, which is detected using a SQUID amplifier. Another drawback of the TES is that the energy resolution and recovery time are, respectively, directly and inversely proportional to the superconducting transition temperature, which currently limits the maximum repetition rate.115,116
Quantum Key Distribution
QKD83,117 is arguably the first commercialized quantum optical technology and uses single photons to establish a secret key (cipher) between two parties over an open optical channel, such as free space26 or an optical fiber.118 If a hacker intercepts these photons, (s)he will disturb their encoding in a way that can be detected. QKD does not prevent hacking, but reveals whether a hacker has been able to compromise the key. The simplest configuration, known as “prepare-and-measure” or “one-way,” comprises a transmitter (Alice), which encodes bits on single photons, and a receiver (Bob), which decodes these bits. In the “plug-and-play” or “go-and-return” configuration, Bob transmits photons, Alice encodes information on them, and resends them to Bob for decoding. The most commercially advanced QKD systems operate over an optical fiber, use attenuated laser pulses (faint pulses) as an approximation to true single photons, and encode information on the phase of the photons.119 Photons are distributed in attenuated laser pulses according to Poissonian statistics; hence, some pulses will contain two or more photons. In order to reduce the power of photon-number-splitting120 attacks on these multiphoton pulses, the Scarani-Acin-Ribordy-Gisin121 and decoy state122 protocols were developed.
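The vulnerability to photon-number splitting can be quantified from the Poissonian statistics alone. A minimal sketch (the function name is our own):

```python
import math

def multiphoton_fraction(mu):
    """Fraction of *non-empty* faint-laser pulses that contain more than
    one photon, assuming Poissonian statistics with mean photon number mu.
    These are the pulses exposed to photon-number-splitting attacks.
    """
    p0 = math.exp(-mu)          # probability of an empty pulse
    p1 = mu * math.exp(-mu)     # probability of exactly one photon
    return (1.0 - p0 - p1) / (1.0 - p0)
```

For a typical mean photon number of mu = 0.1, about 5% of the non-empty pulses carry more than one photon, which motivates countermeasures such as the SARG and decoy-state protocols cited above.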
QKD is a physical, as opposed to an algorithmic, process. The laws of physics prove the security of QKD if faithfully implemented. The physical performance of the QKD system at the time of creating the secret key is, therefore, essential to its security. As such, it is one of the drivers for single-photon source and detector metrology.
Measurement of the performance of the optical components of a QKD system can be used to establish (1) whether they satisfy the assumptions and requirements of security proofs,123,124 (2) the performance of the system in terms of expected secure bit-rate and range, (3) immunity from side-channel attacks,125,126 and (4) whether component performance has changed, either from natural ageing or from device manipulation,127128.–129 i.e., hacking attacks via the open optical channel.
For in-fiber faint-pulse QKD systems, the most important properties of the emitted photons are the mean photon number(s), timing jitter, and any means of distinguishing the photons apart from their phase (e.g., from their spectral or temporal characteristics). For the photon receiver, the relevant properties are photon detection probability, dark count and after-pulse probability, dead time, recovery time, and spectral and temporal distinguishability.
Random number generators are also essential components of QKD systems, since the encoding, as well as the intensity for decoy protocol systems, must be varied in a truly random way. Optical quantum random number generators operate at the single-photon level and depend on the performance of their constituent single-photon sources and detectors.130
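As one illustrative post-processing step (not tied to any particular commercial QRNG), residual bias in a raw stream of detector-derived bits can be removed with von Neumann debiasing, at the cost of discarding at least half of the bits:

```python
def von_neumann_debias(raw_bits):
    """Remove bias from a stream of independent but possibly biased bits
    (e.g., derived from which-detector-clicked events).

    Non-overlapping pairs are examined: '01' outputs 0, '10' outputs 1,
    and '00'/'11' are discarded. Independence of successive raw bits is
    assumed; correlated bits require stronger randomness extractors.
    """
    out = []
    for b0, b1 in zip(raw_bits[::2], raw_bits[1::2]):
        if b0 != b1:
            out.append(b0)  # first bit of each unequal pair
    return out
```

Because P(01) = P(10) for independent bits of any fixed bias, the output bits are unbiased even when the underlying detection probabilities are not.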
Characterization of Sources
Characterization of an SPS is achieved through the estimation of its relevant parameters, using dedicated measurement techniques. A provisional list of these parameters, and the associated measurement techniques, is presented below.
The most important measurement is evaluating the probability of having more than one photon emitted by the source within a prescribed time interval. This is commonly performed with a Hanbury Brown-Twiss (HBT) interferometer operating at the single-photon level. It is usually implemented using two threshold (click/no-click) detectors placed at the output ports of a 50:50 beam splitter [Fig. 2(a)].16,18,131 How well an SPS emits only single photons can be quantified by means of the $\alpha$ parameter proposed by Grangier et al.131 This is essentially an anticorrelation criterion based on the measured count and coincidence probabilities.
$$\alpha(\tau) = \frac{Q^{(2)}(\tau)}{Q^{(1)}_{r}\,Q^{(1)}_{t}},$$ where $Q^{(1)}_{r(t)}$ is the probability of a count in the reflection (r) or transmission (t) port of the beam splitter, $Q^{(2)}(\tau)$ is the probability of a coincidence in counts, and $\tau$ is the time delay between separate detection events at the beam splitter output ports. In the single-photon community, the $\alpha$ parameter is often called the second-order correlation function $g^{(2)}(\tau)$, but we prefer to refer to $\alpha$ since $g^{(2)}(\tau)$ has a different definition,132 despite the fact that in the few-photon regime the two definitions are asymptotically equivalent. Idealized plots of $\alpha(\tau)$ for different sources are illustrated in Fig. 3.
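In practice, the anticorrelation parameter is estimated from raw count totals. A rough sketch for a pulsed source, neglecting dark counts, dead time, and interferometer imbalance (the argument names are our own):

```python
def alpha_estimate(n_r, n_t, n_c, n_trig):
    """Estimate the anticorrelation parameter from HBT counts.

    n_r, n_t : singles counts in the reflected / transmitted ports
    n_c      : coincidence counts between the two ports
    n_trig   : number of excitation (trigger) pulses

    The per-pulse probabilities are formed by dividing each count by the
    number of trigger pulses; alpha is the ratio of the coincidence
    probability to the product of the singles probabilities.
    """
    p_r = n_r / n_trig
    p_t = n_t / n_trig
    p_c = n_c / n_trig
    return p_c / (p_r * p_t)
```

For Poissonian light the coincidence probability factorizes, giving alpha = 1; values below 1 indicate antibunching, with alpha approaching 0 for an ideal single-photon source.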
Each pulse from the SPS is expected to contain $n$ photons ($n = 1$ in the ideal case). Thus, by considering the proper detection model for the click/no-click detectors, the probability of a detector firing due to an optical pulse containing $n$ photons, as well as the probability of observing a coincidence between the two detectors of the HBT interferometer due to a single pulse from the SPS, can be properly evaluated.18
It is worth noting that with typical click/no-click detectors the $\alpha$ parameter is almost independent of the detection efficiency of the detectors (provided they are fairly similar), while it can be strongly affected by the presence of dark counts or counts due to stray light. For this reason, time-correlated single-photon counting (TCSPC) measurement techniques can be helpful in providing proper estimation of the background counts. Furthermore, the detector dead times and imbalance of the HBT interferometer may bias the estimation of the $\alpha$ parameter. Proper estimation of these nonidealities is necessary to implement the corrections needed for a faithful estimate of $\alpha$. Examples of proper detection models in HBT interferometers can be found in Brida et al.133 and Migdall et al.18
A two-time correlation function $G^{(2)}(t_1, t_2)$, where $t_1$ and $t_2$ are the delay times between the excitation pulse and photon detection at the two detectors, can be used to analyze the dynamics of the single-photon emission.134
Coherence Time, Emission Lifetime, and Indistinguishability
These measurements are important for characterizing SPSs designed for entanglement-based applications (see Sec. 1.1).
The coherence time can be measured using a Michelson or Mach-Zehnder interferometer to produce single-photon self-interference [Fig. 2(b)].135 The coherence length is the 1/e decay point of the interference envelope when measured in optical path difference units, and the coherence (decay) time is obtained by dividing this distance by the speed of light (Fig. 4).
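A minimal sketch of extracting the coherence time from a measured envelope (the sampling grid and the linear interpolation are our own simplifications; in practice a model fit to the full envelope is more robust):

```python
import math

def coherence_time(path_diffs_m, visibilities):
    """Estimate the coherence time from an interference envelope.

    path_diffs_m : optical path differences (metres), increasing from zero
    visibilities : fringe visibility measured at each path difference

    Returns the delay (seconds) at which the envelope first falls to 1/e
    of its zero-delay value, using linear interpolation between samples.
    """
    c = 299_792_458.0  # speed of light, m/s
    target = visibilities[0] / math.e
    for (d0, v0), (d1, v1) in zip(
        zip(path_diffs_m, visibilities),
        zip(path_diffs_m[1:], visibilities[1:]),
    ):
        if v1 <= target:
            frac = (v0 - target) / (v0 - v1)  # crossing point within the step
            return (d0 + frac * (d1 - d0)) / c
    raise ValueError("envelope never falls to 1/e of its peak")
```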
The lifetime of a single emitter, which will generally be a combination of the intrinsic radiation lifetime, the dephasing time, and jitter (due to nonradiative transitions), can be measured by using pulsed excitation. Exploiting TCSPC, and performing coincidence measurements between the triggering signal and the detected photon, one obtains a temporal profile corresponding to the convolution of the different components of the experiment, i.e., the radiation from the emitter, the detector, and the TCSPC instrumentation (the latter is usually negligible). The ideal situation is when both TCSPC electronics as well as the single-photon detector have negligible jitter with respect to the source. When this is not the case, a proper characterization of the detector and TCSPC electronics should be performed in order to deconvolve the profile of interest. A single-photon detector with the lowest possible jitter is needed for this measurement. Superconducting nanowire detectors, which can have a jitter of just tens of picoseconds,16,18,69,136 appear to be the best for this purpose.
Indistinguishability is measured using Hong-Ou-Mandel (HOM) interference.137,138 If two photons are perfectly indistinguishable, i.e., they are in exactly the same mode and are each incident at the same time at the separate input ports of a 50/50 beam splitter, they will bunch or coalesce, i.e., both will exit together from one of the exit ports [Fig. 2(c)]. The interference curve is usually measured by placing a photon-counting detector at each output port of the beam splitter and measuring the detection coincidences as the time delay between the photons being incident at the input ports is varied. The detection coincidences will be a minimum for zero time delay; fully destructive interference will occur only if the two photons are completely indistinguishable. The widely accepted definition of the measured HOM visibility is $$V_{\mathrm{HOM}} = \frac{C_{\max} - C_{\min}}{C_{\max}},$$ where $C_{\max}$ is the coincidence rate measured with the photons fully distinguishable (large relative delay) and $C_{\min}$ is the minimum coincidence rate at zero delay.
Data fitting is usually applied to the interference curve, whose form depends on the spectrum of the interfering photons,139 as well as imperfections in the experimental setup, in order to extract a reliable value for the interference visibility and, hence, the indistinguishability (Fig. 5).140
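Once the coincidence minimum and the fully distinguishable baseline have been extracted from the fitted curve, the visibility itself is a one-line computation (a sketch; real analyses also correct for accidental coincidences and beam splitter imbalance):

```python
def hom_visibility(c_delayed, c_zero):
    """Hong-Ou-Mandel visibility from coincidence rates.

    c_delayed : coincidence rate with the photons fully distinguishable
                (large relative delay)
    c_zero    : coincidence rate at zero relative delay (dip minimum)

    Returns 1 for perfectly indistinguishable photons and 0 for fully
    distinguishable ones.
    """
    return (c_delayed - c_zero) / c_delayed
```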
Wavelength and Spectral Line-Width
It is possible to measure the wavelength and spectral line-width of an SPS with a monochromator or a wavemeter coupled to a detector operating at the single-photon level.
An interesting solution for measuring the spectral line-width exploits a stable, tunable Fabry-Perot resonator.141 The technique requires that the cavity free spectral range (FSR) is greater than the line-width of the SPS, i.e., $\Delta\nu_{\mathrm{FSR}} > \Delta\nu_{\mathrm{SPS}}$, to enable an unambiguous measurement of the source spectrum, yet the cavity resonance line-width must be narrower than the source line-width to adequately resolve any spectral structure. When used in transmission mode, the Fabry-Perot cavity can be tuned through resonance across the SPS spectral profile, with the transmission monitored using a photon-counting detector.
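The two cavity requirements translate into a simple design calculation. For an idealized linear Fabry-Perot cavity of length L and finesse F (ignoring mirror dispersion; function names are our own):

```python
def fsr_hz(cavity_length_m, c=299_792_458.0):
    """Free spectral range of a linear Fabry-Perot cavity of length L:
    FSR = c / (2 L)."""
    return c / (2.0 * cavity_length_m)

def resolution_hz(cavity_length_m, finesse):
    """Cavity resonance line-width (FWHM): the FSR divided by the finesse."""
    return fsr_hz(cavity_length_m) / finesse
```

A 1-cm cavity has an FSR of about 15 GHz; with a finesse of 1000 its resonance line-width is about 15 MHz, so it suits sources whose line-widths fall between these two scales.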
Mean Photon Number and the Variation in Mean Photon Number
A measurement that is important, particularly for a pseudo SPS based on an attenuated laser, is related to the estimation of the mean photon number and its variance. A solution suitable for any kind of SPS exploits a calibrated single-photon detector. The mean number of photons and its variance will be estimated on the basis of assumptions about the statistical model of the detection process and the photon statistics of the source (see Sec. 5.1.4). If the latter is not available, quantum tomographic techniques should be employed.
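For the common case of a Poissonian (attenuated laser) source measured with a calibrated click/no-click detector, the mean photon number follows directly from the no-click probability. A sketch under those stated assumptions:

```python
import math

def mean_photon_number(p_no_click, eta, p_dark=0.0):
    """Infer the mean photon number per pulse of a Poissonian source from
    the no-click probability of a calibrated click/no-click detector.

    Assumes P(no click) = (1 - p_dark) * exp(-eta * mu), i.e., Poissonian
    photon statistics, binomial loss with known efficiency eta, and dark
    counts independent of the illumination.
    """
    return -math.log(p_no_click / (1.0 - p_dark)) / eta
```

Inverting the no-click probability in this way is only as good as the statistical model assumed; for sources with unknown statistics, the tomographic techniques of the next section are required.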
Quantum Tomography and State Reconstruction
Knowledge of the density matrix of a quantum optical state is fundamental for several applications, and considerable effort has been devoted to finding reliable methods to fully, or partially, reconstruct it (see Refs. 142 and 143 and references therein).
Quantum tomography is an experimental procedure to reconstruct the density matrix of an unknown quantum state when many identical copies are available in the same state, so that a different measurement can be performed on each copy of the state. Balanced homodyne detection is able to measure all possible linear combinations of the position and momentum operators (the quadratures) of a quantum optical field. The probability distribution of the quadrature operators is the Radon transform of the Wigner function of the quantum optical state.142,143
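Written out, the homodyne distribution measured at local-oscillator phase $\theta$ is the marginal of the Wigner function along the rotated quadrature, which is precisely a Radon transform:

```latex
p(x;\theta) = \int_{-\infty}^{\infty}
  W\!\left(x\cos\theta - p\sin\theta,\; x\sin\theta + p\cos\theta\right)\,\mathrm{d}p
```

Inverting this transform over all phases $\theta \in [0, \pi)$ recovers $W$, which is the basis of the tomographic reconstruction.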
In principle, quantum tomography allows perfect reconstruction of the state in the limit of an infinite number of measurements. However, in the practical finite-measurement case, statistical errors affect the quality of the reconstruction. Data analysis strategies and optimization algorithms, e.g., adaptive tomography or maximum-likelihood strategies, have been investigated in order to obtain a physical and unbiased reconstructed density matrix.144–148
Direct reconstruction of the density matrix of a specific degree of freedom of a quantum optical field instead of the Wigner function has been routinely performed in finite-dimension Hilbert spaces, e.g., in the case of photon polarization (including qubits and qutrits)144,149–153 and in the case of photon optical angular momentum.154,155 This is achieved by making a quorum of direct projective measurements (with a single-photon detector) on the different copies of the quantum optical system (typically a single photon). Optimization algorithms have been developed to obtain the reconstructed physical density matrix that most likely corresponds to the measured data.142
It is often not necessary to reconstruct the full density matrix, as the experimentalist is interested only in reconstructing the diagonal elements in the photon number basis, i.e., the photon statistics. The most direct way to measure a photon number distribution is by using photon number resolving detectors. It is possible to deconvolve the photon statistics of the incoming light field by knowing the mode of operation of the PNR detector (e.g., linear detection in the case of TES detectors156–158 and nonlinear detection in the case of temporally or spatially multiplexed detectors97–100,103) as well as its inefficiencies (e.g., quantum efficiency, dark counts, reliability of the photon number discrimination, pixel cross-talk, etc.).
PNR detection has also been used to reconstruct just the underlying mode structure of multimode classical and nonclassical light fields instead of the full density matrix. Full characterization of the mode structure involves a series of separate measurements in spatial, temporal, frequency, and polarization domains, requiring a range of instrumentation. This method uses only the measurement of the photon number distribution of a field and exploits an optimization algorithm together with some hypotheses about the field modes.159
Another approach uses a non-PNR detector, which can only distinguish between when photons are absorbed by the detector (producing a “click” or an “on” signal) and when no photons are absorbed (producing an “off” signal, i.e., no “click” signal). The data used for the reconstruction of the probability distribution are the probability of “no-click” for different values of the quantum efficiency of the “on/off” detector. This quantum efficiency variation is obtained by using a calibrated attenuator in front of the detector and applying specific optimization algorithms in order to obtain the most likely photon number distribution.160,161 A minor modification of this technique has also been shown to reconstruct some off-diagonal elements of the density matrix.162
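The inversion underlying the “on/off” method can be sketched numerically. If the photon statistics are p(n), the no-click probability at efficiency η is a linear function of the p(n), namely Σ_n p(n)(1 − η)^n, and can in principle be inverted from data taken at several known efficiencies. The sketch below uses a plain least-squares inversion on noiseless synthetic Poissonian data; the published methods cited in the text instead use maximum-likelihood algorithms that properly enforce physicality, so this is only a toy illustration.

```python
import math
import numpy as np

def reconstruct_photon_statistics(etas, p_off, n_max):
    """Invert P_off(eta) = sum_n p(n) (1 - eta)^n by least squares.
    A crude non-negativity clip stands in for a proper physicality
    constraint (maximum likelihood in the cited work)."""
    A = np.array([[(1.0 - eta) ** n for n in range(n_max + 1)]
                  for eta in etas])
    p, *_ = np.linalg.lstsq(A, p_off, rcond=None)
    return np.clip(p, 0.0, None)

# Synthetic check: Poissonian light, mean photon number mu = 1.0
mu, n_max = 1.0, 10
true_p = np.array([math.exp(-mu) * mu ** n / math.factorial(n)
                   for n in range(n_max + 1)])
etas = np.linspace(0.05, 0.95, 19)   # set via a calibrated attenuator
p_off = np.array([np.sum(true_p * (1.0 - eta) ** np.arange(n_max + 1))
                  for eta in etas])
rec = reconstruct_photon_statistics(etas, p_off, n_max)
```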
Polarization state reconstruction
Photon polarization is the quantum mechanical equivalent of the classical electromagnetic light polarization. The quantum polarization state vector for a single photon, for instance, is identical to the Jones vector usually used to describe the polarization of a classical wave. Thus, quantum state tomography is equivalent to the estimation of the Stokes parameters for classical light, the only difference being that instead of measuring light power, one observes the relative frequencies of single-photon detection events, i.e., the conditional probabilities of detecting single photons. Tomographic reconstruction of the polarization state at the single-photon level is more affected by detection imperfections (e.g., dark counts, afterpulses, etc.) than conventional polarimetry operating in the macroscopic optical regime. For this reason, proper reconstruction algorithms, such as maximum likelihood algorithms, are employed to reconstruct the physically meaningful polarization state of the single photons.142,149 This technique also allows one to reconstruct the evolution of the qubit states traveling and/or interacting in photonic devices through the formalism of quantum process tomography.163
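The linear-inversion step of single-photon polarimetry can be sketched as follows, using count numbers in the six standard projections (H, V, D, A, R, L). The counts and the basis convention are illustrative; real data would additionally require dark-count subtraction and the maximum-likelihood step discussed in the text to guarantee a positive semidefinite density matrix.

```python
import numpy as np

def stokes_density_matrix(counts):
    """Single-qubit density matrix from six projection count numbers,
    via normalized Stokes parameters. Convention (assumed here):
    |H> = (1, 0), so S1 multiplies sigma_z."""
    nH, nV, nD, nA, nR, nL = (float(counts[k]) for k in "HVDARL")
    s1 = (nH - nV) / (nH + nV)
    s2 = (nD - nA) / (nD + nA)
    s3 = (nR - nL) / (nR + nL)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    return 0.5 * (np.eye(2) + s1 * sz + s2 * sx + s3 * sy)

# Mostly-horizontal illustrative data:
rho = stokes_density_matrix(
    {"H": 9900, "V": 100, "D": 5000, "A": 5000, "R": 5000, "L": 5000})
```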
Characterization of Detectors
The characterization of detectors is fundamental to the metrology of single-photon devices, since current traceability of optical scales, i.e., the SI system, is based on cryogenic radiometers, which are detectors of optical power [Fig. 6(a)]. Free-space monochromatic radiation at the 100-μW power level can be measured with this technique, with an uncertainty around 0.005%.164,165
However, a cryogenic radiometer is simply a well-characterized instrument and, unless linked to a more fundamental concept, has the potential for unknown systematic errors or drifts that can then propagate into all other radiometric quantities. Comparisons with other traceability routes have been carried out, at least to uncertainty levels around 0.02%, confirming the underlying principle of cryogenic radiometry.3 In the discussion below on detection efficiency, we describe various alternatives to cryogenic radiometry that may, in time, achieve the necessary accuracy to enable such a test of cryogenic radiometry, although our focus in this review is their relevance to calibrating devices operating in the photon-counting regime.
An interesting development is a prototype microscale picowatt cryogenic radiometer for electrical substitution optical fiber power measurements [Fig. 6(b)].166 The absorber is a superconducting TES, and it, the electrical heater, and the thermometer are mounted on a micromachined membrane. Initial measurements at 1550 nm with input powers from 50 fW to 20 nW show a response inequivalence between electrical and optical power of 8%. A comparison of the response to electrical and optical input powers between 15 and 70 pW demonstrates good repeatability, and the system operates with a low noise equivalent power.
Detection efficiency can be measured by sending single photons onto the detector at a known repetition rate and recording the number of detection events. The detection efficiency is the ratio of detection events to incident photon events. An ideal SPS that emits only one photon within a predetermined temporal window at a known (and variable) repetition rate does not yet exist.
The traditional approach is the substitution method based on the comparison of a traceably calibrated reference device (detector) with the detector under test (DUT). The power of incident radiation (CW or pulsed and in the appropriate power regime) is measured with the reference detector. This power can then be further attenuated by a measurable amount (either through the use of a monitor detector operating at high power167 or calibrated attenuators) to the single-photon regime and used to calibrate the DUT.
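The bookkeeping of the substitution method can be illustrated numerically: a traceably measured power is converted to a photon rate, which a calibrated attenuator then scales to the photon-counting regime. The constants below are the SI-defined values; the 100 pW power, 1550 nm wavelength, and 60 dB attenuation are illustrative choices, not values from the text.

```python
# Sketch: converting a traceably measured optical power to a photon
# rate at the single-photon end of an attenuation chain.

H = 6.62607015e-34   # Planck constant, J s (exact in the revised SI)
C = 299792458.0      # speed of light in vacuum, m/s (exact)

def photon_rate(power_w, wavelength_m):
    """Mean photon rate of a monochromatic beam: P * lambda / (h c)."""
    return power_w * wavelength_m / (H * C)

# 100 pW at 1550 nm, followed by a calibrated 60 dB attenuator:
rate_ref = photon_rate(100e-12, 1550e-9)   # ~7.8e8 photons/s
rate_dut = rate_ref * 10 ** (-60 / 10)     # ~780 photons/s at the DUT
```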
The low power limit of a reference detector is determined by the noise floor of the detector and any amplification used to obtain a measurable signal. The large transimpedance value required to convert sub-picoampere-level photocurrents poses a series of challenges to the traditional current-to-voltage converter with a feedback resistor, such as noise amplification,168 a long time constant, and a non-negligible I/V conversion factor uncertainty. The switched integrator amplifier, which employs a capacitor in place of the feedback resistor, can offer a shorter time constant, overall better noise performance, and an I/V conversion uncertainty better than 0.01%.169,170
Using an attenuation chain, a reference detector can be calibrated at the 100-pW level, with uncertainties of up to 1% (1550 nm, fiber-coupled), lower uncertainties being achievable for visible free-space radiation.
A synchrotron can function as a variable attenuator since the radiant intensity of the synchrotron radiation can be adjusted by many decades in a controlled manner with low uncertainties. A traceably calibrated reference detector is used to measure the photon flux at high power (high current). The ring current is then reduced and the count rate of the DUT measured. Scaling by the ratio of the ring currents yields the photon flux at the DUT.
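The current-ratio scaling described above is simple enough to sketch directly. The flux, currents, and count rate below are illustrative numbers, assuming strict linearity of the radiant power with ring current (as justified for storage rings later in the text).

```python
# Sketch of the synchrotron "variable attenuator" bookkeeping.

def flux_at_low_current(flux_high, i_high_a, i_low_a):
    """Photon flux at reduced ring current, assuming the flux scales
    strictly linearly with the stored current."""
    return flux_high * (i_low_a / i_high_a)

def detection_efficiency(count_rate, flux):
    """Ratio of DUT detection events to incident photons."""
    return count_rate / flux

# Calibrate 1e12 photons/s at 100 mA, then drop to 100 pA:
flux_low = flux_at_low_current(flux_high=1.0e12, i_high_a=0.1,
                               i_low_a=1.0e-10)      # 1000 photons/s
eta = detection_efficiency(count_rate=650.0, flux=flux_low)   # 0.65
```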
The Schwinger equation171 describes the spectral energy irradiated per solid angle by one electron moving on a circular arc, i.e., moving in a homogeneous magnetic field. Adapting this to electron storage rings, where the electron revolves, the Schwinger equation is multiplied by the revolution frequency ν, yielding the spectral radiant intensity for one stored electron. The spectral radiant power is given by integration over the appropriate solid angle and can be calculated from theory. Synchrotron radiation can, therefore, be considered an absolute source. Its use in such a mode requires accurate knowledge of all the storage ring parameters entering into the calculation.172 Its application to absolute calibration of detectors would also require the coupling losses into the DUT to be evaluated theoretically, which is not feasible for low-uncertainty calibrations.
If N electrons are stored, which is equivalent to a stored electron beam current proportional to N, the spectral radiant power is N times that of a single stored electron. A correction parameter accounts for the influence of the finite vertical source size and vertical divergence of the stored electron beam; at the Metrology Light Source (MLS), this correction, which depends on the wavelength and the vertical acceptance angle, is small at the wavelengths of interest.172 The direct proportionality between radiant power and the number of stored electrons holds not only for bending magnet radiation as described above, but also for the radiation of devices, such as undulators, installed in the storage ring, which can produce radiation of much higher power compared to bending magnet radiation. Therefore, the linearity of the reference detection system has to be known only in the microwatt power regime. At the MLS, the stored electron beam current can be varied by more than 11 decades, from the maximum operating current down to one stored electron (1 pA).173
Uncertainties of 0.17% and 0.16% for the measurement of the detection efficiency of two Perkin-Elmer single-photon counting modules at 651 nm were achieved174 using a reference trap detector calibrated traceably to the Physikalisch-Technische Bundesanstalt cryogenic radiometer. The absolute photon rate per stored electron in the focus was determined in the high ring current regime, where the current can be measured with a very small relative standard uncertainty. The ring current was then reduced to several hundred picoamperes, i.e., several hundred stored electrons, and the count rate of the DUT per stored electron was measured. Optical filters can adjust the calibration wavelength to any desired value covered by the synchrotron radiation spectrum. Recent work175 has extended this technique to fiber-coupled SNSPDs.
Predictable quantum efficient detector
A potential alternative to cryogenic radiometry for obtaining traceability to the SI is the predictable quantum efficient detector. It comprises two custom-made induced-junction silicon photodiodes operated under reverse bias voltage and arranged in a wedge light-trap configuration (Fig. 7).176–178 Its spectral response can, in principle, be calculated from measurements of the specular and diffuse reflectances of the photodiodes, together with calculation of the intrinsic quantum deficiency. Comparison with cryogenic radiometry has shown agreement at the 0.01% level, and the nonlinearity of its response has been measured from 100 pW to 400 μW.177 This device could, therefore, provide traceability around the 0.02% uncertainty level for visible wavelength gigahertz photon fluxes.
Another alternative to cryogenic radiometry is a method for absolutely measuring the radiance of a fiber-coupled source at 1550 nm.179,180 This compares the spontaneous emission of an erbium amplifier to the emission stimulated by the source. Using spontaneous emission as a standard was originally proposed in the 1970s181,182 and was implemented using SPDC in bulk crystals,183,184 but the free-space nature of the setup made it challenging to accurately define the number of spatial modes involved. The method operates best at the one photon per mode level and has so far demonstrated uncertainties around the 1% level. A detector of known relative spectral response can be used to transfer this to measurements at other wavelengths, and the power level can be attenuated to the single-photon level for calibrating a photon counter.
The methods described in Secs. 5.1.1 to 5.1.3 will, at some point, include a comparison between measuring incident optical power with an analog detector, whose response is linear (in the appropriate power regime) with respect to the incoming photon flux, and measuring the response to this or an attenuated flux with a photon-counting detector. Laser light (Poissonian light) and thermal light sources have photon statistics which both give rise to multiple photon events. These have to be taken into account when calibrating non-PNR detectors.101,185,186 A convenient way of analyzing this is to model a detector with finite detection efficiency η by an ideal detector (η = 1) placed behind a beam splitter with transmittance η. The ideal detector’s response to a train of pulses with photon statistics p(n) is to always indicate a detection event except for the case in which zero photons are in a pulse. Hence, the probability for a real detector to detect a photon event is given by P_det = 1 − Σ_n p(n)(1 − η)^n.
The case of incoming Poissonian light is easy to analyze, since the Bernoulli transformation leads to another Poissonian distribution, with the mean photon number reduced from μ to ημ; hence (Fig. 8) P_det = 1 − exp(−ημ). (4)
In order to obtain η from a measured P_det and known μ, we rearrange Eq. (4) as follows: η = −ln(1 − P_det)/μ. (5)
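The relation between click probability, mean photon number, and efficiency for Poissonian light, Eqs. (4) and (5), can be checked numerically; the specific η and μ values below are arbitrary illustrations.

```python
import math

# Numerical sketch of Eqs. (4) and (5) for Poissonian light: a
# non-photon-number-resolving detector of efficiency eta clicks with
# probability P = 1 - exp(-eta * mu), so eta can be recovered from a
# measured click probability and a known mean photon number mu.

def click_probability(eta, mu):
    return 1.0 - math.exp(-eta * mu)

def efficiency_from_clicks(p_click, mu):
    return -math.log(1.0 - p_click) / mu

p = click_probability(eta=0.25, mu=0.5)
eta_recovered = efficiency_from_clicks(p, mu=0.5)   # -> 0.25
```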
Figure 9 illustrates the effect of different photon probability distributions on the detection probability P_det.
Heralded single photons
An alternative technique to the traditional one of radiometric substitution is based on the use of parametric downconversion (Sec. 2.1.1) to produce a heralded SPS.19,20 Detection of one of the downconverted photons (by a single-photon detector) heralds the existence of its twin, which can be directed to the DUT (Fig. 10). This approach still suffers from multiple photon events, and various experiments187–191 have been carried out to demonstrate the equivalence of the two methods at the photon-counting level. However, optical scales remain based on cryogenic radiometry since the lowest uncertainty so far achieved with the heralded single-photon approach (0.18%) (at the single-photon level) is over an order of magnitude less accurate than that based on cryogenic radiometry (0.005%) (at the 100-μW level). This is mainly due to the need to estimate the absorption in the path the heralded photon takes from the point of creation within the nonlinear medium until it is incident on the detector, which may include geometrical or absorptive spectral filtering. The method suffers from limited spectral tunability at high accuracy and is limited to detectors that are either free-running or can be randomly gated. At present, the importance of this technique lies in the fact that it establishes an absolute means of measuring detection efficiency which is independent of cryogenic radiometry, and operates in the single-/few-photon regime.
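The core of the heralded (Klyshko) estimate is a simple ratio of coincidences to heralds, corrected for accidentals and for the transmittance of the heralded-photon path. The sketch below makes the bookkeeping explicit; the count numbers and the 0.8 path transmittance are illustrative, and a real calibration would carry a careful uncertainty budget for each correction, as the text emphasizes.

```python
# Sketch of the Klyshko heralded-photon detection-efficiency estimate.

def klyshko_efficiency(coincidences, accidentals, heralds,
                       path_transmittance):
    """Efficiency of the DUT alone: (coincidences - accidentals) per
    herald, divided by the separately estimated transmittance of the
    heralded photon's path to the detector."""
    raw = (coincidences - accidentals) / heralds
    return raw / path_transmittance

eta = klyshko_efficiency(coincidences=42_000, accidentals=2_000,
                         heralds=100_000, path_transmittance=0.8)
# raw heralding efficiency 0.40; detector efficiency 0.50
```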
Extensions of the heralded photon technique for PNR detectors
The extension of Klyshko’s technique (Sec. 5.1.5) to other kinds of single-photon detector is quite straightforward, with careful consideration of any nonidealities associated with the detection model. For example, for a PNR detector, a generalized version of Klyshko’s technique accommodating the photon number resolving ability of the detector has been implemented.192
Another calibration technique for a PNR detector inspired by Klyshko’s technique, but explicitly taking into account multiple twin-photon events produced in the SPDC process, utilizes the PNR detector’s capability to measure the photon-number distribution of an optical mode.193 Using two PNR detectors, the joint photon-number statistics between the two electromagnetic field modes of the PDC source, including photon-number correlations and individual photon-number distributions, can be determined. For each element of the resulting joint photon statistics, one can find a formula giving the quantum efficiencies of the two PNR detectors. Optimization techniques are necessary to estimate the detector efficiencies from this enlarged set of measurements. A drawback of this technique is that it is strongly dependent on the probabilistic detection model assumed (e.g., the Bernoulli model), which should be correct in its entirety not just in terms of mean values (as is the case for Klyshko’s technique). Inadequacies in the assumptions immediately propagate to the accuracy of the estimation of photon detection efficiency. Another drawback is the use of an optimization algorithm that, in general, does not yield a provable uncertainty estimation.
Dark Count Probability
The dark count probability of a detector can be measured by recording detection events per gate or per unit time in the absence of photon flux illuminating the detector’s sensitive area. A counting device records the detector’s output signal; in order to count only detection events occurring during gates, a time-correlated photon-counting device can be used.
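For a gated detector, the quantity extracted from such a measurement is simply the ratio of counts to applied gates; the numbers below are illustrative.

```python
# Sketch: dark count probability per gate for a gated detector. For a
# free-running detector, counts per unit time would be used instead.

def dark_count_probability(dark_counts, total_gates):
    return dark_counts / total_gates

p_dc = dark_count_probability(dark_counts=120, total_gates=1_000_000)
# 1.2e-4 dark counts per gate
```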
In SPAD detectors, charge carriers created during the avalanche process become trapped at atomic defect sites in the multiplication region. The subsequent detrapping of these carriers at a later time can trigger spurious additional avalanches known as after-pulses. After-pulses are a type of dark count, but unlike other dark count mechanisms—such as thermal excitations or tunneling effects—that occur randomly in time, after-pulses are strongly correlated to previous avalanches during which trap sites were populated.194
The preferred measurement sequence for a detector that exhibits after-pulsing is to measure the dark count probability, followed by the after-pulse probability194,195 and then the detection efficiency, as described by Yuan et al.85
The detector is illuminated by a pulsed laser source attenuated to the single-photon level. The laser and detector are triggered by a pulse generator, where the laser pulse frequency is stepped down by an integer factor R compared to the detector gate rate. The arrival of the laser pulses at the detector is synchronized to occur during the detector gates. A time-correlated photon counting device is used to record a time histogram of laser triggers and detections. At zero time delay with respect to the laser pulse, the histogram peak is composed of detection events observed under laser light illumination (plus dark counts). Peaks at a time delay in this histogram not corresponding to an illuminated gate are generated by the after-pulse effect (and dark counts). By normalizing the detected count rates to the total numbers of applied gates, the total after-pulse probability can be calculated using Eq. (7) from the average numbers of counts per illuminated and nonilluminated gate, corrected for the dark counts expected from the previously established dark count probability.
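One common way of performing this bookkeeping can be sketched as follows. Note that the exact form of Eq. (7) is not reproduced here; the simplified estimator below (counts in nonilluminated gates in excess of the dark-count level, referred to the photon-induced counts in illuminated gates) is an assumption made for illustration, and all numbers are synthetic.

```python
# Sketch of a total after-pulse probability estimate from the histogram
# described in the text (simplified, assumed form; not Eq. (7) itself).

def afterpulse_probability(counts_illuminated, counts_nonilluminated,
                           n_gates_illuminated, n_gates_nonilluminated,
                           p_dark):
    per_ill = counts_illuminated / n_gates_illuminated
    per_non = counts_nonilluminated / n_gates_nonilluminated
    excess = per_non - p_dark           # after-pulses per quiet gate
    photon_induced = per_ill - p_dark   # true detections per lit gate
    # Total after-pulses (summed over all quiet gates) per detection:
    return (excess * n_gates_nonilluminated
            / (photon_induced * n_gates_illuminated))

p_ap = afterpulse_probability(counts_illuminated=10_000,
                              counts_nonilluminated=450,
                              n_gates_illuminated=100_000,
                              n_gates_nonilluminated=900_000,
                              p_dark=1e-4)
```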
With knowledge of the dark count probability and the after-pulse probability, the photon detection probability, i.e., the probability of detecting a photon in each illuminated gate, can be obtained from Eq. (8) using the total number of illuminated gates.
The detection efficiency can then be calculated from Eq. (5). The mean number of photons per laser pulse, μ, can be obtained by calibrating the attenuated laser source against a traceable standard. The latter is currently not available at the single-photon level. A practical solution is to use a calibrated detector to measure the power for a given pulse repetition rate and then use a calibrated attenuator to reduce the pulse photon number to the single-photon level. Figure 11 shows an example of data collected using this technique. The after-pulse probability can also be analyzed as a function of time after a “true” detection, and this is illustrated in Fig. 12.
Dead Time, Reset Time, and Recovery Times
These parameters limit the maximum count rate of a single-photon detector. There are differing definitions of these terms in the literature, and we shall adopt those given by Migdall et al.18 After a detection event, there will be a time interval when the detector as a whole is unable to provide an output in response to incoming photons at the single-photon level, which may be due to intrinsic processes within the detector or its control electronics. We shall call this the dead time. After the dead time has elapsed, the detector is able to detect incident photons; however, it may take some further time before its detection efficiency recovers to its steady-state value. We shall call this the reset time. We shall define the sum of these times, i.e., the dead time plus the reset time, as the recovery time. If the detector recovers to its normal value slowly, it may be useful to specify a shorter recovery time where the detection efficiency is some fraction (e.g., 90%) of the final value. We shall call this the partial recovery time. We note that in the literature, the recovery time has sometimes been defined as the dead time.196
The dead time, reset time, and recovery times can be measured using the two-pulse method.196–198 A train of double pulses of equal intensity, separated by a tunable time and attenuated to the single-photon level, is sent to the detector. In the case of gated detectors, the photons will be synchronized to the detector gates and their time separation incremented in steps of a gating period. The probabilities of detecting the first photon (P1), the second photon (P2), and both photons (P12) will be recorded as a function of their time separation by recording detections for several incident pairs of pulses at each time separation. The time between pairs of pulses should be made large enough to exceed the expected recovery time and, in the case of SPAD detectors, to ensure a negligible after-pulse probability; P1 should therefore be independent of the pulse separation.
From Fig. 13, the probability of detecting the second photon, P2, will be zero for separations shorter than the dead time and will then become nonzero. In order to estimate the recovery time, i.e., the point at which, by our definition, the detection efficiency has returned to its steady-state value, we find the smallest separation for which P2 = P1.
Similarly, estimation of the partial recovery time at the 90% level requires finding the smallest separation for which P2 = 0.9 P1.
A check can be made that the probability of detecting both photons equals the product of the individual detection probabilities, i.e., P12 = P1 P2, for separations greater than the recovery time, i.e., where the effect of after-pulses is negligible.
Maximum Count Rate
To measure the achievable maximum count rate, the detector is illuminated by pulsed laser light at the same frequency as the detector gating rate, corresponding to one illumination pulse per detector gate. By measuring the detector count rate as a function of the photon flux, the number of detection events per gate will saturate at the detection rate limit of the SPAD. The results can be compared with the prediction of the maximum count rate as a function of the detection efficiency and dead time of the detector.
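A common point of comparison for such saturation data is the standard non-paralyzable dead-time model; whether this model applies depends on the detector, so the sketch below is offered as an assumed illustrative model rather than the prediction prescribed in the text, and the parameter values are arbitrary.

```python
# Sketch: non-paralyzable dead-time model for count-rate saturation.
# The measured rate approaches 1/dead_time as the incident flux grows.

def measured_rate(incident_rate, efficiency, dead_time_s):
    r = incident_rate * efficiency
    return r / (1.0 + r * dead_time_s)

# With a 50 ns dead time the rate saturates toward 1/(50 ns) = 2e7 /s:
low = measured_rate(1e5, 0.25, 50e-9)    # essentially linear regime
high = measured_rate(1e12, 0.25, 50e-9)  # saturated regime
```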
Timing Jitter
To ensure good timing resolution of a single-photon detector, the time interval between the absorption of a photon and the generation of an output electrical signal should be stable, corresponding to a small timing jitter. A common technique to determine this parameter is to measure the full-width half-maximum (FWHM) of the detector’s instrument response function. For that purpose, the FWHM of the laser pulses illuminating the detector should be less than the timing jitter of the detector. By correlating many detection events with the trigger signal of the laser, a time delay histogram can be recorded by a time-correlated counter, from which the detector’s response function can be calculated. Many detectors have non-Gaussian and asymmetric response functions, which can be taken into account in a detailed analysis.
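Extracting the FWHM from such a histogram is a small numerical exercise. The sketch below locates the half-maximum crossings by linear interpolation between bins; a Gaussian is used for the synthetic data purely for convenience (real response functions may be asymmetric, as noted above), and the 30 ps width is illustrative.

```python
import numpy as np

# Sketch: FWHM of a time-delay histogram (instrument response function).

def fwhm(bin_centers, counts):
    t = np.asarray(bin_centers, dtype=float)
    c = np.asarray(counts, dtype=float)
    half = c.max() / 2.0
    above = np.where(c >= half)[0]
    i0, i1 = above[0], above[-1]
    # Interpolate the half-maximum crossing on each flank.
    left = np.interp(half, [c[i0 - 1], c[i0]], [t[i0 - 1], t[i0]])
    right = np.interp(half, [c[i1 + 1], c[i1]], [t[i1 + 1], t[i1]])
    return right - left

t = np.linspace(-200, 200, 401)               # ps, 1 ps bins
counts = 1e4 * np.exp(-t**2 / (2 * 30.0**2))  # Gaussian, sigma = 30 ps
jitter = fwhm(t, counts)                      # ~2.355 * sigma = 70.6 ps
```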
Positive-Operator Valued Measure Reconstruction
Detector characterization is normally carried out by measuring the parameters of a trusted model describing the detector operation. Where characterization of the mode of operation of a detector without preliminary assumptions is needed, quantum detector tomography may present the ideal solution. It consists of determining the positive-operator valued measure (POVM) corresponding to the detection process, i.e., the set of operators that, for a given input quantum optical state, determines the probability of obtaining a certain macroscopic output signal from the detector.
Measurements with a quorum of probe states enable a complete determination of the POVM of a detector to be achieved. Experiments have been performed on phase-insensitive199–201 and phase-sensitive202 PNR detectors using coherent states as probes. As usual with tomographic techniques, experimental errors and statistical fluctuations may lead to unphysical POVM elements, and specific optimization algorithms constraining POVM elements to be physical should be employed.
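The structure of coherent-state detector tomography can be sketched for the simplest case: a phase-insensitive click/no-click detector, whose POVM is diagonal in the photon-number basis. The no-click probability for a coherent state of mean photon number μ is then linear in the unknown diagonal elements π_n, and can be inverted by least squares on noiseless synthetic data. This toy inversion stands in for the regularized, physically constrained fits used in the cited experiments.

```python
import math
import numpy as np

# Sketch of detector tomography with coherent-state probes, assuming a
# phase-insensitive on/off detector (POVM diagonal in photon number):
# P_noclick(mu) = sum_n pois(n; mu) * pi_n, linear in the pi_n.

def poisson(n, mu):
    return math.exp(-mu) * mu ** n / math.factorial(n)

def reconstruct_noclick_povm(mus, p_noclick, n_max):
    A = np.array([[poisson(n, mu) for n in range(n_max + 1)]
                  for mu in mus])
    pi, *_ = np.linalg.lstsq(A, p_noclick, rcond=None)
    return np.clip(pi, 0.0, 1.0)   # crude physicality constraint

# Synthetic detector of efficiency 0.6, for which pi_n = (1 - 0.6)**n
eta, n_max = 0.6, 8
mus = np.linspace(0.1, 2.0, 20)
p_noclick = np.exp(-eta * mus)     # exact for this model detector
pi = reconstruct_noclick_povm(mus, p_noclick, n_max)
```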
POVM extraction with a large quorum of probe states suffers from slow convergence,199,201 and it was shown203 that taking advantage of quantum resources (e.g., entanglement) can increase the speed of convergence. Taking advantage of quantum correlations with an ancillary state, a first experimental POVM reconstruction was carried out.204 Here, it was assumed that the POVM was diagonal in the Fock (photon number) basis, as was the case in most of the previous POVM reconstructions discussed,199,201 while only in the case of Natarajan et al.202 were nondiagonal POVM elements considered.
Summary and Forward Look
Measurements that were once concerned with basic research are now required to validate devices that are components of emerging industrial technologies and applications based on the production, manipulation, and detection of single photons and of classically and nonclassically correlated photons.
A current example of this relates to QKD; an industry specification group of the European Telecommunications Standards Institute (ETSI) is addressing standardization issues for faint-pulse phase-encoded QKD over fiber.205 One aspect of this is the drafting of specifications for measuring the optical performance of these systems. This requires measurement of various properties of laser pulses attenuated to single-photon level, as well as of single-photon detectors.
As the manipulation of photons and photon–matter interactions becomes more widely used in industrial applications, additional metrics and measurement methods will be required for characterizing photon states and detectors. QKD and QRNG systems drive the need for faster and more efficient detectors which do not need cryogenic cooling. Deterministic sources of single and entangled photons continue to be heavily researched, and their realization is likely to herald an explosion of applications. In quantum technologies, there is a movement away from large devices, which have to be mounted on a table-top or breadboard, toward integrated circuits. While detector and source metrology has already made the step from free space to fiber-coupled devices, another step will be to further transfer this measurement capability onto devices embedded or fabricated in on-chip optical integrated circuits. A similar process is occurring in the medical diagnostics field, where lab-on-a-chip technology is under development.
The move to define all of the SI base units in terms of fundamental constants opens up the prospect for directly realizing absolute scales in situ, rather than by a traceability chain to instrumentation maintained in a metrological institute. The metrology of single-photon sources and detectors will continue to face many exciting challenges.
The authors acknowledge funding from projects MIQC (contract IND06) and SIQUTE (contract EXL02) of the European Metrology Research Programme (EMRP). EMRP is jointly funded by the EMRP participating countries within EURAMET and the European Union. C.J.C. and A.G.S. also acknowledge funding from the National Measurement Office of the U.K. Department of Business, Innovation and Skills. I.P.D. also acknowledges funding from FIRB “Future in Research 2010” project (CUP code: D11J11000450001) funded by the Italian Ministry for Teaching, University and Research (MIUR).
Christopher J. Chunnilall is a senior scientist in the Quantum Detection group at the National Physical Laboratory (NPL), United Kingdom. He received his BSc and PhD degrees from Durham University and King’s College London, respectively, and is the author of more than 40 peer-reviewed and conference proceeding papers. His current research addresses the measurement needs for quantum optical technologies, such as quantum key distribution, and the application of single and entangled photons to metrology.
Ivo Pietro Degiovanni is a permanent researcher in the Optics Department at the Istituto Nazionale di Ricerca Metrologica, Italy. He received his PhD from the Polytechnic of Turin and has coauthored more than 60 peer-reviewed papers. His main research interests are in quantum radiometry, quantum enhanced measurements, quantum information (in particular QKD), and the foundations of quantum mechanics. He serves on the scientific committee of the “Single-Photon Workshop” conference series.
Stefan Kück is the head of the Photometry and Applied Radiometry Department at the Physikalisch-Technische Bundesanstalt (PTB), Germany, and a professor at the Technical University of Braunschweig. He received his diploma and doctoral degree from the University of Hamburg and is the author of 5 book articles and more than 110 journal articles and conference proceeding papers. His current research addresses the metrology for single-photon emitters and detectors, and the investigation toward single-photon standard sources.
Ingmar Müller is a postdoctoral researcher in the Detector Radiometry group at the PTB in Berlin. He received his physics diploma and his doctoral degree from the Humboldt-Universität zu Berlin. He is the author of more than 15 publications in international journals and conference proceedings. His current research interests include absolute radiometry, detectors with predictable quantum efficiency, and traceability for quantum radiometry.
Alastair G. Sinclair is a principal scientist in the Quantum Detection group at NPL, United Kingdom. He received his BSc and PhD degrees from the University of Strathclyde, Scotland, and then carried out research on the quantum statistics of ultracold atoms as a postdoctoral research fellow at Stanford University. Since then he has been at NPL, where he carries out research into trapped ions and single photons for quantum metrology.