
1. Introduction

In recent years, much work has been directed to the development of suitable technologies for scalable quantum computation. This effort is motivated by the promise of quantum algorithmic speedup, with a notable early example provided by Shor’s algorithm for integer factoring.^{1} Despite the tremendous advances in quantum technologies^{2}^{–}^{7} reported in the last few years, the implementation of a large-scale universal quantum computer is still far from our current capabilities. Hence, intermediate milestones need to be identified in the long-term effort toward harnessing the computational potential of quantum systems. A first fundamental step in this direction would be the achievement of the regime of quantum computational supremacy,^{8} i.e., the experimental demonstration of a quantum device capable of performing a computational task unambiguously faster than present-day classical computers. Within this framework, Aaronson and Arkhipov (AA)^{9} formulated a well-defined computational problem that they called boson sampling. This problem consists of sampling from the output distribution of $n$ indistinguishable bosons that interfere during the evolution through a Haar-random linear network. In Ref. 9, strong evidence was provided that boson sampling is an intractable problem for classical computers, as it is related to the evaluation of permanents of matrices with complex entries (a problem known, in computational complexity terms, to belong to the #P-hard class^{10}). On the other hand, boson sampling can be tackled with dedicated quantum hardware, which, despite not being universal for quantum computation, is capable of implementing the required dynamics.
To this end, one of the most suitable platforms is provided by photonic systems, as the necessary elements (sources, linear evolution, and detection) are available with present technology.^{2} Given the lack of error correction—an issue shared by all current proposals for quantum computational supremacy^{8}—it is an open question whether current technology is capable of scaling boson sampling to arbitrarily large sizes while maintaining a quantum advantage, which has also motivated research into more robust variants of the model.^{11}^{–}^{17} Following these theoretical developments, a strong experimental effort was initiated to realize progressively larger instances of boson sampling. This is leading to a race aimed at reaching the quantum advantage^{18} regime in a photonic platform, namely the condition in which the experiment outperforms a classical computer. In parallel, theoretical work was required to define suitable methods to verify that a given device is sampling from the correct distribution.^{19} This is indeed a relevant issue, since the very complexity of the problem forbids the application of the usual verification methods, and classical simulations become progressively intractable with increasing system size. In this paper, we discuss recent advances in the field of photonic boson sampling, with particular attention to experimental implementations and validation methods. The remainder of this paper is organized as follows. In Sec. 2, we first provide a theoretical overview of the problem, discussing the computational model, classical simulation algorithms, and conditions that render the system amenable to efficient classical simulation. Then in Sec. 3, we discuss variants of the original task that have been proposed to improve the efficiency of the quantum simulator without affecting the problem’s complexity. In Sec. 4, we review the experimental implementations reported so far, discussing the employed photonic platforms. In Sec.
5, we provide an overview of the validation techniques for boson sampling that have been proposed and tested experimentally. In Sec. 6, we then discuss the scalability of photonic platforms toward achieving the quantum supremacy regime. Finally, we provide an outlook in Sec. 7, where we also discuss recently highlighted applications of boson sampling in contexts different from the original proposal.

2. Boson Sampling: Theoretical Overview

In this section, we review the theoretical underpinnings of the boson sampling problem. In Sec. 2.1, we define the problem and discuss its computational complexity. In Sec. 2.2, we review the known classical simulation algorithms, and in Sec. 2.3 we discuss some regimes in which the classical simulation of boson sampling is known to become tractable.

2.1. Model

Consider a set of $m$ modes with associated creation operators $a_i^\dagger$, for $i=1,\dots,m$, satisfying bosonic commutation relations. A Fock state of $n$ photons in these modes can be written as

Eq. (1)
$$|S\rangle = |s_1 s_2 \dots s_m\rangle = \prod_{i=1}^{m} \frac{(a_i^\dagger)^{s_i}}{\sqrt{s_i!}} |0\rangle,$$
where $s_i$ is the number of photons in mode $i$ and $\sum_i s_i = n$. Consider now an $m$-mode linear-optical transformation (i.e., an interferometer) described by $U \in SU(m)$. Its action is fully determined by the evolution of the creation operators:

Eq. (2)
$$a_i^\dagger \to \sum_{j=1}^{m} U_{ij} a_j^\dagger.$$
It follows^{9}^{,}^{20}^{,}^{21}^{,}^{22} that the transition probability between an input state $|S\rangle = |s_1 s_2 \dots s_m\rangle$ and an output state $|T\rangle = |t_1 t_2 \dots t_m\rangle$ can be written as

Eq. (3)
$$\mathrm{Pr}[S \to T] = \frac{|\mathrm{Per}(U_{S,T})|^2}{s_1! \dots s_m! \, t_1! \dots t_m!},$$
where $U_{S,T}$ is the $n \times n$ matrix built by taking $s_j$ copies of the $j$’th column and $t_i$ copies of the $i$’th row of $U$. Since the permanent is, in general, hard to compute,^{10} Eq. (3) underpins the complexity of a particular class of linear-optical experiments. In complexity theory terms, the permanent is #P-hard,^{10} and the best known classical algorithm for computing it, due to Ryser, takes $O(n 2^n)$ steps for an $n \times n$ matrix.
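As a concrete illustration of Eqs. (1)–(3), the following sketch (function names and conventions are ours) computes a permanent via Ryser's inclusion–exclusion formula and the corresponding transition probability. This naive subset loop costs $O(n^2 2^n)$; the Gray-code variant of Ryser's formula achieves the $O(n 2^n)$ quoted above.

```python
import math
import numpy as np

def ryser_permanent(A):
    """Permanent of an n x n complex matrix via Ryser's
    inclusion-exclusion formula over column subsets."""
    n = A.shape[0]
    total = 0j
    for mask in range(1, 1 << n):  # all non-empty column subsets
        cols = [j for j in range(n) if (mask >> j) & 1]
        row_sums = A[:, cols].sum(axis=1)
        total += (-1) ** len(cols) * np.prod(row_sums)
    return (-1) ** n * total

def transition_probability(U, s, t):
    """Pr[S -> T] of Eq. (3): U_{S,T} takes s_j copies of column j
    and t_i copies of row i of the interferometer matrix U."""
    rows = [i for i, ti in enumerate(t) for _ in range(ti)]
    cols = [j for j, sj in enumerate(s) for _ in range(sj)]
    norm = math.prod(math.factorial(x) for x in list(s) + list(t))
    return abs(ryser_permanent(U[np.ix_(rows, cols)])) ** 2 / norm
```

For the balanced beam splitter $U=\frac{1}{\sqrt{2}}\bigl(\begin{smallmatrix}1&1\\1&-1\end{smallmatrix}\bigr)$, `transition_probability(U, [1, 1], [1, 1])` evaluates to zero, reproducing the Hong–Ou–Mandel suppression of the coincidence outcome.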
The connection to complexity theory was noted by Troyansky and Tishby,^{21} who attempted to leverage Eq. (3) to build a quantum-mechanical algorithm to compute permanents. They realized that their algorithm was not efficient, as exponentially many experimental samples would be needed to produce a sufficiently good approximation. AA explored this connection further by shifting to sampling problems, where the computational task is to produce a sample from some probability distribution sufficiently close to the ideal one.^{9} We now outline the argument of AA, emphasizing the requirements it raises for experimental demonstrations of boson sampling. Further developments that attempted to relax the initial requirements are reviewed in Sec. 3. The standard boson sampling setup^{9} consists of the following ingredients (see Fig. 1).
Let $\mathcal{D}$ be the probability distribution predicted by quantum mechanics for the no-collision outcomes of such an experiment, and let $\|\cdot\|$ represent the total variation distance (TVD).^{9} The main result of AA states (up to two additional complexity-theoretic conjectures) that, if a classical algorithm existed that could produce, in time $\mathrm{poly}(n,1/\epsilon)$, a sample from any distribution $\mathcal{D}'$ such that

Eq. (5)
$$\|\mathcal{D}-\mathcal{D}'\| \le \epsilon,$$
the polynomial hierarchy (PH) would collapse to the third level. Since PH is a tower of complexity classes that is strongly believed to be infinite, the result of AA provides strong evidence against the possibility of an efficient classical simulation of boson sampling. The proposal of AA uses an approximate notion of simulation, as seen in Eq. (5). Thus one could attempt a classical simulation using an algorithm for approximating the permanent, rather than Ryser’s exact algorithm. The problem with this approach is that the permanent can be shown to be #P-hard to approximate, in the worst case,^{9} to the required precision. One classical algorithm to approximate the permanent is due to Gurvits.^{26} It takes $O(n^2/\epsilon^2)$ time to compute the permanent of an $n\times n$ matrix in a boson sampling distribution to additive precision $\epsilon$. This is not sufficient for a simulation of boson sampling, because the distribution typically contains exponentially many permanents that are exponentially small, and so approximating them to a nontrivial precision via this algorithm would take exponential time. In very broad strokes, the argument of AA is as follows. If $U$ is a Haar-random matrix and $m=O(n^2)$, it can be proven that $n\times n$ submatrices of $U$ look approximately Gaussian.
Since Gaussian matrices do not seem to possess any special structure that a classical algorithm could exploit, it is reasonable to conjecture that, with high probability, the permanent of a Gaussian matrix is among the hardest ones to compute (this is the permanent-of-Gaussians conjecture). However, if a classical algorithm could sample from a distribution $\epsilon$-close in TVD to the ideal one, it would have to approximate most probabilities very well. Using an algorithm due to Stockmeyer,^{27} it would then be possible to solve a #P-hard problem inside BPP^{NP}, a complexity class that resides inside the third level of PH. Using a theorem due to Toda,^{28} this would imply that the entire hierarchy collapses to its third level. This result thus establishes that boson sampling is hard to simulate, even in an approximate sense, up to three conjectures: (i) that PH is infinite, (ii) that permanents of Gaussian matrices are #P-hard to approximate, and (iii) that permanents of Gaussian matrices are not too concentrated around zero. The third conjecture is more technical but is necessary to relate two versions of the problem of approximating permanents of Gaussian matrices: to within additive and to within multiplicative errors.

2.2. Simulation Algorithms

The AA argument provides evidence that any boson sampling simulation on a classical computer must take a time that grows exponentially with the number of photons $n$. It is important to find the best classical algorithms for boson sampling simulation, as this helps establish the goals for experimentally demonstrating an unequivocal quantum computational advantage. From a more practical point of view, simulation algorithms are required for analyzing current, small-scale implementations. Optimal simulation algorithms also help clarify how inevitable experimental imperfections, such as partial photon distinguishability and photon loss, affect the computational power of the model.
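Before reviewing sampling algorithms proper, we note that the Gurvits estimator mentioned above admits a very short implementation (a sketch; we use the standard Glynn-type estimator form, and the function name is ours):

```python
import numpy as np

def gurvits_permanent_estimate(A, num_samples, rng=None):
    """Additive-error estimate of Per(A): average the unbiased
    estimator (prod_i x_i) * prod_j (x . A[:, j]) over uniformly
    random sign vectors x in {-1, +1}^n. Each sample costs O(n^2),
    and each term is bounded by ||A||^n, so for submatrices of
    unitaries O(1/eps^2) samples give additive error about eps."""
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    acc = 0j
    for _ in range(num_samples):
        x = rng.choice([-1.0, 1.0], size=n)
        acc += np.prod(x) * np.prod(x @ A)
    return acc / num_samples
```

Since the boson sampling probabilities are typically exponentially small, the additive error $\epsilon$ would itself need to be exponentially small for the estimates to be nontrivial, which is exactly why this estimator does not yield an efficient simulation.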
In this section, we review known classical simulation algorithms for boson sampling. The brute-force approach would involve calculating the probabilities associated with all input–output possibilities (in case we are using the variable-input, scattershot approach to boson sampling; see Sec. 3.1). Doing this for $n$ photons in $m=n^2$ modes with Ryser’s algorithm, and for all $\binom{n^2}{n}$ input–output combinations, would take $O(n 2^n)\binom{n^2}{n}$ time steps. As regards space (memory) usage, it is not necessary to store all calculated permanents.^{29} This brute-force approach is extremely computationally intensive; we now review recently proposed algorithms that have considerably improved on it. In Ref. 18, two new algorithms were proposed. The first is based on the well-known technique of rejection sampling: we sample from a simple (e.g., uniform) distribution over all events, then test whether the drawn event should be accepted as an output by computing its probability (which, in our case, is given in terms of a matrix permanent). This yields exact sampling, at a reasonable computational cost, provided the largest event probability is known and not too large. If this maximum is underestimated, the simulation will only be approximate; if it is too large, the large number of rejected samples becomes a computational bottleneck. The second algorithm of Ref. 18 is a variant of the Markov chain Monte Carlo method known as metropolised independence sampling. The key idea involves constructing a Markov chain whose stationary distribution is the boson sampling distribution.
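Returning to the first algorithm of Ref. 18, the rejection-sampling loop can be sketched in a few lines (a sketch only; the event list, probability oracle `prob`, and bound `p_max` are ours, and in an actual boson sampling simulation `prob` would be a permanent computation):

```python
import random

def rejection_sample(events, prob, p_max, rng=random):
    """Exact sampling by rejection against a uniform proposal over all
    events: accept a uniformly drawn candidate with probability
    prob(candidate) / p_max. p_max must upper-bound every event
    probability, otherwise the sampled distribution is only
    approximate; the expected number of proposals per output sample is
    len(events) * p_max, the bottleneck when p_max is large."""
    while True:
        candidate = rng.choice(events)
        if rng.random() < prob(candidate) / p_max:
            return candidate
```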
To improve performance, it is necessary to choose an initial, efficiently computable distribution as close as possible to the target distribution; Neville et al.^{18} proposed to use the distribution corresponding to distinguishable photons. The algorithm also requires a burn-in phase, in which the Markov chain approaches its stationary distribution. Using validation methods we review in Sec. 5, Neville et al.^{18} showed that the algorithm performs well in simulating a 20-photon boson sampling experiment on a standard laptop, and enables the simulation of a 30-photon, 900-mode apparatus on a local server. Clifford and Clifford^{30} proposed a breakthrough boson sampling simulation algorithm. Their result applies methods of hierarchical sampling commonly used in Bayesian computation.^{31}^{,}^{32} The algorithm samples from the exact boson sampling distribution and requires only $O[n 2^n + \mathrm{poly}(m,n)]$ steps to output each sample, where $n$ is the number of photons and $m$ is the number of modes. The time to output each sample corresponds roughly to the time of computing two permanents of general $n\times n$ matrices. The algorithm has the same complexity for the scattershot variant of boson sampling (see Sec. 3.1). Using recent feasibility studies on the calculation of permanents of large matrices,^{33} it is possible to estimate^{18} that boson sampling problems only become genuinely hard for classical computers for experiments with around $n=50$ photons.

2.3. Classically Simulable Limits of Boson Sampling

One important way to investigate the limitations of boson sampling is to search for physically realistic circumstances that might alter its computational capabilities, for example, by making it efficiently simulable. In this section, we review the known results regarding the complexity of boson sampling under realistic types of errors.
The most physically relevant experimental imperfection in boson sampling is photon loss. Formally, loss has been described by two alternative models. One model, more common in the quantum optics literature, replaces losses in linear-optical elements by beam splitters that have some probability of routing photons into unmeasured modes.^{34} The second model considers the modification of Eq. (3) when a specific number of photons is lost, i.e., by averaging the final probability over all possible input photons that might have been lost.^{35}^{,}^{36} Brod and Oszmaniec^{36} discuss in detail in which regimes these two models are computationally equivalent. On the positive side, Aaronson and Brod^{35} showed that boson sampling retains its computational complexity if the number of lost photons is constant. This is done by replacing the permanent in Eq. (3) with a different matrix function and showing that, in this limit, that function is as hard to compute as the permanent. On the other hand, upper bounds on the tolerable photon losses are also known. If only $O(\log n)$ photons are left, Aaronson and Brod^{35} claimed that boson sampling becomes efficiently classically simulable. Although this was claimed without proof, it follows immediately from the algorithm of Clifford and Clifford.^{30} One can also provide an efficient classical simulation by showing that the lossy boson sampling state is sufficiently close to some well-known state that is itself classically simulable, such as thermal^{37} or particle-separable states.^{36} This approach was used to show that boson sampling becomes classically simulable when fewer than $O(\sqrt{n})$ photons remain. In a very recent development, Renema et al.^{38} proposed an algorithm that gives an efficient classical simulation when only a constant fraction of the photons survives.
Although the previous results all concern lossy boson sampling in some asymptotic limit, the latter would be directly applicable to finite-size experiments. It is also possible to combine the effects of losses with other imperfections, as done by Rahimi-Keshari et al.^{34} Using phase-space methods (i.e., based on positive quasiprobability distributions), they showed that boson sampling becomes classically simulable under a combination of (sufficiently high) constant rates of losses and dark counts. Another relevant experimental imperfection that has been extensively considered in the boson sampling literature is partial photon distinguishability, e.g., due to some internal unmeasured degree of freedom such as frequency, polarization, or arrival time. Already in their original paper, AA pointed out that if all photons are completely distinguishable, boson sampling becomes classically simulable. Interestingly, the transition probabilities for distinguishable photons are also given by permanents, but in this case they are permanents of matrices with only nonnegative entries, which can be approximated classically in polynomial time.^{39} Intermediate regimes of distinguishability have been considered by several authors.^{40}^{–}^{43} In particular, the formalism of Tichy^{40} has been used to describe an algorithm for simulating boson sampling when the photons have high pairwise distinguishability.^{44} Conversely, it is also known that if partial pairwise distinguishability decreases sufficiently fast as the number of photons increases,^{45} the complexity of boson sampling is unaffected by it. Beyond losses and partial distinguishability, boson sampling has also been investigated under other sources of noise, such as fabrication imperfections^{46}^{,}^{47} and Gaussian noise in the output data.^{48} Finally, very recently, a tensor network approach for the simulation of noisy quantum circuits has been reported in Ref. 49.
More specifically, such a simulation approach leads to the consequence that most circuits with constant levels of noise can be simulated classically with polynomial resources. Although this approach has been tailored to quantum circuits, its application can potentially be envisaged for other quantum many-body systems, including boson sampling.

3. Variants of Boson Sampling

After the initial boson sampling proposal by AA, other variants of the problem were proposed.^{11}^{,}^{13}^{–}^{17} These variants include the use of variable input modes, as opposed to a fixed set of input modes; the use of different input quantum states; and the use of detectors other than single-photon bucket detectors. In this section, we review some variants of the boson sampling problem (see Fig. 2).

3.1. Scattershot Boson Sampling

The initial implementations of boson sampling used probabilistic sources based on spontaneous parametric down-conversion (SPDC). One of the two generated photons is detected, heralding the other photon. These SPDC single-photon sources suffer from two drawbacks. The first is the probabilistic nature of the source; the second is the requirement of low pump intensity, so as to avoid higher-order nonlinear effects, which can generate more than two photons per pulse. Scattershot boson sampling^{11} is a variant of the boson sampling problem that is particularly suited to implementation using SPDC sources. Simply put, we connect $k$ (with $k>n$) such heralded SPDC single-photon sources to different input ports of the linear interferometer [see Fig. 2(a)]. The rate of usable $n$-photon input states per pulse is increased by a factor $\binom{k}{n}$, which for $k\gg n$ represents an exponential improvement in generation rate with respect to the fixed-input version of boson sampling.
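This combinatorial enhancement is easy to quantify (the numbers below are illustrative choices of ours, not taken from any specific experiment):

```python
from math import comb

# In scattershot boson sampling, any n of the k heralded sources may
# fire, so the rate of usable n-photon inputs grows by a factor
# C(k, n) relative to a single fixed choice of n input ports.
n = 3
for k in (3, 6, 13):
    print(f"k = {k:2d}: enhancement factor C({k},{n}) = {comb(k, n)}")
```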
This scattershot setup naturally solves a version of the boson sampling problem in which the inputs are chosen uniformly at random, and we are required to sample from the output distribution of the interferometer. Note that this variant does not require an increase in the pump laser power, as the laser can pump the SPDC sources sequentially, with little loss to the down-converted photons. One variant of scattershot boson sampling is the so-called driven boson sampling of Barkhofen et al.^{13} The key idea is to add $k$ beam splitter layers prior to the $m$-mode Haar-random interferometer [see Fig. 2(b)]. These beam splitters can be used to couple up to $k\times m$ heralded SPDC single-photon sources to the main interferometer, increasing the $n$-photon event rate by a factor of $k$ with respect to the original scattershot proposal with $m$ sources.

3.2. Gaussian Boson Sampling

The scattershot variant described in Sec. 3.1 is already an example of using different input states: in that case, Gaussian states that, upon detection of one photon, herald single photons. Hamilton et al.^{14}^{,}^{50} analyzed the advantages and complexity of using Gaussian states directly as inputs to a linear interferometer. Their proposal of Gaussian boson sampling^{14}^{,}^{50} relies on inputs of single-mode squeezed states, linear interferometers, and single-photon detectors at the output [see Fig. 2(c)]. They showed that the output statistics are given by the hafnian^{20} of a matrix that encodes information about both the linear interferometer and the Gaussian input state. Like the permanent, the hafnian is #P-hard to compute. This made it possible to define a variant of the boson sampling problem called Gaussian boson sampling.
By relying on conjectures about hafnians that mirror the conjectures about the permanent used in the standard boson sampling problem, they showed that this problem is also intractable for classical computers. Classical benchmarking of Gaussian boson sampling was also performed on the Titan supercomputer.^{51} There it was shown that the classical simulation of an 800-mode Gaussian boson sampling experiment, in which 20 of the 800 output detectors click, can be performed in $\sim 2$ h. Gaussian boson sampling uses all the squeezed photons, rather than using half of them to herald the other half. This advantage, however, is offset by the fact that hafnians are slightly easier to calculate.^{50}^{,}^{52} Gaussian boson sampling allows for the use of higher pump power, which in scattershot boson sampling had to be kept low to avoid multiple-photon generation. This variant also inherits the exponential advantage in event rate of the scattershot variant. In comparison with scattershot boson sampling, Gaussian boson sampling events form a much smaller sampling space, as variable inputs are not required. This results in an experimental advantage over the scattershot variant if one wants to completely characterize the probabilities associated with each event. For larger implementations, however, this is not a tenable goal; we will discuss the problem of boson sampling validation in Sec. 5. By considering the time-reversed version of Gaussian boson sampling, one sees that a variant of boson sampling with single-photon inputs and Gaussian measurements is exactly as hard to simulate as Gaussian boson sampling. This variant has recently been analyzed in detail.^{53}^{,}^{54} Furthermore, a variant of Gaussian boson sampling with threshold detectors has also been investigated.^{55} Some possible computational applications of Gaussian boson sampling, which we will revisit in Sec.
7, include finding dense subgraphs^{56} and perfect matchings of graphs,^{57} improving the performance of stochastic optimization algorithms,^{58} and the simulation of molecular dynamics.^{59}^{,}^{60} Gaussian boson sampling has also been linked to the dynamical Casimir effect in multimode waveguide resonators.^{61}

3.3. Other Variants

Besides scattershot and Gaussian boson sampling, other variants of the task have been theoretically proposed and investigated. In Refs. 16 and 17, the sampling complexity has been analyzed for photon-added and photon-subtracted states, and for different detection schemes [see Fig. 2(d)]. Depending on the specific configuration, the complexity is either maintained for all system parameters or shows a transition toward a classically simulable regime for large photon numbers. Finally, the complexity of multiboson correlation sampling with photons having no spectral overlap has been analyzed in Ref. 15. Here, the output photons, after evolution through a random linear network, are measured by a time- and polarization-resolved correlation detection apparatus [see Fig. 2(e)]. In this case, the complexity is provided by the interference of all $n!$ paths that are indistinguishable to the detectors. This variant of the original problem has also provided insight into the role of the measurement type in the particle indistinguishability regime.^{43} In addition, a scattershot version of multiboson correlation sampling has been proposed in Ref. 62, enabling enhanced event rates by sampling also from the temporal and frequency modes of the input photons.

4. Experimental Boson Sampling

To perform a standard boson sampling experiment with photons, one needs three building blocks: single-photon sources, unitary transformations (implemented by linear interferometers), and single-photon detectors. In Fig. 3, we summarize the technical solutions adopted in the experimental demonstrations of boson sampling so far.
A list of the experiments reported in the literature is provided in Table 1, where the main features of the three building blocks are also detailed.

Table 1. Relevant details of the main photonic boson sampling experiments reported in the literature. Note: $n$ is the maximum number of detected photons in the boson sampling experiment after unitary evolution; $m$ is the number of available optical modes. SBS, scattershot boson sampling; GBS, Gaussian boson sampling.
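All of the experiments listed here aim to implement (approximately) Haar-random unitaries. Numerically, such unitaries are commonly drawn via the standard QR-decomposition recipe sketched below (our naming; this is the numerical sampling procedure, distinct from the operational, component-by-component approaches used to program physical interferometers):

```python
import numpy as np

def haar_random_unitary(m, rng=None):
    """Draw an m x m unitary from the Haar measure: QR-decompose a
    complex Gaussian matrix, then fix the phases of R's diagonal
    (without this correction the result is not Haar-distributed)."""
    rng = rng or np.random.default_rng()
    z = (rng.standard_normal((m, m))
         + 1j * rng.standard_normal((m, m))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))  # rescale the column phases
```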
While diverse in their experimental details, the first demonstrations of boson sampling^{63}^{–}^{67}^{,}^{69}^{,}^{76} share the main features of their experimental platforms. In particular, in all these experiments, several identical photons are generated by SPDC in nonlinear crystals [Fig. 3(a)], they undergo a unitary transformation in a guided-wave architecture [Figs. 3(e) and 3(g)], and they are finally detected by silicon single-photon avalanche photodiodes (SPADs) [Fig. 3(i)]. To produce more than two identical photons, the pump intensity of the SPDC process is increased so that the probability of simultaneous generation of two or more pairs becomes significant. If the different pairs of photons are collected on distinct spatial or polarization modes, the photons can easily be directed to separate input ports of the linear-optical circuit. However, the emission of multiple pairs is exponentially less efficient than the generation of a single pair. In fact, in the mentioned experiments, at most four identical photons were generated on different modes, pumping one^{63}^{,}^{65}^{,}^{66}^{,}^{69}^{,}^{76} or two^{64}^{,}^{67} distinct nonlinear crystals. In addition, it should be noted that in this process the undesired emission of two pairs on the same two modes has the same probability as the emission of two pairs on four distinct modes. The correct event is identified by employing one of the four output photons as a trigger while only the remaining three are injected into the interferometer, and postselecting on the overall detection of four photons. This is a relevant aspect, since multiple emissions of photons on the same mode must be avoided in boson sampling experiments: in that case, transition amplitudes are related to permanents of matrices with repeated rows, which can be easier to compute. As a result, all these first experiments report at most three-photon genuine boson sampling instances. In Ref.
67, the output distribution of four photons on different modes is reconstructed a posteriori from a probabilistic input. The implementation of the unitary transformation was done with a guided-wave architecture, which enables one to cascade interferometers with a compactness and stability impossible to achieve with a conventional bulk-optics approach. While boson sampling experiments have also been reported using fiber splitters^{63}^{,}^{70} or arrays of parallel, continuously coupled waveguides^{67} [Fig. 3(g)], the most common layout employs an integrated network of directional couplers (the integrated version of the bulk beam splitter) and phase shifters. Indeed, by arranging such discrete elements in a triangular^{23}^{,}^{66}^{,}^{69} or rectangular^{24} layout, with judiciously chosen values of the reflectivities and phase shifts, it is possible to realize an arbitrary linear transformation of the optical modes. In Ref. 77, it was also shown how to construct an operational approach to directly implement Haar-random matrices from appropriate distributions for the optical components. One should note that the capability of producing arbitrary transformations is needed to implement Haar-random unitaries, and is thus essential for the demonstration of a real quantum advantage in a boson sampling experiment. Up to now, the realization of arbitrarily chosen transformations via the mentioned triangular-network approach has been experimentally reported in femtosecond-laser-written^{66} and silica-on-silicon^{69} waveguide circuits. The femtosecond laser writing technology^{78} offers the unique possibility of realizing three-dimensional (3D) waveguide circuits;^{79} the third dimension was exploited in Ref. 66 to deform the waveguide paths out of the plane and thus independently adjust phases and reflectivities within the interferometer.
In particular, such an innovative design enabled the realization of an arbitrarily chosen five-mode transformation [Fig. 3(e)]. More recently, a six-mode interferometer with full reconfiguration capabilities was demonstrated in Ref. 69, using state-of-the-art silica-on-silicon lithographic technology. The authors developed an integrated interferometer equipped with 30 electronically controlled thermo-optic phase shifters, which allow the dynamical tuning of the circuit to obtain any six-mode linear interferometer. In that device, the individual splitting elements of the triangular network are replaced by Mach–Zehnder interferometers: in this way, the equivalent directional coupler’s reflectivity can be controlled by acting on the internal phase of the Mach–Zehnder. As a matter of fact, all the experiments discussed above suffer from a few common and very relevant limitations, which hamper an effective scaling of photonic boson sampling up to the level of a provable quantum advantage over current classical computers. First, the SPDC generation scheme presented above hardly allows the scaling of the number of input photons beyond the small numbers already demonstrated. One possible experimental route to achieve multiphoton input states with higher efficiency, while still relying on nonlinear frequency conversion, is the scattershot approach [Fig. 3(b)], whose theory was discussed in Sec. 3.1. An experimental demonstration of this method was given in Ref. 68. There, six sources of down-converted photon pairs (implemented using three distinct BBO crystals) operate in parallel to inject three-photon states into integrated interferometers with up to 13 modes. One of the sources feeds a photon pair to a fixed pair of input ports. The other five sources produce random, but heralded, single photons that are coupled to other input ports of the interferometer.
Detection of three photons at the output, in coincidence with one heralding event, postselects the occurrence of a proper input state with three photons in separate modes. This technique enabled a 4.5-fold enhancement of the generation rate of three-photon states with respect to the multipair SPDC scheme adopted previously. In a recent paper,^{74} the scattershot approach was scaled up to 12 sources, located in six nonlinear crystals. Each of the crystals can probabilistically emit up to two heralded photons with horizontal and/or vertical polarization, multiplexed in the same spatial mode. The sources were then connected to the input ports of an interferometer with six spatial modes. By including the polarization degree of freedom, this experimental scheme implemented a 12-mode transformation composed of two independent $6\times 6$ unitary blocks. This led to the observation of three-, four-, and five-photon events thanks to the increased signal rate provided by the scattershot approach; the measured event rates were, respectively, 3.9 kHz, 44 Hz, and 0.3 Hz. When trying to scale up the number of sources, a bulk implementation of SPDC will eventually reach a limiting size, whereas a more promising platform for larger experiments is provided by arrays of integrated sources^{80}^{–}^{83} and multiplexing.^{84} Demonstrations of arrayed integrated sources have indeed been reported using different technologies and approaches, including spontaneous four-wave mixing (SFWM) in silicon photonics^{75}^{,}^{83} and glass waveguides,^{80} and SPDC in nonlinear waveguides.^{81}^{,}^{82} Very recently, scattershot and Gaussian boson sampling in a silicon chip were reported,^{75} in a device that includes four waveguide-integrated photon-pair sources based on nondegenerate SFWM. These sources consist of spiraling waveguides (to increase the nonlinear interaction region), in which two-mode squeezed vacuum is produced via a single-wavelength laser pump [Fig. 3(c)].
In the chip, the sources are directly connected to an array of 12 evanescently coupled waveguides, enabling the detection of up to eight photons on chip, corresponding to four-photon boson sampling experiments. The measured rates were four events/hour for the scattershot regime, for an overall number of $\sim 20$ recorded events, and 1.1 Hz for the Gaussian scenario. A different strategy involves the use of quantum-dot sources to generate trains of identical single photons^{70}^{–}^{73} with high efficiency. Such quantum-dot sources are based on single-defect states in the semiconductor lattice, localized within a resonant microcavity [Fig. 3(d)]. A laser pulse excites the defect state, which, upon spontaneous decay, emits a single photon. These defect states are discrete and well-defined in energy: consecutive excitation pulses thus result in the emission of highly indistinguishable single photons. The use of mode-locked oscillators as pump lasers enables single-photon trains at tens of megahertz repetition rates from the quantum dot. To efficiently route the photons into different spatial optical modes, i.e., distinct inputs of the unitary circuit, active time-to-space demultiplexing^{71}^{–}^{73} is preferably adopted (passive demultiplexing has also been used,^{70} but it is not efficient when scaling up the number of photons). With this kind of source, boson sampling experiments with up to five detected photons have been demonstrated,^{72}^{,}^{73} with five-photon event rates of 4 Hz^{72} and 0.78 kHz,^{73} respectively. In those works, active demultiplexing was performed by a chain of Pockels cells. However, the switching frequency of those bulk-optics devices did not exceed the megahertz range. Miniaturized guided-wave electro-optic switches,^{85} fabricated in nonlinear substrates, might enable scaling up the operating frequencies to match the pulse frequency of mode-locked oscillators. However, it is still a challenge to produce such devices with reasonably low insertion losses.
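A rough combinatorial illustration (our own back-of-the-envelope model, not taken from the cited works) shows why passive demultiplexing does not scale: if each of the $n$ photons in the train is routed independently and uniformly at random to one of $n$ spatial modes, the probability that all photons land in distinct modes is $n!/n^{n}$, which decays exponentially in $n$.

```python
from math import factorial

def passive_demux_success(n: int) -> float:
    """Probability that n photons, each routed uniformly at random to one
    of n output modes, all end up in distinct modes (toy model)."""
    return factorial(n) / n**n

for n in (2, 3, 5, 8):
    print(n, passive_demux_success(n))
```

Already at $n=8$ the success probability of this toy model drops below 1%, which is why active switching is preferred when scaling up.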
As a further aspect to note, semiconductor quantum-dot sources typically emit at near-infrared wavelengths around 900 nm, close to the sensitivity limits of silicon detectors. Superconducting nanowire single-photon detectors (SNSPDs) [see Fig. 3(j)] may thus be a convenient choice in this case.^{71}^{,}^{73} A second important problem of the first photonic boson sampling experiments is the relevant insertion loss of the circuit implementing the linear interferometer: in fact, the typical transmission of an integrated-optics device with reasonable complexity and 10 to 20 modes is on the order of 30%. Losses are a very relevant problem as the number $n$ of photons increases. Indeed, the success rate of the boson sampling experiment scales as the circuit transmission to the power of $n$. Two innovative experimental implementations have emerged recently that may allow one to substantially reduce the loss of the interferometric apparatus. Wang et al.^{72}^{,}^{73} proposed and demonstrated boson sampling using multimode interferometers, operating in free space, that consist of micro-optical assemblies. These are built from several fused-quartz trapezoids, covered with differently reflecting optical coatings and stacked together [see Fig. 3(f)] to produce a device with the size of a few centimetres. Collimated light beams, injected at the sides, experience multiple reflections and interference inside the structure, with each interface between two trapezoids corresponding to a row of beam splitters. Such structures have been reported with geometries that mimic both the triangular^{72} and the rectangular^{73} interferometric layouts, which would in principle allow one to implement arbitrary linear-optical transformations.
However, while the achieved experimental results demonstrate the excellent transmission (over 99%) and high phase stability of these structures, it is not clear whether they could be adopted to realize linear-optical transformations that are truly Haar-random, as the number of free parameters of the circuit is quite limited. In fact, the reflectivity of one interface defines the characteristics of an entire row of beam splitters, and phases cannot be controlled individually within a given trapezoid. A fundamentally different approach was proposed theoretically in Ref. 86 and recently reported experimentally in Ref. 71, and relies on the use of temporal modes instead of spatial ones. The available modes are a train of time bins separated by an interval $\tau $. The unitary transformation is implemented by means of two different low-loss fiber loops, where the photons can be routed by rapidly switching modulators [see Fig. 3(h)]. The smaller loop, introducing a $\tau $ delay on the injected photons, enables one to interfere adjacent time bins. The larger loop, instead, produces a delay much larger than $\tau $ and feeds the same photon packets back into the circuit, thus allowing for successive interference steps. It is easily shown that with this scheme one can implement the same beam-splitter networks that, in space, allow for arbitrary linear interferometers.^{86} Such an apparatus was used, in conjunction with a quantum-dot source, to realize a boson sampling experiment with four photons interfering in eight modes.^{71} Notably, in this approach, the photon-pulse train generated by the quantum-dot source does not need to be spatially demultiplexed.
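The idea underlying both spatial meshes and the loop-based scheme can be sketched numerically (our own minimal illustration, with arbitrary parameter choices): nearest-neighbour beam-splitter operations, applied in sequence, compose into an $m$-mode unitary, and with enough tunable angles and phases any linear-optical transformation can be reached.^{23}^{,}^{24}

```python
import numpy as np

def beam_splitter(m, i, theta, phi):
    """m-mode identity with a 2x2 beam splitter (angle theta, phase phi)
    acting on adjacent modes i and i+1."""
    U = np.eye(m, dtype=complex)
    U[i, i] = np.cos(theta)
    U[i, i + 1] = -np.exp(1j * phi) * np.sin(theta)
    U[i + 1, i] = np.exp(-1j * phi) * np.sin(theta)
    U[i + 1, i + 1] = np.cos(theta)
    return U

def mesh_unitary(m, params):
    """Compose (mode, theta, phi) nearest-neighbour couplings, applied
    step by step as in a loop-based (time-bin) architecture."""
    U = np.eye(m, dtype=complex)
    for i, theta, phi in params:
        U = beam_splitter(m, i, theta, phi) @ U
    return U

rng = np.random.default_rng(0)
params = [(i, rng.uniform(0, np.pi / 2), rng.uniform(0, 2 * np.pi))
          for layer in range(8) for i in range(7)]
U = mesh_unitary(8, params)
print(np.allclose(U.conj().T @ U, np.eye(8)))  # True: unitary by construction
```

Each pass of a photon packet through the smaller loop corresponds to one of the 2×2 couplings above, so the full sequence realizes the same mesh as a spatial interferometer.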
An experimental implementation of the multiboson correlation sampling scheme^{15} with photons generated from three atomic-ensemble quantum memories has been reported^{87} very recently, showing that measurement over inner modes can be a useful resource in multiphoton networks.^{88}

5. Validation of Boson Sampling

By its very nature, boson sampling^{9} is a sampling task, whose solution might not be efficiently verifiable (unlike problems in the complexity class NP, such as factoring). It is nevertheless crucial to identify suitable methodologies able to tackle this fundamental aspect. Indeed, any experimental instance approaching the regime of quantum advantage must necessarily be accompanied by appropriate evidence that the employed device is correctly performing the sampling process, rather than drawing data from some other, efficiently classically samplable distribution that merely resembles the correct one. The first experimental instances of boson sampling^{63}^{–}^{66} employed a direct comparison between the measured sample and the expected distribution, evaluated classically with a numerical approach, as a means to verify the device operation. However, such an approach cannot be scaled to larger instances since calculating the expected distribution is exponentially hard, and an exponentially large sample has to be collected in order to obtain a meaningful estimate of the output probabilities. While full certification is believed to be out of reach, a bottom-up approach to assess the correct functioning of a boson sampling machine is currently under investigation. The idea behind such schemes is to identify suitable classically computable mock-up distributions that represent plausible error models for the device operation. Appropriate methods should then be applied to exclude that the data were sampled from such mock-up distributions. Finally, progressively refined models representing more stringent tests can be included in the toolbox.
Starting from Ref. 89, which raised the fundamental question of boson sampling verification, some initial approaches to validation were developed (a schematic view is shown in Fig. 4). The very first example of a mock-up distribution is the uniform one, for which an efficient method requiring a very small sample size has been proposed^{19} and tested experimentally.^{67}^{,}^{76} The validation technique is based on the evaluation of an efficiently computable quantity, which is obtained from the elements of the unitary matrix and is only partially correlated to the permanents expressing the input–output probabilities. A less trivial mock-up distribution is the one obtained by sampling an $m$-mode interferometer, described by the same unitary transformation $U$, with distinguishable particles as inputs. In the latter case, no quantum interference occurs and an efficient classical sampling algorithm can be employed to draw data samples.^{9} To discriminate between this distinguishable-photon scenario and a boson sampling machine, a method based on likelihood-ratio tests has been developed and experimentally tested^{76} with up to three photons in a 13-mode integrated interferometer, also considering a Bayesian variant^{90} and its extension to the analysis of scattershot boson sampling data.^{68} More recently, likelihood-ratio tests have been employed in more complex boson sampling experiments^{70}^{–}^{73} and even used to assess a method for the classical simulation of boson sampling based on Markov chains.^{18} The upside of likelihood-ratio tests is their fast convergence in terms of sample size, whereas the main downside is that they rely on the evaluation of matrix permanents, and thus they are not computationally efficient.
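Such a likelihood-ratio test can be sketched on a toy instance (two photons in four modes; this is our own minimal implementation, not the code of the cited experiments): for an observed collision-free output $S$, the indistinguishable hypothesis assigns probability proportional to $|\mathrm{Perm}(U_S)|^2$, the distinguishable one to $\mathrm{Perm}(|U_S|^2)$, and the accumulated log-likelihood ratio indicates which model the data favor.

```python
import numpy as np
from itertools import combinations, permutations

def permanent(A):
    """Naive permanent; adequate for the small submatrices used here."""
    n = A.shape[0]
    return sum(np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

def haar_unitary(m, rng):
    """Haar-random unitary from the QR decomposition of a Ginibre matrix."""
    Z = (rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diag(R)
    return Q @ np.diag(d / np.abs(d))

m, n_photons = 4, 2
rng = np.random.default_rng(7)
U = haar_unitary(m, rng)
inp = (0, 1)
outs = list(combinations(range(m), n_photons))  # collision-free outputs only

# Hypothesis Q (indistinguishable): P ~ |Perm(U_S)|^2
# Hypothesis C (distinguishable):   P ~ Perm(|U_S|^2)
pq = np.array([abs(permanent(U[np.ix_(inp, s)])) ** 2 for s in outs])
pc = np.array([permanent(np.abs(U[np.ix_(inp, s)]) ** 2) for s in outs])
pq, pc = pq / pq.sum(), pc / pc.sum()  # restrict and renormalize

# Draw samples from the "quantum" distribution; the log-likelihood ratio
# drifts upward when the data come from indistinguishable photons.
samples = rng.choice(len(outs), size=2000, p=pq)
llr = np.sum(np.log(pq[samples]) - np.log(pc[samples]))
print("favors quantum" if llr > 0 else "favors classical")
```

The fast convergence and the permanent evaluations in the loop above illustrate both the upside and the downside mentioned in the text.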
A different approach to discriminate against distinguishable particles exploits collective properties of multiphoton interference such as bunching or clouding,^{67} the latter being the tendency of bosons to exit on nearby output modes in quantum-walk unitaries. By combining the observation of this property with Bayesian hypothesis testing, it was possible to discriminate multiphoton interference from distinguishable particles for three photons in an integrated quantum-walk unitary of 21 modes. As a matter of fact, approaches employing bunching or clouding as a signature of the correct behavior of a boson sampling machine can be tricked by a classically computable configuration called a mean-field sampler.^{91} The latter is a model originating from independent, noninterfering single-photon states with random phases and is able to mimic some (but not all) features of genuine multiphoton interference. To help discriminate against this model, zero-transmission laws^{92} based on specific transformations can be employed. More specifically, for certain unitary matrices and input states possessing specific symmetries, it is possible to efficiently predict (without relying on permanent calculations) that a significant subset of input–output combinations will never be observed when genuine multiphoton interference is present. Hence, by implementing such transformations and directly counting how many events are recorded in the forbidden combinations, it is possible to verify whether the input state comes from a mean-field sampler. These zero-transmission laws have been proposed^{91}^{,}^{92} for Fourier interferometers and tested experimentally in integrated devices by employing a universal design^{69} or an efficient 3D layout,^{93} leading to the observation of the zero-transmission law with two and three photons in devices with up to eight modes.
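The zero-transmission law for Fourier interferometers^{91}^{,}^{92} can be checked directly on a small instance: for $n$ photons injected with period $m/n$ into an $m$-mode Fourier transformation, collision-free outputs whose mode indices do not sum to a multiple of $n$ are fully suppressed. A small numerical check of our own, for $m=6$, $n=3$:

```python
import numpy as np
from itertools import combinations, permutations

def permanent(A):
    """Naive permanent, sufficient for 3x3 submatrices."""
    n = A.shape[0]
    return sum(np.prod([A[i, p[i]] for i in range(n)])
               for p in permutations(range(n)))

m, n = 6, 3
F = np.exp(2j * np.pi * np.outer(range(m), range(m)) / m) / np.sqrt(m)
inp = (0, 2, 4)  # cyclic input with period m/n = 2

for out in combinations(range(m), n):
    p = abs(permanent(F[np.ix_(inp, out)])) ** 2
    predicted_zero = sum(out) % n != 0  # suppression-law prediction
    assert (p < 1e-12) == predicted_zero
print("suppression law verified for all collision-free outputs")
```

Because the prediction needs only the sum of the output mode indices, counting events in the forbidden combinations requires no permanent evaluations, which is exactly what makes this test efficient.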
Theoretical^{94}^{,}^{95} and experimental^{96} papers have extended this approach to so-called Sylvester interferometers, which were analyzed using a general mathematical framework.^{97}^{,}^{98} Sylvester interferometers have also been suggested and tested as optimal transformations for certain numbers of photons and modes,^{99} in terms of minimizing the achievable error rate as a function of the sample size in a general test of genuine multiphoton interference. Further studies focusing on specific interferometer designs have shown the possibility of constructing a genuine indistinguishability witness, in a similar spirit to the detection of genuine multipartite entanglement.^{100} Moving a step further, other validation methods that can be adopted for any linear evolution, and thus do not require the implementation of purpose-built interferometers, have been devised. A first class of validation tests based on pattern recognition techniques^{101} was developed for both boson sampling validation and its scattershot variant. Such an approach constructs a compatibility test^{102} that compares two different samples and determines whether or not both were generated by indistinguishable (or distinguishable) particles. This is achieved using machine-learning techniques for the identification of internal patterns within the measured samples. This approach has been tested in three-photon, 13-mode experiments. A different methodology relying on statistical signatures of many-body quantum interference has been recently proposed^{103} (with an associated perspective article^{104}). This method is based on advanced and mostly analytic tools from statistical physics and random matrix theory and is efficient in the number of photons $n$ and modes $m$.
This approach has been implemented experimentally^{105} in a three-photon, seven-mode experiment, including machine-learning techniques to optimize the identification of whether the sample was generated by indistinguishable or distinguishable photons. Further methods have been proposed recently,^{106}^{–}^{108} but they still lack an experimental implementation. The latter two methods we have discussed in some detail are representatives of the two main categories of validation schemes: (i) approaches based on computational criteria and (ii) physically motivated approaches. In the first case, the validator employs computational techniques to find patterns or peculiar features in data samples. Such techniques can be general-purpose: they directly inspect the output data produced by the quantum device and search for common properties without relying on any specific assumption on the system under investigation (multiphoton evolution in multimode interferometers). In the second case, the validation methods exploit the physical phenomenon behind the boson sampling task, and thus design techniques that rely on the knowledge that the output data have been produced by multiphoton interference. These physically motivated approaches can also employ tools inspired by other physical systems, such as many-body physics. Note that the two distinct methodologies described above (computationally or physically based) can also be combined and benefit from each other. On the one hand, computationally oriented approaches can provide new insights into the physical problem, thus highlighting hidden features otherwise not detectable. On the other hand, the performance of physically based approaches can be boosted by adding computational modules in the data identification stage.
6. Scalability Toward Quantum Supremacy

It is important to note that many of the requirements for boson sampling may be relaxed if future theoretical work manages to provide evidence for the hardness of simulating simpler experiments. For example, the requirements of using $m=O({n}^{2})$ modes and of drawing interferometers from the uniform, Haar ensemble were assumptions made for technical convenience in the original work by AA,^{9} but it is an open question whether they are strictly necessary. One major scalability issue that seems to follow from the requirement of Haar-random interferometers is related to losses. Arbitrary linear-optical networks have worst-case depth that is linear in the number of modes.^{23}^{,}^{24}^{,}^{109} If each beam splitter has a fixed loss factor, this implies an exponentially small per-photon probability of survival in the interferometer, which would imply that boson sampling is not scalable. One important avenue of theoretical work is to investigate the hardness of boson sampling with other interferometer ensembles that can hopefully be built with shorter depth (similar to the idea of $t$-designs in quantum computing^{110}) and thus circumvent losses. Another theoretical improvement that would help in this direction would be proving the hardness of boson sampling with fewer than $O({n}^{2})$ modes [hopefully only $O(n)$ modes]. A caveat is that this might introduce the need for number-resolving detectors. Interestingly, in the context of exact boson sampling, it is known that interferometers of depth 4 and $O(2n)$ modes are already hard to simulate,^{12} although it remains an important open question whether a similar result holds for approximate simulation. The other main scalability issue in reaching the quantum advantage regime with photonic boson sampling devices is represented by the photon sources.
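Before turning to sources, the loss argument above can be made quantitative with a short sketch (our own illustrative model; the per-layer transmission `t_bs` and the depths chosen are assumed numbers, not measured values): with depth linear in $m$ and a fixed transmission per layer, each photon survives with probability $t^{m}$, and an $n$-fold coincidence with $(t^{m})^{n}$.

```python
def n_fold_survival(t_bs: float, depth: int, n: int) -> float:
    """Probability that all n photons survive an interferometer of the
    given depth, assuming transmission t_bs per beam-splitter layer
    (worst-case linear-depth model from the text)."""
    per_photon = t_bs ** depth
    return per_photon ** n

# Even 99% transmission per layer is punishing at depth 100:
for n in (1, 3, 5):
    print(n, n_fold_survival(0.99, 100, n))
```

This exponential decay in both depth and photon number is precisely why shorter-depth ensembles would change the scalability picture.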
The main requirements for a scalable implementation are a high generation probability, a high level of photon indistinguishability, and a low probability of generating higher-order photon-number terms. Current approaches employ parametric down-conversion, spontaneous four-wave mixing, and solid-state quantum-dot sources. The first category of photon generation modules suffers from an intrinsically small generation probability. Indeed, parametric down-conversion is a probabilistic process where the generation efficiency $g$ must be kept low to avoid significant contributions from higher-order terms with more than one photon per mode. This feature becomes an issue when using parametric down-conversion sources for fixed-input boson sampling, since the probability of generating the correct $n$-photon input decreases exponentially as ${g}^{n}$. Nevertheless, some of the variants of the original task, such as scattershot and Gaussian boson sampling, employ a careful design leading to a significant enhancement in the generation rate with parametric down-conversion sources. Scaling up to a larger number of photons requires reaching high values of heralding efficiency and photon indistinguishability. Recent studies have shown the possibility of tailoring the source characteristics to obtain significant improvements in those parameters.^{74} Further perspectives for scaling up the number of photons can be envisaged with multiplexing,^{84} within an integrated platform.^{80} The second approach, employing solid-state sources, has been recently introduced in the boson sampling context. Such a platform presents significantly higher efficiency and generation rate^{73} than parametric down-conversion, while maintaining a negligible probability of generating multiple photons at the same time.
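The rate enhancement of the scattershot variant mentioned above can be quantified with a simple combinatorial estimate (idealized; it ignores losses and higher-order emissions): with $k$ heralded sources, each firing with probability $g$, any $n$ of them may herald a valid $n$-photon input, so roughly $\binom{k}{n}$ input combinations contribute instead of a single one.

```python
from math import comb

def scattershot_gain(k_sources: int, n: int) -> int:
    """Combinatorial boost of scattershot over fixed-input boson sampling:
    any n of the k heralded sources may fire, so ~C(k, n) input
    combinations contribute instead of one (idealized model)."""
    return comb(k_sources, n)

print(scattershot_gain(12, 3))  # e.g., 12 sources, 3 photons -> 220
```

In this idealized model, each combination still occurs with probability $\sim g^{n}$, so the overall rate is boosted by the binomial factor rather than by a change in the exponential scaling.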
Current experiments employ single quantum-dot sources combined with time-to-space demultiplexing,^{85} thus distributing a train of $n$ single photons generated at subsequent time bins into $n$ different spatial modes. Such an approach requires implementing demultiplexing with a low level of losses for scaling up to large photon numbers. An alternative approach would require generating highly indistinguishable photons from different quantum-dot sources. The problem of devising photon sources of high efficiency and indistinguishability is also strongly related to the transition between a classically simulable regime and the hard-to-simulate regime. More specifically, recent studies^{38}^{,}^{44} have shown that partial photon indistinguishability $p$ limits the hardness of a classical simulation. Indeed, given a certain value of $p$, it has been shown that an $n$-photon system presents a complexity that corresponds to a lower number of photons $k<n$. Furthermore, this threshold depends only on the photon indistinguishability $p$ and is independent of $n$, meaning that it is not possible to overcome this issue simply by increasing $n$. Thus, strong effort should be dedicated to building sources with a high level of photon indistinguishability.

7. Outlook

In this review, we have discussed recent advances in the implementation of photonic boson sampling. Starting from the original proposal by AA^{9} and from the first instances reported in Refs. 63–66, significant progress has been reported on both theory and experiments. Several theoretical studies have focused on different aspects of the problem, including the definition of the limits for classical simulations, the role of experimental imperfections in the complexity of classical simulation, as well as proposals of new variants of the original problem. In parallel, technological advances have led to the implementation of progressively larger experimental instances.
Indeed, tools such as quantum-dot single-photon sources and superconducting nanowire single-photon detectors have made it possible to generate and measure samples with larger numbers of photons, opening the route to further advances. Additionally, progress in the implementation of integrated multimode interferometers has allowed the inclusion of reconfigurable elements in the structures, in order to realize arbitrary linear operations. A fundamental issue toward achieving the regime of quantum advantage is the capability to identify whether a given set of data has been sampled from the correct distribution. This is particularly important for future, larger-scale implementations, where classical simulation will become unfeasible. The first results focusing on identifying specific errors (such as sampling from a distribution with distinguishable particles) via different techniques have been developed and tested, but additional work in this direction is required. Finally, the first proposals for applying boson sampling devices outside their original scope have been reported. For instance, it has been shown that Gaussian boson sampling represents a tool for the photonic simulation of vibronic spectra in molecular dynamics.^{59}^{,}^{60}^{,}^{75} Promising perspectives have also been pointed out in the context of hybrid quantum computation: in this approach, specific quantum routines are embedded in a more complex computational task performed with a classical approach, providing an overall quantum boost. Some examples have very recently been proposed for the Gaussian boson sampling variant, which is related to the sampling of hafnians of the evolution matrix. More specifically, two relevant scenarios have been identified. On one side, Arrazola et al.^{58} have introduced an NP-hard optimization problem (MaxHaf).
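To make the hafnian concrete: for the 0/1 adjacency matrix of an undirected graph with an even number of vertices, the hafnian counts the graph's perfect matchings,^{111} which is the quantity a Gaussian boson sampling device effectively samples over. A naive reference implementation of our own (exponential time, for illustration only):

```python
import numpy as np

def hafnian(A):
    """Naive recursive hafnian of a symmetric matrix of even dimension:
    pair index 0 with each partner j and recurse on the remainder."""
    n = A.shape[0]
    if n == 0:
        return 1
    total = 0
    rest = list(range(1, n))
    for j in rest:
        keep = [k for k in rest if k != j]
        total += A[0, j] * hafnian(A[np.ix_(keep, keep)])
    return total

# For a 0/1 adjacency matrix, the hafnian counts perfect matchings:
K4 = np.ones((4, 4)) - np.eye(4)  # complete graph on 4 vertices
print(hafnian(K4))  # prints 3.0: K4 has 3 perfect matchings
```

Computing the hafnian exactly is classically hard in general, which is precisely why sampling proportional to hafnians is the computational resource exploited in the proposals discussed next.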
A Gaussian boson sampling device can be inserted within the computation performed by stochastic algorithms (such as random search or simulated annealing) to improve the algorithm performance. The key ingredient can be found in replacing the uniform sampling part of the algorithm with a quantum device that performs sampling from a probability distribution proportional to matrix hafnians, thus leading to a faster convergence to the output of the computation. In parallel, the connection between hafnians and graph theory^{111} has led to a few proposals aiming to exploit Gaussian boson sampling in the analysis of complex graphs. For instance, in Ref. 57, it has been shown that the estimation of the number of perfect matchings in undirected graphs can be performed more efficiently by encoding the adjacency matrix of a graph in a Gaussian state, allowing one to improve the sampling success rate. Furthermore, stochastic algorithms that tackle an NPhard problem, such as the densest $k$subgraph problem, can have its performance enhanced when sampling is performed via a Gaussian boson sampling device.^{56} Finally, a very recent work^{112} has shown the connection between such computational model and graph isomorphism. These ideas can represent a starting point for the application of quantum boson sampling devices also to problems not envisaged in the original theoretical proposal. AcknowledgmentsThe authors declare no conflicts of interest. This work was supported by the European Research Council Advanced Grant CAPABLE (Composite integrated photonic platform by femtosecond laser micromachining, Grant Agreement No. 742745), the QuantERA ERANET Cofund in Quantum Technologies 2017 project HiPhoP (HighDimensional Quantum Photonic Platform, Project ID 731473), and the European H2020FETPROACT2014 Grant QUCHIP (Quantum Simulation on a Photonic Chip, Grant Agreement No. 641039). This work was also supported by CNPq project INCT de Informação Quântica. ReferencesP. W. Shor,
“Polynomialtime algorithms for prime factorization and discrete logarithms on a quantum computer,”
SIAM J. Comput., 26
(5), 1484
–1509
(1997). https://doi.org/10.1137/S0097539795293172 SMJCAT 00975397 Google Scholar
F. Flamini, N. Spagnolo and F. Sciarrino,
“Photonic quantum information processing: a review,”
Rep. Prog. Phys., 82 016001
(2019). https://doi.org/10.1088/13616633/aad5b2 RPPHAG 00344885 Google Scholar
T. Schetz,
“Trapping ions and atoms optically,”
J. Phys. B: At. Mol. Opt. Phys., 50 102001
(2017). https://doi.org/10.1088/13616455/aa69b2 JPAPEH 09534075 Google Scholar
T. Xin et al.,
“Nuclear magnetic resonance for quantum computing: techniques and recent achievements,”
Chin. Phys. B, 27 020308
(2018). https://doi.org/10.1088/16741056/27/2/020308 16741056 Google Scholar
Quantum Dots for Quantum Information Technologies, NanoOptics and Nanophotonics, Springer, Cham
(2017). Google Scholar
G. Wendin,
“Quantum information processing with superconducting circuits: a review,”
Rep. Prog. Phys., 80 106001
(2017). https://doi.org/10.1088/13616633/aa7e1a RPPHAG 00344885 Google Scholar
X. Gu et al.,
“Microwave photonics with superconducting quantum circuits,”
Phys. Rep., 718719 1
–102
(2017). https://doi.org/10.1016/j.physrep.2017.10.002 PRPLCM 03701573 Google Scholar
A. W. Harrow and A. Montanaro,
“Quantum computational supremacy,”
Nature, 549 203
–209
(2017). https://doi.org/10.1038/nature23458 Google Scholar
S. Aaronson, A. Arkhipov,
“The computational complexity of linear optics,”
in Proc. 43rd Annu. ACM Symp. Theory of Comput.,
333
–342
(2011). https://doi.org/10.1145/1993636.1993682 Google Scholar
L. Valiant,
“The complexity of computing the permanent,”
Theor. Comput. Sci., 8
(2), 189
–201
(1979). https://doi.org/10.1016/03043975(79)900446 TCSCDI 03043975 Google Scholar
A. P. Lund et al.,
“Boson sampling from a Gaussian state,”
Phys. Rev. Lett., 113 100502
(2014). https://doi.org/10.1103/PhysRevLett.113.100502 PRLTAO 00319007 Google Scholar
D. J. Brod,
“Complexity of simulating constantdepth boson sampling,”
Phys. Rev. A, 91 042316
(2015). https://doi.org/10.1103/PhysRevA.91.042316 Google Scholar
S. Barkhofen et al.,
“Driven boson sampling,”
Phys. Rev. Lett., 118 020502
(2017). https://doi.org/10.1103/PhysRevLett.118.020502 PRLTAO 00319007 Google Scholar
C. S. Hamilton et al.,
“Gaussian boson sampling,”
Phys. Rev. Lett., 119 170501
(2017). https://doi.org/10.1103/PhysRevLett.119.170501 PRLTAO 00319007 Google Scholar
S. Laibacher and V. Tamma,
“From the physics to the computational complexity of multiboson correlation interference,”
Phys. Rev. Lett., 115 243605
(2015). https://doi.org/10.1103/PhysRevLett.115.243605 PRLTAO 00319007 Google Scholar
K. P. Seshadreesan et al.,
“Boson sampling with displaced singlephoton Fock states versus singlephotonadded coherent states: the quantumclassical divide and computationalcomplexity transitions in linear optics,”
Phys. Rev. A, 91 022334
(2015). https://doi.org/10.1103/PhysRevA.91.022334 Google Scholar
J. P. Olson et al.,
“Sampling arbitrary photonadded or photonsubtracted squeezed states is in the same complexity class as boson sampling,”
Phys. Rev. A, 91 022317
(2015). https://doi.org/10.1103/PhysRevA.91.022317 Google Scholar
A. Neville et al.,
“Classical boson sampling algorithms with superior performance to nearterm experiments,”
Nat. Phys., 13 1153
–1157
(2017). https://doi.org/10.1038/nphys4270 NPAHAX 17452473 Google Scholar
S. Aaronson and A. Arkhipov,
“Boson sampling is far from uniform,”
Quantum Inf. Comput., 14 1383
–1423
(2014). QICUAW 15337146 Google Scholar
E. R. Caianiello,
“On quantum field theory—I: explicit solution of Dyson’s equation in electrodynamics without use of Feynman graphs,”
Il Nuovo Cimento, 10
(12), 1634
–1652
(1953). https://doi.org/10.1007/BF02781659 Google Scholar
L. Troyansky and N. Tishby,
“Permanent uncertainty: on the quantum evaluation of the determinant and the permanent of a matrix,”
in Proc. Phys. Comp.,
1
–5
(1996). Google Scholar
S. Scheel,
“Permanents in linear optical networks,”
Acta Phys. Slovaca, 58 675
(2008). https://doi.org/10.2478/v101550100092x APSVCO 03230465 Google Scholar
M. Reck et al.,
“Experimental realization of any discrete unitary operator,”
Phys. Rev. Lett., 73 58
–61
(1994). https://doi.org/10.1103/PhysRevLett.73.58 PRLTAO 00319007 Google Scholar
W. R. Clements et al.,
“An optimal design for universal multiport interferometers,”
Optica, 3
(12), 1460
–1465
(2016). https://doi.org/10.1364/OPTICA.3.001460 Google Scholar
A. Arkhipov and G. Kuperberg,
“The bosonic birthday paradox,”
Geometry and Topology Monographs, 18 1
–7
(2012). https://doi.org/10.2140/gtm Google Scholar
L. Gurvits,
“On the complexity of mixed discriminants and related problems,”
Lect. Notes Comput. Sci., 3618 447
–458
(2005). https://doi.org/10.1007/11549345 LNCSD9 03029743 Google Scholar
L. Stockmeyer,
“On approximation algorithms for #P,”
SIAM J. Comput., 14
(4), 849
–861
(1985). https://doi.org/10.1137/0214060 Google Scholar
S. Toda,
“PP is as hard as the polynomialtime hierarchy,”
SIAM J. Comput., 20
(5), 865
–877
(1991). https://doi.org/10.1137/0220053 SMJCAT 00975397 Google Scholar
P. S. Efraimidis,
“Weighted random sampling over data streams,”
Lect. Notes Comput. Sci., 9295 183
–195
(2015). https://doi.org/10.1007/9783319240244 LNCSD9 03029743 Google Scholar
P. Clifford and R. Clifford,
“The classical complexity of boson sampling,”
in Proc. 29th ACMSIAM Symp. Discrete Algorithms (SODA ‘18),
146
–155
(2018). Google Scholar
J. S. Liu, Monte Carlo Strategies in Scientific Computing, Springer Science & Business Media, New York
(2008). Google Scholar
J. P. Green et al.,
“Bayesian computation: a summary of the current state, and samples backwards and forwards,”
Stat. Comput., 25
(4), 835
–862
(2015). https://doi.org/10.1007/s1122201595745 STACE3 09603174 Google Scholar
J. Wu et al.,
“A benchmark test of boson sampling on Tianhe2 supercomputer,”
Nat. Sci. Rev., 5 715
–720
(2018). https://doi.org/10.1093/nsr/nwy079 NPSREL Google Scholar
S. RahimiKeshari, T. C. Ralph and C. M. Caves,
“Sufficient conditions for efficient classical simulation of quantum optics,”
Phys. Rev. X, 6 021039
(2016). https://doi.org/10.1103/PhysRevX.6.021039 PRXHAE 21603308 Google Scholar
S. Aaronson and D. J. Brod,
“Boson sampling with lost photons,”
Phys. Rev. A, 93 012335
(2015). https://doi.org/10.1103/PhysRevA.93.012335 Google Scholar
M. Oszmaniec and D. J. Brod,
“Classical simulation of photonic linear optics with lost particles,”
New J. Phys., 20 092002
(2018). https://doi.org/10.1088/13672630/aadfa8 NJOPFM 13672630 Google Scholar
R. GarciaPatrón, J. J. Renema and V. Shchesnovich,
“Simulating boson sampling in lossy architectures,”
(2017). Google Scholar
J. J. Renema, V. Shchesnovich and R. GarciaPatrón,
“Classical simulability of noisy boson sampling,”
(2018). Google Scholar
M. Jerrum, A. Sinclair and E. Vigoda,
“A polynomialtime approximation algorithm for the permanent of a matrix with nonnegative entries,”
J. ACM, 51
(4), 671
–697
(2004). https://doi.org/10.1145/1008731 Google Scholar
M. C. Tichy,
“Sampling of partially distinguishable bosons and the relation to the multidimensional permanent,”
Phys. Rev. A, 91 022316
(2015). https://doi.org/10.1103/PhysRevA.91.022316 Google Scholar
P. P. Rohde,
“Boson sampling with photons of arbitrary spectral structure,”
Phys. Rev. A, 91 012307
(2015). https://doi.org/10.1103/PhysRevA.91.012307 Google Scholar
V. S. Shchesnovich,
“Partial indistinguishability theory for multiphoton experiments in multiport devices,”
Phys. Rev. A, 91 013844
(2015). https://doi.org/10.1103/PhysRevA.91.013844 Google Scholar
V. Tamma and S. Laibacher,
“Multiboson correlation interferometry with arbitrary singlephoton pure states,”
Phys. Rev. Lett., 114 243601
(2015). https://doi.org/10.1103/PhysRevLett.114.243601 PRLTAO 00319007 Google Scholar
J. J. Renema et al., “Efficient classical algorithm for boson sampling with partially distinguishable photons,” Phys. Rev. Lett., 120 220502 (2018). https://doi.org/10.1103/PhysRevLett.120.220502
V. S. Shchesnovich, “Sufficient condition for the mode mismatch of single photons for scalability of the boson-sampling computer,” Phys. Rev. A, 89 022333 (2014). https://doi.org/10.1103/PhysRevA.89.022333
A. Arkhipov, “Boson sampling is robust against small errors in the network matrix,” Phys. Rev. A, 92 062326 (2015). https://doi.org/10.1103/PhysRevA.92.062326
A. Leverrier and R. Garcia-Patron, “Analysis of circuit imperfections in boson sampling,” Quantum Inf. Comput., 15 489–512 (2015).
G. Kalai and G. Kindler, “Gaussian noise sensitivity and boson sampling,” (2014).
X. Gao and L. Duan, “Efficient classical simulation of noisy quantum computation,” (2018).
K. Kruse et al., “A detailed study of Gaussian boson sampling,” (2018).
B. Gupt et al., “Classical benchmarking of Gaussian boson sampling on the Titan supercomputer,” (2018).
A. Björklund, “Counting perfect matchings as fast as Ryser,” in Proc. Twenty-Third Annu. ACM-SIAM Symp. Discrete Algorithms, 914–921 (2012).
L. Chakhmakhchyan and N. J. Cerf, “Boson sampling with Gaussian measurements,” Phys. Rev. A, 96 032326 (2017). https://doi.org/10.1103/PhysRevA.96.032326
A. P. Lund, S. Rahimi-Keshari and T. C. Ralph, “Exact boson sampling using Gaussian continuous-variable measurements,” Phys. Rev. A, 96 022301 (2017). https://doi.org/10.1103/PhysRevA.96.022301
N. Quesada, J. M. Arrazola and N. Killoran, “Gaussian boson sampling using threshold detectors,” Phys. Rev. A, 98 062322 (2018). https://doi.org/10.1103/PhysRevA.98.062322
J. M. Arrazola and T. R. Bromley, “Using Gaussian boson sampling to find dense subgraphs,” Phys. Rev. Lett., 121 030503 (2018). https://doi.org/10.1103/PhysRevLett.121.030503
K. Bradler et al., “Gaussian boson sampling for perfect matchings of arbitrary graphs,” Phys. Rev. A, 98 032310 (2018). https://doi.org/10.1103/PhysRevA.98.032310
J. M. Arrazola, T. R. Bromley and P. Rebentrost, “Quantum approximate optimization with Gaussian boson sampling,” Phys. Rev. A, 98 012322 (2018). https://doi.org/10.1103/PhysRevA.98.012322
J. Huh et al., “Boson sampling for molecular vibronic spectra,” Nat. Photonics, 9 615–620 (2015). https://doi.org/10.1038/nphoton.2015.153
J. Huh and M.-H. Yung, “Vibronic boson sampling: generalized Gaussian boson sampling for molecular vibronic spectra at finite temperature,” Sci. Rep., 7 7462 (2017). https://doi.org/10.1038/s41598-017-07770-z
B. Peropadre, J. Huh and C. Sabin, “Dynamical Casimir effect for Gaussian boson sampling,” Sci. Rep., 8 3751 (2018). https://doi.org/10.1038/s41598-018-22086-2
S. Laibacher and V. Tamma, “Toward quantum computational supremacy of boson sampling with random overlap in the photonic spectra,” (2018).
M. A. Broome et al., “Photonic boson sampling in a tunable circuit,” Science, 339 (6121), 794–798 (2013). https://doi.org/10.1126/science.1231440
J. B. Spring et al., “Boson sampling on a photonic chip,” Science, 339 (6121), 798–801 (2013). https://doi.org/10.1126/science.1231692
M. Tillmann et al., “Experimental boson sampling,” Nat. Photonics, 7 540–544 (2013). https://doi.org/10.1038/nphoton.2013.102
A. Crespi et al., “Integrated multimode interferometers with arbitrary designs for photonic boson sampling,” Nat. Photonics, 7 545–549 (2013). https://doi.org/10.1038/nphoton.2013.112
J. Carolan et al., “On the experimental verification of quantum complexity in linear optics,” Nat. Photonics, 8 621–626 (2014). https://doi.org/10.1038/nphoton.2014.152
M. Bentivegna et al., “Experimental scattershot boson sampling,” Sci. Adv., 1 e1400255 (2015). https://doi.org/10.1126/sciadv.1400255
J. Carolan et al., “Universal linear optics,” Science, 349 (6249), 711–716 (2015). https://doi.org/10.1126/science.aab3642
J. C. Loredo et al., “Boson sampling with single-photon Fock states from a bright solid-state source,” Phys. Rev. Lett., 118 130503 (2017). https://doi.org/10.1103/PhysRevLett.118.130503
Y. He et al., “Time-bin-encoded boson sampling with a single-photon device,” Phys. Rev. Lett., 118 190501 (2017). https://doi.org/10.1103/PhysRevLett.118.190501
H. Wang et al., “High-efficiency multiphoton boson sampling,” Nat. Photonics, 11 361–365 (2017). https://doi.org/10.1038/nphoton.2017.63
H. Wang et al., “Toward scalable boson sampling with photon loss,” Phys. Rev. Lett., 120 230502 (2018). https://doi.org/10.1103/PhysRevLett.120.230502
H.-S. Zhong et al., “12-photon entanglement and scalable scattershot boson sampling with optimal entangled-photon pairs from parametric down-conversion,” Phys. Rev. Lett., 121 250505 (2018). https://doi.org/10.1103/PhysRevLett.121.250505
S. Paesani et al., “Generation and sampling of quantum states of light in a silicon chip,” (2018).
N. Spagnolo et al., “Experimental validation of photonic boson sampling,” Nat. Photonics, 8 615–620 (2014). https://doi.org/10.1038/nphoton.2014.135
N. J. Russell et al., “Direct dialling of Haar random unitary matrices,” New J. Phys., 19 033007 (2017). https://doi.org/10.1088/1367-2630/aa60ed
R. Osellame, G. Cerullo and R. Ramponi, Femtosecond Laser Micromachining: Photonic and Microfluidic Devices in Transparent Materials, 123, Springer Science & Business Media, Berlin (2012).
S. Nolte et al., “Femtosecond waveguide writing: a new avenue to three-dimensional integrated optics,” Appl. Phys. A, 77 (1), 109–111 (2003). https://doi.org/10.1007/s00339-003-2088-6
J. B. Spring et al., “Chip-based array of near-identical, pure, heralded single-photon sources,” Optica, 4 (1), 90–96 (2017). https://doi.org/10.1364/OPTICA.4.000090
L. Sansoni et al., “A two-channel, spectrally degenerate polarization entangled source on chip,” npj Quantum Inf., 3 5 (2017). https://doi.org/10.1038/s41534-016-0005-z
S. Atzeni et al., “Integrated sources of entangled photons at telecom wavelength in femtosecond-laser-written circuits,” Optica, 5 (3), 311–314 (2018). https://doi.org/10.1364/OPTICA.5.000311
J. Wang et al., “Multidimensional quantum entanglement with large-scale integrated optics,” Science, 360 (6386), 285–291 (2018). https://doi.org/10.1126/science.aar7053
F. Kaneda and P. Kwiat, “High-efficiency single-photon generation via large-scale active time multiplexing,” (2018).
F. Lenzini et al., “Active demultiplexing of single photons from a solid-state source,” Laser Photonics Rev., 11 (3), 1600297 (2017). https://doi.org/10.1002/lpor.201600297
K. R. Motes et al., “Scalable boson sampling with time-bin encoding using a loop-based architecture,” Phys. Rev. Lett., 113 120501 (2014). https://doi.org/10.1103/PhysRevLett.113.120501
X.-J. Wang et al., “Experimental time-resolved interference with multiple photons of different colors,” Phys. Rev. Lett., 121 080501 (2018). https://doi.org/10.1103/PhysRevLett.121.080501
S. Laibacher and V. Tamma, “Symmetries and entanglement features of inner-mode-resolved correlations of interfering nonidentical photons,” Phys. Rev. A, 98 053829 (2018). https://doi.org/10.1103/PhysRevA.98.053829
C. Gogolin et al., “Boson-sampling in the light of sample complexity,” (2013).
M. Bentivegna et al., “Bayesian approach to boson sampling validation,” Int. J. Quantum Inf., 12 1560028 (2014). https://doi.org/10.1142/S021974991560028X
M. C. Tichy et al., “Stringent and efficient assessment of boson-sampling devices,” Phys. Rev. Lett., 113 020502 (2014). https://doi.org/10.1103/PhysRevLett.113.020502
M. C. Tichy et al., “Zero-transmission law for multiport beam splitters,” Phys. Rev. Lett., 104 220405 (2010). https://doi.org/10.1103/PhysRevLett.104.220405
A. Crespi et al., “Suppression law of quantum states in a 3D photonic fast Fourier transform chip,” Nat. Commun., 7 10469 (2016). https://doi.org/10.1038/ncomms10469
A. Crespi, “Suppression laws for multiparticle interference in Sylvester interferometers,” Phys. Rev. A, 91 013811 (2015). https://doi.org/10.1103/PhysRevA.91.013811
C. Dittel, R. Keil and G. Weihs, “Many-body quantum interference on hypercubes,” Quantum Sci. Technol., 2 015003 (2017). https://doi.org/10.1088/2058-9565/aa540c
N. Viggianiello et al., “Experimental generalized quantum suppression law in Sylvester interferometers,” New J. Phys., 20 033017 (2018). https://doi.org/10.1088/1367-2630/aaad92
C. Dittel et al., “Totally destructive many-particle interference,” Phys. Rev. Lett., 120 240404 (2018). https://doi.org/10.1103/PhysRevLett.120.240404
C. Dittel et al., “Totally destructive interference for permutation-symmetric many-particle states,” Phys. Rev. A, 97 062116 (2018). https://doi.org/10.1103/PhysRevA.97.062116
N. Viggianiello et al., “Optimal photonic indistinguishability tests in multimode networks,” Sci. Bull., 63 1470–1478 (2018). https://doi.org/10.1016/j.scib.2018.10.009
D. J. Brod et al., “Witnessing genuine multiphoton indistinguishability,” Phys. Rev. Lett., 122 063602 (2019). https://doi.org/10.1103/PhysRevLett.122.063602
I. Agresti et al., “Pattern recognition techniques for boson sampling validation,” Phys. Rev. X, 9 011013 (2019). https://doi.org/10.1103/PhysRevX.9.011013
S.-T. Wang and L.-M. Duan, “Certification of boson sampling devices with coarse-grained measurements,” (2016).
M. Walschaers et al., “Statistical benchmark for boson sampling,” New J. Phys., 18 032001 (2016). https://doi.org/10.1088/1367-2630/18/3/032001
M. Bentivegna, N. Spagnolo and F. Sciarrino, “Is my boson sampler working?,” New J. Phys., 18 041001 (2016). https://doi.org/10.1088/1367-2630/18/4/041001
T. Giordani et al., “Experimental statistical signature of many-body quantum interference,” Nat. Photonics, 12 173–178 (2018). https://doi.org/10.1038/s41566-018-0097-4
L. Aolita et al., “Reliable quantum certification of photonic state preparations,” Nat. Commun., 6 8498 (2015). https://doi.org/10.1038/ncomms9498
K. Liu et al., “A certification scheme for the boson sampler,” J. Opt. Soc. Am. B, 33 1835–1841 (2016). https://doi.org/10.1364/JOSAB.33.001835
V. S. Shchesnovich, “Universality of generalized bunching and efficient assessment of boson sampling,” Phys. Rev. Lett., 116 123601 (2016). https://doi.org/10.1103/PhysRevLett.116.123601
F. Flamini et al., “Benchmarking integrated linear-optical architectures for quantum information processing,” Sci. Rep., 7 15133 (2017). https://doi.org/10.1038/s41598-017-15174-2
F. G. S. L. Brandão, A. W. Harrow and M. Horodecki, “Local random quantum circuits are approximate polynomial-designs,” Commun. Math. Phys., 346 397–434 (2016). https://doi.org/10.1007/s00220-016-2706-8
C. Moore and S. Mertens, The Nature of Computation, Oxford University Press, Oxford (2011).
K. Bradler et al., “Graph isomorphism and Gaussian boson sampling,” (2018).
Biography

Daniel J. Brod received his PhD from the Universidade Federal Fluminense in 2014, with a period at the Institute for Quantum Computing. He was a postdoctoral fellow at the Perimeter Institute until 2017 and has been a professor in the Physics Institute of the Universidade Federal Fluminense since 2018. His research interests include quantum computation, quantum optics, and restricted models of quantum computing, with emphasis on quantum computational supremacy.

Ernesto F. Galvão received his PhD from the University of Oxford, UK, in 2002. He was a postdoctoral fellow at the Perimeter Institute until 2005 and has been a professor in the Physics Institute of the Universidade Federal Fluminense since 2006. His research interests include quantum computation and information, as well as foundations of quantum mechanics.

Andrea Crespi received his PhD in physics in 2012 from Politecnico di Milano, Italy. He is a temporary researcher in the Physics Department of Politecnico di Milano. His research interests concern femtosecond laser microfabrication of three-dimensional optical circuits in glass, mainly for applications in quantum information processing and in the simulation of quantum phenomena.

Roberto Osellame received his PhD in physics from Politecnico di Torino, Italy, in 2000. He is a director of research at the Institute for Photonics and Nanotechnologies of the Italian National Research Council. He is one of the pioneers of femtosecond laser micromachining of transparent materials. His research activity includes the development of photonic circuits for quantum information protocols, of micro/nanostructures by two-photon polymerization, and of lab-on-a-chip and optofluidic devices. He is a fellow member of the Optical Society of America and an ERC Advanced Grant recipient.

Nicolò Spagnolo received his PhD in 2012 in physical science of matter, with a thesis on experimental multiphoton quantum optical states. He is a temporary researcher in the Department of Physics of Sapienza Università di Roma. His research interests focus on quantum information and simulation protocols employing different photonic platforms, including the implementation of boson sampling instances, validation protocols with integrated photonics, and quantum phase estimation experiments.

Fabio Sciarrino received his PhD in 2004 with a thesis in experimental quantum optics. He is a full professor and head of the Quantum Information Lab in the Department of Physics of Sapienza Università di Roma. Since 2013, he has been a fellow of the Sapienza School for Advanced Studies. His main fields of research are quantum information and quantum optics, with works on quantum teleportation, optimal quantum machines, fundamental tests, quantum communication, and orbital angular momentum.