Homomorphic encryption is a cryptographic primitive that enables information processing on encrypted data. Such primitives are useful in a delegated computing setting in which a client delegates processing to a server but does not fully trust the server with her information. Here, we discuss a way of performing quantum homomorphic encryption. A homomorphic encryption protocol comprises four algorithms: random key generation, encryption, evaluation, and decryption. Our technique requires that a pair of commuting operators be used to implement the encryption and evaluation: the client randomly selects the former, while the server implements the latter. After evaluation, the client decrypts using the inverse of the encryption operator and retrieves the input state with the desired computation applied. The code space has to be chosen carefully so that non-trivial evaluations can be performed on the logical qubits. Since the encryption key is not known to the server, the randomness introduced by key generation hides some information about the encoded data even though that data is placed in the server's hands.
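The commuting-operator structure can be seen in a toy single-qubit sketch (our own illustration, not the full protocol: the commuting family here is simply the diagonal unitaries, so encryption, evaluation, and decryption all commute and the inverse key recovers the evaluated state):

```python
import numpy as np

rng = np.random.default_rng(0)

def diag_unitary(theta):
    # Diagonal single-qubit unitaries form a commuting family.
    return np.diag([1.0, np.exp(1j * theta)])

psi = np.array([1.0, 1.0]) / np.sqrt(2)   # plaintext input state
key = rng.uniform(0, 2 * np.pi)           # random key (client, secret)
E = diag_unitary(key)                     # encryption operator (client)
V = diag_unitary(np.pi / 4)               # evaluation operator (server),
                                          # commutes with E

# Server evaluates on the ciphertext; client decrypts with the inverse key.
decrypted = E.conj().T @ (V @ (E @ psi))
expected = V @ psi                        # desired computation on the plaintext

assert np.allclose(decrypted, expected)
```

Because `V` commutes with every key in the family, the decryption `E†` passes through `V` and cancels `E`, which is exactly the mechanism the framework relies on.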
This framework is powerful because it gives a general recipe for implementing quantum homomorphic encryption. For example, families of commuting groups of operators are known to exist via Schur-Weyl duality. The approach is limited, however, by a no-go theorem stating that exponential overhead is needed if arbitrary computation and perfect security are both desired. Nonetheless, our protocols remain interesting as an application for near-term quantum computers, especially when the computational tasks require only low-depth circuits.
Limitations due to high costs and technological requirements will require users to access the first quantum processors through the cloud. Delegating a computation, however, raises privacy issues about the clients' data and poses questions about the verifiability of computations in high complexity regimes. These concerns have inspired a plethora of blind and verifiable quantum computation protocols, which allow a client with limited quantum capabilities to delegate a quantum computation to a remote server while hiding her data and preserving the integrity of the computation.
The majority of these protocols are constrained to discrete quantum systems, but quantum information can also be processed by continuous-variable architectures. These offer a competitive alternative to their discrete-variable counterparts thanks to numerous practical benefits: they rely on well-established quantum-optical techniques, allow for the generation of very large optical resource states, offer higher detection efficiencies, and can be integrated into existing optical-fibre networks, all highly desirable features for cloud quantum computing.
In this work we fill this gap by presenting a blind and verifiable quantum computing protocol tailored to the unique features of continuous-variable systems. One such feature is the experimental accessibility of Gaussian operations. Our protocol is therefore based on the delegation of the experimentally challenging non-Gaussian operations. In this sense, it is experimentally friendly to the client, who only needs to perform Gaussian operations. Furthermore, unlike previous schemes, this protocol does not require repeated interactions between the client and server: it only involves the server sending the client non-Gaussian states. We prove universality and blindness using standard techniques, and we introduce an efficient fidelity test, based on homodyne detection, that allows the client to verify the correctness of the computation. This test is interesting in its own right, as it could be employed for state certification of optical systems.
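The fidelity test itself is protocol-specific, but the general idea behind homodyne-based certification can be sketched in a toy example (purely illustrative, not the paper's test: `homodyne_samples` is a hypothetical stand-in for real quadrature data, and the target is the vacuum with quadrature variance 1/2 in the hbar = 1 convention):

```python
import numpy as np

rng = np.random.default_rng(2)

def homodyne_samples(n, var=0.5):
    # Stand-in for homodyne measurement outcomes on a vacuum-like state;
    # real data would come from the optical setup.
    return rng.normal(0.0, np.sqrt(var), n)

target_var = 0.5                      # vacuum quadrature variance (hbar = 1)
x = homodyne_samples(100_000)

# Accept if the measured quadrature variance matches the target within 5%.
passed = abs(np.var(x) - target_var) < 0.05 * target_var
print(passed)
```

In a real certification, the client would repeat this for quadratures at several phases and convert the moment estimates into a fidelity bound on the received state.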
The division of quantum hardware between client and server assumed here is typical of the experimental constraints expected in realistic, commercially useful schemes for continuous-variable cloud quantum computing. As such, we believe our protocol constitutes a significant advance towards their actual realisation.
Entanglement is the main workhorse of many quantum protocols, and establishing the degree of quantum correlation in quantum states is an important certification step that must take place before these protocols are implemented. The emergence of photon-number-resolving detectors (PNRDs), which can count the photons arriving simultaneously at the detector, has created a need to model them accurately and apply them in this certification process. Here we study the variance of the difference of photocounts (VDP) of two PNRDs, one measure of quantum correlations, under the effects of loss and saturation. We find that it is possible to distinguish between the classical correlation of a two-mode coherent state and the quantum correlation of a twin-beam state within some photocount regime of the detector. We compare the behavior of two such PNRDs: the first, for which the photocount statistics follow a binomial distribution accounting for losses, and the second, that of Agarwal, Vogel, and Sperling, in which the incident beam is first split and then separately measured by ON/OFF detectors. In our calculations, analytical expressions are derived for the variance of the difference where possible. In these cases, Gauss's hypergeometric function appears regularly, giving insight into the type of quantum statistics the photon counting yields in these PNRDs. The different mechanisms of the two types of PNRDs lead to quantitative differences in their VDP.
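The loss effect on the VDP can be illustrated with a toy Monte Carlo sketch (our own simplified models, ignoring saturation): a twin beam's perfectly correlated photon numbers keep its VDP below the shot-noise level of a two-mode coherent state at the same detected flux, even after binomial loss.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
eta = 0.8        # detector efficiency; loss modelled as binomial thinning
mean_n = 2.0     # mean photon number per mode

# Two-mode coherent state: independent Poissonian photon numbers per mode.
c1 = rng.binomial(rng.poisson(mean_n, N), eta)
c2 = rng.binomial(rng.poisson(mean_n, N), eta)
vdp_coherent = np.var(c1 - c2)

# Twin beam (two-mode squeezed vacuum): identical thermal photon numbers
# in both modes, each mode thinned independently by the loss.
lam = mean_n / (1 + mean_n)            # thermal distribution parameter
n = rng.geometric(1 - lam, N) - 1      # thermal photon-number samples
t1 = rng.binomial(n, eta)
t2 = rng.binomial(n, eta)
vdp_twin = np.var(t1 - t2)

# Shot-noise benchmark for classical light at the same detected flux.
shot_noise = eta * 2 * mean_n

print(vdp_coherent, vdp_twin, shot_noise)
```

With these parameters the twin-beam VDP is suppressed well below shot noise (analytically 2 eta (1 - eta) mean_n versus 2 eta mean_n for the coherent state), which is the sub-shot-noise signature that survives moderate loss.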