We report the implementation of a novel entanglement-enabled quantum state communication protocol, known as
SuperDense Teleportation, using photons hyperentangled in polarization and orbital angular momentum. We used these
techniques to transmit unimodular ququart states between distant parties with an average fidelity of 86.2 ± 3%, almost
twice the classical limit of 44%. We also propose a method to use SuperDense Teleportation to communicate quantum
states from a space platform, such as the International Space Station, to a terrestrial optical telescope. We evaluate
several configurations and investigate the challenges arising from the movement of the space station with respect to the ground station.
Quantum Key Distribution (QKD) has been shown to be provably secure when certain idealized conditions are
met in a physical realization. All implementations of QKD to date have required non-orthogonal basis measurements,
and it is therefore commonly assumed that measurement-basis variation is fundamental to making QKD
protocols secure from eavesdropping. We show here that in particular physical conditions this assumption is
incorrect, and that provable security can be achieved without use of multiple bases. Basis setting information
can in fact be shared with all potential eavesdroppers, as they are unable to use it to acquire or influence any
part of the encryption key generation. Furthermore, the key generation efficiency can reach 100%,
compared with the inherent 50% limit for alternating bases in the BB84 or entanglement-based Ekert protocols.
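The efficiency comparison above can be illustrated with a toy simulation (a generic BB84-style sifting sketch, not the protocol of this paper): when sender and receiver each choose between two bases at random, only matching-basis events survive sifting, so roughly half the raw bits are discarded, whereas a fixed-basis protocol keeps them all.

```python
import random

# Illustrative sketch of basis sifting in a generic two-basis protocol
# (e.g. BB84); not the single-basis scheme proposed in this abstract.
def sift_fraction(n_bits, n_bases=2, seed=0):
    """Fraction of raw bits surviving basis reconciliation."""
    rng = random.Random(seed)
    kept = 0
    for _ in range(n_bits):
        alice_basis = rng.randrange(n_bases)
        bob_basis = rng.randrange(n_bases)
        if alice_basis == bob_basis:   # bases match -> bit is kept
            kept += 1
    return kept / n_bits

print(sift_fraction(100_000))   # close to 0.5 for two alternating bases
```

A fixed-basis protocol corresponds to `n_bases=1`, for which every bit survives, matching the 100% efficiency claimed above.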
Low Density Parity Check (LDPC) error correction is a one-way algorithm that has become popular for quantum key
distribution (QKD) post-processing. Graphics processing units (GPUs) provide an interesting attached platform that may
deliver high rates of error correction performance for QKD. We present the details of our various LDPC GPU
implementations and both error correction and execution throughput performance that each achieves. We also discuss
the potential for implementation on a GPU platform to achieve Gbit/s throughput.
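As background, the flavor of LDPC decoding can be conveyed with a minimal hard-decision bit-flipping decoder (an illustrative sketch only; the authors' GPU implementations use far larger codes and, typically, soft-decision belief propagation):

```python
# Toy hard-decision bit-flipping LDPC decoder -- an illustrative sketch,
# not the authors' GPU implementation.
H = [  # parity-check matrix of a tiny length-6 code; rows are checks
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
]

def bit_flip_decode(word, max_iters=10):
    w = list(word)
    for _ in range(max_iters):
        syndrome = [sum(h * b for h, b in zip(row, w)) % 2 for row in H]
        if not any(syndrome):          # all parity checks satisfied
            return w
        # flip the bit participating in the most unsatisfied checks
        votes = [sum(s for s, row in zip(syndrome, H) if row[i])
                 for i in range(len(w))]
        w[votes.index(max(votes))] ^= 1
    return w

received = [0, 1, 0, 0, 0, 0]          # all-zero codeword, one bit flipped
print(bit_flip_decode(received))       # -> [0, 0, 0, 0, 0, 0]
```

The per-check and per-bit loops are independent, which is why this class of algorithm maps well onto GPU thread blocks.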
Polar coding is the most recent encoding scheme in the quest for error correction codes that approach the Shannon
limit, have a simple structure, and admit fast decoders. As such, it is an interesting candidate for the quantum key
distribution (QKD) protocol that normally operates at high bit error rates and requires codes that operate near the
Shannon limit. This paper describes approaches that integrate Polar codes into the QKD environment and provides
performance results of Polar code designs within the QKD protocol.
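For context, the polar encoding operation itself is a simple recursive GF(2) transform (a generic sketch of the Arikan transform, not the paper's QKD-specific code design):

```python
# Minimal polar (Arikan) transform over GF(2): x = u * F^(tensor n) with
# F = [[1,0],[1,1]], computed by the usual butterfly recursion.
# Illustrative sketch; frozen-bit selection and decoding are omitted.
def polar_encode(u):
    x = list(u)
    n = len(x)                         # must be a power of two
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]    # GF(2) butterfly
        step *= 2
    return x

u = [0, 0, 0, 1]
print(polar_encode(u))                 # -> [1, 1, 1, 1]
# Over GF(2) the transform is an involution: encoding twice recovers u.
print(polar_encode(polar_encode(u)) == u)  # -> True
```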
We describe a Quantum Key Distribution protocol that combines temporal, spectral, and
polarization encoding of photons for secure communication over an interconnected
network of users. Temporal encoding is used to identify a user’s location or address on
the network. Polarization encoding is used to generate the private cryptographic key.
Polarization encoded information is locally and randomly generated by users and
exchanged only over a dedicated secure channel. Spectral encoding allows for the
detection of eavesdropping and tampering by a malicious agent. Temporal-spectral
signals sent from the network administrator (Alice) to a user are bright light pulses. In
contrast, spectral-temporal signals from a network user (Bob) to the administrator
(Alice) are single photons. Signals are sent across the network as ordered light pairs. The
ordering format is randomly chosen and is revealed only at the time of key selection
between the parties, so that a secure one-time cryptographic pad can be generated.
We propose the adaptive multicarrier quadrature division (AMQD) modulation technique for continuous-variable
quantum key distribution (CVQKD). The method granulates the Gaussian random input into Gaussian subcarrier
continuous variables in the encoding phase, which are then decoded by a continuous unitary transformation. The
subcarrier coherent variables formulate Gaussian sub-channels from the physical link with strongly diverse transmission
capabilities, which leads to significantly improved transmission efficiency, higher tolerable loss, and excess noise. We
also investigate a modulation-variance adaptation technique within the AMQD scheme, which provides optimal
capacity-achieving communication over the sub-channels in the presence of Gaussian noise.
This paper presents the concept and implementation of a Braided Single-stage Protocol for quantum secure
communication. The braided single-stage protocol is a multi-photon tolerant secure protocol. This multi-photon tolerant
protocol has been implemented in the laboratory using free-space optics technology. The proposed protocol capitalizes
on the strengths of the three-stage protocol and extends it with the new concept of braiding. This protocol overcomes the
limitations associated with the three-stage protocol in the following ways: It uses the transmission channel only once as
opposed to three times in the three-stage protocol, and it is invulnerable to the man-in-the-middle attack. This paper also
presents the error analysis resulting from the misalignment of the devices in the implementation. The experimental
results validate the efficient use of transmission resources and improvement in the data transfer rate.
A crucial issue with hybrid quantum secret sharing schemes is the amount of data that is allocated to the
participants: the smaller the amount of allocated data, the better a scheme performs. Moreover, quantum data are hard
and expensive to handle, so it is desirable to use as little quantum data as possible. To achieve this goal, we first
construct extended unitary operations as the tensor product of n (n ≥ 2) basic unitary operations, and then use those
extended operations to design two quantum secret sharing schemes. The resulting dual compressible hybrid quantum
secret sharing schemes, in which classical data play a complementary role to quantum data, range from threshold to
general access structures. Compared with existing hybrid quantum secret sharing schemes, our proposed schemes reduce
not only the number of quantum participants but also the number of particles and the size of the classical shares. To be
exact, the number of particles used to carry quantum data is reduced to 1, while the size of the classical secret shares is
reduced to (l−2)/(m−1) for the ((m+1, n′)) threshold scheme and to (l−2)/r2 (where r2 is the number of maximal
unqualified sets) for the adversary-structure scheme. Consequently, our proposed schemes can greatly reduce the cost
and difficulty of generating and storing EPR pairs and lower the risk of transmitting encoded particles.
Is it possible for a simple lumped parameter model of a circuit to yield correct quantum mechanical predictions of its behavior, when there is quantum entanglement between components of that circuit? This paper shows that it is possible in a simple but important example – the circuit of the original Bell’s Theorem experiments, for ideal polarizers. Correct predictions emerge from two alternative simple models, based on classical Markov Random Fields (MRF) across spacetime. Exact agreement with quantum mechanics does not violate Bell’s Theorem itself, because the interplay between initial and final outcomes in these calculations does not meet the classical definition of time-forwards causality. Both models raise interesting questions for future research. The final section discusses several possible directions for following up on these results, both in lumped system modeling and in more formal and general approaches. It describes how a new triphoton experiment, not yet performed, may be able to discriminate between MRF models and the usual measurement formalism of Copenhagen quantum mechanics.
The design of any system of quantum logic must take into account the implications of the Landauer
limit for logical bits. Useful computation implies a deterministic outcome, and so any system of
quantum computation must produce a final deterministic outcome, which in a quantum computer
requires a quantum decision that produces a deterministic qubit. All information is physical, and any
bit of information can be considered to exist in a physicality represented as a decision between the two
wells of a double well potential in which the energy barrier between the two wells must be greater than
kT·ln2. Any proposed system of quantum computation that does not result in such a deterministic
outcome can only be considered stochastically as a probability distribution (i.e. a wave function). An
example of such determinism in a quantum logic system is theorized to exist in the DNA molecule,
where the decoherence of quantum decision results in an enantiomeric shift in the deoxyribose moiety
that is appropriate to the Landauer limit.
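The Landauer bound invoked above is easy to evaluate numerically: at room temperature the minimum barrier energy kT·ln2 is a few zeptojoules (a standard calculation using the exact SI value of the Boltzmann constant):

```python
import math

# Landauer bound: minimum energy to decide (erase) one bit is kT*ln2.
k_B = 1.380649e-23      # Boltzmann constant, J/K (exact 2019 SI value)
T = 300.0               # room temperature, K

E_landauer = k_B * T * math.log(2)
print(f"{E_landauer:.3e} J")   # -> 2.871e-21 J per bit at 300 K
```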
Quantum networks provide conduits capable of transmitting quantum information that connect to nodes at remote
locations where the quantum information can be stored or processed. Fiber-based transmission of quantum information
over long distances may be achieved using quantum memory elements and quantum repeater protocols. However,
atom-based quantum memories typically involve interactions with light fields outside the telecom window needed to minimize
absorption in transmission by optical fibers. We report on progress towards a quantum memory based on the generation
of 795 nm spontaneously emitted single photons by a write-laser beam interacting with a cold 87Rb ensemble. The single photons are then frequency-converted into (out of) the telecom band via difference (sum) frequency generation in a
PPLN crystal. Finally, the atomic state is read out via the interaction of a read-pulse with the quantum memory. With
such a system, it will be possible to realize a long-lived quantum memory that will allow transmission of quantum
information over many kilometers with high fidelity, essential for a scalable, long-distance quantum network.
3D topological insulator (3D TI) materials have interesting surface states that are protected
against scattering due to non-magnetic impurities. They turn out to be useful in quantum information
processing. Here, using the 3D Dirac equation, we show that the transitions between positive
and negative energy solutions in a 3D TI heterostructure junction and in a 3D TI quantum dot (QD)
obey strict optical selection rules. We calculate the optical conductivity tensor of a 3D TI double
interface made of a PbTe/Pb0.31Sn0.69Te/PbTe heterostructure using Maxwell's equations, which
reveals a giant Faraday rotation effect due to the Pauli exclusion principle. A transfer matrix method
is employed to calculate the transmittance in a multilayer stacking of the PbTe/Pb0.31Sn0.69Te/PbTe
heterostructure. We show that while the Faraday rotation is giant for a single double interface,
it takes about 60 double interfaces to absorb incoming radiation completely. We also present the
model of a QD consisting of a spherical core-bulk heterostructure made of 3D TI materials, such
as PbTe/Pb0.31Sn0.69Te/PbTe, with bound massless and helical Weyl states existing at the interface and
being confined in all three dimensions. We calculate the Faraday rotation effect coming from the
polarization of single electron-hole pairs. We show that the semi-classical Faraday effect can be used
to read out spin quantum memory.
Low-photon-number sources can exhibit non-classical, counterintuitive behavior that can be exploited in the developing
field of quantum technology. Single photons play a special role in this arena since they represent the ultimate
low-photon-number source. They are considered an important element in various applications such as quantum key
distribution, optical quantum information processing, quantum computing, intensity measurement standards, and others
yet to be discovered in this developing field. True deterministic sources of single photons on demand are currently an
area of intensive research, but have not been demonstrated in a practical setting. As a result, researchers commonly
default to the well-established workhorse: spontaneous parametric down-conversion generating entangled signal-idler
pairs. Since this source is thermal-statistical in nature, it is common to use a detected idler photon to herald the
production of a signal photon. The need exists to determine the quality of the single photons generated in the heralded
signal beam. Quite often, the literature reports a "heralded second-order coherence function" of the signal photons
conditioned on the idler photons using readily available single-photon detectors. In this work, we examine the
applicability of this technique to single-photon characterization and the consequences of the fact that the most commonly
used single-photon detectors are not photon-number resolving. Our results show that this method using
non-photon-resolving detectors can only be used to characterize the signal-idler correlations rather than the nature of the signal-photon state.
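For reference, the heralded second-order coherence discussed above is commonly estimated in a Grangier-type three-detector arrangement: a herald detector I, with the signal split on a 50/50 beamsplitter onto detectors T and R. A minimal sketch of the standard count-ratio estimator follows; the count values are hypothetical, not data from this work:

```python
# Standard three-detector estimator of the heralded g2(0):
#   g2_h(0) ~ N_I * N_ITR / (N_IT * N_IR)
# where N_I counts heralds and N_IT, N_IR, N_ITR count two- and
# three-fold coincidences. All detectors are non-photon-number-resolving.
def heralded_g2(n_i, n_it, n_ir, n_itr):
    """Heralded second-order coherence from raw coincidence counts."""
    return n_i * n_itr / (n_it * n_ir)

# Hypothetical counts, for illustration only.
g2 = heralded_g2(n_i=1_000_000, n_it=10_000, n_ir=10_000, n_itr=10)
print(g2)   # -> 0.1  (well below 1: single-photon-like statistics)
```

As the abstract argues, a low value of this ratio certifies strong signal-idler correlation but, with non-photon-resolving detectors, does not by itself fully characterize the heralded signal-photon state.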
Compared to classical light sources, quantum sources based on N00N states consisting of N photons achieve an N-times higher phase sensitivity, giving rise to super-resolution [1-3]. N00N-state creation schemes based on linear optics and projective measurements only have a success probability p that decreases exponentially with N [4-6], e.g. p = 4.4 × 10^-14 for N = 20 [7]. Feed-forward improves the scaling, but N fluctuates nondeterministically in each attempt [8, 9]. Schemes based on parametric down-conversion suffer from low production efficiency and low fidelity [9]. A recent scheme based on atoms in a cavity combines deterministic time evolution, local unitary operations, and projective measurements [10]. Here we propose a novel scheme based on the off-resonant interaction of N photons with four semiconductor quantum dots (QDs) in a cavity to create GHZ states, also called polarization N00N states, deterministically with p = 1 and fidelity above 90% for N ≤ 60, without the need of any projective
measurement or local unitary operation. Using our measure we obtain maximum N-photon entanglement EN = 1 for arbitrary N. Our method paves the way to the miniaturization of N00N- and GHZ-state sources to the nanoscale regime, with the possibility to integrate them on a computer chip based on semiconductor materials.
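The N-fold sensitivity gain quoted at the start of this abstract can be made concrete: a shot-noise-limited classical probe gives a phase uncertainty of 1/sqrt(N), while an ideal N00N state reaches the Heisenberg limit of 1/N (a standard textbook comparison, not a result of this paper):

```python
import math

# Phase-estimation limits for N photons per trial.
def shot_noise_limit(n):
    """Standard quantum limit for a classical probe."""
    return 1 / math.sqrt(n)

def heisenberg_limit(n):
    """Limit reached by an ideal N00N state."""
    return 1 / n

N = 20
gain = shot_noise_limit(N) / heisenberg_limit(N)
print(gain)   # sqrt(20), i.e. ~4.47x better phase sensitivity
```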
Our previous work presented some interesting results on discrete quantum walks in the regime of weak
measurement (QWWM or QWWV). Building on those results, we now explore search algorithms and investigate the factors associated with such walks. Factors such as dimensionality, connectivity of the dataset, and the strength of disorder or percolation have already been studied by others in the context of general quantum walks. Our interest is to show the similarities and/or differences between such factors for general quantum walks and for QWWV. The subject of decoherence in quantum walks is another challenging research topic at present, and we are also exploring
decoherence in QWWM or QWWV.
An array of hyper-entanglement-based sensors made up of quantum hyper-entangled systems will be considered. Each
hyper-entangled system will consist of a single hyper-entangled signal photon and a single ancilla photon. The effect of noise in
every mode as well as loss is included. The signal photon will experience classical loss and each ancilla photon will
suffer a low level of loss. Forming an array offers the further advantage of a greater reduction in measurement time. It
is shown mathematically that in the large d limit, where d is the number of modes, different members of the array do
not interfere with each other implying they can be put close together. This permits an enormous reduction in the
measurement time, i.e. the time-on-target. Each hyper-entangled system making up the array receives a factor of d
improvement in the signal-to-noise ratio (SNR), signal-to-interference ratio (SIR) and a factor of d reduction in
measurement time. If M measurements are needed for a given level of resolution or decision quality then instead of
having one hyper-entanglement pair, M hyper-entanglement pairs can be used. Unlike a classical radar or ladar, this
system can image a target essentially with a snapshot from the many photon sources making up the array. Closed form
results for the wave function, reduced density operator, gamma-expansion, probability of detection, probability of false
alarm, SNR, SIR, quantum Fisher information (QFI), quantum Cramér-Rao lower bound (QCRLB), quantum Chernoff
bound (QCB), and estimates for the number of required measurements are provided.
Local energy in a component of a multipartite quantum system is the maximum energy that can be extracted by a general (Kraus, operator-sum) local operation on just that component. A component's local energy is greater or less, or even completely absent, depending on extant correlations with the system's other components. This is illustrated in different cases of quantum systems of spin-1/2 particles. These cases include a class of two-particle systems with different degrees of coupling anisotropy, three-particle systems, and systems of N particles, generally, with ring and star coupling topologies. Conditions are given in each case for zero local energy. These conditions establish, for each case, that for systems with a non-degenerate entangled ground state, local energy is absent when the system state is anywhere in a neighborhood of the ground state, or in a Gibbs thermal state when the temperature is below a critical value, even for systems of many particles.
A stronger foundation for earlier work on the effects of number scaling and local mathematics is described. Emphasis is placed on the effects of scaling on coordinate systems. Effects of scaling are represented by a scalar field, θ, that appears in gauge theories as a spin-zero boson. Gauge theory considerations led to the concept of local mathematics, as expressed through the use of universes, ∪x, as collections of local mathematical systems at each point, x, of a space-time manifold, M. Both local and global coordinate charts are described. These map M into either local or global coordinate systems within a universe or between universes, respectively. The lifting of global expressions of nonlocal physical quantities, expressed by space and/or time integrals or derivatives on M, to integrals or derivatives on coordinate systems, is described. The assumption of local mathematics and universes makes integrals and derivatives, on M or on global charts, meaningless; they acquire meaning only when mapped into a local universe. The effect of scaling, obtained by including θ in the local maps, is described. The lack of experimental evidence for θ so far shows that the coupling constant of θ to matter fields must be very small compared to the fine structure constant. Also, the gradient of θ must be very small in the local region of cosmological space and time occupied by us as observers. So far, there are no known restrictions on θ or its gradient in regions of space and/or time that are far from our local region.
A possible topological quantum computation of the Dold-Thom functor is presented. The method that will be used is the
following: a) Certain 1+1-topological quantum field theories valued in symmetric bimonoidal categories are converted
into stable homotopical data, using a machinery recently introduced by Elmendorf and Mandell; b) we exploit, in this
framework, two recent results (independent of each other) on refinements of Khovanov homology: our refinement into a
module over the connective k-theory spectrum and a stronger result by Lipshitz and Sarkar refining Khovanov homology
into a stable homotopy type; c) starting from the Khovanov homotopy the Dold-Thom functor is constructed; d) the full
construction is formulated as a topological quantum algorithm. It is conjectured that the Jones polynomial can be
described as the analytical index of a certain Dirac operator defined, in the context of the Khovanov homotopy, using the
Dold-Thom functor. As a line for future research, it would be interesting to study the corresponding supersymmetric model for
which the Khovanov-Dirac operator plays the role of a supercharge.
We study multi-dimensional quantum walks and their dimension-reduction models. Using a waveguide-based
optical quantum device, we demonstrate quantum-walk search algorithms on graphs such as the 2-D glued tree and
the 3-D hypercube. We discuss why a waveguide-based device is a good candidate for implementing such quantum-walk search algorithms.
A clock steps a computer through a cycle of phases. For the propagation of logical symbols from one computer to another,
each computer must mesh its phases with arrivals of symbols from other computers. Even the best atomic clocks drift
unforeseeably in frequency and phase; feedback steers them toward aiming points that depend on a chosen wave function
and on hypotheses about signal propagation. A wave function, always under-determined by evidence, requires a guess.
Guessed wave functions are coded into computers that steer atomic clocks in frequency and position—clocks that step
computers through their phases of computations, as well as clocks, some on space vehicles, that supply evidence of the
propagation of signals. Recognizing the dependence of the phasing of symbol arrivals on guesses about signal propagation
elevates ‘logical synchronization’ from its practice in computer engineering to a discipline essential to physics. Within this
discipline we begin to explore questions invisible under any concept of time that fails to acknowledge the unforeseeable.
In particular, variation of spacetime curvature is shown to limit the bit rate of logical communication.