The standard approach to quantum fault tolerance is to calculate error thresholds on basic gates in the limit of arbitrarily
many concatenation levels. In contrast, this paper takes the number of qubits and the target implementation accuracy as
given, and provides a framework for engineering the constrained quantum system to the required tolerance. The approach
requires solving the full dynamics of the quantum system for an arbitrary admixture (biased or unbiased) of Pauli errors.
The inaccuracy between ideal and implemented quantum systems is captured by the supremum of the Schatten k-norm of
the difference between the ideal and implemented density matrices taken over all density matrices. This is a more complete
analysis than the standard approach, where an intricate combination of worst case assumptions and combinatorial analysis
is used to analyze the special case of equiprobable errors. Conditions for fault tolerance are now expressed in terms of
error regions rather than a single number (the standard error threshold). In the important special case of a stochastic noise
model and a single logical qubit, an optimization over all 2×2 density matrices is required to obtain the full dynamics. The
complexity of this calculation is greatly simplified through reduction to an optimization over only three projectors. Error
regions are calculated for the standard 5- and 7-qubit codes. Knowledge of the full dynamics makes it possible to design
sophisticated concatenation strategies that go beyond repeatedly using the same code, and these strategies can achieve
target fault tolerance thresholds with fewer qubits.
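The Schatten k-norm distance mentioned above can be sketched numerically. The following is a minimal illustration, not the paper's calculation: the state, the depolarizing rate p, and the choice k = 1 (trace norm) are all assumed here for concreteness.

```python
import numpy as np

def schatten_norm(A, k):
    """Schatten k-norm: the k-norm of the vector of singular values of A."""
    s = np.linalg.svd(A, compute_uv=False)
    return np.sum(s**k) ** (1.0 / k)

# Hypothetical example: ideal vs. implemented single-qubit density matrix.
rho_ideal = np.array([[1.0, 0.0], [0.0, 0.0]])        # |0><0|
p = 0.05                                              # assumed error rate
rho_impl = (1 - p) * rho_ideal + p * np.eye(2) / 2    # depolarized state

# k = 1 gives the trace-norm distance between the two states.
print(schatten_norm(rho_ideal - rho_impl, k=1))
```

The supremum in the abstract would then be taken over all input density matrices rather than the single state used here.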
Conventional vector-based simulators for quantum computers are quite limited in the size of the quantum circuits they
can handle, due to the worst-case exponential growth of even sparse representations of the full quantum state vector as a
function of the number of quantum operations applied. However, this exponential-space requirement can be avoided by
using general space-time tradeoffs long known to complexity theorists, which can be appropriately optimized for this
particular problem in a way that also illustrates some interesting reformulations of quantum mechanics. In this paper, we
describe the design and empirical space/time complexity measurements of a working software prototype of a quantum
computer simulator that avoids excessive space requirements. Due to its space-efficiency, this design is well-suited to
embedding in single-chip environments, permitting especially fast execution that avoids access latencies to main
memory. We plan to prototype our design on a standard FPGA development board.
An extension to von Neumann's analysis of quantum theory suggests self-measurement is a
fundamental process of Nature. By mapping the quantum computer to the brain architecture, we will argue
that the cognitive experience results from a measurement of a quantum memory maintained by biological
entities. The insight provided by this mapping suggests quantum effects are not restricted to small atomic
and nuclear phenomena but are an integral part of our own cognitive experience and further that the
architecture of a quantum computer system parallels that of a conscious brain.
We will then review the suggestions for biological quantum elements in basic neural structures
and address the de-coherence objection by arguing for a self-measurement event model of Nature. We will
argue that to first order approximation the universe is composed of isolated self-measurement events, which
guarantees coherence. Controlled de-coherence is treated as the input/output interactions between quantum
elements of a quantum computer and the quantum memory maintained by biological entities cognizant of
the quantum calculation results.
Lastly we will present stem-cell based neuron experiments conducted by one of us with the aim of
demonstrating the occurrence of quantum effects in living neural networks and discuss future research
projects intended to reach this objective.
The object of this paper is to mathematically investigate characteristics of a geodesic equation describing
possible minimum complexity paths in the special unitary group manifold representing the unitary evolution
of n qubits associated with a quantum computation. Simple solutions are elaborated for the case of three qubits.
Motivated by an interest in quantum sensing, we carefully define a degree of entanglement, starting with bipartite pure
states and building up to a definition applicable to any mixed state on any tensor product of finite-dimensional vector
spaces. For mixed states the degree of entanglement is defined in terms of a minimum over all possible decompositions of
the mixed state into pure states. Using a variational analysis we show a property of minimizing decompositions. Combined
with data about the given mixed state, this property determines the degrees of entanglement of a given mixed state. For
pure or mixed states symmetric under permutation of particles, we show that no partial trace can increase the degree of
entanglement. For selected less-than-maximally-entangled pure states, we quantify the degree of entanglement surviving
a partial trace.
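For bipartite pure states, a standard way to quantify entanglement is the entropy of the Schmidt coefficients. The sketch below uses that common measure for illustration; the paper's own degree-of-entanglement definition (and its mixed-state extension via minimizing decompositions) may differ.

```python
import numpy as np

def entanglement_entropy(psi, dA, dB):
    """Entropy of entanglement of a bipartite pure state in C^dA (x) C^dB.

    Reshape the amplitude vector into a dA x dB coefficient matrix; its
    singular values are the Schmidt coefficients, and the entropy of their
    squares quantifies the entanglement.
    """
    M = psi.reshape(dA, dB)
    s = np.linalg.svd(M, compute_uv=False)
    prob = s**2
    prob = prob[prob > 1e-12]          # drop zero Schmidt weights
    return -np.sum(prob * np.log2(prob))

# Bell state (|00> + |11>)/sqrt(2): maximally entangled, entropy = 1 bit.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(entanglement_entropy(bell, 2, 2))  # → 1.0
```

A product state such as |00> gives entropy 0, the unentangled extreme of the same scale.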
In the Riemannian geometry of quantum computation, the quantum evolution is described in terms of the special unitary group SU(2^n) of n-qubit unitary operators with unit determinant. To elaborate on one aspect of the methodology, the Riemann curvature and sectional curvature are explicitly derived using the Lie algebra su(2^n). This is important for investigations of the global characteristics of geodesic paths in the group manifold.
A quantum entangled radar uses entangled photons instead of separate photons. It has been shown for quantum
entangled interferometers that the degree of quantum entanglement deteriorates in an environment with attenuation. This
paper introduces a correction method that allows the quantum radar to maintain excellent performance even when
dealing with an environment with attenuation. The correction method is analogous to techniques used in adaptive optics
to improve images. Correction approaches based on signal sources deliberately introduced into the environment and
electromagnetic sources already present in the environment are considered. Closed form expressions for estimating the
range error are derived for the cases when the radar uses N entangled photons for imaging or N separate imaging
photons. Simulations of radar range error estimates for entangled and separate photon cases for propagation media with
widely varying attenuation properties are provided. Comparisons of estimates with and without atmospheric correction
are given. The atmospheric correction method extends the range of the beneficial effects of entanglement by a factor of
82, i.e. to 5000 km for a slowly varying propagation medium. For a propagation medium with 50 times as much
variation, the atmospheric correction method offers super sensitivity for three times the range of the uncorrected case.
We review recent research in the field of quantum imaging. Quantum imaging deals with the formation of images
that possess higher resolution or better signal-to-noise characteristics than conventional images by making use
of the coherence properties of quantum light fields. Quantum imaging also deals with indirect imaging methods
such as ghost imaging, in which image information is conveyed not by a single light field but by the correlations
between two separate light fields. In this contribution we concentrate primarily on recent results in the area of
A new approach to analyzing visual images is proposed, based on the idea of converting an optical image into a spatially
varying pattern of polarized squeezed light, which is then used to produce a pattern of chiral edge currents in a thin film
topological insulator. Thin films of Bi or Bi doped with Sb which are punctured with an array of sub-micron holes may
be a way of realizing this kind of optical quantum information processing.
The Stern-Gerlach (SG) apparatus for measuring the spin of an uncharged spin-1/2 particle is the archetypal quantum
sensing device. We study this device for the new problem of measuring the spin of a particle that is coupled externally to
another particle. Specifically, we treat two coupled particles in which a single particle is measured by the SG device
while the other is not. We show simulations of how the binding energy associated with the external coupling is
completely converted to potential energy and kinetic energy as the single particle separates spatially within the magnetic
field of the SG device. Additionally we show simulations of how the initial particle acceleration within the SG devices
relates to the coupling, the quantum state of the two-particle system, and the initial spatial dispersion of the particle
within the SG device. The results of our analysis, though obtained specifically for the SG apparatus, may be generic to
other quantum measurement devices with similar external coupling.
We introduce a notation for probability current and operator current in a normal quantum mechanics setting.
We then extend these concepts to post-selection, which was introduced by Schrödinger and has
found wide application by Aharonov and his colleagues. We introduce the concept of a post-selection operator
current and then use it as an alternative means of examining concepts of weak values. The concept of weak
energy introduced by Parks is derived in this setting. Also, the configuration space based on geometric phase
and the Fubini-Study metric that was introduced by Parks defines an operator current as well. We provide a
connection between this current and post-selection operator current.
We describe the use of quantum-mechanically entangled photons for sensing intrusions across a physical perimeter. Our
approach to intrusion detection uses the no-cloning principle of quantum information science as protection against an
intruder's ability to spoof a sensor receiver using a 'classical' intercept-resend attack. Moreover, we employ the
correlated measurement outcomes from polarization-entangled photons to protect against 'quantum' intercept-resend
attacks, i.e., attacks using quantum teleportation. We explore the bounds on detection using quantum detection and
estimation theory, and we experimentally demonstrate the underlying principle of entanglement-based detection using
the visibility derived from polarization-correlation measurements.
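The visibility figure of merit used above is a simple ratio of coincidence-count extremes. The sketch below uses hypothetical counts purely to illustrate the formula; the experimental values are not taken from the source.

```python
# Fringe visibility from polarization-correlation coincidence counts:
# V = (C_max - C_min) / (C_max + C_min).
c_max, c_min = 980, 20   # hypothetical coincidence counts at fringe extremes
V = (c_max - c_min) / (c_max + c_min)
print(V)  # → 0.96
```

High visibility (near 1) indicates strong polarization correlations consistent with entanglement, which is what the detection scheme relies on.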
The quantum mechanical phenomenon of entanglement can be utilized to beat the Rayleigh limit, the classical
bound on image resolution. This is done by entangling the photons that are used as the signal states. Using
entanglement, the best possible image resolution is instead given by the Heisenberg limit, an improvement proportional
to the number of entangled photons in the signal. Here, we present a novel application of entanglement
by inverting the above procedure. We show that the resolution obtained by an imaging system utilizing separable
photons can be achieved by an imaging system making use of entangled photons, but with the advantage of a
smaller aperture. This results in a smaller, lighter imaging system. Smaller imaging systems can be especially
valuable in satellite imaging where weight and size play a vital role.
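The aperture-reduction argument can be made concrete with the Rayleigh criterion. This is a minimal back-of-the-envelope sketch; the wavelength, aperture, and photon number below are assumed, and the N-fold effective-wavelength reduction is the standard Heisenberg-limit scaling rather than the paper's detailed analysis.

```python
# Rayleigh angular resolution ~ 1.22 * lam / D. With N entangled photons the
# effective wavelength becomes lam / N, so the same resolution is achieved
# with an aperture N times smaller.
lam = 500e-9   # wavelength in meters (assumed)
D = 1.0        # classical aperture diameter in meters (assumed)
N = 4          # number of entangled photons (assumed)

res_classical = 1.22 * lam / D
res_entangled = 1.22 * (lam / N) / (D / N)   # same resolution, aperture D/N
print(res_classical, res_entangled)
```

The two resolutions coincide, which is the point: the entangled system matches the classical resolution with a quarter of the aperture diameter.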
We study a new realizable architecture for a universal quantum computer based on different optimized components
and computational models. Simulations demonstrate that it achieves higher computing efficiency than alternative
architectures. Error correction, fault tolerance and robustness are also discussed for this architecture.
We review the q-deformed spin networks and apply these methods to produce unitary representations of the
braid groups that are dense in the unitary groups. The simplest case of these models is the Fibonacci model,
itself universal for quantum computation. We formulate these braid group representations in a form suitable for
computation and algebraic work and apply them to quantum algorithms for the computation of colored Jones
polynomials and the Witten-Reshetikhin-Turaev Invariants.
The dynamics of vortex solitons is studied in a BEC superfluid. A quantum lattice-gas algorithm (measurement-based
quantum computation) is employed to examine the dynamical behavior of vortex soliton solutions of the
Gross-Pitaevskii equation (φ^4-interaction nonlinear Schrödinger equation). Quantum turbulence is studied in
large grid numerical simulations: Kolmogorov spectrum associated with a Richardson energy cascade occurs on
large flow scales. At intermediate scales, a new k^-5.9 power law emerges, due to vortex filamentary reconnections
associated with Kelvin wave instabilities (vortex twisting) coupling to sound modes and the exchange of
intermediate vortex rings. Finally, at very small spatial scales a k^-3 power law emerges, characterizing fluid
dynamics occurring within the scale size of the vortex cores themselves. Poincaré recurrence is studied: in the
free non-interacting system, a fast Poincaré recurrence occurs for regular arrays of line vortices. The recurrence
period is used to demarcate dynamics driving a nonlinear quantum fluid towards turbulence, since fast recurrence
is an approximate symmetry of the nonlinear quantum fluid at early times. This class of quantum algorithms
is useful for studying BEC superfluid dynamics and, without modification, should allow for higher resolution
simulations (with many components) on future quantum computers.
The quantum Fisher information is derived for the task of identifying the qudit depolarizing channel
with any dimension. The identification scheme in which a pure state channel input is entangled with an
ancilla system is shown to dominate other schemes for all qudit depolarizing channels of any
dimension and any depolarizing rate. This extends known results for the qubit depolarizing channel.
This dominance, though, is not robust; if the ancilla system itself undergoes any degree of
depolarization, no matter how small, entanglement with the ancilla is no longer necessarily optimal.
The quantum score operator for the qudit depolarizing channel has a special "quasi-classical" form,
readily yielding these various results.
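The qudit depolarizing channel discussed above has a simple closed form. The following minimal sketch applies it to an assumed qutrit example (d = 3, p = 0.2); it illustrates only the channel itself, not the Fisher-information analysis.

```python
import numpy as np

def depolarize(rho, p):
    """Qudit depolarizing channel: rho -> (1 - p) * rho + p * I/d."""
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

# Hypothetical qutrit input: the pure state |0><0|, depolarized at rate 0.2.
rho = np.zeros((3, 3))
rho[0, 0] = 1.0
out = depolarize(rho, 0.2)

print(np.trace(out).real)        # trace is preserved (the map is CPTP)
print(np.trace(out @ out).real)  # purity drops below 1 under depolarization
```

Estimating p from repeated uses of this channel is the identification task whose quantum Fisher information the abstract analyzes.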
A model of a D-Brane Topological Quantum Computer (DBTQC) is presented and supported. The model is
based on four-dimensional TQFTs of the Donaldson-Witten and Seiberg-Witten kinds. It is argued that the DBTQC is
able to compute Khovanov homology for knots, links and graphs. The DBTQC physically incorporates the
mathematical process of categorification according to which the invariant polynomials for knots, links and graphs such
as Jones, HOMFLY, Tutte and Bollobás-Riordan polynomials can be computed as the Euler characteristics
corresponding to special homology complexes associated with knots, links and graphs. The DBTQC is conjectured as a
powerful universal quantum computer in the sense that the DBTQC computes Khovanov homology, which is considered
more powerful than the Jones polynomial.
Presented is a second quantized technology for representing fermionic and bosonic entanglement in terms of
generalized joint ladder operators, joint number operators, interchangers, and pairwise entanglement operators.
The joint number operators generate conservative quantum logic gates that are used for pairwise entanglement
in quantum dynamical systems. These are most useful for quantum computational physics. The generalized joint
operator approach provides a pathway to represent the Temperley-Lieb algebra and to represent braid group
operators for either fermionic or bosonic many-body quantum systems. Moreover, the entanglement operators
allow for a representation of quantum measurement, quantum maps (associated with quantum Boltzmann equation
dynamics), and for a way to completely and efficiently extract all accessible bits of joint information from
entangled quantum systems in terms of quantum propositions.