This PDF file contains the front matter associated with SPIE Proceedings Volume 8832, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
We are told that our present understanding of physical law was ushered in by the Quantum Revolution, which began around 1900 and was brought to fruition around 1930 with the formulation of modern Quantum Mechanics. The "photon" was supposed to be the centerpiece of this revolution, conveying much of its conceptual flavor. What happened during that period was a rather violent redirection of the prevailing world view in and around physics, a process that has still not settled. In this paper I critically review the evolution of the concepts involved, from the time of Maxwell up to the present day. At any given time, discussions in and around any given topic take place using a language that presupposes a world view or zeitgeist. The world view itself limits what ideas are expressible. We are all prisoners of the language we have created to develop our understanding to its present state. Thus the very concepts and ways of thinking that have led to progress in the past are often the source of blind spots that prevent progress into the future. The most insidious property of the world view at any point in time is that it involves assumptions that are not stated. In what follows we will have a number of occasions to point out the assumptions in the current world view, and to develop a new world view based on a quite different set of assumptions.
Proc. SPIE 8832, A method of comparing the speed of starlight and the speed of light from a terrestrial source, 883203 (1 October 2013); doi: 10.1117/12.2023127
The speed of light is an important physical parameter. It is currently a common belief that the speed of light is constant regardless of the relative velocity between the source and the observer. Because the speed of light is very fast, if the relative velocity is small compared with the speed of light, it is difficult to detect the effect of the relative velocity on the measurement of the speed of light. In this paper we present a method of comparing the speeds of starlight and light emitted from a terrestrial source. We use a telescope to collect light from a star having a significant relative velocity with respect to the Earth, e.g. Capella. We then modulate the starlight and the light emitted from the local source into pulses, so that these pulses leave the modulator simultaneously. After travelling 4.2 km, these pulses are detected by a receiver. If the starlight and the terrestrial light have the same speed, then these pulses must arrive at the receiver at the same time. Our results show that the arrival times of the starlight pulses differ from those of the local light pulses. For example, Capella is receding from the Earth, and the Capella pulses arrive later than the local light pulses. This indicates that the speed of Capella starlight is slower than the commonly believed value, c. The presented method uses one clock and one stick, so the clock synchronization problem and any physical unit transformation can be avoided.
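For scale, under a hypothetical ballistic model in which light from a receding source travels at c − v rather than c, the expected arrival-time offset over the paper's 4.2 km path is Δt ≈ Lv/c². This is a sketch of that order-of-magnitude estimate only; the radial velocity assumed for Capella (roughly 30 km/s) is an illustrative value, not a figure taken from the paper.

```python
# Order-of-magnitude estimate of the arrival-time difference over the
# paper's 4.2 km path, under the hypothetical assumption that starlight
# from a receding source travels at c - v instead of c.
C = 299_792_458.0   # speed of light, m/s
L = 4_200.0         # propagation distance from the abstract, m

def arrival_delay(v_radial):
    """Delay L/(c - v) - L/c, which is approximately L*v/c**2 for small v (m/s)."""
    return L / (C - v_radial) - L / C

# Capella's radial velocity is roughly +30 km/s (assumed here for illustration).
dt = arrival_delay(30_000.0)
print(f"expected delay: {dt*1e9:.2f} ns")   # on the order of a nanosecond
```

The nanosecond-scale result shows why a multi-kilometre baseline and fast pulse detection are needed to resolve any such effect.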
We propose a quantum protocol for 1-out-of-2, 1-out-of-n and k-out-of-n oblivious transfer. Oblivious transfer is a counter-intuitive cryptographic primitive in which the receiver receives only the information he is authorized for, while the sender does not know what information the receiver received. In 1-out-of-2 oblivious transfer, Alice sends two bits to Bob, who can choose only one of them to read, while Alice does not know which bit Bob read. k-out-of-n oblivious transfer is an extension of this idea. The proposed protocols do not require any classical communication, are practical and secure, and can be implemented using current technology.
Proc. SPIE 8832, A high-speed GaAs-based electro-optic modulator for polarization, intensity, and phase modulation, 883205 (1 October 2013); doi: 10.1117/12.2024475
We review the development of a unique, electro-optic, polarization modulator fabricated on epitaxial layers of aluminum gallium arsenide, grown on a gallium arsenide (GaAs) substrate. The device has a single waveguide structure combined with travelling-wave, slow-wave electrodes. This design allows for high-speed modulation of the polarization state of light with low differential group delay and low optical loss at frequencies in excess of 50 GHz. The devices are TE↔TM mode convertors that modulate the state of light from one linear polarization state to an orthogonal linear state passing through elliptical and circular polarization states. These devices can also be configured to modulate the phase or intensity of an optical signal by appropriate alignment of the polarization axis of the input light or by placing a polarizer at the output. Key characteristics and important performance advantages of such devices are discussed. Applications that use these devices for enhancing digital and analog communication links, analog-to-digital signal conversion, and sending keys for encryption are reviewed to illustrate the diverse nature of the systems being developed and provide an overview of the versatility of the ways in which the GaAs polarization modulator may be used.
Proc. SPIE 8832, Numerical examination of acousto-optic Bragg interactions for profiled lightwaves using a transfer function formalism, 883206 (1 October 2013); doi: 10.1117/12.2025059
Classically, acousto-optic (AO) interactions comprise scattering of photons by energetic phonons into higher and lower orders. Standard weak interaction theory describes diffraction in the Bragg regime as the propagation of a uniform plane wave of light through a uniform plane wave of sound, resulting in the well-known first- and zeroth-order diffraction. Our preliminary investigation of the nature of wave diffraction and photon scattering from a Bragg cell under intensity feedback with profiled light beams indicates that the diffracted (upshifted photon) light continues to maintain the expected (uniform plane wave) behavior versus the optical phase shift in the cell within a small range of the Q-parameter, and at larger Qs, begins to deviate. Additionally, we observe the asymptotic axial shift of the beam center as predicted by the transfer function formalism.
The mathematical model of light, using the simple summation of sine waves, gives excellent predictive capabilities for light. It predicts the results of the interactions for components of light, e.g., photons. However, it cannot provide a valid physical model of their interactions or of the photons themselves. This paper attempts to provide such a model that also fits and explains the known characteristics of light and its interactions. The simplest version of this model assumes that there exist, within a photon, source terms that produce the observed and mathematically modeled transverse electric and magnetic fields. The nature and consequences of such sources are explored. This is a ‘what if’, rather than a ‘how’ and ‘why’, paper.
Waves carry energy, force and momentum. Therefore, colliding wave trains have the physical capacity to interact and to form and maintain standing waves. Standing waves form in liquids, gases, electrical circuits, and electromagnetic (EM) waves. Because wave behavior exists irrespective of media type, we have investigated the possibility that waves in various media do interact and do so through a common physical mechanism. We propose that standing waves are composed of alternating segments of reflection zones, where potential energy resides with little lateral energy flow, and of kinetic-energy zones, where lateral energy flow is maximal. Interference patterns form when waves collide. During the collision, two force vectors resolve. One, the longitudinal vector, forms along the direction of the bisector of the convergence angle. In that direction, it is the vector sum of the two converging wave trains. The second vector is derived from colliding wave forces acting transverse (orthogonal) to the longitudinal vector. These transversely acting forces will produce the standing wave component. Action of the standing wave components separates the longitudinal, moving waves transversely into parallel zones where energy is primarily potential in one zone and kinetic in the next. For example, in the interference pattern formed by converging trains of light waves, potential energy resides in the longitudinal, dark zones (null zones), kinetic energy in the parallel, bright zones. From experiments using two converging laser beams, we have produced evidence that the colliding forces in their standing waves can redirect and concentrate their light energy without using lenses.
The NIW (non-interaction of waves) property has been proposed by one of the coauthors. The NIW property states that in the absence of any “obstructing” detectors, all the Huygens-Fresnel secondary wavelets will continue to propagate unhindered and without interacting (interfering) with each other. Since a coherent lidar system incorporates complex behaviors of optical components with different polarizations, including circular polarization for the transmitted radiation, the question arises whether the NIW principle accommodates elliptical polarization of light. Elliptical polarization presumes the summation of orthogonally polarized electric field vectors, which contradicts the NIW principle. In this paper, we present the working of a coherent lidar system using the Jones matrix formulation. The Jones matrix elements represent the anisotropic dipolar properties of the molecules of optical components. Accordingly, when we use the Jones matrix methodology to analyze the coherent lidar system, we find that the system behavior is congruent with the NIW property.
Optical beam alignment in a coherent lidar (or ladar) receiver system plays a critical role in optimizing its performance. Optical alignment in a coherent lidar system dictates the matching of the wavefront curvature (phase front) and Poynting vector of the local oscillator beam with those of the incoming receiver beam on a detector. However, this alignment is often not easy to achieve and is rarely perfect. Furthermore, optical fibers are being increasingly used in coherent lidar system receivers for transporting radiation to achieve architectural elegance. Single-mode fibers also require stringent mode matching for efficient light coupling. The detector response characteristics vary with the misalignment of the two Poynting vectors. Misalignment can lead to an increase in DC current. Also, a lens in front of the detector may exacerbate the phase front and Poynting vector mismatch. Non-Interaction of Waves, or the NIW property, indicates that light beams do not interfere by themselves in the absence of detecting dipoles. In this paper, we will analyze the extent of misalignment on the detector specifications using the Poynting vectors of the mixing beams in light of the NIW property.
A richly diverse range of phenomena and applications, frequently in the context of laser applications, owes its means of operation to the properties of the photon. Yet, since the arrival of the laser, the distinctive and often paradoxical nature of the photon has become more than ever evident, and what the optics community now understands by a ‘photon’ has become richer – and certainly less simple – than Einstein’s original conception. There has been a marked expansion in the pace of development since the now familiar derivative term ‘photonics’ first emerged, and in much current theory any dividing line between ‘real’ and ‘virtual’ photons proves to be illusory. So if, in this technical sense, no photon can ever be regarded as entirely real, one is drawn to deeper questions of whether the photon is ‘real’ in the broader sense of reality. Some would argue that electromagnetic fields are closer to irreducible reality. Yet whether we elect to describe optical phenomena in terms of fields or photons, neither represents what is actually measured. The surest ground has to be found where theory is cast in terms that explain or predict actual observations, under given conditions. It is consistent with the path integral formulation of quantum mechanics that derivations should not prescribe what intervenes between setup and measurement, but instead allow for all possibilities. Indeed, one of the beauties of the associated mathematics is its capacity to home in on possibilities that most closely conform to post-event physical interpretation. Still, we can ask: how much do or can we know about the photon itself? How much information could this entity contain or convey? And, how essential is a photonic formulation of theory? This study focuses on some of the key issues.
"Hidden Variables" in QM are entangled with dipole elongation (a₀) and linewidth (η). It turns out that the quantity a₀²/η plays a dominant role. The rules of QM transitions themselves are not questioned. This quantity can be fixed for a "single" transition (no interaction with other atoms or molecules) as well as for an ensemble within a thermal bath, and when developed further it unveils details of radiating matter, especially when the linewidth of the radiating source is precisely known. In principle a "single" isolated local photon emanated by an adequate quantum transition is not detectable. Stepping towards non-thermal radiation sources (lasers), quantitative assertions can be made with regard to entanglement and the lateral dimension affected by such a process. Moreover, this concept applied to γ-radiation reveals the inherent linewidths of γ-ray transitions, fixes the dimension from which such emanation stems, and shows that this lies predominantly within the sphere of influence of one nucleon. A further aspect of this investigation makes it obvious that the Stefan-Boltzmann constant is not natural; in fact it is a composition of the constants of Planck and Boltzmann and the velocity of light as well. As an example, analyzing a specific hydrogen line in the solar spectrum (λ = 486 nm), one must infer from its linewidth data that this species shows photon entanglement, or alternatively that substantial density fluctuations become apparent.
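The remark that the Stefan-Boltzmann constant is composite is a standard result, σ = 2π⁵k_B⁴/(15h³c²), and can be checked numerically. This quick verification uses the exact SI-2019 values of the constants; it is our own check, not material from the paper.

```python
import math

# Verify that the Stefan-Boltzmann constant is a composition of the
# Planck constant, the Boltzmann constant, and the speed of light:
# sigma = 2 * pi^5 * kB^4 / (15 * h^3 * c^2)   (SI 2019 exact values)
h  = 6.62607015e-34   # Planck constant, J*s
kB = 1.380649e-23     # Boltzmann constant, J/K
c  = 299792458.0      # speed of light, m/s

sigma = 2 * math.pi**5 * kB**4 / (15 * h**3 * c**2)
print(f"sigma = {sigma:.6e} W m^-2 K^-4")   # ~5.670374e-08
```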
The initial theoretical finding that eventually led to laser development was Einstein’s prediction, based upon statistical considerations, that the energy of quanta of light is given by Planck’s constant times the frequency of the light. A new theoretical development based upon Weyl’s gauge field theory predicts that photon energies are quantized with the energy given by N^{2}hν. Such quantization of photon energy changes the character of the photon from the Einstein photon, which does not have a quantum number. Photon energy that includes a quantum number means that for a given energy the frequency may have more than one value. Conversely, photons of a given frequency may be found that have more energy than the Einstein photon. Further, phat photons, all at a given frequency, will have energy proportional to the number of phat photons and to N^{2}. For these phat photons the electric field strength, which causes breakdown in optical fibers or air, depends linearly on N. Thus, more energy may be transmitted using phat photons of higher quantum numbers than by increasing the number of photons of lesser quantum numbers, while still keeping the electric field below the breakdown level. Further, while the stimulated and spontaneous emission probabilities are proportional to 1/N^{2}, the Rayleigh scattering cross section diminishes as 1/N^{8}. This reduction in the scattering cross section means that a laser emitting phat photons with N>1 will lose less energy traveling through the Earth’s atmosphere than lasers using N=1. This reduction in energy losses through the atmosphere means increased efficiency for Earth-based beamed applications. This presentation discusses the fundamental theory, emission probabilities, and cross section calculations.
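A minimal sketch tabulating only the scaling relations quoted in the abstract (E = N²hν, breakdown field ∝ N, Rayleigh scattering ∝ 1/N⁸). The frequency used (5×10¹⁴ Hz, i.e. visible light) is an illustrative assumption; this is not a derivation of the Weyl-gauge result.

```python
# Scaling relations quoted in the abstract for "phat" photons:
# energy E = N^2 * h * nu, field strength ~ N, Rayleigh scattering ~ 1/N^8.
# This only tabulates those stated scalings; it is not a derivation.
H = 6.62607015e-34   # Planck constant, J*s

def phat_photon(nu_hz, n):
    return {
        "energy_J":       n**2 * H * nu_hz,  # E = N^2 h nu
        "rel_field":      n,                 # breakdown field grows only ~ N
        "rel_scattering": n**-8,             # Rayleigh cross section ~ 1/N^8
    }

p1 = phat_photon(5e14, 1)   # Einstein photon (N = 1), visible frequency assumed
p3 = phat_photon(5e14, 3)   # hypothetical N = 3 phat photon
print(p3["energy_J"] / p1["energy_J"])              # 9x the energy per photon
print(p3["rel_scattering"] / p1["rel_scattering"])  # 3^-8 of the scattering
```

The point of the comparison: energy per photon grows as N², while the field only grows as N and the scattering loss falls steeply, which is the abstract's efficiency argument.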
Critical analysis is given for mystical aspects of the current understanding of interaction between charged particles: wave-particle duality and nonlocal entanglement. A possible statistical effect concerning distribution functions for coincidences between the output channels of beam splitters is described. If this effect is observed in beam splitter data, then significant evidence for photon splitting, i.e., against the notion that light is ultimately packaged in finite chunks, has been found. An argument is given for the invalidity of the meaning attached to tests of Bell inequalities. Additionally, a totally classical paradigm for the calculation of the customary expression for the “quantum” coincidence coefficient pertaining to the singlet state is described. It fully accounts for the results of experimental tests of Bell inequalities taken nowadays to prove the reality of entanglement and non-locality in quantum phenomena of, inter alia, light.
Is the nature of light observed only indirectly, after its interaction with matter? We currently lack an experimental technique that would isolate the nature of light without the use of matter, i.e., a detector. We will present additional experiments which, although again indirect, may reveal new understandings (but mostly more questions) about the nature of light. This talk will not report research results, but rather be an invitation for others to conduct such measurements as will be proposed. Past measurements have found no absorption of radiation in the dark fringe (concluding no light is present due to cancellation of the fields). Does the radiation in a dark fringe cause stimulated emission? What are the photon statistics in a dark fringe?
Within this investigation it is critically questioned whether we really can detect "single photons", i.e., the response of a single quantum transition, by use of modern photon detectors. In the course of this it is shown that avalanche photodiodes (APDs), especially in "Geiger" mode, by virtue of their geometry (effective area) can indeed detect "single photon" events as proclaimed by the manufacturers, but it is tacitly assumed that the bandwidth of the originating visible source is not greater than ~2×10^{7} Hz. A short excursus into basic solid-state physics makes it obvious that applying adequate doping accomplishes "single photon detection". Nevertheless this does not mean there is a 1:1 correspondence between a photon emanated from the source location and that detected within the detector module. Propagation characteristics were simply overlooked during the numerous discussions about "single photon" detection. Practical examples are worked out for a PIN photodiode and an avalanche photodiode.
The particle model presented here is able to explain the structure of leptons and quarks without reference to quantum mechanics. In particular, it is able to explain quantitatively the existence of inertial mass without any use of a Higgs field. An essential difference from the Standard Model of present-day particle physics is that, in the model presented, particles are viewed as being not point-like but extended. In addition it becomes apparent that the strong force is the universal force that is effective in all particles.
Proc. SPIE 8832, Consequences of partitioning the photon into its electrical and magnetic vectors upon absorption by an electron, 88320I (1 October 2013); doi: 10.1117/12.2023511
This research uses classical arguments to develop a blackbody spectral equation that provides useful insights into heat processes. The theory unites in a single equation the heat radiation theory of Planck and the heat of molecular motion theory of Maxwell and Boltzmann. Light absorption is considered a two-step process. The first is an adiabatic reversible step, wherein one-dimensional light energy is absorbed in a quantum amount, hν, by an electron. The absorbed quantum is still one-dimensional (1-D), and remains within the domain of reversible thermodynamics. There is no recourse to the Second Law during this first step. The absorption process' second step is a dimensional restructuring wherein the electrical and magnetic vectors evolve separately. The 1-D electrical quantum transforms into its 3-D equivalent, electrical charge density. The resulting displacement of the generalized coordinates translates to 3-D motion, the evolution of Joule heat, and irreversible thermodynamics. The magnetic vector has no 3-D equivalent, and can only transform to 1-D paramagnetic spin. Accordingly, photon decoupling distorts time's fabric, giving rise to the characteristic blackbody spectral emittance. This study’s spectral equation introduces a new quantity to physics, the radiation temperature. Where it is identical to the classical thermodynamic temperature, the blackbody spectral curves are consistent with Planck’s. However, by separating these two temperatures in a stable far-from-equilibrium manner, new energy storage modes become possible at the atomic level, something that could have profound implications in understanding matter’s living state.
The physical observations that light exerts pressure on objects and light is bent by gravity can be considered as evidence that photons do indeed have mass. The requirement for mass to be convertible to radiant energy which is net electrically neutral yet has alternating positive and negative potentials and an alternating magnetic field leads to the consideration of a photon as an electron and positron joined in a two-body orbital union traveling through space. This simple mechanical model comes directly from conservation of matter and observations of electron-positron annihilation and electron-positron pair production. Since electrons and positrons are generally considered to be fundamental particles, we should believe the electron and positron to be conserved throughout the process of photon creation and destruction. Most interpretations of the mathematics of special relativity would lead one to believe that mass cannot travel at the speed of light because the mass or momentum would be infinite at that velocity. However, under the influence of an inverse square force, speed-of-light particles are actually predicted using accepted mathematical models. Furthermore, the equations of motion of charged particle interactions can be rearranged and interpreted so that the force varies with velocity instead of the mass. This change in perspective makes the concept of speed-of-light particles entirely plausible and allows a renewed appreciation for the concepts and definitions of classical mechanics.
Proc. SPIE 8832, Further investigation of an integrated picture of photon diffraction described by virtual particle momentum exchange, 88320L (1 October 2013); doi: 10.1117/12.2023796
An alternative picture for photon diffraction has been proposed, describing diffraction by a distribution of photon paths determined through a Fourier analysis of a scattering lattice. The momentum exchange probabilities are defined at the location of scattering, not the point of detection. This contrasts with the picture from classical optical wave theory that describes diffraction in terms of the Huygens-Fresnel principle and sums the phased contributions of electromagnetic waves to determine probabilities at detection. This revised picture, termed “Momentum Exchange Theory,” can be derived through a momentum representation of the diffraction formulas of optical wave theory, replacing the concept of Huygens wavelets with photon scattering through momentum exchange with the lattice. Starting with the Rayleigh-Sommerfeld and Fresnel-Kirchhoff formulas, this paper demonstrates that diffraction results from positive and negative photon dispersions through virtual particle exchange probabilities that depend on the lattice geometry and are constrained by the Heisenberg uncertainty principle. The positive and negative increments of momentum exchange exhibit harmonic probability distributions characteristic of a “random walk,” dependent on the distance of momentum exchange. The analysis produces a simplified prediction for the observed intensity profile for a collimated laser beam diffracted by a long, straight edge that lends conceptual support for this alternative picture.
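For context, the classical straight-edge profile that such a simplified prediction would be compared against follows from the Fresnel integrals C(v) and S(v); at the geometric shadow edge the normalized intensity is exactly 1/4. A minimal numerical sketch — the quadrature scheme and normalization here are our own choices, not the paper's:

```python
import math

def fresnel_C_S(v, steps=20_000):
    """Fresnel integrals C(v), S(v) by simple trapezoidal quadrature."""
    dt = v / steps
    C = S = 0.0
    for i in range(steps + 1):
        t = i * dt
        w = 0.5 if i in (0, steps) else 1.0   # trapezoid end-point weights
        C += w * math.cos(math.pi * t * t / 2) * dt
        S += w * math.sin(math.pi * t * t / 2) * dt
    return C, S

def edge_intensity(v):
    """Classical straight-edge diffraction intensity, normalized so the
    unobstructed field has intensity 1; v is the Fresnel parameter."""
    C, S = fresnel_C_S(v)
    return 0.5 * ((C + 0.5) ** 2 + (S + 0.5) ** 2)

print(edge_intensity(0.0))   # exactly 0.25 at the geometric shadow edge
```

Deep in the lit region (large v) the intensity oscillates about 1, which is the familiar fringe pattern beside a straight edge.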
The size stability of the photon has been a puzzle for many decades. However, the self-focusing of a laser beam in matter may provide some guidance to this problem. A conceptual basis for this effect will be accompanied by calculations based on the laser models. The analogy does not necessarily fit; nevertheless, it leads to better models. The notion of a self-generated light pipe for a photon supports the image of the photon as a soliton. Two photon models are developed. In one, there is a circulating high-energy density that creates a finite-radius light pipe, which thus confines the photon to a core, but with an extended evanescent wave about it. It confines by total internal reflection. In the other, the confinement mode is that of a self-generated graded refractive index (GRIN) optical lens, which does not produce an evanescent wave; instead, the electromagnetic energy is spread into the region that would be occupied by the evanescent wave of the first model. The models may be equivalent; both indicate the nature of a relativistic boundary condition that should be imposed on Maxwell’s equations to allow inclusion of photons into his model.
Proc. SPIE 8832, The photon concept revisited: role of cooperative virtual processes in light harvesting, 88320N (1 October 2013); doi: 10.1117/12.2025974
The usefulness of the photon concept has been debated since its original proposal by Einstein in 1905. The photon picture has evolved via the “fuzzy-ball” and stochastic to the entangled, intrinsically quantum mechanical entity. However, in spite of the recent progress, questions about the nature of light still arise and often cause confusion from misinterpretations of various light-matter interaction phenomena. Questions are frequently asked about the existence of a ‘wave function for the photon’ and about photon localization. Many light-matter interactions may be described using semiclassical theory, and the question is when and where the quantum theory of radiation is needed to provide the best description. Here, we extend the discussion of these issues from several previous works on this subject. We use the definition of the photon wave function described by Scully and coworkers to discuss photon detection in the context of single atoms and “giant atom” ensembles. We review the role of virtual photons, and discuss the effects of cooperative virtual processes on the light harvesting efficiency of photodetectors and photosynthetic biological systems. Noise-induced quantum coherence may increase the performance of these quantum heat engines and the efficiency of photon harvesting.
The proposed paper calls attention to the unobserved mathematical and conceptual inadequacies persisting in the wave-particle duality and matter-wave concepts given by Louis de Broglie. The matter wave’s frequency and phase velocity expressions, shown to be inappropriate, are consequences of these inadequate concepts. Rectifications of these concepts are presented through the corrected implementation of the analogy between light waves and matter waves, and thus modified frequency and phase velocity expressions are introduced. The proposed expressions are free from all the inadequacies and negations confronted by de Broglie’s proposed expressions. Mathematical proofs for the proposed modified frequency and phase velocity expressions are also presented. A novel General Quantum Mechanical Wave Equation is proposed involving the modified phase velocity expression, from which Schrödinger’s and Dirac’s equations can be precisely derived.
I will describe explanations of 1) Michelson-Morley type experiments, 2) the muon decay puzzle, and 3) the Doppler effect in light. The explanation of Michelson-Morley type experiments is based on the emission-wave mechanism. The resolution of the muon decay puzzle is based on Pauli’s exclusion principle as applied to a system of an interacting muon and the electrons surrounding it during its passage through the Earth’s atmosphere. The explanation of the Doppler shift in frequency is based on the analysis of proper locations on the light fronts. Notably, these explanations involve neither the time dilation nor the length contraction of the Special Theory of Relativity. That is to say, these explanations do not use the Lorentz transformations. Nevertheless, they are based on the principle that the speed of light is the same for all observers, whether accelerated or not. These results then render the concepts of special relativity, like time dilation and length contraction, inessential for physics in general. But these results are consistent with the framework of the Universal Theory of Relativity, which allows for universal time running at the same rate for all observers, and are therefore definite pointers to subtlety in the applications of the concepts of relativity.
A spacetime-based model of an electric field and a photon is presented. The model assumes that 4-dimensional spacetime has vacuum fluctuations at all frequencies up to the Planck frequency. Gravitational wave theory is used to give insights into electromagnetic (EM) radiation. From gravitational wave equations it is possible to derive the impedance of spacetime and quantify energy propagating in the medium of spacetime. EM radiation is shown to experience the same impedance as gravitational waves. This implies that photons also are waves in the medium of spacetime. The distortion of spacetime produced by a photon is calculated. Experiments are suggested, including one that may improve the sensitivity of attempts to detect gravitational waves.
In attempts to explain dark matter, an important component is often neglected: particles such as ultrafast protons/neutrons, which have a relativistic mass comparable to the Planck mass. They are thus invisible. At a critical speed the mass of a particle reaches the Planck mass. The Compton wavelength reaches its Schwarzschild radius, i.e. the particle becomes invisible. For protons/neutrons this is approx. 10^{19} times their rest mass. With the assumption that particles with a rest mass of only the order of 10 000 Sun masses still have such an ultra-relativistic speed, it can be explained that the major part of the mass of the Universe appears as “dark”. It also becomes plausible that even today visible mass is generated from virtually the vacuum, simply by decelerating such fast particles through collisions with slow matter.
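The quoted factor of ~10^{19} can be checked directly: it is the Lorentz factor at which a proton's relativistic mass γm₀ equals the Planck mass. The CODATA values below are standard; the check itself is ours, not taken from the paper.

```python
# Lorentz factor gamma at which a proton's relativistic mass
# m = gamma * m_rest reaches the Planck mass (CODATA values).
M_PLANCK = 2.176434e-8     # Planck mass, kg
M_PROTON = 1.67262192e-27  # proton rest mass, kg

gamma = M_PLANCK / M_PROTON
print(f"gamma = {gamma:.3e}")   # ~1.3e19, consistent with the quoted 10^19
```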
Antennas in resonant circuits can present to impinging EM waves an effective energy absorption cross-section much larger than their physical dimensions. Similarly, atoms can absorb energy from fields with energy densities so low that the atom must have an effective interaction cross-sectional diameter on the order of tens of microns. It appears that resonant energy absorption exhibits a sort of "suction" effect by the absorbing dipole, or a "pushing" effect by the field, or a combination of both. This allows the field energy to converge from a larger volume into a smaller region. We will argue that this effect may actually correspond to the field preferentially directing energy into such resonant systems, and discuss how this provides further evidence for the utility of our proposition of a universal, complex tension field (CTF). We have proposed that the CTF can support propagating field gradients, like EM waves, as well as resonant, localized and self-looped oscillations representing various particles. Different gradients in the CTF, generated by different kinds of particle-oscillations, represent the various forces experienced by particles within each other's physical domain. Even time emerges as a secondary property. Thus, the CTF postulate provides an excellent platform to re-invigorate attempts to build a unified field theory.
Although photons can be extremely energetic and each form of energy is inseparably associated with gravitation, the Theory of Special Relativity nevertheless assumes that the gravitation of photons - always supposed to be static in nature - is vanishingly negligible. For this reason the photon’s gravitation is completely ignored in that theory. This paper, however, casts doubt on the correctness of this assumption and will examine the actual role played by gravitation in electromagnetism. In the course of this paper, an analysis will lead to the new insight that the gravitation of a photon is as dynamic as the photon itself, but static gravitation does not exist for photons. The dynamic gravitation of a photon appears as gravitational radiation locally bound to the photon and is in close interaction with the photon’s electromagnetic radiation. Dynamic gravitation represents the hitherto unknown physical quantity acting in an opposite manner to electrodynamics, thus closing an evident gap in physics. Furthermore, it will be shown that dynamic gravitation determines the physical properties of photons, such as the speed of light, and must therefore be taken into account with all associated physical considerations. The dynamic gravitation of photons is produced by gravitational quanta, and thus appears in quantised form. Consequently there must exist exactly the same number of gravitational quanta as there are of photons themselves. It is therefore necessary to rethink the physics of photons.
There is great interest in quantum mechanics as an "emergent" phenomenon. That program holds that non-obvious patterns and laws can emerge from complicated physical systems operating by more fundamental rules. We present a new approach in which quantum mechanics itself is viewed as an information management tool neither derived from nor dependent on physics. The main accomplishment of quantum-style theory comes in expanding the notion of probability. We construct a map from macroscopic information as "data" to quantum probability. The map allows a hidden-variable description for quantum states, and efficient use of the helpful tools of quantum mechanics in unlimited circumstances. Quantum dynamics via the time-dependent Schrödinger equation or operator methods actually represents a restricted class of classical Hamiltonian or Lagrangian dynamics, albeit with different numbers of degrees of freedom. We show that under wide circumstances such dynamics emerges from structureless dynamical systems. The uses of the quantum information management tools are illustrated by numerical experiments and practical applications.
Fourier ontology, the consequent quantum indeterminism, and the way to overcome it are discussed. Furthermore, it shall be proven that recent experimental technology goes far beyond the limits imposed by the Heisenberg indetermination relations. These experiments are perfectly integrated into the new causal nonlinear quantum physics. Keywords: Orthodox quantum mechanics, Fourier ontology, Heisenberg indetermination relations, nonlinear quantum physics, general uncertainty relations, beyond Heisenberg limits.
The question posed in the title represents an impossible approach to scientific investigation, but the approach is that of a subjectivist. Obviously, photons cannot express their views; nor can we directly put any scientific questions to them. The purpose is to draw the reader's attention to the fact that even our strongly mathematically driven scientific enterprise is full of subjectivism once we start dissecting our thinking process. First, we frame questions in our mind to understand a natural phenomenon we have been observing. Let us not forget that framing the question determines the answer. The answers guide us to frame the foundational hypotheses to build a theory to “explain” the phenomenon under study. Our mind is a product of biological evolutionary requirements, which is further re-programmed by strong human social cultures. In other words, human-constructed theories cannot spontaneously become rigorously objective unless we consciously make them so. We need to develop a methodology of scientific thinking that will automatically force us to make repeated iterative corrections in generating questions as objectively as possible. Those questions will then guide us to re-construct the foundational hypotheses and re-frame the working theories. We propose adding Interaction Process Mapping Epistemology (IPM-E) as a necessary extra thinking tool, which will complement the prevailing Measurable Data Modeling Epistemology (MDM-E). We believe that ongoing interaction processes in nature represent reality ontology. So the iterative application of IPM-E, along with MDM-E, will keep us along the route of ontological reality. We apply this prescription to reveal a universal property, the Non-Interaction of Waves, which we have been neglecting for centuries.
Using this property, we demonstrate that a large number of ad hoc hypotheses from Classical, QM, Relativity and Astro physics can easily be modified to make physics more causal and understandable through common-sense logic.
“There is always another way to say the same thing that doesn’t look at all like the way it was said before.” (Richard Feynman). In this essay, a novel approach to cosmology is presented that mathematically models the Universe as an iterated function system (IFS) analogous to the famous Mandelbrot set IFS (M): z → z^{2} + c, where z and c are complex numbers. In theoretical physics, wavefunctions are functions on a complex space that are commonly used to model the dynamics of particles and waves. In the IFS framework presented herein, complex dynamical systems are generated via the iteration process, where the act of iteration corresponds to (1) a change in the state of the system and (2) a change to the wavefunction itself. In this manner, M can be considered a wavefunction generator. In this framework, all observables, including gravity and time, are thought to be generated by the iteration process. Feynman understood that there are many ways of looking at the Universe that are equivalent in nature but different psychologically. Understanding cosmology in terms of fractals and iterated function systems requires a paradigm shift in the way we approach cosmology. This is an evidence-based dissertation and does not contradict the standard model; rather, it attempts to reconstruct it using the principles of the fractal paradigm as outlined in this essay. It is the contention of the author that in order to understand the true nature of light, the universe and everything, we must first understand the important role that fractal cosmology plays in the study of our complex dynamical universe.
Proc. SPIE 8832, Quantum theory as the most robust description of reproducible experiments: application to a rigid linear rotator, 883212 (1 October 2013); doi: 10.1117/12.2026998
It is shown that the Schrödinger equation of a rigid linear rotator can be obtained from a straightforward application of logical inference, providing another illustration that basic equations of quantum theory follow from inductive inference applied to experiments for which there is uncertainty about individual events and for which the frequencies of the observed events are robust with respect to small changes in the conditions under which the experiments are carried out.
The formulation of the wave function of a single photon has always been in conflict with the spatial localization of a photon. The lack of a correct treatment of the probability and current density of a single photon has resisted a self-consistent treatment of its wave function. Here we proceed with the construction of a wave function for a single photon that allows us to introduce its probability and current density. The wave function of a single photon is constructed from the matrix elements of the electric and magnetic fields that couple the vacuum state with the Fock state occupied by a single photon. Further, we show that the spin of a photon, projected on the direction of photon propagation, defines the time evolution of the photon wave function. As a result it can be identified with the Hamiltonian of a single photon, which establishes a Schrödinger equation for a single photon. The Schrödinger description naturally leads to a current and probability density for a single photon that satisfy the continuity equation. The Schrödinger formulation of the Maxwell equations provides a clear physical meaning for the spin of a photon, in a similar way as the Dirac equation provides a natural explanation for the spin of an electron.
All physical measurements are based on finite intervals of space and time. It follows that the appropriate topologies of measurement must be finite. However, there are only two types of finite power-set topologies: T0 topologies and Not-T0 topologies. All singlet subsets of T0 (Kolmogorov) topologies are topologically distinguishable. Therefore it is natural that such topologies should be called Particle-like topologies. On the other hand, some, if not all, singlet subsets of Not-T0 topologies are indistinguishable. Hence such topologies will be called Statistical, Wave-like, or Photon topologies. This article starts with a short review of the topological properties of Kolmogorov T0 particle topologies, using processes that generate homotopic evolution of those exterior differential 1-forms chosen to describe thermodynamic states. Not-T0 topologies can use homotopic evolution of N-form densities to generate systems of partial differential equations that describe both reversible and irreversible dynamics. Numerous examples will be presented to demonstrate continuous topological evolution of complex exterior differential form densities in terms of Cartan’s homotopic magic formula.
Whereas according to today’s physics the energy-momentum equation holds only for fast-moving particles, this paper proves it to be true for all velocities 0 < v < c. More importantly, the algorithm underlying this equation is mimicked in all particle-particle interactions (gravitational, nuclear, weak, electromagnetic), thus unifying the ‘four forces’. It reveals the existence of Nature’s fifth force, the centrifugal force, beyond doubt. The algorithm is based on the relationship pc = mc^{2} tanθ between the two left-side terms of the equation, prompting the right-side term to be mc^{2} secθ. The algorithm reveals that the problematic second-order term v^{2}/c^{2}, which has hitherto been neglected for the validity of classical mechanics for optical and electromagnetic phenomena, has the relational expression sin^{2}θ, which represents a quantum formed in the interaction in empirical reality. In every interaction such quanta are formed by contributions of fractions of the particle’s intrinsic energy mc^{2} and motive energy pc, together with an influx of kinetic energy and kinetic momentum from the field. On this basis quantum phenomena and relativistic phenomena find a unified origin, signalling a new foundation for physics. Even though a photon moves by its own intrinsic energy, and undergoes changes of state without the direct application of external constraints, photon interactions occur by simulation of the fermion algorithm, and quanta are formed, emitted or absorbed by energy effluxes and influxes to and from the field. The veracity of the fermion and photon algorithms is demonstrated in this paper by applying them in combination to the collision of a photon and an electron in Compton’s experiment.
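The trigonometric parametrization above can be checked numerically. This is my sketch of one reading of the abstract (pc = mc² tanθ with total energy mc² secθ); it satisfies the standard energy-momentum relation E² = (pc)² + (mc²)² identically, and gives v/c = pc/E = sinθ, so the "second-order term" v²/c² is indeed sin²θ:

```python
# Hedged check of the tan/sec parametrization of the energy-momentum relation.
import math

def check(theta, mc2=0.511):          # rest energy in MeV (electron, for instance)
    pc = mc2 * math.tan(theta)        # momentum term
    E = mc2 / math.cos(theta)         # mc^2 * sec(theta)
    lhs = E**2                        # E^2
    rhs = pc**2 + mc2**2              # (pc)^2 + (mc^2)^2
    v_over_c = pc / E                 # relativistic v/c = pc/E
    return lhs, rhs, v_over_c, math.sin(theta)

for th in (0.1, 0.7, 1.2):
    lhs, rhs, voc, s = check(th)
    print(f"theta={th}: E^2={lhs:.6f}, (pc)^2+(mc^2)^2={rhs:.6f}, v/c={voc:.6f}, sin={s:.6f}")
```

The identity sec²θ = tan²θ + 1 makes the two sides agree for every θ, and v/c tracks sinθ exactly.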
"Fundamental constants" are thought to be discoveries about Nature that are fixed and eternal, and not dependent on theory. Actually, constants have no definition outside the theory that uses them. For a century, units and constants have been based on the physics of the previous millennium. The constants of physics changed radically with quantum mechanics and modern theory, but their use and interpretation was unfortunately locked in early. By critically re-examining the actual structure of the present system in a new light, we find that obsolete concepts of Newtonian physics impede the understanding and use of quantum theory. Confronting the difference shows that Planck's constant cannot be observed in quantum theory, and is entirely a construct of human history and convention. A cascade of seeming paradoxes and contradictions occurs when Planck's constant is eliminated, yet the end result is a simpler and cleaner vision of what quantum mechanics and quantum field theory really involve. By eliminating redundant holdovers, the number and nature of fundamental constants is revised. By avoiding the Newtonian conception of mass and associated experimental errors, the electron mass is determined with a relative error 67 times smaller than before. The fundamental unit of electric charge is determined more than 100 times more accurately than the current determination of international committees.
In earlier contributions to this conference series, a photon model has been presented in which a cloud with a total charge of one Planck charge (q_Planck = e/√α) oscillates. That model could explain why electromagnetic radiation is transverse, why E = hν, why the spin of the photon is 1, and why the photon is self-propelling with speed c. Now it is shown that a similar model explains mass and gravitation, again based on the Planck charge. The fine structure constant α, contained in the Planck charge, quantitatively governs all particle masses and predicts the masses of many particles exactly. Most masses of the heavy quarks and the Higgs boson are predicted with accuracies better than 2%. One of the predicted masses, at 70 MeV/c^{2}, cannot be ascribed to a known particle. However, 1.5 times this mass is the muon, twice it is the pion, and seven times it is a kaon. All other leptons and hadrons are integer multiples of this mass m0, mostly with an accuracy better than 2%.
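The multiple-of-70-MeV claim for the three named particles is easy to test against measured (PDG) masses; this sketch assumes m0 = 70 MeV/c² exactly, as stated in the abstract:

```python
# Hedged check of the abstract's integer/half-integer multiple claim
# against Particle Data Group mass values (MeV/c^2).
m0 = 70.0  # base mass claimed in the abstract
pdg = {
    "muon (1.5 x m0)": (105.658, 1.5),
    "pion+- (2 x m0)": (139.570, 2.0),
    "kaon+- (7 x m0)": (493.677, 7.0),
}
deviation = {}
for name, (m_meas, k) in pdg.items():
    deviation[name] = abs(k * m0 - m_meas) / m_meas
    print(f"{name}: predicted {k * m0:.1f} MeV vs measured {m_meas:.3f} MeV "
          f"({deviation[name]:.2%} off)")
```

All three deviations come out under 1%, within the "better than 2%" accuracy quoted above; whether that is significant is of course the paper's argument, not the arithmetic's.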
The uncertainty principle is an important element of quantum mechanics. It deals with certain pairs of physical parameters which cannot be determined to an arbitrary level of precision at the same time. According to the so-called Copenhagen interpretation of quantum mechanics, this uncertainty is an intrinsic property of the physical world. This paper intends to show that there are good reasons for adopting a different view. According to the author, the uncertainty is not a property of the physical world but rather a limitation of our knowledge about the actual state of a physical process. This view conforms to the quantum theory of Louis de Broglie and to Albert Einstein’s interpretation.
Proc. SPIE 8832, On the possibility of laser-assisted production and detection of low-energy neutrino beams, 88321A (1 October 2013); doi: 10.1117/12.2021117
The possible production and detection of collimated beams of low-energy sub-keV neutrinos with the assistance of lasers is examined. For a neodymium laser, the relative probability that recoiling co- or counter-propagating neutrino-antineutrino pairs are created in stimulated emission events instead of photons is calculated to be about 10^{-7}. The effect generates a coherent monochromatic neutrino beam that overlaps the internal standing-wave photon beam, exiting at both ends of the laser. To detect such (anti)neutrino beams without the help of absorption, one can exploit (anti)neutrino-stimulated deexcitations of lasing levels in a second laser aligned with the first, whose lasable transitions are resonant with the undulation frequency of the (anti)neutrinos.
Proc. SPIE 8832, Inadequacies in De Broglie's Theory: rectifications, verifications, and applications, 88321B (1 October 2013); doi: 10.1117/12.2023843
The proposed paper calls attention to the unobserved mathematical and conceptual inadequacies persisting in the wave-particle duality and matter-wave concepts given by Louis de Broglie. The matter wave’s frequency and phase velocity expressions, shown to be inappropriate, are consequences of these inadequate concepts. Rectifications of these concepts are presented through a corrected implementation of the analogy between light waves and matter waves, and thus modified frequency and phase velocity expressions are introduced. The proposed expressions are free from all the inadequacies and negations confronted by de Broglie’s proposed expressions. Mathematical proofs for the proposed modified frequency and phase velocity expressions are also presented. A novel general quantum mechanical wave equation is proposed involving the modified phase velocity expression, from which the Schrödinger and Dirac equations can be precisely derived.
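For context, the standard textbook de Broglie expressions that this paper disputes (not the authors' modified ones) can be sketched numerically: with ν = E/h and E = γmc², the phase velocity ν·λ = E/p = c²/v exceeds c, while the group velocity equals the particle velocity v, so v_phase · v = c²:

```python
# Hedged sketch of the conventional relativistic de Broglie relations.
import math

def de_broglie_phase_velocity(v, m=9.10938e-31, c=2.99792458e8, h=6.62607015e-34):
    """Phase velocity of the textbook de Broglie wave for a particle of speed v."""
    gamma = 1 / math.sqrt(1 - (v / c) ** 2)
    E = gamma * m * c**2      # total relativistic energy
    p = gamma * m * v         # relativistic momentum
    nu = E / h                # de Broglie frequency
    lam = h / p               # de Broglie wavelength
    return nu * lam           # = E/p = c^2/v, superluminal

v = 1.0e7  # m/s
print(de_broglie_phase_velocity(v) * v)  # ~c^2, the familiar v_phase * v_group = c^2
```

It is exactly this superluminal phase velocity expression that the abstract characterizes as inadequate and replaces.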
While particle physicists around the world rejoice at the announcement of the discovery of the Higgs particle as a momentous event, it is also an opportune moment to assess the physicists' conception of nature. Particle theorists, in their ingenious efforts to unravel mysteries of the physical universe at a very fundamental level, resort to the macroscopic many-body theoretical methods of solid-state physicists. Their efforts render the universe a superconductor of correlated quasi-particle pairs. Experimentalists, devoted to ascertaining the elementary constituents and symmetries, depend heavily on numerical simulations based on those models and conform to theoretical slang in the planning and interpretation of measurements, to the extent that the boundaries between theory/modeling and experiment are blurred. Is it possible that they are meandering in Dante's Inferno?
When we create mathematical models for Quantum Mechanics we assume that the mathematical apparatus used in modeling, at least the simplest mathematical apparatus, is infallible. In particular, this relates to the use of ”infinitely small” and ”infinitely large” quantities in arithmetic and the use of the Newton–Cauchy definitions of limit and derivative in analysis. We believe that this is where the main problem lies in the contemporary study of nature. We have introduced a new concept of Observer’s Mathematics (see www.mathrelativity.com). Observer’s Mathematics creates new arithmetic, algebra, geometry, topology, analysis and logic which do not contain the concept of the continuum, but locally coincide with the standard fields. We prove that Euclidean geometry works in a sufficiently small neighborhood of a given line, but when we enlarge the neighborhood, non-Euclidean geometry takes over. We prove that physical speed is a random variable, cannot exceed some constant, and that this constant does not depend on the inertial coordinate system. We prove the following theorems: Theorem A (Lagrangian). Let L be the Lagrange function of a free material point with mass m and speed v. Then the probability P that L = (m/2)v^{2} is less than 1: P(L = (m/2)v^{2}) < 1. Theorem B (Nadezhda effect). On the plane (x, y), on every line y = kx there is a point (x_{0}, y_{0}) with no existing Euclidean distance between the origin (0, 0) and this point. Conjecture (Black Hole). Our space-time nature is a black hole: light cannot go out infinitely far from the origin.
Proc. SPIE 8832, Can one distinguish between Doppler shifts due to source-only and detector-only velocities?, 88321E (1 October 2013); doi: 10.1117/12.2018342
This paper revisits the optical Doppler shift as the classical Doppler shift, based upon spectroscopic line broadening of spontaneous emission (moving source) and quantum mechanical conditions for stimulated absorption and emission (moving detector). We find that excited emitting source-atoms and stimulated detecting-atoms clearly discern their individual absolute velocities with respect to the cosmic vacuum (Complex Tension Field, or CTF). In other words, the optical Doppler shift does not depend solely upon the relative velocity between a pair of source and detector, as the prevailing assumption holds. The implication is that Doppler shifts of light coming from distant galaxies are determined by the local velocities of the emitting and detecting atoms with respect to the CTF, and the emission frequencies remain completely independent of the velocities of the various detectors in various other galaxies, because they obey the quantum mechanical transition rule ΔE_{mn} = hν_{mn}. The released energy hν_{mn} evolves into a wave packet of frequency ν_{mn} only when the velocity of the source atom is zero w.r.t. the CTF. Atom velocity in the CTF introduces a real physical frequency shift from ν_{mn} into ν_{med}. A detector would then perceive this ν_{med} as ν_{det} due to its own velocity w.r.t. the CTF. The key assertion of this paper is that the classical Doppler shift for material-based waves and the optical Doppler shift for CTF-based EM waves follow the same two distinct physical processes during emission and absorption, and hence the representative mathematical formulation should be the same as the classical Doppler shift formula. Light emitted by an atom in a star in a galaxy at a distance of 10 billion light-years from the Sun could not have coordinated its Doppler shift “knowing” its relative velocity with an earth-based detector, because the earth did not exist: the Sun was born barely 4 billion years ago.
Calculation of the optical Doppler shift based upon the current relative velocity between two galaxies is a non-causal model and hence can lead to erroneous physical conclusions like the Expanding Universe, which may not be true. It is more likely that the distance-dependent Hubble redshift is due to a distance-dependent frequency (energy) loss of photon wave packets engendered by a very weak dissipative property of the CTF, like the postulate of Tired Light, or something else. We support our model by analyzing the origin of multi-longitudinal modes in He-Ne lasers. Light emitting and absorbing atoms in distant galaxies follow the same set of QM rules as those in our laboratory. We can safely assume that the physical properties of the free space between distant galaxies and of that between the atoms trapped in a low-pressure He-Ne laser tube are one and the same. We then analyze the spontaneous and stimulated emission characteristics of Ne atoms in a population-inverted laser tube. The spectral line broadening measured in emission and absorption spectrometry is Doppler broadening introduced by the statistical Maxwellian velocity distribution of the atoms, which is determined by the mean temperature of the atoms' surroundings. Again, our assumption is that this Maxwellian Doppler broadening process is the same in the earth-based discharge tube and in the coronae of distant stars. Both classical physics (Doppler and Maxwell) and quantum physics (emission and absorption) are the same here as in the distant galaxies. And these two branches of physics are complementary, not discordant with each other.
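The classical, medium-referred Doppler formula this abstract appeals to can be sketched as follows. This is the standard acoustic-style formula with velocities measured relative to the propagation medium (standing in here for the CTF), not the authors' own derivation; its relevant feature is that, unlike the relativistic relative-velocity formula, it distinguishes source-only from detector-only motion at second order in v/c:

```python
# Hedged sketch: classical Doppler shift with medium-referred velocities.
def classical_doppler(f_emit, v_source, v_detector, c=3.0e8):
    """Frequency seen by the detector; v > 0 means receding motion,
    both velocities measured with respect to the medium."""
    f_medium = f_emit * c / (c + v_source)     # receding source stretches the waves
    return f_medium * (c - v_detector) / c     # receding detector sweeps up fewer fronts

# Same relative velocity (30 km/s), split differently between the parties:
f_src_only = classical_doppler(5e14, 3e4, 0)   # all motion in the source
f_det_only = classical_doppler(5e14, 0, 3e4)   # all motion in the detector
print(f_src_only, f_det_only)                  # differ at order (v/c)^2
```

Both are redshifted, but not identically: the fractional difference is about (v/c)² ≈ 10⁻⁸, which is exactly the kind of source-only vs detector-only asymmetry the paper's title asks about.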
Accepting nonlocal quantum correlations requires us to reject special relativity and/or probability theory. We can retain both by revising our interpretation of quantum mechanics regarding the handling of separated systems, as quantum mechanics conflicts with local realism only in its treatment of separated systems. We cannot use the joint probability formula for cases of separated measurements. We use the marginals (partial traces) together with whatever priors we have from an understanding of the system. This program can reconcile quantum mechanics with local realism. An apparent obstacle to this program is the experimental evidence, but we argue that the experiments have been misinterpreted, and that when correctly interpreted they confirm local realism. We describe a local realistic account of one important Einstein-Podolsky-Rosen-Bohm (EPRB) experiment (Weihs et al.6) that claims to demonstrate nonlocal entanglement. We present a local realistic system (experiment) that can be calibrated into both quantum and classical correlation domains via adjustment of parameters (‘hidden variables’) of the apparatus. Weihs incorrectly dismisses these parameters as uncritical. Nonlocal entanglement is seen to be an error. The rest of quantum mechanics remains intact, and remains highly valued as a powerful probability calculus for observables. Freed from the incoherent idea of nonlocal entanglement, we can leverage powerful classical ideas, such as semiclassical radiation theory, stochastic dynamics, classical noncommutativity/contextuality, measurement effects on state, etc., to augment or complement quantum mechanics. When properly interpreted and applied, quantum mechanics lives in peaceful harmony with the local realist conception, and both perspectives offer useful paradigms for describing systems.
Particle- and wave-like properties of photons can impressively be demonstrated in Young’s double-slit experiment. Usually, measurements behind the slit provide information either about the path of the single photons, or interference can be observed. Today the question of “which slit” versus “interference” in the double-slit configuration is as relevant as it was in the early days of quantum mechanics. To gain deeper insight we set up an experiment using a pair of photons generated by SPDC pumped with a higher-order mode (TEM_{01}). One of the SPDC photons, the signal photon, was used to illuminate the double slit and measure the single-photon interference behind it. The other photon, the idler photon, was used in a reference measurement at the position of the slit using a polarizing beam splitter. First, the signal photons were obtained at the position of the slit as a function of the position of the entangled idler photons in a coincidence measurement. From this coincidence measurement the “which-slit” information is available. In a second coincidence measurement the far-field interference fringes were obtained for signal photons passing through only one of the slits, selected by the position of the reference detector measuring the entangled idler photons. The latest results will be presented and discussed. This may provide new insights into the wave-particle dualism and thus inspire the discussion about the nature of photons.
Proc. SPIE 8832, Entangled photons and antibunching phenomena revisited on the basis of various models for light, 88321I (1 October 2013); doi: 10.1117/12.2023952
At what level of energy does a classical experiment become quantum mechanical? Where does classical electromagnetic theory apply and where do we have to turn to quantum optics? Is there a sharp line between these two models? And if so: why, and why at this level? Or if not: what are the common features of the formalisms, and what are the differences? With our upcoming experimental work, we hope to be able to gain some insight into these challenging questions. The starting point is an investigation of the results from an ordinary Bell-type experiment showing entanglement between two photons, where the photons have been generated using spontaneous parametric down-conversion, and the same experiment but with a classical source where two highly attenuated pulses are combined. Here we will present our planned experiment in detail.
Proc. SPIE 8832, Can violations of Bell's inequalities be considered as a final proof of quantum physics?, 88321J (1 October 2013); doi: 10.1117/12.2023977
Nowadays, it is commonly admitted that the experimental violations of Bell’s inequalities successfully demonstrated over the last decades by many experimenters are indeed the ultimate proof of quantum physics and of its ability to describe the whole microscopic world and beyond. But the historical and scientific story may not be envisioned so clearly: it starts with the original paper of Einstein, Podolsky and Rosen (EPR) aiming at demonstrating that the formalism of quantum theory is incomplete. It then goes through the works of D. Bohm, to finally proceed to the famous relationships of John Bell providing an experimental setup to solve the EPR paradox. In this communication an alternative reading of this history is proposed, showing that modern experiments based on correlations between light polarizations deviate significantly from the original spirit of the EPR paper. It is concluded that current experimental violations of Bell’s inequalities cannot be considered as an ultimate proof of the completeness of quantum physics models.
Proc. SPIE 8832, Observation of bosonic coalescence and fermionic anti-coalescence with indistinguishable photons, 88321K (1 October 2013); doi: 10.1117/12.2024090
The symmetrization postulate asserts that the states of a particular species of particles can only be of one permutation symmetry type: symmetric for bosons and antisymmetric for fermions. We report experimental results showing that pairs of photons indistinguishable in all degrees of freedom can exhibit not only a bosonic behavior, as expected for photons, but also a surprisingly sharp fermionic behavior under specific conditions.
Proc. SPIE 8832, Nonclassical effects in two-photon interference experiments: an event-by-event simulation, 88321L (1 October 2013); doi: 10.1117/12.2021862
It is shown that both the visibility V = 1/2 predicted for two-photon interference experiments with two independent sources and the visibility V = 1 predicted for two-photon interference experiments with a parametric down-conversion source can be explained in terms of a locally causal, adaptive, corpuscular, classical (non-Hamiltonian) dynamical system. Hence, there is no need to invoke quantum theory to explain the so-called nonclassical effects in the interference of signal and idler photons in parametric down-conversion, and a revision of the commonly accepted criterion of the nonclassical nature of light is called for.
Proc. SPIE 8832, Event-by-event simulation of experiments to create entanglement and violate Bell inequalities, 88321M (1 October 2013); doi: 10.1117/12.2021863
We discuss a discrete-event, particle-based simulation approach which reproduces the statistical distributions of Maxwell’s theory and quantum theory by generating detection events one by one. This event-based approach gives a unified cause-and-effect description of quantum optics experiments such as the single-photon Mach-Zehnder interferometer, Wheeler’s delayed choice, the quantum eraser, the double slit, Einstein-Podolsky-Rosen-Bohm and Hanbury Brown-Twiss experiments, and various neutron interferometry experiments. We illustrate the approach by application to single-photon Einstein-Podolsky-Rosen-Bohm experiments and single-neutron interferometry experiments that violate a Bell inequality.
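To illustrate what "generating detection events one by one" means operationally, here is a deliberately simplified stand-in: single particles sent through a Mach-Zehnder interferometer, with each detection decided per event, accumulating the familiar interference fringe. Note this toy uses a per-event stochastic rule; the authors' actual event-based machines are deterministic and adaptive, which is the whole point of their approach, so this sketch shows only the event-by-event bookkeeping, not their algorithm:

```python
# Toy event-by-event Mach-Zehnder: one particle at a time, counts build the fringe.
import math, random

def run_mzi(phase, n_events, seed=42):
    """Send n_events single particles through a Mach-Zehnder interferometer
    with the given phase difference; return the number of clicks at detector D0."""
    rng = random.Random(seed)
    p_d0 = math.cos(phase / 2) ** 2          # per-event detection rule (stand-in)
    return sum(1 for _ in range(n_events) if rng.random() < p_d0)

n = 100_000
for phi in (0.0, math.pi / 2, math.pi):
    print(f"phase {phi:.3f}: D0 fraction {run_mzi(phi, n) / n:.3f}")
```

Event by event, no single particle "knows" the fringe, yet the accumulated counts reproduce the cos²(φ/2) interference pattern.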
Data sets produced by three different Einstein-Podolsky-Rosen-Bohm (EPRB) experiments are tested against the hypothesis that the statistics of this data is described by quantum theory. Although these experiments generate data that violate Bell inequalities for suitable choices of the time-coincidence window, the analysis shows that it is highly unlikely that these data sets are compatible with the quantum theoretical description of the EPRB experiment, suggesting that the popular statements that EPRB experiments agree with quantum theory lack a solid scientific basis and that more precise experiments are called for.
The wave-particle duality is a fundamental property of nature and, at the same time, one of the greatest mysteries of modern physics. It gave rise to a whole direction in quantum physics: the interpretation of quantum mechanics. The Wiener experiments demonstrating the wave-particle duality of light are discussed. It is shown that almost all interpretations of quantum mechanics allow explaining the double-slit experiments, but are powerless to explain the Wiener experiments. The reason for the paradox associated with the wave-particle duality is analyzed. The quantum theory consists of two independent parts: (i) the dynamic equations describing the behavior of a quantum object (for example, the Schrödinger or Maxwell equations), and (ii) the Born rule, the relation between the wave function and the probability of finding the particle at a given point. It is shown that it is precisely the Born rule that results in the paradox in explaining the wave-particle duality. In order to eliminate this paradox, we propose a new rational interpretation of the wave-particle duality and an associated new rule connecting the corpuscular and wave properties of quantum objects. It is shown that this new rational interpretation of the wave-particle duality allows using the classic images of particle and wave in explaining quantum mechanical and optical phenomena, does not result in paradoxes in explaining the double-slit and Wiener experiments, and does not contradict modern quantum mechanical concepts. It is shown that the Born rule follows immediately from the proposed new rules as an approximation.