Pixellated optics, a class of optical devices which preserve phase-front continuity only over small sub-areas of the device, allow for a range of uses that would not otherwise be possible. One potential use is as Low Vision Aids (LVAs), where it is hoped they will combine the function and performance of existing devices with the size and comfort of conventional eyewear. For these devices a Generalised Confocal Lenslet Array (GCLA) is designed to magnify object space, creating the effect of a traditional refracting telescope within a thin, planar device. By creating a device that is appreciably thinner than existing LVA telescopes, it is hoped that comfort for the wearer will be increased. We have developed a series of prototype GCLA-based devices to examine their real-world performance, focussing on the resolution, magnification and clarity of image attainable through the devices. It is hoped that these will form the basis for future LVA devices. This development has required novel manufacturing techniques and a phased development approach centred on maximising performance. Presented here will be an overview of the development so far, alongside the performance of the latest devices.
We recently showed how to construct omni-directional ray-optical transformation-optics devices out of ideal thin lenses. These devices can be seen as theoretical generalisations of the paraxial, four-lens, “Rochester cloak”. Here we investigate the practical realisability of such devices. We use ray-tracing simulations to compare combinations of skew lenses of different types, including ideal lenses and phase holograms of lenses.
Many of the properties of thick lenses can be understood by considering them as a combination of parallel ideal thin lenses that share a common optical axis. A similar analysis can also be applied to many other optical systems. Consequently, combinations of ideal lenses that share a common optical axis, or at least optical-axis direction, are very well understood. Such combinations can be described as a single lens with principal planes that do not coincide. However, in recent proposals for lens-based transformation-optics devices the lenses do not share an optical-axis direction. To understand such lens-based transformation-optics devices, combinations of lenses with skew optical axes must be understood. In complete analogy to the description of combinations of pairs of ideal lenses that share an optical axis, we describe here pairs of ideal lenses with skew optical axes as a single ideal lens with sheared object and image spaces. The transverse planes are no longer perpendicular to the optical axis. We construct the optical axis, the direction of the transverse planes on both sides, and all cardinal points. We believe that this construction has the potential to become a powerful tool for understanding and designing novel optical devices.
We recently showed how structures of ideal (thin) lenses can act as (ray-optical) transformation-optics devices. This was done by breaking the structure down into all sets of ideal lenses in the structure that share a common edge, and showing that these sets have very specific imaging properties. In order to start the development of a general understanding of the imaging properties of sets of ideal lenses that share a common edge, we investigate here particularly simple and symmetric examples of combinations of ideal lenses that share a common edge. We call these combinations ideal-lens stars. An ideal-lens star is formed by N identical ideal lenses, each placed such that they share a principal point (which lies on the common edge) and such that the angles between all neighbouring lenses are the same. We find that passage through every single ideal lens in the ideal-lens star images any point to itself. Furthermore, light-ray trajectories in ideal-lens stars are piecewise linear approximations to conic sections. (In the limit of N approaching infinity, they are conic sections.)
Pixellated optical components, for example generalised confocal lenslet arrays (GCLAs), enable the design of optical devices which cannot be realised without introducing pixellation or a similar compromise. A key concern is the degradation of imaging quality due to the combined effects of diffraction, worst for smaller pixels, and the visibility of the pixels. Here we examine the effects of these two factors on image quality through use of our custom raytracer, Dr TIM. We also outline future work in developing these ideas more rigorously and applying the conclusions to more complicated devices.
In a photo taken with a camera moving at relativistic speed, the world appears distorted. That much has long been clear, but the details of the distortion were slow to emerge correctly. We recently added relativistic raytracing capability to our custom raytracer, Dr TIM, resulting in unique combinations of capabilities. Here we discuss a few observations. In particular, photos can be sharp only if the shutter is placed correctly. A hypothetical window that changes light-ray direction like a change of inertial frame, when combined with suitable shutter placement, can correct for all relativistic-aberration effects.
We study, theoretically, omni-directional Euclidean transformation-optics (TO) devices comprising planar, light-ray-direction-changing, imaging interfaces. We initially studied such devices in the case when the interfaces are homogeneous, showing that very general transformations between physical and electromagnetic space are possible. We are now studying the case of inhomogeneous interfaces. This case is more complex to analyse, but the inhomogeneous interfaces include ideal thin lenses, which gives rise to the hope that it might be possible to construct practical omni-directional TO devices from lenses alone. Here we report on our progress in this direction.
Unstable canonical resonators can possess eigenmodes with a fractal intensity structure [Karman et al., Nature 402, 138 (1999)]. In one particular transverse plane, the intensity is not merely statistically fractal, but self-similar [Courtial and Padgett, Phys. Rev. Lett. 85, 5320 (2000)]. This can be explained using a combination of diffraction and imaging with magnification greater than one.
Here we show that the same mechanism also shapes the intensity cross-section in the longitudinal direction into a self-similar fractal, but with a different magnification. This results in three-dimensional, self-similar, fractal intensity structure in the eigenmodes.
We study the imaging properties of windows that rotate the direction of transmitted light rays by a fixed angle around the window normal [A. C. Hamilton et al., J. Opt. A: Pure Appl. Opt. 11, 085705 (2009)]. We previously found that such windows image between object and image positions with suitably defined complex longitudinal coordinates [J. Courtial et al., Opt. Lett. 37, 701 (2012)]. Here we extend this work to object and image positions in which any coordinate can be complex. This is possible by generalising our definition of what it means for a light ray to pass through a complex position: the vector from the real part of the position to the point on the ray that is closest to that real part of the position must equal the cross product of the imaginary part of the image position and the normalised light-ray-direction vector. In the paraxial limit, we derive the equivalent of the lens equation for planar and spherical ray-rotating windows. These results allow us to describe complex imaging in more general situations, involving combinations of lenses and inclined ray-rotating windows. We illustrate our results with ray-tracing simulations.
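The criterion quoted above can be expressed numerically. The sketch below (the function name and implementation are our own illustration, not code from the paper or from Dr TIM) tests whether a ray passes through a complex position R + iI by checking that the vector from R to the ray point closest to R equals I crossed with the normalised ray direction:

```python
import numpy as np

def passes_through_complex(p0, d, R, I, tol=1e-9):
    """Test whether the ray p(t) = p0 + t*d passes through the complex
    position R + i*I: the vector from R to the ray point closest to R
    must equal the cross product of I and the normalised direction."""
    p0, R, I = (np.asarray(v, float) for v in (p0, R, I))
    d_hat = np.asarray(d, float) / np.linalg.norm(d)
    # foot of the perpendicular dropped from R onto the ray
    closest = p0 + np.dot(R - p0, d_hat) * d_hat
    return bool(np.allclose(closest - R, np.cross(I, d_hat), atol=tol))
```

For a purely real position (I = 0) the criterion reduces to the ray actually intersecting R, as expected.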
Two microlens arrays that are separated by the sum of their focal lengths form arrays of micro-telescopes. Parallel light rays that pass through corresponding lenses remain parallel, but the direction of the transmitted light rays is different. This remains true if corresponding lenses do not share an optical axis (i.e. if the two microlens arrays are shifted with respect to each other). The arrays described above are examples of generalized confocal lenslet arrays, and the light-ray-direction change in these devices is well understood [Oxburgh et al., Opt. Commun. 313, 119 (2014)]. Here we show that such micro-telescope arrays change light-ray direction like the interface between spaces with different metrics. To physicists, the concept of metrics is perhaps most familiar from General Relativity (where it is applied to spacetime, not only space, like it is here) and Transformation Optics [Pendry et al., Science 312, 1780 (2006)], where different materials are treated like spaces with different optical metrics. We illustrate the similarities between micro-telescope arrays and metric interfaces with raytracing simulations. Our results suggest the possibility of realising transformation-optics devices with micro-telescope arrays, which we investigate elsewhere.
We define Lorentz-transformation windows as windows that change the direction of transmitted light rays like a Lorentz transformation. Similarly, Galileo-transformation windows change the direction of transmitted light rays like a Galileo transformation. This light-ray-direction change distorts the scene seen through such a window in the same way in which the scene would be distorted in a photo taken with a camera moving through the scene. Lorentz-transformation windows can also undo the distortion of the scene when moving at relativistic velocity relative to it. For small angles between the direction of the light rays and the direction of the velocity, Galileo-transformation windows can be realised with relatively simple telescope windows, which consist of arrays of identical micro-telescopes.
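The light-ray-direction changes such windows would perform can be sketched numerically. The functions below are our own illustration of the standard relativistic aberration formula and its Galilean approximation (velocity along x, in units of c), not the design of an actual window:

```python
import numpy as np

def lorentz_aberrate(d, beta):
    """Direction of a light ray as seen from a frame moving with speed
    beta*c along +x: standard relativistic aberration of unit vector d."""
    d = np.asarray(d, float) / np.linalg.norm(d)
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    denom = 1.0 - beta * d[0]
    dx = (d[0] - beta) / denom          # longitudinal component
    dperp = d[1:] / (gamma * denom)     # transverse components
    return np.array([dx, *dperp])

def galileo_aberrate(d, beta):
    """Galilean approximation: subtract the frame velocity from the
    ray direction and renormalise."""
    d = np.asarray(d, float) / np.linalg.norm(d)
    v = d - np.array([beta, 0.0, 0.0])
    return v / np.linalg.norm(v)
```

For small angles between the ray and the velocity the two mappings agree to first order in the transverse components, which is why simple telescope windows can realise the Galilean version in that regime.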
Ray-optically, optical components change a light-ray field on a surface immediately in front of the component into a different light-ray field on a surface behind the component. In the ray-optics limit of wave optics, the incident and outgoing light-ray directions are given by the gradient of the phase of the incident and outgoing light field, respectively. But as the curl of any gradient is zero, the curl of the light-ray field also has to be zero. The above statement about zero curl is true in the absence of discontinuities in the wave field. But exactly such discontinuities are easily introduced into light, for example by passing it through a glass plate with discontinuous thickness. This is our justification for giving up on the global continuity of the wave front, thereby compromising the quality of the field (which now suffers from diffraction effects due to the discontinuities) but also allowing light-ray fields that appear to possess (but do not actually possess) non-zero curl, thereby significantly extending the possibilities of optical design. Here we discuss how the value of the curl can be seen in a light-ray field. As curl is related to spatial derivatives, the curl of a light-ray field can be determined from the way in which light-ray direction changes when the observer moves. We demonstrate experimental results obtained with light-ray fields with zero and apparently non-zero curl.
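The spatial-derivative picture can be illustrated with a small numerical sketch (the example fields and grid are our own choices): a transverse ray-direction field that is the gradient of a phase has vanishing curl, while a field rotated about the axis, like that behind a ray-rotating window, shows an apparent non-zero curl:

```python
import numpy as np

# transverse grid
x = np.linspace(-1, 1, 201)
y = np.linspace(-1, 1, 201)
X, Y = np.meshgrid(x, y, indexing='ij')

# (a) gradient field: transverse ray directions from a lens-like phase
#     phi = -(X**2 + Y**2)/2, so (dx, dy) = grad(phi); curl must vanish
dx_a, dy_a = -X, -Y
# (b) "rotational" field, of the kind a 90-degree ray-rotating window
#     would appear to produce; its apparent curl is non-zero
dx_b, dy_b = -Y, X

def curl_z(fx, fy, spacing):
    """z-component of the curl, estimated by finite differences."""
    dfy_dx = np.gradient(fy, spacing, axis=0)
    dfx_dy = np.gradient(fx, spacing, axis=1)
    return dfy_dx - dfx_dy

h = x[1] - x[0]
print(np.max(np.abs(curl_z(dx_a, dy_a, h))))  # prints 0.0
print(np.mean(curl_z(dx_b, dy_b, h)))         # prints 2.0
```

No single-valued phase has the rotational field (b) as its gradient, which is exactly why realising it requires abandoning global wave-front continuity.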
Identity certification in the cyberworld has always been troublesome when critical information and financial transactions must be processed. Biometric identification is the most effective measure to circumvent identity issues on mobile devices, but because of their bulky and pricey optical designs, conventional optical fingerprint readers have been ruled out for mobile applications. In this paper, a digital variable-focus liquid lens was adopted to capture a floating finger via fast focus-plane scanning: simply placing a finger in front of the camera completes the fingerprint-ID process. This prototype fingerprint reader scans multiple focal planes from 30 mm to 15 mm in 0.2 seconds. From the images captured at the various focal planes, one image is chosen for extraction of the fingerprint minutiae used for identity certification. In the optical design, a digital liquid lens atop a webcam with a fixed-focus lens module fast-scans the floating finger at preset focal planes. The distance, roll angle and pitch angle of the finger are stored as crucial parameters for the fingerprint-minutiae matching process. This innovative compact touchless fingerprint reader could be packed into a minute 9.8 × 9.8 × 5 mm volume once the optical design and multiple focus-plane scan function are optimised.
Previously we have demonstrated that the orbital angular momentum (OAM) of a light beam may be measured by an image transformation that maps the azimuthal to the linear transverse co-ordinate [Berkhout et al., Phys. Rev. Lett. 105, 153601 (2010)]. For each input OAM state the transmitted light is focused to a different transverse position, enabling simultaneous measurement over many states. We present a significant improvement to our earlier design, extending the measurement bandwidth to greater than 50 OAM states and showing simultaneous measurement of the radial co-ordinate. We further demonstrate the transformation working in reverse, potentially allowing for the rapid switching of OAM modes.
The desire to increase the amount of information that can be encoded onto a single photon has driven research into many areas of optics. One such area is optical orbital angular momentum (OAM). Beams carrying OAM have helical phasefronts and carry an orbital angular momentum of mℏ per photon, where the integer m is unbounded, giving a large state space in which to encode information.
We recently developed a telescope system comprising two bespoke refractive optical elements to transform OAM states into transverse momentum states. This is achieved by mapping the azimuthal position in the input plane to the lateral position in the output plane. A mapping of this type transforms a set of concentric rings at the input plane into a set of parallel lines in the output plane. A lens can then separate the resulting transverse momentum states into specified lateral positions, allowing for the efficient measurement of multiple OAM states simultaneously.
Separating OAM states in this way presents an opportunity for this larger alphabet to improve the data capacity of a free space link and has potential application in both the classical and quantum regimes.
We will present our latest design, increasing the bandwidth of measurable states to over 50 OAM modes. In such a system we study the crosstalk introduced by thin-phase turbulence, showing that turbulence degrades the purity of all the modes within this range to a similar degree.
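The azimuthal-to-lateral mapping described above is conventionally written as a log-polar transformation. The sketch below is our own illustration of that geometry (the scaling parameters a and b follow the usual convention and are not values from the actual optical elements); it shows how a ring in the input plane becomes a line in the output plane:

```python
import numpy as np

def log_polar_map(x, y, a=1.0, b=1.0):
    """Map an input-plane position (x, y) to an output-plane position
    (u, v) such that azimuthal position becomes lateral position:
    u = -a*ln(r/b), v = a*theta."""
    r = np.hypot(x, y)
    u = -a * np.log(r / b)
    v = a * np.arctan2(y, x)
    return u, v

# points on a ring of radius 2 all map to the same u, with v spread
# out laterally: the ring has become a line segment
for t in np.linspace(-3.0, 3.0, 7):
    u, v = log_polar_map(2 * np.cos(t), 2 * np.sin(t))
    print(round(u, 4), round(v, 4))
```

A helical phase exp(imθ) on the input ring becomes a linear phase gradient along v, so a subsequent lens focuses each m to a different lateral spot, which is what allows many states to be read out at once.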
We report a new simple optical system for the highly efficient measurement of the orbital angular momentum (OAM) states of light. It uses an image reformatter to map each input state onto a different lateral position in the output aperture. This near-perfect separation of states potentially makes available the high information capacity of OAM in both classical and quantum regimes.
We have developed an interactive user-interface that can be used to generate phase holograms for use with spatial light modulators. The program utilises different hologram design techniques allowing the user to select an appropriate algorithm. The program can be used to generate multiple beams, interference patterns and can be used for beam steering. We therefore see a major application of the program to be within optical tweezers to control the position, number and type of optical traps.
Laguerre-Gaussian (LG) light beams possess discrete values of orbital angular momentum (OAM) of lℏ per photon, where l is the azimuthal index of the mode. In principle l can take any integer value, resulting in an unlimited amount of information that can be carried by any part of the beam, even a single photon. We have developed a technology demonstrator that uses OAM to encode information onto a light beam for free-space optical communications. In our demonstrator units both the encoding and decoding of the orbital angular momentum states is achieved using diffractive optical components (holograms). We use 9 different OAM values; one value is used for alignment purposes, the others carry data.
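A standard diffractive component for encoding an OAM value l is a "fork" hologram: an l-fold azimuthal phase added to a blazed grating, so that the first diffracted order carries lℏ per photon. The snippet below is a generic sketch of such a phase pattern (the grid size and grating period are arbitrary illustrative choices, not the parameters of the demonstrator):

```python
import numpy as np

def fork_hologram(l, N=256, grating_period=8):
    """Phase pattern (in [0, 2*pi)) of a fork hologram that adds l units
    of OAM to the first diffracted order: an l-fold azimuthal phase
    plus a linear blazed grating along x."""
    y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
    theta = np.arctan2(y, x)                     # azimuthal angle
    grating = 2 * np.pi * x / grating_period     # blazed carrier grating
    return np.mod(l * theta + grating, 2 * np.pi)
```

Displayed on a spatial light modulator or printed as a diffractive element, the same pattern works for both encoding (imprinting lℏ on a Gaussian beam) and decoding (flattening the helical phase of an incoming mode so it couples to a detector).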
The micromanipulation of objects into 2-dimensional and 3-dimensional geometries within holographic optical tweezers is carried out using a modified Gerchberg-Saxton algorithm. The modified algorithm calculates phase hologram sequences, used to reconfigure the geometries of optical traps in several planes simultaneously. The hologram sequences are calculated automatically from the initial, intermediate and final trap positions. Manipulation of multiple objects in this way is semi-automated, once the traps in their initial positions are loaded.
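The multi-plane sequencing described above builds on the standard single-plane Gerchberg-Saxton iteration. The following is a minimal single-plane sketch of that underlying iteration (our own illustration, not the modified multi-plane algorithm of the abstract): it alternates between the hologram plane and the trap plane, imposing the known amplitude constraint in each:

```python
import numpy as np

def gerchberg_saxton(target_amp, n_iter=50, seed=0):
    """Compute a phase-only hologram whose far field approximates the
    amplitude pattern target_amp (single-plane Gerchberg-Saxton)."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi, target_amp.shape)
    for _ in range(n_iter):
        far = np.fft.fft2(np.exp(1j * phase))           # to trap plane
        far = target_amp * np.exp(1j * np.angle(far))   # impose target amplitude
        near = np.fft.ifft2(far)                        # back to SLM plane
        phase = np.angle(near)                          # keep phase only
    return phase
```

Sequences of such holograms, with trap positions interpolated between initial, intermediate and final configurations, are what drive the semi-automated manipulation.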
We use holographic optical tweezers to trap multiple micron-sized objects and manipulate them in three dimensions. Trapping multiple objects allows us to create 3-dimensional structures, examples of which include simple cubes which can be rotated or scaled, complex crystal structures like the diamond lattice, and interactive 3-dimensional control of trapped particles anywhere in the sample volume.