We investigate the use of photonic lanterns in a fibre beam-combiner. One of the problems of fibre combiners is that coupling into the single-mode fibres is affected by the atmospheric seeing. This effect could potentially be mitigated using a "photonic lantern", a device that converts the multi-mode input at the focal plane of a telescope into N single-mode outputs. In the presence of atmospheric seeing the light from the telescope will couple into one or more of the single-mode outputs, unlike current interferometers, where coupling can be sporadic in bad seeing conditions. In a stellar interferometer, each telescope focal plane could be coupled to a photonic lantern. After beam combination, each single-mode output of each lantern will interfere with all the others. The N single-mode outputs from each lantern yield N non-independent closure phases that can be averaged. Using this method, the noise on the closure phase of independent baseline triangles could in principle be reduced by a factor of the square root of N.
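The √N noise reduction from averaging the lantern's closure phases can be illustrated with a minimal simulation. This is a sketch under a simplifying assumption: the N estimates are treated as independent draws, whereas the text notes they are non-independent, so the real gain is a bound rather than a guarantee. All numerical values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
true_cp = 0.3   # rad, arbitrary true closure phase on one baseline triangle
sigma = 0.1     # rad, assumed phase noise per single-mode-output estimate
N = 25          # single-mode outputs per photonic lantern (illustrative)

# 10000 trials, each giving N closure-phase estimates of the same triangle
estimates = true_cp + sigma * rng.normal(size=(10_000, N))

err_single = estimates[:, 0].std()        # noise of one estimate   (~sigma)
err_mean = estimates.mean(axis=1).std()   # noise after averaging   (~sigma/sqrt(N))
```

For truly independent estimates the averaged noise approaches sigma/√N; correlations between the lantern outputs reduce the gain below this bound.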
This paper compares the performance of temporal and subband Minimum Variance (MV) beamformers for medical ultrasound imaging. Both adaptive methods provide an optimized set of apodization weights but are implemented in the time and frequency domains respectively. Their performance is evaluated with simulated synthetic aperture data obtained from Field II and is quantified by the Full-Width-Half-Maximum (FWHM), the Peak-Side-Lobe level (PSL) and the contrast level. From a point phantom, a full sequence of 128 emissions, with one transducer element transmitting and all 128 elements receiving each time, provides an FWHM of 0.03 mm (0.14λ) for both implementations at a depth of 40 mm. This value is more than 20 times lower than that achieved by conventional beamforming. The corresponding values of PSL are -58 dB and -63 dB for the time and frequency domain MV beamformers, while a value no lower than -50 dB can be obtained from either Boxcar or Hanning weights. Interestingly, a single emission with central element #64 as the transmitting aperture provides results comparable to the full sequence. The values of FWHM are 0.04 mm and 0.03 mm and those of PSL are -42 dB and -46 dB for the temporal and subband approaches. From a cyst phantom and for 128 emissions, the contrast level is calculated at -54 dB and -63 dB respectively at the same depth, with the initial shape of the cyst being preserved in contrast to conventional beamforming. The difference between the two adaptive beamformers is less significant in the case of a single emission, with the contrast level being estimated at -42 dB for the time domain and -43 dB for the frequency domain implementation. For the estimation of a single MV weight of a low-resolution image formed by a single emission, 0.44 × 10⁹ calculations per second are required for the temporal approach. The corresponding numbers for the subband approach are 0.62 × 10⁹ for the point and 1.33 × 10⁹ for the cyst phantom.
The comparison demonstrates similar resolution but slightly lower side-lobes and higher contrast for the subband approach at the expense of increased computation time.
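The apodization weights common to both implementations follow the standard minimum-variance (Capon) solution, w = R⁻¹a / (aᴴR⁻¹a), where R is the spatial covariance of the delay-aligned element signals and a is the steering vector. A minimal sketch, assuming diagonal loading for robustness; the loading factor and array size below are illustrative, not the paper's parameters:

```python
import numpy as np

def mv_weights(R, a, loading=1e-2):
    """Minimum-variance apodization weights for one image point.

    R: (M, M) spatial covariance of the delay-aligned element signals
    a: (M,) steering vector (all ones after delay alignment)
    loading: diagonal-loading factor (assumed value, for robustness)
    """
    M = len(a)
    Rl = R + loading * np.trace(R) / M * np.eye(M)  # regularized covariance
    Ri_a = np.linalg.solve(Rl, a)                   # R^-1 a
    return Ri_a / (a.conj() @ Ri_a)                 # unit gain toward the focus

# Example: with uncorrelated noise (R = I) the MV weights reduce to
# uniform (Boxcar) apodization
M = 8
w = mv_weights(np.eye(M), np.ones(M))
```

The temporal approach applies this per time sample; the subband approach applies it per frequency bin after a short-time Fourier transform, which accounts for the different computation counts quoted above.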
We have built a HOE-based display capable of reconstructing arbitrary images in mid-air at fixed focal depths that can interact with the viewer in real time. The display system comprises the HOE, a laser projection subsystem, a Kinect motion sensor and an embedded controller. The HOE functions as a fast converging lens and is the size of an A4 page (20 × 30 cm). We have written a number of simple apps for the display that allow the user to draw in mid-air or to touch icons and buttons that trigger other actions. The reconstructed holographic images are high-resolution, relatively bright and visible under ambient indoor lighting conditions.
The CANARY on-sky MOAO demonstrator is being integrated in the laboratory and a status update about its
various components is presented here. We also discuss the alignment and calibration procedures used to improve
system performance and overall stability. CANARY will be commissioned at the William Herschel Telescope at
the end of September 2010.
A simple low resolution volumetric display is presented, based on holographic volume-segments. The display system
comprises a proprietary holographic screen, laser projector, associated optics plus a control unit. The holographic screen
resembles a sheet of frosted glass about A4 in size (20 × 30 cm). The holographic screen is rear-illuminated by the laser
projector, which is in turn driven by the controller, to produce simple 3D images that appear outside the plane of the
screen. A series of spatially multiplexed and interleaved interference patterns are pre-encoded across the surface of the
holographic screen. Each illumination pattern is capable of reconstructing a single holographic volume-segment. Up to
nine holograms are multiplexed on the holographic screen in a variety of configurations including a series of numeric
and segmented digits. The demonstrator performs well under laboratory conditions, displaying moving colour 3D images in front of or behind the holographic screen.
EAGLE is a multi-object 3D spectroscopy instrument currently under design for the 42-metre European Extremely Large
Telescope (E-ELT). Precise requirements are still being developed, but it is clear that EAGLE will require (~100 x 100
actuator) adaptive optics correction of ~20 - 60 spectroscopic subfields distributed across a ~5 arcminute diameter field
of view. It is very likely that laser guide stars (LGS) will be required to provide wavefront sensing with the necessary sky coverage. Two
alternative adaptive optics implementations are being considered, one of which is Multi-Object Adaptive Optics
(MOAO). In this scheme, wavefront tomography is performed using a set of LGS and natural guide stars (NGS) in either a completely open-loop manner, or in a configuration that is closed-loop with respect to only one deformable mirror (DM), probably the adaptive M4 of the
E-ELT. The fine wavefront correction required for each subfield is then applied in a completely open-loop fashion by
independent DMs within each separate optical relay. The novelty of this scheme is such that on-sky demonstration is
required prior to final construction of an E-ELT instrument. The CANARY project will implement a single channel of an
MOAO system on the 4.2m William Herschel Telescope. This will be a comprehensive demonstration, which will be
phased to include pure NGS, low-order NGS-LGS and high-order woofer-tweeter NGS-LGS configurations. The LGSs
used for these demonstrations will be Rayleigh systems, where the variable range-gate height and extension can be used
to simulate many of the LGS effects on the E-ELT. We describe the requirements for the various phases of MOAO
demonstration, the corresponding CANARY configurations and capabilities, and the current conceptual designs of the system.
We present the application of wavefront sensing to 3-dimensional particle metrology for measuring the 3-component velocity vector field in a fluid flow across a volume. The technique is based upon measuring the wavefront scattered by a tracer particle, from which the 3-dimensional tracer location can be calculated. Using a temporally resolved sequence of 3-dimensional particle locations, the velocity vector field is obtained. In this paper we focus on an anamorphic technique to capture the data required to measure the wavefront. Data are presented from a reconstruction of the phase of the wavefront as well as from a more pragmatic approach that examines only the defocus of that wavefront. The methods are optically efficient and robust and can be applied to both coherent and incoherent light, in contrast to classical interferometric methods. A focus of this paper is the filtering techniques required to reliably extract the particle images from the overall image field. The resolution and repeatability of the depth (or range) measurements have been quantified experimentally using a single mode fiber source representing a tracer particle. A first proof-of-principle experiment using this technique for 3-dimensional PIV on a sparsely seeded gas phase flow is also presented.
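The pragmatic defocus-only approach amounts to fitting the quadratic term of the measured wavefront: a point scatterer at distance z produces a spherical wavefront whose phase across the aperture is approximately πx²/(λz). A minimal 1-D sketch, assuming the phase is already unwrapped; the wavelength, aperture and distance values are illustrative, not the paper's experimental parameters:

```python
import numpy as np

def source_depth(phase, x, wavelength):
    """Estimate point-source distance from the defocus of its wavefront.

    phase: unwrapped wavefront phase (rad) sampled at aperture positions x (m)
    Fits phase ~ pi * x**2 / (wavelength * z) and solves for z.
    """
    c = np.polyfit(x, phase, 2)[0]       # quadratic coefficient (rad / m^2)
    return np.pi / (wavelength * c)

# Example: synthetic wavefront from a scatterer 0.5 m away
wavelength = 633e-9                      # HeNe wavelength (illustrative)
x = np.linspace(-5e-3, 5e-3, 101)        # 10 mm aperture
phase = np.pi * x**2 / (wavelength * 0.5)
z = source_depth(phase, x, wavelength)
```

In practice each candidate particle image is segmented first (the filtering step discussed above) and the fit is applied to its local wavefront patch.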
Imaging for exo-planet detection requires both high contrast and a small inner working angle. We show that, for several
of the techniques proposed so far to achieve this, the inner working angle can be reduced by adding pupil replication
between the telescope and the high contrast imaging system. Using pupil replication, the on-axis image of the star is
decreased to a size smaller than the diffraction limit of the telescope, and off axis the point spread function of the planet
undergoes minor changes, contained within the envelope of the point spread function of the telescope; the spectrum
remains unchanged. The principle of pupil replication was proven experimentally and can be effected by a small-sized,
high throughput optical system added between the telescope and the high contrast imaging system. High contrast
imaging systems to which pupil replication has been found to be applicable so far include apodisation techniques like
pupil apodisation, aperture masks, image plane masks, coronagraphs and combinations. Mathematical assessment and
simulations of the sensitivity of pupil replication to optical errors show that the requirements for this system are the
same as those for the primary telescope - pupil replication effectively remaps the output pupil of the telescope to the
input pupil of the high contrast imaging system.
Our results in this paper aim to show, in a realistic set-up, the feasibility of an improvement of the inner working angle
by a factor of 4 using four-fold replication optics while maintaining the contrast performance. We do this through
analysis of the pupil replication principle including off axis behavior when applied to high contrast imaging systems
using pupil apodisation or a shaped mask. We specifically look at situations similar to those of the Terrestrial Planet Finder Coronagraph and Darwin. We found that an inner working angle of 30 mas can be achieved with a contrast of 10⁻¹⁰ and a large field of view, without increasing the requirements except for pointing.
High-resolution imaging can be achieved by optical aperture synthesis (OAS). Such an imaging process is subject to aberrations introduced by instrumental defects and/or turbulent media. Redundant spacings calibration (RSC) is a snapshot calibration technique that can be used to calibrate OAS arrays without use of assumptions about the object being imaged. Here we investigate the analogies between RSC and adaptive optics in passive imaging applications.
Phase diversity measurements using diffractive encoding systems offer a means for simultaneous measurement of the angular dependence of the turbulence-induced wavefront distortion and either the angular dependence of scintillation induced during atmospheric propagation or the turbulence-degraded point spread function.
We will describe experiments designed to measure the wavefront distortion and angular decorrelation of the atmospheric transfer functions, and discuss the observational strategy for measurement of atmospheric properties under a range of atmospheric conditions and propagation distances. By reconstructing the laser wavefront and comparing the calculated and measured images we will also aim to investigate the effect of strong scintillation on phase diversity wavefront reconstruction techniques. Laboratory tests of the equipment and preliminary measurements will be described, as well as some theory and modeling.
This paper will present some of the recent work undertaken to extend the use of the wavefront sensor to provide both surface profile measurements and thickness measurements simultaneously using a single instrument. Some theoretical studies of the effect of thin film structures on wavefront shape will be presented along with discussion on how such knowledge can be used to gain reliable measurements of thickness and surface profile. Our experimental methods will be described with the inclusion of experimental results from a number of different sample thicknesses and materials. In addition, some initial data will be presented to illustrate how the technique can be extended to carry out surface profiling measurements on some etched and periodic structures. Finally some suggestions for future work and optimisation will be made to conclude.
Generalized Phase Diversity (GPD) is a phase retrieval algorithm which requires a pair of intensity images. These are created by applying equal and opposite diversity phase to the input wavefront. Unlike traditional phase diversity methods GPD is not limited to the use of defocus as the applied diversity phase. The conditions that a suitable diversity function must satisfy for use in a null sensor were presented at the 4th IWAOIM. Following our recent development of a small angle solution to the inverse problem, in this paper the GPD method will be extended to use as a full wavefront sensor. This method has a wide range of applications, including laser beam shaping, analysis of segmented optics, and metrology. Results will be presented to show the versatility and accuracy of this novel wavefront sensing method.
Phase diversity is a phase-retrieval algorithm that uses a pair of defocused intensity images taken symmetrically about the wavefront to be determined. Generalised phase diversity is a phase-retrieval algorithm that uses diversity functions other than defocus. The approach adopted assumes that unknown phase changes satisfy the small-angle approximation over spatial regions that can be selected by choice of the diversity function. For smooth functions, and for discontinuous functions with only small discontinuities, this leads to a very simple analytic solution. Computer simulations were used to validate this method for the retrieved phase.
Applications of adaptive optics to terrestrial imaging involve anisoplanatic imaging conditions in which the turbulence-distorted wavefront may be highly scintillated and contain phase discontinuities. We will describe experiments designed to assess these properties of the wavefront, and discuss the observational strategy for measurement of atmospheric properties under a range of atmospheric conditions and propagation distances. By reconstructing the wavefront and comparing the calculated and measured images we will also aim to investigate the effect of strong scintillation on phase diversity wavefront reconstruction techniques. Laboratory tests of the equipment and preliminary measurements will be described, as well as some theory and modeling.
The principles for defining, comparing and calculating the signal-to-noise ratio performance of imaging spectrometers are presented. The relative signal-to-noise ratios (SNRs) of the main classes of imaging spectrometer are discussed both in general terms and with an emphasis on real-time, low spectral resolution applications. This general analysis is based on some simplifying assumptions, and SNRs are also calculated for a typical application without these assumptions. These SNRs are compared to the signal-to-noise ratios typically required in imaging spectrometry. It is shown that for low resolution imaging spectrometry of low radiance scenes there are only small differences in SNR between the four main classes of instrument. For high spectral resolution imaging of low radiance scenes Fourier-transform techniques offer higher SNRs, but for high radiance scenes the impact of detector saturation tends to favor direct imaging spectrometry. It is noted, however, that real-time, temporally scanned, imaging spectrometry requires track and stare stabilization to fully realize its potential.
Phase-diversity wavefront sensing has been implemented for the measurement of turbulence-distorted atmospheric wavefronts in applications of adaptive optics for essentially-horizontal propagation paths. The selected implementation of phase-diversity provides a wavefront sensor capable of estimating atmospheric distortions when observing extended scenes and provides a range-weighted sensing of the atmospheric distortions dependent on the angular region of the scene used for measurement. The data inversion, based on a Green's function analysis, is fast and robust enough for real-time implementation. For measurements of the atmospheric properties this wavefront sensor is being used with bright, compact sources to give high signal to noise measurements for integrated atmospheric effects along defined optical paths. The implementation used facilitates measurements of the atmospheric distortions along separate propagation paths. By simultaneous measurements along 3 separate paths a library of spatio-temporal atmospheric distortions and information about the isoplanicity of the distortions will be compiled for use in assessing applications of adaptive optics in horizontal propagation conditions. The principles of measurement, the details of implementation and some preliminary results will be described.
Millimeter-wave radiometry of the earth's surface from Low Earth Orbit (LEO) with a resolution of a few km requires antenna apertures several meters across and sub-second scanning times. Fulfilling these requirements with a mechanically scanned real-aperture antenna presents formidable mechanical challenges. An attractive alternative described here is to use synthetic aperture techniques employing a sparse array of antennas that trade the mechanical complexity of real-aperture imaging for the electrical complexity of synthetic aperture imaging. We present results of an ESA-sponsored study aimed at seeking the optimum technique for high performance synthetic aperture mm-wave radiometry from LEO.
Turbulence effects close to the air-ground interface may be expected to be non-Kolmogorov, even if that model is an adequate description of free-air turbulence effects. Direct measurements of the optical effects of propagation through the boundary layer are therefore required and are being undertaken as part of a program in which various potential applications of adaptive optics are being examined. The measurements are intended to characterize the spatio-temporal characteristics of optical wavefronts after propagation through the air-ground boundary layer. The objective in these measurements is to describe the level of performance that will be required in an adaptive system intended to mitigate the deleterious effects of atmospheric propagation on image formation and on other optical measurements. The principles of measurement and the preliminary results are presented.
The long-term potential of terrestrial passive millimeter-wave imaging is contingent upon demonstrating real-time imaging with a system that can be conveniently integrated into a ship, land vehicle or aircraft. For small imagers with modest resolution, this can be readily achieved using fully-staring focal plane arrays, but as in the infrared, the high cost per pixel means that scanning (preferably electronic) of a smaller number of detectors across the image is attractive. Aperture synthesis using sparse and filled arrays of antennas offers high sensitivity and resolution from a small number of antennas that can be conformal to the vehicle shape, but requires complicated beam-forming technologies. Alternatively, electronic scanning can be accomplished by electronic modulation of a filled antenna. This paper discusses approaches and technology requirements for achieving electronic beam-steering.
Optical fibre interferometric strain sensors embedded into structures offer a very accurate and robust method for shape measurement. Many schemes have been demonstrated in which strain and/or temperature in a structure are inferred from monochromatic optical phase delay.
We describe the use of a four-core optical fibre as the basis of a sensor capable of measuring the angle through which the fibre is bent in two dimensions. The intended application of the sensor is in measuring the shape of flexible structures.
The shape of a structure which is known to deform in one-dimension only can be measured using a single optical fibre with one or more sensing elements multiplexed down its length. However, to monitor the shape of a structure that can deform in three-dimensions the full vector strain, or strain field, is required. To achieve this a minimum of three fibres, embedded in a non-collinear geometry, is required.
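With the cores arranged in two orthogonal pairs about the fibre axis, the bend angle in each plane follows from the differential strain between opposing cores: curvature κ = Δε/2d for cores offset d from the neutral axis, and angle θ = κL over a gauge length L. A minimal sketch; the core offset, gauge length and strain values are illustrative assumptions, not the sensor's actual parameters:

```python
def bend_angles(eps, d, L):
    """Bend angles (rad) in two orthogonal planes from four core strains.

    eps: strains of the cores [+x, -x, +y, -y], each offset d from the axis
    d:   core offset from the neutral axis (m)
    L:   sensing (gauge) length (m)
    """
    kappa_x = (eps[0] - eps[1]) / (2 * d)  # curvature in the x-plane (1/m)
    kappa_y = (eps[2] - eps[3]) / (2 * d)  # curvature in the y-plane (1/m)
    return kappa_x * L, kappa_y * L        # angle = curvature * length

# Example: +/-100 microstrain across a 50 um core spacing, 0.1 m gauge length
theta_x, theta_y = bend_angles([1e-4, -1e-4, 0.0, 0.0], d=25e-6, L=0.1)
```

Because the bend angle depends on the strain difference between opposing cores, common-mode effects such as axial strain and temperature cancel to first order.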
Passive imaging at millimeter wavelengths is a constant struggle for increased sensitivity and angular resolution. Aperture synthesis is a particularly attractive technique for attacking these problems since it offers high resolution from a given total antenna area and greater flexibility in the positioning of the antenna elements. This in turn can lead to a greater total collecting area and hence greater sensitivity than might be achievable with a single scanned antenna, with the additional benefit of electronic scanning. The high loss of millimeter-wave transmission lines means that received signals must be frequency-translated to a more suitable frequency prior to transport to correlators. Although down-conversion enables transmission by coaxial cables, up-conversion onto optical carriers enables very low-loss optical fibers to be used for transmission and electronically programmable delay lines. In this paper we describe proof-of-principle experiments that demonstrate the application of optical up-conversion in aperture synthesis and also the direct formation of an image on a conventional optical camera from millimeter-wave signals modulated onto an optical carrier.
Two versions of a kilometric interferometer with equivalent science capabilities have been studied, one located on the Moon and the other operating as a free-flyer. It has been found that the Moon is not the ideal site for interferometry because of tidal and micro-meteorite induced disturbances, the need for long delay lines and the large temperature swings from day to night. Automatic deployment of the Moon-based interferometer would be difficult, and site preparation and assistance by man appear to be essential. The free-flyer would be implemented as a very accurately controlled cluster of independent satellites placed in a halo orbit around the 2nd Lagrange point of the Sun-Earth system. Both versions could attain the required scientific performances and each one needs the same type of metrology control. The free-flyer is intrinsically advantageous because of its reconfiguration flexibility, quasi-unlimited baseline length and observation efficiency (the Moon-based interferometer cannot be operated during the lunar day because of stray light). The free-flyer is better suited for implementation in the near or mid-term future, but the Moon-based version could be considered in the long term, when a human presence would permit maintenance and upgrading, leading to a longer lifetime with continuous performance enhancement.
We demonstrate temperature-insensitive strain measurement in a carbon fiber composite panel using a sensor based on broad-band interferometry in highly-birefringent optical fiber. The sensing element forms an unbalanced Fabry-Perot cavity in the measurement arm of a tandem interferometer. This is interrogated using an LED source and a scanning Michelson interferometer, producing three distinct interferograms, two of which relate to the group delay (GD) of the eigenmodes of the sensing element, the other providing a zero-OPD reference in the scanning interferometer. We measure the GD of each interferogram by dispersive Fourier-transform spectroscopy. Changes in strain and temperature in the measurement fiber affect the group delays of the sensing interferograms, but do not affect the zero-OPD interferogram, which is therefore used as the origin for group delay measurements. We determine a linear transformation relating the measured group delays to strain and temperature. Inverting this transformation then provides a means of recovering strain and temperature from measurements of group delay. We apply this technique to the simultaneous measurement of strain and temperature in the composite panel. Typical measurement errors are 7 microstrain and 0.7 K. The measured values are independent, and the strain values show no evidence of thermal-apparent strain.
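The recovery step is a 2×2 linear inversion: a calibrated sensitivity matrix K maps (strain, temperature change) to the two group-delay changes, and solving the linear system recovers the measurands. A sketch with an illustrative sensitivity matrix; the coefficients below are placeholders, not the calibrated values from the experiment:

```python
import numpy as np

# Assumed sensitivity matrix: [dGD1, dGD2] = K @ [strain (microstrain), dT (K)]
# (placeholder coefficients for illustration only)
K = np.array([[1.2e-3, 40.0],
              [0.8e-3, 55.0]])

def recover(dgd):
    """Recover (strain, temperature change) from two group-delay changes."""
    return np.linalg.solve(K, dgd)

# Round trip: forward model a known state, then invert it
applied = np.array([100.0, 2.0])   # 100 microstrain, 2 K
recovered = recover(K @ applied)
```

Separation of strain and temperature requires the two rows of K to be linearly independent, i.e. the two eigenmodes must respond to strain and temperature in different ratios, which is what the highly-birefringent fibre provides.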
At visible frequencies the comparative sensitivity achieved from space-borne or terrestrial interferometers depends on source brightness, source complexity and on the waveband considered. At short wavelengths a space-borne instrument is always superior, and at longer `visible' wavelengths (up to K-Band) a space-borne instrument is superior for faint and/or complex targets. At thermal infrared wavelengths the sensitivity comparison depends on source brightness but not its complexity. A space-borne interferometer gives superior sensitivity at M-Band and between atmospheric transmission windows. To obtain adequate sensitivity at N-Band and longer wavelengths, a space-borne interferometer cooled below 80 K is required.
The sensitivity of space debris detection using a satellite-borne, low-power laser as a reference for terrestrial adaptive optics is considered. It is shown that there is sufficient sensitivity to permit detection of sunlit particles of debris a few cm in diameter.
Two- and higher-dimensional bandlimited functions are almost always non-factorizable, meaning that their zeros form a single analytic curve. In principle one can use this property to separate the product of two bandlimited functions into its respective factors; this is important in Fourier phase retrieval and deconvolution problems. The intersection of this zero structure with the real plane is at points, closed curves or lines stretching to infinity. The location of zero points, curves or lines can only be estimated, in practice, from available noisy data. The estimated locations can be used to write a factorizable approximation to the original function and we explore the consequences of doing this. Of importance is the fact that point zeros can be used to represent a 2D bandlimited function. Hence from intensity data, point zero locations in the intensity can be used directly to estimate the complex spectrum and provide an approximate solution to the phase retrieval problem. Examples will be given and their importance for the general blind deconvolution problem discussed.
Low power (2 mW) lasers mounted on a small satellite in a highly eccentric orbit can provide a bright and spectrally well-defined reference source for calibration of ground-based adaptive optics systems. Because the reference is spectrally well-defined, it can be efficiently filtered in broad-band imaging applications and yet can provide a very bright reference source for wavefront detectors when imaging faint sources. Dependent on the size of the atmospheric isoplanatic patch, the satellite reference may be useful for calibrating observations of selected objects for periods in excess of 1 hr, leading to limiting magnitudes for detection of up to +30. The area of sky for which the reference is valid is restricted (of order 1 sq deg of sky per telescope per year). The reference is valid for phasing aperture synthesis telescope arrays of kilometric scale. Orbital maneuvers for target selection and to increase the sky coverage will be considered.
The principles of image formation using a dilute, multiple-telescope interferometer will be considered. In particular, the requirements for an instrument capable of providing a unique reconstruction of object Fourier phases and instrumental phase errors from a single 'snapshot' of data will be examined. The roles of redundancy in the telescope array, the imposition of the positivity requirement in data inversion, the quality of the instrument point spread function and the stability of the data inversion will be taken into account.