MIRADAS is a near-infrared multiobject echelle spectrograph for the Gran Telescopio Canarias, operating at a spectral resolution of R = 20,000 over the 1 to 2.5 μm bandpass. Its multiplexing system comprises 12 cryogenic robotic probe arms, each capable of independently selecting a user-defined target in the instrument field of view. The arms are distributed around a circular bench, and their shared workspace becomes very crowded when all of them operate simultaneously, so their motions must be carefully coordinated. We propose a motion planning method for the MIRADAS probe arms. Our offline algorithm relies on roadmaps comprising alternative paths, which are discretized in a state-time space. Collision-free trajectories in this space are determined by means of a graph-search technique. The approach accounts for the constraints imposed by the particular architecture of the probe arms as well as the limitations of the commercial off-the-shelf motor controllers used in the mechanical design. We test our solution on real science targets and a typical MIRADAS scenario presenting instances of the two identified collision conflicts that can arise between any pair of probe arms. Experiments show that the method is versatile enough to compute trajectories fulfilling the requirements.
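The core idea of searching a discretized state-time space for collision-free trajectories can be illustrated with a minimal sketch. Everything below is hypothetical: a single arm is reduced to a 1-D line of cells, other arms appear only as time-indexed cell reservations, and a uniform-cost graph search replaces the roadmap-based planner described in the abstract.

```python
from heapq import heappush, heappop

def plan_state_time(start, goal, n_cells, horizon, occupied):
    """Uniform-cost search over a discretized state-time space.

    `occupied` maps time step -> set of cells reserved by other arms at
    that step; allowed moves are stay / left / right, one cell per step.
    Returns the cell visited at each time step, or None if no plan exists.
    """
    frontier = [(0, start, 0, (start,))]  # (cost, cell, time, path)
    seen = set()
    while frontier:
        cost, cell, t, path = heappop(frontier)
        if cell == goal:
            return path
        if (cell, t) in seen or t >= horizon:
            continue
        seen.add((cell, t))
        for nxt in (cell - 1, cell, cell + 1):
            if 0 <= nxt < n_cells and nxt not in occupied.get(t + 1, set()):
                heappush(frontier, (cost + 1, nxt, t + 1, path + (nxt,)))
    return None

# Another arm reserves cell 2 during steps 1-2; the planned arm waits
# (or detours) in time rather than in space, then passes through.
path = plan_state_time(0, 3, 5, 10, {1: {2}, 2: {2}})
```

Because time is an explicit search dimension, "waiting" is just another edge, which is what lets the planner resolve conflicts that have no purely spatial solution.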
A telescope control system relies on a pointing model to determine the gimbal angles that aim the telescope toward a desired target. High-accuracy telescope pointing models include parameters that describe the mount/telescope orientation as well as common mechanical effects. For professional telescopes, calibrating the pointing model requires careful initial alignment around a nominal orientation (e.g., leveling) followed by sightings of dozens to hundreds of stars to fit the model parameters. While this approach is effective for observatories, applications such as transportable optical ground stations for communications, space situational awareness, or astronomy using low-cost telescope networks can benefit from a more rapid calibration approach. We formulate a quaternion-based pointing model that utilizes measurements from an externally mounted star camera to balance calibration speed against accuracy. A key aspect of this formulation is that it is completely agnostic to the orientation of the telescope/mount, so that no manual prealignment is required. We derive angle and rate commands for telescope pointing and tracking based on the model. We present results from a 15-min calibration procedure on a very low-cost telescope that demonstrated pointing to an accuracy of 53 arc sec RMS in azimuth and 66 arc sec RMS in altitude, over the 20-deg to 70-deg altitude range.
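The step from a calibrated attitude quaternion to pointing angles can be sketched as follows. This is a generic illustration, not the paper's model: the axis convention (x east, y north, z up), the boresight along mount x, and both function names are assumptions.

```python
import math

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z): v' = q v q*."""
    w, x, y, z = q
    # t = 2 (u x v), where u is the quaternion's vector part
    t = (2 * (y * v[2] - z * v[1]),
         2 * (z * v[0] - x * v[2]),
         2 * (x * v[1] - y * v[0]))
    # v' = v + w t + u x t
    return (v[0] + w * t[0] + y * t[2] - z * t[1],
            v[1] + w * t[1] + z * t[0] - x * t[2],
            v[2] + w * t[2] + x * t[1] - y * t[0])

def az_alt(q_mount_to_topo, boresight=(1.0, 0.0, 0.0)):
    """Azimuth/altitude of the boresight given a calibrated mount-to-
    topocentric attitude quaternion (assumed axes: x east, y north, z up)."""
    e, n, u = quat_rotate(q_mount_to_topo, boresight)
    az = math.atan2(e, n) % (2 * math.pi)          # azimuth from north, eastward
    alt = math.asin(max(-1.0, min(1.0, u)))        # clamp against rounding
    return az, alt
```

Because the mount attitude enters only through the quaternion, nothing in this computation assumes the mount was leveled or aligned beforehand, which mirrors the orientation-agnostic property claimed in the abstract.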
We present a community-led assessment of the solar system investigations achievable with NASA’s next-generation space telescope, the Wide Field Infrared Survey Telescope (WFIRST). WFIRST will provide imaging, spectroscopic, and coronagraphic capabilities from 0.43 to 2.0 μm and will be a potential contemporary and eventual successor to the James Webb Space Telescope (JWST). Surveys of irregular satellites and minor bodies are where WFIRST will excel with its 0.28 deg2 field-of-view Wide Field Instrument. Potential ground-breaking discoveries from WFIRST could include detection of the first minor bodies orbiting in the inner Oort Cloud, identification of additional Earth Trojan asteroids, and the discovery and characterization of asteroid binary systems similar to Ida/Dactyl. Additional investigations into asteroids, giant planet satellites, Trojan asteroids, Centaurs, Kuiper belt objects, and comets are presented. Previous use of astrophysics assets for solar system science and synergies between WFIRST, the Large Synoptic Survey Telescope, JWST, and the proposed Near-Earth Object Camera mission are discussed. We also present the case for implementation of moving target tracking, a feature that will benefit from the heritage of JWST and enable a broader range of solar system observations.
A milestone in understanding life in the universe is the detection of biosignature gases in the atmospheres of habitable exoplanets. Future mission concepts under study by the 2020 decadal survey, e.g., the Habitable Exoplanet Imaging Mission (HabEx) and the Large UV/Optical/IR Surveyor (LUVOIR), have the potential of achieving this goal. We investigate the baseline requirements for detecting four molecular species, H2O, O2, CH4, and CO2, assuming concentrations of these species equal to those of modern Earth. These molecules are highly relevant to habitability and life on Earth and other planets. Through numerical simulations, we find the minimum requirements of spectral resolution, starlight suppression, and exposure time for detecting biosignature and habitability marker gases. The results are highly dependent on cloud conditions. A low-cloud case is more favorable because of deeper and denser lines, whereas a no-cloud case is the most pessimistic owing to its low albedo. The minimum exposure time for detecting a given molecular species can vary by a large factor (∼10) between the low-cloud case and the no-cloud case. For all cases, we provide baseline requirements for HabEx and LUVOIR. The impact of exozodiacal contamination and thermal background is also discussed and will be included in future studies.
Molecular film contamination is known to degrade the optical performance of space system components, including solar arrays and thermo-optical, second surface mirrors. When the contaminant takes the form of a film, the resulting rate of performance loss can be evaluated using traditional models for absorption and reflection. In recent years, however, some space-borne optical sensors have suffered severe and rapid performance degradation due to the formation of contaminant droplets that fog interior lenses, mirrors, and windows. Optical system analysts tasked with predicting the loss of throughput due to molecular contamination have not addressed the impact of droplets in great depth. This paper investigates the conditions leading to the formation of films or droplets resulting from the outgassing products of typical spacecraft materials. A simplified view of surface energy and the wetting parameter is used to show that typical outgassed contaminants and optical substrates favor the formation of droplets. Therefore, analysis of throughput losses due to the scattering of droplets is critical. The droplets can be converted into films or extended islands when exposed to vacuum ultraviolet (VUV) radiation. This observation explains why droplets are rarely observed on external thermal control mirrors and solar arrays but should be considered highly likely in a low-VUV environment.
The IUCAA digital sampling array controller (IDSAC) is a flexible and generic yet powerful CCD controller that can handle a wide range of scientific detectors. Based on an easily scalable modular backplane architecture consisting of single board controllers (SBC), IDSAC can control large detector arrays and mosaics. Each of the SBCs offers the full functionality required to control a CCD independently. The SBCs can be cold swapped without the need to reconfigure them. IDSAC is also available in a backplane-less architecture. Each SBC can handle data from up to four video channels with or without dummy outputs at speeds up to 500-kilo pixels per second (kPPS) per channel with a resolution of 16 bits. Communication with a Linux-based host computer is through a USB3.0 interface, with the option of using copper or optical fibers. A field programmable gate array (FPGA) is used as the master controller in each SBC, which allows great flexibility in optimizing performance by adjusting gain, timing signals, bias levels, etc., using user-editable configuration files without altering the circuit topology. Elimination of thermal kTC noise is achieved via digital correlated double sampling (DCDS). The number of digital samples per pixel (for both reset and signal levels) is user configurable. We present the results of noise performance characterization of IDSAC through simulation, theoretical modeling, and actual measurements. The contribution of different types of noise sources is modeled using a tool to predict noise of a generic DCDS signal chain analytically. The analytical model predicts the net input referenced noise of the signal chain to be 5 electrons for 200-k pixels/s per channel readout rate with three samples per pixel. Using a cryogenic test setup in the lab, the noise is measured to be 5.4 e (24.3 μV), for the same readout configuration. 
With a better-optimized configuration at a 500-kPPS readout rate, the measured noise is down to 3.8 electrons RMS (17 μV), with three samples per pixel.
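The noise behavior of digital correlated double sampling can be illustrated with a toy simulation. This is not the IDSAC noise model: white read noise, the sample counts, and all numeric values below are arbitrary assumptions, chosen only to show why the reset-level offset (kTC) cancels exactly while white noise averages down with the number of samples per pixel.

```python
import random
import statistics

def dcds_value(n_samples, read_noise, signal=1000.0, pedestal=5000.0,
               rng=random):
    """One digital-CDS pixel value: average n reset samples and n signal
    samples, then difference. The kTC reset offset is common to both
    levels, so subtraction removes it entirely; only the white read
    noise survives, reduced by averaging."""
    ktc = rng.gauss(0.0, 50.0)  # reset offset shared by both levels
    reset = statistics.fmean(pedestal + ktc + rng.gauss(0.0, read_noise)
                             for _ in range(n_samples))
    sig = statistics.fmean(pedestal + ktc + signal + rng.gauss(0.0, read_noise)
                           for _ in range(n_samples))
    return sig - reset

# Noise of the differenced value shrinks roughly as 1/sqrt(n_samples):
random.seed(0)
noise_1 = statistics.stdev(dcds_value(1, 10.0) for _ in range(2000))
noise_9 = statistics.stdev(dcds_value(9, 10.0) for _ in range(2000))
```

In the real signal chain the samples are correlated and the noise is not purely white, which is why the paper's analytical DCDS model and cryogenic measurements are needed; this sketch captures only the averaging trend.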
Galaxy classification plays an important role in understanding the formation of galaxies and the evolution of our universe. Many machine learning methods have been used to improve galaxy image classification. However, these methods suffer from limitations such as becoming stuck in local optima and slow convergence. Therefore, an alternative method is proposed to enhance the performance of galaxy image classification while avoiding these limitations. The proposed method for galaxy classification (called BSOMFOG) is based on an improvement of brain storm optimization (BSO) obtained by combining it with moth flame optimization (MFO). In this modified version of BSO (called BSOMFO), the MFO algorithm works as a local search operator to enhance the exploitation ability of BSO. The performance of the BSOMFO algorithm is compared against other algorithms through two experiments. In the first, a set of 15 global optimization problems is used to evaluate the ability of BSOMFO to solve these problems. In the second, BSOMFO is embedded in the BSOMFOG framework to improve the classification of galaxy images into three classes, namely, spiral, lenticular, and elliptical. BSOMFOG consists of three phases: the first extracts shape, color, and texture features from the galaxy images; the second uses the BSOMFO algorithm to select the relevant features from those extracted; and the last evaluates the selected features through classification with a k-nearest neighbor classifier. The experimental results show that the BSOMFO algorithm provides better results than the traditional BSO algorithm and other metaheuristic algorithms in solving the optimization problems. Moreover, it enables the proposed BSOMFOG framework to improve classification accuracy (∼97%) for galaxy images, and its general-purpose design makes it suitable for automatic classification of galaxies.
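The wrapper-style feature selection at the heart of such a framework can be sketched in a few lines. To keep the example self-contained, a plain random search stands in for the BSOMFO metaheuristic (BSO and MFO are not implemented here), and the fitness is leave-one-out k-NN accuracy over a binary feature mask; all names and the toy data are assumptions.

```python
import random

def knn_accuracy(X, y, mask, k=3):
    """Leave-one-out k-NN accuracy using only the features where
    mask[j] == 1. This is the wrapper fitness a metaheuristic such as
    BSOMFO would maximize during feature selection."""
    idx = [j for j, m in enumerate(mask) if m]
    if not idx:
        return 0.0
    correct = 0
    for i in range(len(X)):
        dists = sorted(
            (sum((X[i][j] - X[t][j]) ** 2 for j in idx), y[t])
            for t in range(len(X)) if t != i)
        votes = [label for _, label in dists[:k]]
        if max(set(votes), key=votes.count) == y[i]:
            correct += 1
    return correct / len(X)

def select_features(X, y, n_feats, iters=50, rng=random):
    """Stand-in for the metaheuristic: keep the best random mask seen."""
    best_mask, best_fit = None, -1.0
    for _ in range(iters):
        mask = [rng.randint(0, 1) for _ in range(n_feats)]
        fit = knn_accuracy(X, y, mask)
        if fit > best_fit:
            best_mask, best_fit = mask, fit
    return best_mask, best_fit
```

A real metaheuristic differs only in how candidate masks are proposed (population dynamics, local search), not in this evaluate-and-keep-the-best loop.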
Soft x-rays (0.1 to 10 keV) will liberate between tens and thousands of electrons from the absorber array of a depleted silicon detector. These electrons tend to diffuse outward into what is referred to as the charge cloud, which is then picked up by several pixels and forms a specific pattern based on the exact incident location of the x-ray. By performing the first ever application of a “mesh experiment” on a hybrid CMOS detector (HCD), we have experimentally determined the charge cloud shape and used it to perform subpixel localization of incident x-rays on a photon-by-photon basis for a custom 36-μm pixel pitch H2RG HCD. We find that significant spatial resolution improvement is possible for all events, with 68% confidence regions equal to 7.1 × 7.1, 0.4 × 7.1, and 0.4 × 0.4 μm for 1-pixel, 2-pixel, and 3- to 4-pixel events, respectively. This represents a much finer resolution than that provided by containment within a single pixel.
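For a two-pixel split event, the idea behind subpixel localization can be shown with a simplified model. This sketch assumes an idealized 1-D Gaussian charge cloud of fixed width; the value of sigma, the function name, and the absence of noise are all assumptions, whereas the paper's analysis uses the mesh-measured cloud shape on a per-photon basis.

```python
from statistics import NormalDist

def subpixel_offset(q_left, q_right, sigma=6.0):
    """Infer the photon's x-offset (in microns) from the boundary between
    two pixels, by inverting the split fraction of an assumed Gaussian
    charge cloud of width sigma. Positive offsets lie in the right pixel."""
    f = q_right / (q_left + q_right)          # fraction collected on the right
    f = min(max(f, 1e-6), 1.0 - 1e-6)         # clamp away from the poles
    # fraction right of the boundary for a cloud centered at x is
    # Phi(x / sigma), so x = sigma * Phi^{-1}(f)
    return NormalDist(0.0, sigma).inv_cdf(f)
```

An even split locates the photon on the boundary; increasingly lopsided splits place it deeper inside one pixel, which is why multi-pixel events in the abstract reach much tighter confidence regions than single-pixel events.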
We demonstrate an architecture for adaptive optics (AO) control based on field programmable gate arrays (FPGAs), making active use of their configurable parallel processing capability. The unique capabilities of the scalable platform for adaptive optics real-time control (SPARC) are demonstrated through an implementation on an off-the-shelf, inexpensive Xilinx VC-709 development board. The architecture makes SPARC a generic and powerful real-time control kernel for a broad spectrum of AO scenarios. SPARC is scalable across different numbers of subapertures and pixels per subaperture. The overall concept, objectives, architecture, validation, and results from simulation as well as hardware tests are presented here. For Shack–Hartmann wavefront sensors, the total AO reconstruction time ranges from a median of 39.4 μs (11 × 11 subapertures) to 1.283 ms (50 × 50 subapertures) on the development board. For large wavefront sensors, the latency is dominated by the access time (∼1 ms) of the standard dual data rate memory available on the board. This paper is divided into two parts. Part 1 is targeted at astronomers interested in the capability of the current hardware. Part 2 explains the FPGA implementation of the wavefront processing unit, the reconstruction algorithm, and the hardware interfaces of the platform. Part 2 mainly targets embedded developers interested in the hardware implementation of SPARC.
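The two stages of a Shack–Hartmann reconstruction pipeline, subaperture centroiding followed by a control-matrix multiply, can be written down compactly. This reference sketch makes no claim about SPARC's actual FPGA implementation (which parallelizes these operations in hardware); the function names and the plain list-of-lists control matrix are illustrative assumptions.

```python
def centroid(spot):
    """Center of gravity of one subaperture image (a list of rows);
    the (cx, cy) pair relative to a reference position gives the
    local wavefront slope."""
    total = sum(sum(row) for row in spot)
    cx = sum(x * v for row in spot for x, v in enumerate(row)) / total
    cy = sum(y * sum(row) for y, row in enumerate(spot)) / total
    return cx, cy

def reconstruct(slopes, cmat):
    """Actuator commands = control matrix x slope vector. This single
    matrix-vector multiply is the step whose size grows with the
    number of subapertures, and hence the step worth parallelizing."""
    return [sum(c * s for c, s in zip(row, slopes)) for row in cmat]
```

Each subaperture's centroid is independent of the others, and each output command is an independent dot product, which is precisely the data parallelism an FPGA architecture can exploit.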
The next generation of adaptive optics (AO) systems on large telescopes will require immense computational performance and memory bandwidth, both of which are challenging with the technology available today. The objective of this work is to create a future-proof AO platform on a field programmable gate array (FPGA) architecture, which scales with the number of subapertures, pixels per subaperture, and external memory. We have created such a platform, the scalable platform for adaptive optics real-time control (SPARC), with an off-the-shelf FPGA development board, which provides an AO reconstruction time limited only by the external memory bandwidth. SPARC uses the same logic resources irrespective of the number of subapertures in the AO system. This paper is aimed at embedded developers who are interested in the FPGA design and the accompanying hardware interfaces. The central theme of this paper is to show how scalability is incorporated at different levels of the FPGA implementation. This work is a continuation of part 1 of the paper, which explains the concept, objectives, control scheme, and method of validation used for testing the platform.