In this paper we discuss the latest developments of the STRIP instrument of the “Large Scale Polarization Explorer” (LSPE) experiment. LSPE is a novel project that combines ground-based (STRIP) and balloon-borne (SWIPE) polarization measurements of the microwave sky on large angular scales to attempt a detection of the “B-modes” of the Cosmic Microwave Background polarization. STRIP will observe approximately 25% of the Northern sky from the “Observatorio del Teide” in Tenerife, using an array of forty-nine coherent polarimeters at 43 GHz coupled to a 1.5 m fully rotating crossed-Dragone telescope. A second frequency channel with six elements at 95 GHz will be exploited as an atmospheric monitor. At present, most of the hardware of the STRIP instrument has been developed and tested at sub-system level. System-level characterization, starting in July 2018, will lead to STRIP being shipped and installed at the observation site by the end of the year. The on-site verification and calibration of the whole instrument will prepare STRIP for a two-year campaign of CMB polarization observations.
ESA's Dark Energy Mission Euclid will map the 3D matter distribution in our Universe using two Dark Energy probes: Weak Lensing (WL) and Galaxy Clustering (GC). The extreme accuracy required for both probes can only be achieved by observing from space, in order to limit all observational biases in the measurements of the tracer galaxies. Weak Lensing requires extremely high-precision measurements of galaxy shapes, realised with the Visual Imager (VIS), as well as photometric redshift measurements using near-infrared photometry provided by the Near Infrared Spectrometer Photometer (NISP). Galaxy Clustering requires accurate redshifts (Δz/(1+z) < 0.1%) of galaxies, to be obtained by the NISP Spectrometer.
Performance requirements on the spacecraft, telescope assembly, scientific instruments and ground data-processing have been carefully budgeted to meet the demanding top-level science requirements. As part of the mission development, the verification of scientific performance requires mission-level end-to-end analyses in which the Euclid systems are modelled from the as-designed to the final as-built flight configurations. We present the plan for carrying out these end-to-end analyses, coordinated by the ESA project team in collaboration with the Euclid Consortium. The plan includes the definition of key performance parameters and their verification process, the identification of inputs and outputs, and the management of applicable mission configurations in the parameter database.
Euclid is an ESA mission aimed at understanding the nature of dark energy and dark matter by simultaneously using two probes (weak lensing and baryon acoustic oscillations). The mission will observe galaxies and clusters of galaxies out to z ~ 2, in a wide extra-galactic survey covering 15 000 deg², plus a deep survey covering an area of 40 deg². The payload is composed of two instruments, an imager in the visible domain (VIS) and an imager-spectrometer (NISP) covering the near-infrared. The launch is planned in Q4 of 2020. The elements of the Euclid Science Ground Segment (SGS) are the Science Operations Centre (SOC), operated by ESA, and nine Science Data Centres (SDCs) in charge of data processing, provided by the Euclid Consortium (EC), formed by over 110 institutes spread over 15 countries. The SOC and the EC began a close collaboration several years ago in order to design and develop a single, cost-efficient and truly integrated SGS. The distributed nature of the project, the size of the data set and the required accuracy of the results are the main challenges expected in the design and implementation of the SGS. In particular, the huge volume of data (not only Euclid data but also ground-based data) to be processed in the SDCs will require distributed storage to avoid data migration across SDCs. This paper describes the management challenges that the Euclid SGS is facing while dealing with such complexity. The main aspect is the organisation of a geographically distributed software development team: algorithms and code are developed in a large number of institutes, while data are actually processed at a smaller number of centres (the national SDCs), where the operational computational infrastructures are maintained. The software produced for data handling, processing and analysis is built within a common development environment, defined by the SGS System Team and common to the SOC and the EC SGS, which has already been active for several years.
The code is built incrementally through different levels of maturity, going from prototypes (developed mainly by scientists) to production code (engineered and tested at the SDCs). A number of incremental challenges (infrastructure, data-processing and integration challenges) have been included in the Euclid SGS test plan to verify the correctness and accuracy of the developed systems.
Euclid is a space-based optical/near-infrared survey mission of the European Space Agency (ESA) to investigate the
nature of dark energy, dark matter and gravity by observing the geometry of the Universe and the formation of
structures over cosmological timescales. Euclid will use two probes of the signature of dark matter and energy: Weak
gravitational Lensing, which requires the measurement of the shape and photometric redshifts of distant galaxies, and
Galaxy Clustering, based on the measurement of the 3-dimensional distribution of galaxies through their spectroscopic
redshifts. The mission is scheduled for launch in 2020 and is designed for 6 years of nominal survey operations. The
Euclid Spacecraft is composed of a Service Module and a Payload Module. The Service Module comprises all the
conventional spacecraft subsystems, the instruments' warm electronics units, the sun shield and the solar arrays. In
particular the Service Module provides the extremely challenging pointing accuracy required by the scientific objectives.
The Payload Module consists of a 1.2 m three-mirror Korsch type telescope and of two instruments, the visible imager
and the near-infrared spectro-photometer, both covering a large common field-of-view enabling the survey of more than
35% of the entire sky. All sensor data are downlinked using K-band transmission and processed by a dedicated ground
segment for science data processing. The Euclid data and catalogues will be made available to the public at the ESA
Science Data Centre.
Euclid is a forthcoming ESA mission, mainly devoted to cosmology. Like WMAP and Planck, it is a survey mission, to be launched in 2019 and injected into an orbit far from the Earth, for a nominal lifetime of 7 years. Euclid carries two instruments on board, the Visible Imager (VIS) and the Near-Infrared Spectro-Photometer (NISP). The NISP instrument includes cryogenic mechanisms, active thermal control and a high-performance Data Processing Unit, and requires periodic in-flight calibrations and monitoring of instrument parameters. To fully exploit the capability of the NISP, careful control of systematic effects is required. Building on previous experiments, we have developed the concept of an integrated instrument development and verification approach, in which the scientific, instrument and ground-segment expertise interact strongly from the early phases of the project. In particular, we discuss the tight integration of test and calibration activities with the Ground Segment, starting from the early pre-launch verification activities. We report here the expertise acquired by the Euclid team in previous missions, citing the literature only for detailed reference, and indicate how it is applied in the framework of the Euclid mission.
We discuss the design and expected performance of STRIP (STRatospheric Italian Polarimeter), an array of coherent receivers designed to fly on board the LSPE (Large Scale Polarization Explorer) balloon experiment. The STRIP focal plane array comprises 49 elements in Q band and 7 elements in W band, using cryogenic HEMT low-noise amplifiers and high-performance waveguide components. In operation, the array will be cooled to 20 K and placed in the focal plane of a ~0.6 m telescope providing an angular resolution of ~1.5 degrees. The LSPE experiment aims at large-scale, high-sensitivity measurements of CMB polarization, with multi-frequency deep measurements to optimize component separation. The STRIP Q-band channel is crucial to accurately measure and remove the synchrotron polarized component, while the W-band channel, together with a bolometric channel at the same frequency, provides a crucial cross-check for systematic effects.
The LSPE is a balloon-borne mission aimed at measuring the polarization of the Cosmic Microwave Background (CMB)
at large angular scales, and in particular to constrain the curl component of CMB polarization (B-modes) produced by
tensor perturbations generated during cosmic inflation in the very early universe. Its primary target is to improve the limit on the tensor-to-scalar perturbation amplitude ratio down to r = 0.03, at 99.7% confidence. A second target is
to produce wide maps of foreground polarization generated in our Galaxy by synchrotron emission and interstellar dust
emission. These will be important to map Galactic magnetic fields and to study the properties of ionized gas and of
diffuse interstellar dust in our Galaxy. The mission is optimized for large angular scales, with coarse angular resolution
(around 1.5 degrees FWHM), and wide sky coverage (25% of the sky). The payload will fly in a circumpolar long
duration balloon mission during the polar night. Using the Earth as a giant solar shield, the instrument will spin in
azimuth, observing a large fraction of the northern sky. The payload will host two instruments. An array of coherent
polarimeters using cryogenic HEMT amplifiers will survey the sky at 43 and 90 GHz. An array of bolometric
polarimeters, using large throughput multi-mode bolometers and rotating Half Wave Plates (HWP), will survey the same
sky region in three bands at 95, 145 and 245 GHz. The wide frequency coverage will allow optimal control of the
polarized foregrounds, with comparable angular resolution at all frequencies.
In this paper we present the test results of the qualification model (QM) of the LFI instrument, which is being
developed as part of the ESA Planck satellite. In particular we discuss the calibration plan which has defined
the main requirements of the radiometric tests and of the experimental setups. Then we describe how these
requirements have been implemented in the custom-developed cryo-facilities and present the main results. We
conclude with a discussion of the lessons learned for the testing of the LFI Flight Model (FM).
X-Shooter is the first second-generation instrument to be installed at Paranal, early in 2008. It is a single-target spectrograph covering, in a single exposure, a wide spectral range from the UV to the K' band with maximum sensitivity. Another key feature of the instrument is its fast response, obtained by making it simple and easy to operate. Compared to other large VLT instruments, X-Shooter has a relatively small number of moving functions, but the requirements on the instrument software as a whole are nevertheless quite demanding. In order to cover the wide spectral range with high efficiency, the instrument is split into three arms, one of which is cryogenically cooled. The high-level coordinating software architecture provides all the facilities for parallel operation with the maximum achievable level of synchronicity. The low-level X-Shooter requirements are also quite stringent, since an active piezoelectric actuator system is envisaged to compensate for slit misalignments among the three arms. The low-level architecture, besides the typical control of single devices (such as motors, sensors and lamps), handles the required real-time operations. Software integration and testing are also an issue, since X-Shooter is a collaborative effort among several institutes spread around Europe. The whole instrument software architecture is presented here, with details of its main modules, such as the instrument control software, the observation software and the observing-template structure, and their integration in the VLT software environment.
The Workstation Software System (WSS) is the high-level control software of the Italian Galileo Galilei Telescope, located on La Palma, Canary Islands, and developed in the early 1990s for HP-UX workstations. WSS may be seen as a middle-layer software system that manages the communication between the real-time systems (VME), the different workstations and the high-level applications, providing a uniform distributed environment. The project to port the control software from the HP workstations to the Linux environment started at the end of 2001. It aims to refurbish the control software by introducing some of the new software technologies and languages available for free in the Linux operating system. The project was realized by gradually substituting each HP workstation with a Linux PC, with the goal of avoiding major changes to the original software running under HP-UX. Three main phases characterized the project: creation of a simulated control room with several Linux PCs running WSS (to check all the functionality); insertion into the simulated control room of some HPs (to check the mixed environment); and substitution of the HP workstations in the real control room. From a software point of view, the project introduces some new technologies, like multi-threading, and the possibility of developing high-level WSS applications in almost any programming language that implements Berkeley sockets. A library for developing Java applications has also been created and tested.
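As a sketch of what "any language that implements Berkeley sockets" buys in practice, the toy client/server below exchanges one newline-terminated command over TCP. The message format, the command string and the port handling are hypothetical illustrations, not the actual WSS protocol.

```python
import socket
import threading

def serve_once(srv):
    """Toy stand-in for a WSS node: accept one connection, read one
    newline-terminated command and acknowledge it."""
    conn, _ = srv.accept()
    with conn, srv:
        cmd = conn.makefile("r").readline().strip()
        conn.sendall(("ACK " + cmd + "\n").encode("ascii"))

def send_command(cmd, host, port):
    """Client side: send one command line and return the reply line."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall((cmd + "\n").encode("ascii"))
        return sock.makefile("r").readline().strip()

# Bind to an ephemeral local port, then serve a single request in the background.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=serve_once, args=(srv,), daemon=True).start()

reply = send_command("POINT RA=12.5 DEC=-30.1", "127.0.0.1", port)
# reply == "ACK POINT RA=12.5 DEC=-30.1"
```

Because the interface is just a byte stream over a socket, the same exchange can be written in C, Java or any other language with a sockets API, which is the portability argument made above.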
A geographically distributed software project needs a well-defined software integration and development plan to avoid extra work in the pipeline-creation phase. Here we describe the rationale in the case of the Planck/LFI DPC project and what was designed and developed to build the integration and testing environment.
In alt-azimuth telescopes, Nasmyth foci are convenient focal planes that reduce the mechanical complexity and cost of instruments. However, they present some disadvantages, mainly due to field rotation. This is a particularly crucial point in polarimetric observations: because of the folding mirror, the polarization state of the incoming radiation is modified, so that, to avoid systematic errors, the instrumental polarization has to be removed as a function of the telescope position. A model of the polarization introduced by the Telescopio Nazionale Galileo (TNG) at its focal plane is presented. The model takes into account the physical and geometrical properties of the optical system, the complex refraction index of the mirrors and their relative positions, deriving the instrumental polarization as a function of the pointing coordinates of the telescope. This model has been developed by means of Mueller matrix calculus. The telescope instrumental polarization has been measured by following standard polarization stars at different telescope positions. The mathematical model discussed here was confirmed by comparing the theoretical results with the experimental measurements at the TNG instruments.
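The kind of calculation described above can be sketched with a few Mueller matrices: a fold mirror sandwiched in a reference frame that rotates with the telescope elevation. The geometry below is deliberately simplified and the reflectances and retardance are illustrative placeholders, not the measured TNG mirror properties.

```python
import numpy as np

def rotation_mueller(theta):
    """Mueller matrix rotating the Stokes Q/U reference frame by theta (radians)."""
    c, s = np.cos(2.0 * theta), np.sin(2.0 * theta)
    return np.array([[1.0, 0.0, 0.0, 0.0],
                     [0.0,   c,   s, 0.0],
                     [0.0,  -s,   c, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def mirror_mueller(rs, rp, delta):
    """Mueller matrix of a metallic fold mirror with amplitude reflectances
    rs, rp and retardance delta between the s and p components."""
    a = 0.5 * (rs**2 + rp**2)
    b = 0.5 * (rs**2 - rp**2)
    c = rs * rp * np.cos(delta)
    d = rs * rp * np.sin(delta)
    return np.array([[a,   b,   0.0, 0.0],
                     [b,   a,   0.0, 0.0],
                     [0.0, 0.0,  c,   d],
                     [0.0, 0.0, -d,   c]])

def instrumental_polarization(elevation, mirror):
    """Fractional Stokes (Q/I, U/I) induced on unpolarized light by a Nasmyth
    fold whose projected orientation follows the telescope elevation
    (simplified geometry; inputs are illustrative, not TNG values)."""
    s_in = np.array([1.0, 0.0, 0.0, 0.0])  # unpolarized input Stokes vector
    s_out = rotation_mueller(elevation) @ mirror @ s_in
    return s_out[1] / s_out[0], s_out[2] / s_out[0]
```

With this toy geometry the degree of induced polarization is fixed by the mirror (|b|/a), while its orientation in the Q/U plane rotates with elevation, which is exactly why the correction must be applied as a function of the pointing coordinates.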
The Italian National "Galileo" Telescope (Telescopio Nazionale "Galileo" - TNG) is a 3.5 m telescope located on La Palma, in the Canary Islands, which saw first light in 1998. The available TNG subsystems include four first-generation instruments, plus adaptive optics and the meteo and seeing towers; the control and data handling systems are tightly coupled, allowing a smooth data flow while preserving integrity. As part of the data handling systems, a local "Archive at the Telescope" (AaT) is produced, and the production of database tables and hard media for the TNG Long-Term Archive (LTA) is supported. The implementation of an LTA prototype has recently been completed, and the implementation of its operational version is being planned by the Italian National Institute for Astrophysics (INAF).
A description of the AaT and prototype LTA systems is given, including their data handling/archiving and data retrieval capabilities. A discussion of system features and lessons learned is also included, with particular reference to the issues of completeness and data quality. These issues are particularly important in the perspective of preparing a national facility for the archives of data from ground-based telescopes, and of its possible inclusion as a data provider in the Virtual Observatory framework.
The Telescopio Nazionale Galileo (TNG) is now operational. One of its main goals is to provide high-quality images in a wide range of operating conditions and for several observing modes. Telescope pointing and tracking performance can heavily affect the achievement of this requirement, and particular care must be taken in order to reach the highest possible accuracy. Control of the three axes is implemented in a single VME controller (minimizing data exchange over the TNG LAN), and the telescope mount positions are computed from the object coordinates taking into account the physical and environmental effects that alter the object's apparent position. The pointing and tracking performance is improved in two ways. Systematic errors are mostly corrected using a model compensation that can be introduced in the coordinate-transformation flow; the telescope model is derived by off-line analysis of the pointing errors on a specific set of data. In this way we have so far improved the pointing and tracking performance by a factor of about 30. The tracking drift due to non-systematic errors and to residuals of the systematic errors is corrected using a guide camera, which mounts an 800 × 576 CCD (0.35 arcsec/pixel scale). Guide stars are selected from an on-line star catalogue, and light from the selected star is focused on the guide camera by moving a probe housed in the rotator-adapter module. The tracking-drift computation is performed on a two-axis scheme (Right Ascension and Declination), with sub-pixel accuracy. Here the first results on the telescope pointing and tracking accuracy are presented.
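As an illustration of how a sub-pixel drift estimate can be obtained from a guide-camera frame, the sketch below computes an intensity-weighted centroid of the guide-star image and converts the offset to arcseconds using the 0.35 arcsec/pixel plate scale quoted above. The thresholding and the centre-of-gravity algorithm are generic assumptions, not the actual TNG guiding code.

```python
import numpy as np

def subpixel_centroid(image, threshold=0.0):
    """Intensity-weighted centroid (centre of gravity) of a guide-star image,
    giving the star position at sub-pixel accuracy. `threshold` suppresses
    the background level before weighting (illustrative choice)."""
    img = np.clip(np.asarray(image, dtype=float) - threshold, 0.0, None)
    total = img.sum()
    ys, xs = np.indices(img.shape)  # row index = y, column index = x
    return (xs * img).sum() / total, (ys * img).sum() / total

def drift_arcsec(centroid, reference, scale=0.35):
    """Tracking drift in arcsec on the two guide-camera axes, using the
    0.35 arcsec/pixel scale of the 800 x 576 guide CCD."""
    return tuple(scale * (c - r) for c, r in zip(centroid, reference))

# Example: a synthetic Gaussian star centred at (15.3, 12.6) pixels.
ys, xs = np.indices((32, 32))
star = np.exp(-((xs - 15.3) ** 2 + (ys - 12.6) ** 2) / (2 * 1.5 ** 2))
cx, cy = subpixel_centroid(star)  # recovers the centre to well under 0.05 px
```

In a real guiding loop, the per-axis offsets returned by `drift_arcsec` would be rotated into Right Ascension and Declination corrections and fed back to the mount.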