Due to the advent of large format detectors, integral-field spectrographs (IFSs) have become an increasingly popular class of astronomical instrumentation. IFSs are hybrids of traditional imaging cameras and slit spectrographs; they obtain a spectrum from each spatial element in a two-dimensional (2-D) field-of-view, producing an (x, y, λ) data cube. The first realization of an IFS used a bundle of fibers to create a pseudoslit,1 while TIGER2 was the first IFS to use a lenslet array. Modern IFSs generally use either fiber bundles (MaNGA)3 or image slicers (SINFONI, MUSE, and NIRSpec)4–6 to rearrange the field-of-view into a long pseudoslit, or lenslet arrays (OSIRIS)7 to reimage spatial elements into small spots suitable for dispersion.
IFSs have become especially popular tools for high-contrast imaging. Diffraction speckles have a different chromatic behavior from astrophysical sources; an IFS data cube can exploit this to achieve higher contrasts.8,9 An IFS also naturally enables the extraction of a planet or brown dwarf’s spectrum, providing a probe of the object’s temperature, chemistry, and gravity.10–14 IFSs combined with second-generation adaptive optics systems are now operational on Gemini South (GPI),15 the VLT (SPHERE),16 and Palomar (Project 1640).17 These new high-contrast instruments have recently discovered and characterized the low-mass companion to 51 Eri.18 Future NASA mission studies such as Exo-C19 and the WFIRST Coronagraph Instrument have baselined high-contrast IFSs as their science cameras.20,21
Data reduction and processing for IFSs have long presented problems. The reduction pipeline for GPI22,23 is an ongoing, years-long effort partially built on legacy software from OSIRIS (whose pipeline also remains, to some degree, a work in progress). This is the result of many complexities inherent in IFS data. There are now two flatfields (one for the detector and one for the illumination of the fibers, lenslets, or sliced image plane). For a lenslet-based IFS, the point-spread functions (PSFs) of the input optics and the lenslets are both important. The finite size of the lenslet PSFs means that neighboring spectra partially overlap one another; this must be corrected or accounted for during the extraction. An IFS requires the wavelength and spectrophotometric calibrations of a spectrograph as well as the astrometric calibration of an imager.
This paper presents the data reduction pipeline for the CHARIS IFS on the Subaru telescope. CHARIS, the Coronagraphic High Angular Resolution Imaging Spectrograph, is a lenslet-based near-infrared IFS located on the Nasmyth platform behind the adaptive optics systems AO18824 and SCExAO.25 Section 2 summarizes the design and properties of the IFS, while the rest of the paper presents the software that extracts the data cube. Section 3 discusses the construction of the pixel-by-pixel count rates from a sequence of raw reads, Sec. 4 discusses our calibration procedure and associated data products, and Sec. 5 presents our algorithms for extracting the data cube. Section 6 summarizes the software’s parameters and settings, and Sec. 7 shows the software’s performance. We discuss and conclude with Sec. 8.
This software package constructs the data cube and its inverse variance from a sequence of CHARIS reads. We defer the necessary steps of image registration and spectrophotometric and astrometric calibration to a separate software package that is currently under development. These steps depend on the SCExAO system in front of CHARIS and the calibrations changed when SCExAO was rebuilt in July of 2016; they also rely on a system of induced satellite spots that still must be manually controlled by the SCExAO team.26 These elements of the software, in addition to algorithms for angular and spectral differential imaging, will operate only on the data cubes produced by this pipeline and will be presented in a follow-up paper.
CHARIS Integral-Field Spectrograph
CHARIS is a new high-contrast IFS for the Subaru Telescope. Its scientific, conceptual, optical, and mechanical designs are summarized by McElwain et al.,27 Peters et al.,28 Galvin et al.,29 and Peters-Limbach et al.,30 respectively. Groff et al.31 summarized laboratory testing performed after CHARIS was built but before it was transported to the summit. This section briefly reviews the basic parameters and observing modes of CHARIS; we refer the reader to these other papers and to a forthcoming instrument paper for details.
CHARIS is a lenslet-based diffraction-limited spectrograph operating in the near-infrared. Table 1 summarizes its basic as-built properties. CHARIS uses one of two prisms behind the lenslet array to disperse the light from each lenslet into a short microspectrum. The detector image consists of a grid of these microspectra, one per lenslet, each containing the light incident on a single 16.4 mas square lenslet and together covering the full field-of-view.
Table 1: Basic CHARIS parameters.

|Lenslet size|16.4 mas|
|Wavelength coverage|1.15 to 2.4 μm|
|Available modes|J, H, or K (high-resolution prism); broadband or ND^a (low-resolution prism)|

^a The ND filter is ND3 (10^-3 transmission) from 1.15 to 2.4 μm, opaque at other wavelengths.
CHARIS offers five observing modes: three with a high-resolution prism (J, H, and K) and two with a low-resolution prism (broadband and ND). We measure its as-built spectral resolution using the two-pixel definition of the resolving power.31 The detector’s reference pixels enable us to model and remove correlated read noise (Sec. 5.3).
CHARIS is controlled by Linux-based software and is integrated into the software environment of the observatory. It has only three moving parts to be controlled during observations: a shutter, the five-slot filter wheel (J, H, K, broadband, and ND), and the three-position prism slider (low resolution, high resolution, and empty). The only other command is to reset and read out the detector. The rest of this paper presents the software for reconstructing the individual lenslet microspectra, i.e., the data cube, from these reads.
From the Reads to a Ramp
CHARIS’s detector is a Hawaii2-RG (H2RG), a HgCdTe CMOS device in which each pixel has its own amplifier. The pixel is read out by measuring the voltage across it relative to a reference voltage in the system. In CHARIS’s configuration, 32 readout channels each read a 64 × 2048 pixel strip at a rate of 100 kHz. Including some down time as the readout proceeds to the next row of pixels, it takes 1.47 s to read out the full array. Resetting the detector is done pixel-by-pixel and also takes 1.47 s. H2RG detectors have some generic shortcomings, including persistence after exposure to a bright source32 and read noise strongly correlated among readout channels.33
CHARIS always saves every one of its raw reads to disk. The first step of the data extraction is to fit for the count rate at each pixel from these raw reads. In the limit of a linear pixel response and the dominance of read noise over photon noise, the best-fit count rate a in a pixel may be derived from a linear least-squares fit to its sequence of N reads S_i, taken at intervals of the read time t_r:23,34–37

a = Σ_{i=1}^{N} w_i S_i, with w_i = 12 [i − (N + 1)/2] / [N (N² − 1) t_r].  (3)
In the limit of uncorrelated read noise, up-the-ramp (UTR) sampling with these weights improves on the signal-to-noise ratio (SNR) of correlated double sampling (CDS), the normalized difference of the first and last reads, by a factor

SNR_UTR / SNR_CDS = √[N (N + 1) / 6 (N − 1)].  (4)
If there are only two reads, i.e., N = 2, UTR and CDS are equivalent, and the ratio in Eq. (4) is unity. It is never less than one and increases asymptotically as √(N/6) for N ≫ 1. When photon noise dominates, UTR is asymptotically noisier than CDS by about 23%.38 This may be corrected by dynamically choosing different weights for each read at each pixel (e.g., Ref. 39), but such an approach is not easily compatible with our nonlinear fit (Sec. 3.2). In the high signal-to-noise regime, we are limited by the fidelity of our models of the microspectra rather than by photon noise.
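As an illustration, the UTR slope and the read-noise-limited SNR ratio above can be written in a few lines of Python. The function names are illustrative, and the weights are the standard least-squares slope coefficients rather than the pipeline's exact implementation:

```python
import numpy as np

def utr_slope(reads, t_r=1.47):
    """Least-squares (up-the-ramp) count rate from N equally spaced reads.

    The weights are the ordinary least-squares slope coefficients for
    reads taken at times i * t_r; illustrative, not the pipeline's code.
    """
    n = len(reads)
    i = np.arange(1, n + 1)
    w = 12.0 * (i - (n + 1) / 2.0) / (n * (n**2 - 1) * t_r)
    return np.dot(w, reads)

def snr_ratio_utr_cds(n):
    """SNR(UTR)/SNR(CDS) in the read-noise-limited regime."""
    return np.sqrt(n * (n + 1) / (6.0 * (n - 1)))
```

Because the weights sum to zero, the fitted slope is insensitive to the reset level; a constant offset added to all reads leaves the count rate unchanged.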
We compute a variance on a given pixel’s count rate from the variance in the count rates of the reference pixels in its channel. We then add photon noise assuming our configured detector gain to be correct; we have verified this gain using the frame-to-frame scatter in count rate as a crude measure of shot noise. In addition, we allow the user to enforce an additional error equal to a fixed fraction (with a suggested value of a few percent) of the count rate seen by each pixel. This additional error accounts for our imperfect model of the microspectra and produces reduced χ² values close to unity when fitting these models to the ramps; we add it in quadrature to the other errors. The variances are saved as inverse variances, so bad pixels may be masked by giving them an inverse variance of zero. Cosmic ray hits are rare in short CHARIS exposures, and we do not implement cosmic ray rejection within our ramps.38,40 Instead, we mask pixels in which a single read exceeds that pixel’s mean count rate by a factor of 5 and its mean read noise by a factor of 10. Less than 0.03% of pixels are masked in this way in a typical 2-min exposure.
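A minimal sketch of this error budget, with the gain and fractional-error defaults chosen for illustration only (the function name and its exact inputs are assumptions, not the pipeline's interface):

```python
import numpy as np

def inverse_variance(rate, read_var, gain=2.0, frac_err=0.02, t_exp=1.0, bad=None):
    """Per-pixel inverse variance for a fitted count rate (sketch).

    read_var : read-noise variance on the count rate (from reference pixels)
    gain     : assumed e-/ADU conversion (hypothetical value)
    frac_err : extra fractional error for imperfect microspectrum models
    """
    photon_var = np.clip(rate, 0, None) / (gain * t_exp)   # shot noise
    var = read_var + photon_var + (frac_err * rate) ** 2   # added in quadrature
    ivar = 1.0 / var
    if bad is not None:
        ivar[bad] = 0.0        # masked pixels carry zero weight downstream
    return ivar
```

Storing the inverse variance rather than the variance lets masked pixels be represented exactly (zero weight) without special-casing them in later fits.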
The CHARIS software generally uses the UTR coefficients in Eq. (3). The following subsections discuss the removal of an artifact in the first read of a CHARIS ramp and our handling of nonlinearity and saturation. We then briefly discuss the read noise properties of our ramps. In Sec. 5.3, we will take advantage of the relatively low dimensionality of much of the read noise, fitting it out when extracting data cubes.
Correcting Artifacts in the First Read
The first read in a CHARIS ramp is contaminated by an exponential decay of the reference voltage. Figure 1 shows the difference between the first and second reads of a dark frame, while Fig. 2(a) shows the lower-left corner of a 17-read ramp taken in CHARIS’s low-resolution mode without correcting for this decay. Given CHARIS’s baffling and low level of dark current, the mean count rate in a dark frame is much lower than the read noise; Fig. 1 shows a noisy decay to zero. For the time series in Fig. 1, the 2-D readout of each channel has been mapped back to one dimension assuming a pixel rate of 100 kHz and a short downtime between reading successive rows of pixels on the detector. This exponential decay becomes negligible by 100 to 200 ms (a small fraction of the 1.47-s full-frame readout time). Similar artifacts have previously been noted in H2RGs and in related detectors.37,41
The black lines in Fig. 1 take the form of an exponential decay, A_c exp(−t/τ), with a separate amplitude A_c for each readout channel and a single decay constant τ shared by all channels.
In a CHARIS ramp, the difference between the counts in the first and subsequent reads is the sum of the read noise, the exponential decay of the reference voltage, and the photon rates scaled by the gain. We first estimate the photon count rates by fitting a ramp only to the second and subsequent reads, ignoring the contaminated first read. We then use the fitted pixel-by-pixel count rates and reset values to obtain a modeled number of counts at the first read. The difference between the actual and modeled counts in the first read is the sum of read noise and the reference voltage artifact that we wish to remove. We must then perform a 34-parameter nonlinear fit. Luckily, the fit is only nonlinear in a single parameter, the decay constant τ. Once the decay constant is fixed, the 34-parameter nonlinear optimization becomes 33 decoupled one-parameter linear optimizations. By solving these linear problems for each value of τ, we may reduce the problem to a one-dimensional (1-D) nonlinear optimization for τ. We first guess the value of τ from the known behavior of the CHARIS detector and then iteratively fit parabolas to converge to the best-fit decay constant. This is equivalent to Newton–Raphson iteration on τ. Once we have the decay constant, the other 33 parameters may be obtained by straightforward linear optimizations. The entire process for a typical CHARIS ramp takes a few hundred milliseconds on a laptop computer.
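The reduction of the nonlinear fit to decoupled linear fits can be sketched as follows. For brevity, this sketch scans a coarse grid of decay constants instead of the parabola/Newton iteration, and all names are illustrative:

```python
import numpy as np

def fit_shared_decay(t, resid, tau_grid):
    """Fit A_c * exp(-t/tau) to each channel's residuals with one shared tau.

    For a fixed tau the per-channel amplitudes A_c are decoupled linear
    least-squares solutions; the outer problem is then 1-D in tau.  Here we
    minimize over a user-supplied grid of tau values for simplicity.
    resid: (n_channels, n_samples) array; t: (n_samples,) times.
    """
    def chi2(tau):
        e = np.exp(-t / tau)
        amps = resid @ e / (e @ e)          # decoupled linear solutions
        model = np.outer(amps, e)
        return np.sum((resid - model) ** 2), amps

    chis = [chi2(tau)[0] for tau in tau_grid]
    best = tau_grid[int(np.argmin(chis))]
    return best, chi2(best)[1]
```

The inner linear solve makes each trial value of tau cheap, which is what makes the 1-D outer optimization (grid, parabola, or Newton) fast in practice.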
Figure 2(b) shows the 17-read ramp of (a) after removing the exponential decay of the reference voltage. There are no longer any artifacts visible, and the first read may now be used to increase the integration time on-source and reduce the read noise. The first read consists of over four million pixel values, many orders of magnitude more than the 34 parameters describing the reference voltage artifact; fitting out this artifact uses a negligible amount of the information available in the first read.
Nonlinearity and Saturation
As CHARIS’s H2RG approaches saturation, its response becomes nonlinear. The nonlinearity sets in gradually before the pixel’s response drops sharply to zero. The H2RG also suffers from bleeding into adjacent pixels. After a pixel saturates, its four nearest neighbors see an immediate and compensating increase in their count rates, while its four next-nearest neighbors (the nearest along a diagonal) see a smaller, but still significant, increase. Pixels that do not adjoin the saturated pixel show no significant increase in their count rates until an adjacent pixel saturates.
Figure 3 shows the detector’s response to saturation in a cutout of a 120-read ramp in which the most strongly illuminated pixels saturate early in the ramp. The lines in Fig. 3(a) are labeled with the pixel number, and an inset in Fig. 3(b) provides the key. The increase in count rates in the peripheral pixels closely approximates the lost counts from the saturated pixel(s). The total instantaneous count rate in a box around the saturated pixels decreases only slightly between the first few and the last few reads, despite the fact that saturated pixels account for a large fraction of the photons in the initial reads.
The H2RG’s saturation behavior means that the saturation of one pixel immediately corrupts the count rates of its eight nearest neighbors but can be neglected for pixels that are farther away. We fit for each pixel’s count rate using only the reads for which neither the pixel in question nor any of its immediate neighbors have saturated.
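A sketch of this masking rule, assuming a cumulative ramp and a simple saturation threshold; the dilation of the saturation mask over the 8 nearest neighbors is the key step, and all names are illustrative:

```python
import numpy as np

def last_good_read(ramp, sat_level):
    """Number of reads usable per pixel: a read is usable only if neither
    the pixel nor any of its 8 neighbors has yet saturated (sketch).

    ramp: (n_reads, ny, nx) cumulative counts.
    """
    n_reads, ny, nx = ramp.shape
    sat = ramp >= sat_level                      # True once a pixel saturates
    bad = sat.copy()
    # Dilate the saturation mask to the 8 nearest neighbors at each read
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.zeros_like(sat)
            ys = slice(max(dy, 0), ny + min(dy, 0))
            yd = slice(max(-dy, 0), ny + min(-dy, 0))
            xs = slice(max(dx, 0), nx + min(dx, 0))
            xd = slice(max(-dx, 0), nx + min(-dx, 0))
            shifted[:, yd, xd] = sat[:, ys, xs]
            bad |= shifted
    return np.sum(~bad, axis=0)                  # usable reads per pixel
```

Because counts accumulate monotonically, the usable reads for each pixel form a prefix of the ramp, so a simple count of unflagged reads identifies the fit range.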
We measure our H2RG’s nonlinearity using a series of long exposures: a 1000-read ramp taken with the detector almost uniformly illuminated and 16 120- to 160-read ramps with the detector sparsely illuminated by monochromatic light passing through the lenslets. In both cases, we fit for each pixel’s count rate using only the earliest reads. We then compare the actual counts at subsequent reads with the counts expected from the initial reads assuming perfect linearity.
Figure 4 shows the density of points in actual versus predicted counts on the detector. The density is independently normalized at each predicted count to limit the dynamic range, and there are several billion points in each figure. A perfectly linear detector should be nearly symmetric about the dashed line (biases from read and photon noise are negligible at these count rates). Our H2RG falls below the line, indicating a loss of sensitivity as the pixels approach well capacity. The solid blue line shows our adopted nonlinear response: linear at low counts, continuously matched to a cubic fit up to saturation. Our method is distinct from that of Finger et al.,36 who use only unsaturated reads to fit a ramp but assume the pixel response to remain linear.
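One possible parametrization of such a response, continuous at the threshold by construction; the coefficients, threshold, and function name here are illustrative, not the pipeline's adopted fit:

```python
import numpy as np

def pixel_response(x, x0, coeffs):
    """Sketch of a response that is the identity below x0 and cubic above.

    coeffs = (c1, c2, c3) parametrize f(x) = x0 + c1*u + c2*u^2 + c3*u^3
    with u = x - x0, so f(x0) = x0 and the response is continuous at the
    matching point by construction (c1 near 1 for a gentle departure).
    """
    x = np.asarray(x, float)
    u = np.clip(x - x0, 0, None)
    c1, c2, c3 = coeffs
    return np.where(u > 0, x0 + c1 * u + c2 * u**2 + c3 * u**3, x)
```

Writing the cubic in the offset variable u = x - x0 makes the continuity constraint automatic, so fitting the cubic coefficients never breaks the low-count linear regime.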
Different ramps suggest nonlinear responses that differ by up to ∼2% near saturation. We make no attempt to explain or account for this in the data, but we do allow the user to add a fixed fraction of the count rate as an uncertainty. This error term also accounts for imperfect modeling of the microspectra, and an appropriate value (typically ∼5%) gives a χ² per pixel of order unity after fitting all of the 2-D microspectra. This 5% uncertainty is significantly larger than the uncertainty in the nonlinear response.
Our adopted pixel response is linear up to a threshold at a moderate fraction of well capacity. For pixels that remain below this value, we therefore use the UTR fit described in the first paragraphs of Sec. 3. We individually fit pixels that exceed this threshold. With a fixed nonlinear pixel response, the nonlinear fit has two free parameters: the initial count rate a and the reset value b. The best-fit count rate minimizes

χ² = Σ_{i=1}^{N} [S_i − b − f(a i t_r)]² / σ_i²,

where f is the fixed nonlinear pixel response.
In our case, the modeled count f(a i t_r) is a nonlinear function of the count rate a, and the count rate may no longer be obtained as a weighted sum of the reads with the coefficients from Eq. (3). However, at a fixed count rate, finding the best-fit reset value remains a linear problem, and the best χ² at this count rate is trivial to compute. We use this fact to reduce the 2-D nonlinear minimization in a and b to a 1-D nonlinear minimization in a (the coefficients of the cubic function that defines the nonlinear response remain fixed). We begin with a guess from the UTR count rate and locally fit a parabola to χ²(a); the vertex of the parabola is our next guess for the count rate. As with our procedure to fit for the initial exponential decay of the reference voltage, this is equivalent to Newton–Raphson iteration on a. The algorithm converges in only a few steps and requires a negligible amount of computation for a typical ramp.
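The parabola-vertex iteration can be sketched as follows, with the reset value solved linearly at each trial count rate. Uniform errors and all names are assumptions of this sketch, not the pipeline's exact code:

```python
import numpy as np

def fit_rate_nonlinear(reads, t, f, a0, sigma=1.0, n_iter=8, da=None):
    """Refine a count rate a by parabola-vertex (Newton-like) iteration
    on chi^2(a); the reset value b is a linear solve at each trial a.

    reads: measured counts; t: read times; f: fixed nonlinear response.
    """
    def chi2(a):
        model = f(a * t)
        b = np.mean(reads - model)       # best-fit reset value: linear
        return np.sum(((reads - b - model) / sigma) ** 2)

    a = a0
    da = da if da is not None else 0.05 * abs(a0) + 1e-3
    for _ in range(n_iter):
        c_m, c_0, c_p = chi2(a - da), chi2(a), chi2(a + da)
        denom = c_m - 2 * c_0 + c_p
        if denom <= 0:
            break                        # not locally convex; keep current a
        a += 0.5 * da * (c_m - c_p) / denom   # vertex of the local parabola
        da *= 0.5                        # shrink the bracket as we converge
    return a
```

For a locally quadratic chi^2 the vertex step is exact, which is why only a few iterations are needed in practice.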
Read Noise Properties
CHARIS’s H2RG detector has read noise that is correlated among the readout channels; this behavior is typical of H2RGs.33 It may be suppressed by fitting out power on various timescales using interspersed reference pixels,33 using (nearly) unilluminated light-sensitive pixels,42 or using different weightings of the reference pixels at the detector edges.43 In CHARIS, there is an additional component of read noise that is largely shared by alternating readout channels. We read out our detector using 32 readout channels, each 64 × 2048 pixels in size. The even and odd channels each have their own shared component of read noise: removing a single scaled template from all channels achieves only part of the noise suppression obtained by using different templates for the even and odd channels.
The power spectrum of CHARIS’s read noise spikes at a range of frequencies in the 1- to 10-kHz range, most of which do not match any known frequencies of the system. The noise is also highly variable with time, both in amplitude and in its power spectrum. In lab tests during CHARIS’s assembly, the shared read noise was comparable in power to the independent read noise in each channel. On Subaru’s Nasmyth platform, the correlated component of the read noise has gotten much worse; its variance can be more than 10 times that of the independent read noise depending on the date and the readout channel. We are currently investigating this source of noise and attempting to fix it in hardware.
Figure 5 shows three representative realizations of CHARIS read noise on three widely separated dates. In all cases, we have used a long ramp to predict the count rate at the second read of the ramp and then used the difference between the actual and expected counts in the second read as a measure of the read noise. The noise in a CDS image (a difference of two reads) would be √2 times the noise shown.
Figure 5 shows that the read noise on Subaru’s Nasmyth platform is a serious problem, with large CDS-equivalent read noise in some channels at some times, and that the read noise properties vary with time as CHARIS and neighboring instruments are moved and reconfigured. The lower-right panel of Fig. 5 demonstrates how much of this read noise is shared among alternating readout channels. In this panel, we have constructed two read noise templates: one for the even channels and a separate one for the odd channels. We then rescaled the appropriate template to each channel and removed it. This procedure reduced the read noise by a channel-dependent factor of 1.3 to 5.8. If there were no correlated read noise, this procedure would only be expected to reduce the variance by ∼1/16, or the noise by ∼3% (since 16 channels were used to construct each template).
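The even/odd template construction and removal can be sketched as follows, assuming a read-noise image whose readout channels are vertical strips; the median-based template and function name are illustrative choices, and the pipeline's actual suppression operates within the microspectra fit (Sec. 5.3):

```python
import numpy as np

def remove_channel_noise(frame, n_channels=32):
    """Suppress read noise shared among alternating readout channels.

    Builds one template for even and one for odd channels (median across
    the channels in each group), then subtracts a per-channel scaled copy.
    frame: (ny, nx) noise image with channels as vertical strips.
    """
    ny, nx = frame.shape
    w = nx // n_channels
    chans = frame.reshape(ny, n_channels, w).transpose(1, 0, 2)  # (ch, ny, w)
    out = np.empty_like(chans)
    for parity in (0, 1):
        group = chans[parity::2]
        template = np.median(group, axis=0)          # shared-noise estimate
        tt = np.sum(template * template)
        for k, ch in enumerate(group):
            scale = np.sum(ch * template) / tt       # per-channel amplitude
            out[2 * k + parity] = ch - scale * template
    return out.transpose(1, 0, 2).reshape(ny, nx)
```

When the noise really is a single pattern scaled differently per channel, this removes it almost entirely; with purely independent noise it removes only the ~1/16 of the variance absorbed by the template.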
Real CHARIS ramps are packed with microspectra, preventing us from simply removing the correlated read noise as described above. However, the combination of our method of fitting the microspectra (Sec. 5.2) and the generous spacing of the spectra on the detector does enable us to achieve effective read noise suppression. We discuss this in more detail in Sec. 5.3.
Calibrations for Cube Extraction using Monochromatic Flatfields
A CHARIS image consists of microspectra arrayed in a grid on the detector (see Fig. 2); these must be extracted into a data cube. Each microspectrum is the integral of the monochromatic lenslet spots over the spectrum of light that the lenslet sees, convolved with the pixel response function. Extracting the source spectrum requires knowing the pixel locations where each wavelength of light falls for each lenslet; this is the wavelength solution. Optimal extraction and χ² extraction, the two main algorithms included in the CHARIS data reduction software, also require knowledge of the monochromatic lenslet spots (the lenslet PSF). In the rest of this paper, we will use the terms lenslet PSF and PSFlet interchangeably.
To both derive the wavelength solution and measure the wavelength-dependent PSFlets, we inject a supercontinuum source with a narrowband tunable filter into an integrating sphere to uniformly illuminate CHARIS’s lenslet array with monochromatic light. The tunable filter has a width of 5 nm, for a spectral resolution of several hundred, well in excess of CHARIS’s resolving power in its high spectral resolution mode. Such a calibration strategy would need to be revised for an instrument with higher spectral resolution, though a lamp with well-spaced emission lines might serve as an effective substitute for our tunable filter. We have also integrated three narrowband filters, one each in the J, H, and K bands, into SCExAO’s optics. With one of these filters in place, we may uniformly illuminate CHARIS’s lenslet array with any infrared-bright lamp or even the twilight sky.
We break the CHARIS calibration procedure into two steps. We first use our supercontinuum source and tunable filter to gradually step through the wavelength. This enables us to measure the locations of the lenslet PSFs on the detector and to derive the wavelength solution. We also use these images to extract the position-dependent lenslet PSFs. This first step, a full sequence of calibration images, is rarely (if ever) repeated. As of publication, the pipeline includes calibration products derived from a July 2016 calibration sequence. A new calibration sequence offers negligible improvements even one year and several cooling cycles later. The second step is to use a narrowband flat to derive a correction to the wavelength solution once per night. This section describes the process in detail and summarizes the final calibration files that it produces.
The typical user of CHARIS will not need to derive the full wavelength solution or the monochromatic lenslet PSFs; we have calculated these and distribute them with the source code. The user will, however, need to build the appropriate calibration files for a given observing mode from a single narrowband flat as thermal cycles of the instrument induce small shifts of the microspectra on the detector. We include the script buildcal for this purpose. It takes as input the raw reads from a single narrowband flat and writes all of the necessary calibration files to the directory from which it was run.
Full Calibration Sequences
A full calibration sequence consists of connecting our supercontinuum source to a tunable filter, coupling it to an integrating sphere, and injecting the light into CHARIS to illuminate the lenslet array. We then step through the wavelength with a step size comparable to or slightly larger than CHARIS’s spectral resolution. This yields a series of about 10 to 15 images per observing mode with well-separated lenslet PSFs. Figure 6 shows these spectrally unresolved flatfields, beginning at 1.35 μm, for a small subregion of the detector.
We use these calibration sequences to measure the PSFlet locations for each lenslet as a function of wavelength. We assume a cubic polynomial mapping between integer lenslet coordinates (i, j) and floating point pixel coordinates (x, y), with the polynomial coefficients being functions of wavelength.
We first lightly smooth our monochromatic images with a narrow Gaussian; we use the known lenslet pitch and rotation as our initial guesses for the linear coefficients of the polynomial. We then maximize the sum of the interpolated intensities at the lenslet spot locations by adjusting the coefficients. Once we have derived these coefficients for one wavelength, we estimate the offset in the dispersion direction for the next wavelength using a grid search. We combine this offset with the polynomial coefficients from the previous step to form the initial guess for this optimization. We proceed to derive the cubic polynomial transformation from lenslet to detector coordinates for all wavelength steps.
For any wavelength of interest, we can now compute all of the coefficients of the lenslet-detector transformation polynomial (and hence the full wavelength solution) by fitting a cubic polynomial as a function of wavelength to each coefficient. We have derived these full wavelength solutions for each CHARIS observing mode using calibration sequences taken in July 2016 and distribute them as part of the CHARIS software package.
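The second level of this two-level fit (a cubic mapping at each calibration wavelength, then a cubic in wavelength for each mapping coefficient) can be sketched as follows; the function name and array layout are illustrative:

```python
import numpy as np

def coeffs_vs_wavelength(lams, coeff_sets, lam_eval):
    """Interpolate the lenslet->detector polynomial coefficients in wavelength.

    coeff_sets: (n_lam, n_coeff) array of cubic-mapping coefficients measured
    at the calibration wavelengths lams.  Each coefficient is itself fit with
    a cubic polynomial in wavelength and evaluated at lam_eval (sketch).
    """
    out = np.empty(coeff_sets.shape[1])
    for k in range(coeff_sets.shape[1]):
        poly = np.polyfit(lams, coeff_sets[:, k], 3)   # cubic in wavelength
        out[k] = np.polyval(poly, lam_eval)
    return out
```

With 10 to 15 calibration wavelengths per mode, a cubic in wavelength for each coefficient is well constrained while still smoothing over measurement noise at individual wavelengths.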
Our next step is to reconstruct the lenslet PSFs. At shorter wavelengths, these are undersampled by CHARIS’s H2RG. We take an approach very similar to Ref. 44 for Hubble and to Refs. 45 and 46 for GPI. By deriving the wavelength solution, we already have the location of each lenslet PSF’s centroid and can place it on an oversampled grid. The lenslet PSFs are not spaced by an integer number of pixels nor by a ratio of small integers; as a result, they populate an oversampled grid reasonably densely. We iteratively construct a slightly smoothed PSFlet and fit for each PSFlet’s normalization to refine the smoothed template. Finally, we deconvolve with the smoothing kernel using the Richardson–Lucy algorithm47 to produce our final oversampled PSFlets. We measure these oversampled PSFlets at about 10 wavelengths in each observing mode.
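A simplified sketch of the oversampled accumulation, without the smoothing and Richardson-Lucy deconvolution steps; grid sizes and names are illustrative:

```python
import numpy as np

def oversample_psflet(cutouts, centers, osamp=5, size=9):
    """Accumulate many lenslet spots onto one oversampled grid (sketch).

    Each cutout's known subpixel center shifts its samples onto an
    osamp-times-finer grid; because the spots land at many different
    subpixel phases, the fine grid fills in densely.
    """
    n = size * osamp
    acc = np.zeros((n, n))
    wt = np.zeros((n, n))
    for img, (yc, xc) in zip(cutouts, centers):
        ny, nx = img.shape
        for iy in range(ny):
            for ix in range(nx):
                # position of this pixel relative to the spot center
                oy = int(round((iy - yc + size / 2) * osamp))
                ox = int(round((ix - xc + size / 2) * osamp))
                if 0 <= oy < n and 0 <= ox < n:
                    acc[oy, ox] += img[iy, ix]
                    wt[oy, ox] += 1
    good = wt > 0
    acc[good] /= wt[good]
    return acc, wt
```

The weight map records how unevenly the subpixel phases are sampled, which is the source of the slight convolution/deconvolution mismatch discussed below.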
Figure 7 shows our resulting oversampled PSFlets in 25 subregions of the detector for monochromatic light. We use bilinear interpolation to estimate the PSFlet between the centers of regions and assume that the PSFlets remain constant from the centers of the outer regions to the edges of the detector. The convolution and deconvolution are not completely equivalent because some subpixel offsets are sampled more than others; this could lead to a systematic underestimation of the PSFlet width. Random errors in the PSFlet centers would push in the other direction, leading to a systematic overestimation of PSFlet width. An inspection of the residuals after our fitting of the microspectra (Sec. 5.2) indicates that these errors are negligible. Our use of only 25 images to represent PSFlet variation over the detector (motivated by the need to average over a large number of lenslets) is probably a bigger source of error.
As of publication, the pipeline includes one set of calibration files derived from calibration sequences taken in July 2016. We have been unable to conclusively measure any differences in the dispersion, in the nonlinear part of the wavelength solution, or in the PSFlet shapes over several calibration sequences and cooling cycles. Wavelength solutions based on more recent calibration sequences have not improved the extracted data cubes so long as a contemporaneous narrowband flat is available (see Sec. 4.2 for details). We will add updated calibration files to the pipeline if and when they prove necessary.
We use the long calibration sequences described above to measure the lenslet PSFs as a function of both wavelength and position and to compute the wavelength solution. This wavelength solution changes as CHARIS thermally cycles and as it is craned onto and off of its bench. Small shifts due to motion of the optics or lenslet array change the position and orientation of the wavelength solution but do not have a measurable impact on its nonlinear component. We use a single narrowband flat for each night to compute these shifts and adjust the wavelength solution accordingly, assuming that its nonlinear component remains fixed.
Our strategy for these narrowband flatfield images is to uniformly illuminate CHARIS’s lenslet array using the halogen flatfield lamp in front of AO188, the system optically upstream of SCExAO. One of three narrowband filters within SCExAO creates a flatfield image that CHARIS cannot resolve spectroscopically. These filters, at 10-nm width, are somewhat broader than our tunable source (5-nm width); the 1200-nm J-band light is marginally resolved by CHARIS in its J-band mode. All filters are completely unresolved in CHARIS’s broadband mode.
We process a narrowband flat using the same procedure as for the full calibration sequence above, except that we use the existing wavelength solution as a starting guess for the spot locations. We perform a grid search to find the approximate offset and then iterate using Powell’s method within scipy.optimize.minimize. We keep the offset and linear terms of this solution and replace the higher-order terms with their values in the main wavelength solution. These refinements to the wavelength solution are all that we extract from the narrowband flats; they are the only aspects of CHARIS calibration data that are unstable from run to run. Because CHARIS sits on Subaru’s Nasmyth platform and because we designed the imaging relay to be thermally stable (with aluminum mirrors and an aluminum bench), the locations of the PSFlets are very stable within a night. Project 1640 and GPI are both mounted at the Cassegrain foci of their respective telescopes; their PSFlets shift due to flexure as the telescope elevation angle changes.48,49
We include the script buildcal with the CHARIS software for the purpose of building all of the necessary calibration files. The user runs buildcal with the raw file for the narrowband flat as a command-line argument. The software will then compute the calibrations and write all of the files to the directory from which buildcal was run. Some of these files are specific to optimal extraction and χ² extraction, the two main spectral extraction techniques used by the pipeline. These files are described together with the methods themselves in Secs. 5.1 and 5.2. A summary of the wavelength calibration steps and required products for each extraction technique is shown in Fig. 8.
Masking and Flatfielding
Flatfielding an ordinary image divides by a single sensitivity for each pixel; that sensitivity is a combination of illumination by the optics and the quantum efficiency of the pixel. In an IFS like CHARIS, these two components of the flatfield separate into a detector flat that measures the pixel-to-pixel sensitivity differences and a lenslet flat that measures differences in the illumination and transmission of the various lenslets.
We construct the pixel flat from early ramps taken before CHARIS was effectively baffled, when the detector was (relatively) uniformly illuminated. This flatfield does include artifacts from the nonuniform illumination. We therefore apply a high-pass filter, preserving pixel-to-pixel variations on the scale of the microspectra but removing the slow variations in illumination across the chip. Individual CHARIS microspectra span only tens of pixels; our high-pass filter is a 2-D Gaussian with σ = 10 pixels (FWHM 23.5 pixels) and removes power on scales significantly larger than this. We use prebaffling images, together with later images where the background count rate is very low, to identify bad pixels. These are either hot, with a very high dark current, or they respond to light much more weakly than their neighbors. Later images effectively identify the hot and warm pixels, while our method of constructing the pixel flat identifies pixels that are not light sensitive. We flag a pixel as “bad” if its dark current is 15σ above the read noise of its neighbors in a series of long-exposure dark frames or if its sensitivity in our high-pass-filtered pixel flat is below 80%. We flag 0.6% of pixels as bad in this way; they are clustered in groups large and small across the detector.
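The high-pass step can be sketched with scipy's Gaussian filter; dividing the illuminated frame by a wide smoothed copy of itself keeps only the small-scale sensitivity variations (the function name is illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pixel_flat(illuminated, sigma=10.0):
    """Pixel-to-pixel flat from a (nonuniformly) illuminated frame (sketch).

    Dividing by a wide Gaussian smoothing (sigma = 10 pixels, FWHM ~23.5)
    removes the slowly varying illumination pattern while preserving
    pixel-scale sensitivity differences.
    """
    smooth = gaussian_filter(illuminated, sigma)
    return illuminated / smooth
```

For a perfectly uniform frame the result is identically one; real frames retain only structure on scales well below the smoothing length.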
Our second step is to construct a lenslet flat. For this step, we return to our full calibration sequences from which we derived a wavelength solution and measured the high-resolution lenslet PSFs. Having extracted these, we reconstruct the monochromatic spot pattern expected for a uniformly sensitive detector and a uniformly illuminated lenslet array. We then scale this ideal spot pattern by the pixel flat described above and determine the best-fit lenslet-by-lenslet amplitude of the PSFlets. Because the spot pattern is monochromatic, the spots are very well separated and cross-talk among neighboring lenslets may be safely ignored (see Fig. 6).
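Because the monochromatic spots are well separated, the best-fit amplitude of each lenslet's PSFlet reduces to a one-parameter linear least-squares problem. A minimal sketch (the function name and cutout interface are illustrative):

```python
import numpy as np

def spot_amplitude(cutout, template):
    """Best-fit scale of a known spot template to a data cutout.

    Minimizing sum((data - a * template)**2) over the scalar a gives
    a = sum(data * template) / sum(template**2).
    """
    return np.sum(cutout * template) / np.sum(template ** 2)
```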
We thus obtain a lenslet flat for each wavelength in our sequence. The flats are consistent with one another within the observing mode, as is expected for illumination of the field by nearly all reflective optics. This also indicates that the filters, which lie immediately behind the lenslet array, have transmission curves that are nearly spatially uniform. We median-combine the flats to create our lenslet flat for each observing mode. We have also verified that these lenslet flats are consistent with those that we would derive from our narrowband flatfield images, described in Sec. 5.
Figure 9 shows the lenslet flat and a subregion of the pixel flat. The pixel flat is the same for all observing modes (it was measured using white light while the detector was poorly baffled), while there is, in principle, a different lenslet flat for each observing mode. In practice, the lenslet flat is nearly the same in all modes due to the almost exclusive use of reflective optics throughout CHARIS and the AO systems. The pixel sensitivity generally varies by just a few percent from one pixel to the next. The lenslet flat, apart from a few poorly illuminated (or effectively opaque) lenslets, shows similarly small typical variations across the array.
Once we have constructed all of the calibration materials, extracting a data cube is relatively straightforward. CHARIS includes two main techniques to perform the extraction: optimal extraction50 and least-squares extraction. We describe our implementation of each below. The CHARIS pipeline also implements a simple aperture photometry-based extraction. However, this algorithm is unable to account for bad or noisy pixels and has no advantage, either in performance or run time, over optimal extraction.
Optimal extraction50 computes the spectral intensity at each wavelength (i.e., each row of pixels perpendicular to the dispersion direction) using both the measured shape of the line-spread function and the pixel-specific measurement errors to weight the pixels. This extraction method has long been the standard approach for spectrographs.51–53 We implement optimal extraction for CHARIS assuming Gaussian profiles with a wavelength- and lenslet-dependent width that we measure from our high-resolution PSFlets (Fig. 7). This is equivalent to computing the spectral intensity as the normalization of a 1-D Gaussian of known position, width, and unit area.
We compute the position of each lenslet’s microspectrum as part of the calibration process, but the wavelengths that correspond to integer pixels along the spectrum differ from lenslet to lenslet. Each microspectrum extends many pixels in the dispersion direction. We obtain (x, y, λ, σ) quadruples for each lenslet, where the positions are integers in the dispersion direction and floating point numbers in the perpendicular direction (which runs along the center of the microspectrum). The wavelength-dependent width σ of the microspectrum is given by computing the second moment of the flux along a line passing through the center of the corresponding PSFlet (Fig. 7). Because the PSFlets are not circularly symmetric, the actual profile will include variable contributions from the diffraction spikes at other wavelengths and will differ depending on a lenslet’s spectrum. Optimal extraction takes the (x, y, λ, σ) quadruples for each lenslet (which are computed and saved as part of the calibration step) and uses a weighted sum to calculate a corresponding spectral intensity.
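Per wavelength sample, the weighted sum amounts to an inverse-variance-weighted fit for the normalization of a unit-area Gaussian profile, in the spirit of Horne's optimal extraction. A minimal sketch with hypothetical names:

```python
import numpy as np

def optimal_extract(column, ivar, center, sigma):
    """Optimally extract one cross-dispersion cut of a microspectrum.

    column : pixel values perpendicular to the dispersion direction
    ivar   : inverse variance of each pixel (zero masks a bad pixel)
    center : floating-point center of the microspectrum in this cut
    sigma  : Gaussian width measured from the high-resolution PSFlets
    """
    pix = np.arange(len(column))
    profile = np.exp(-0.5 * ((pix - center) / sigma) ** 2)
    profile /= profile.sum()          # unit-area profile
    # Weighted least squares for the amplitude of a known profile:
    num = np.sum(ivar * profile * column)
    den = np.sum(ivar * profile ** 2)
    return num / den, 1.0 / den       # intensity and its variance
```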
Optimal extraction returns the spectral intensity at the wavelengths corresponding to a given microspectrum’s sampling on the detector. Each microspectrum has its own exact spectral resolution and subpixel sampling and is, therefore, defined on its own native wavelength array. It is possible to extract the data without interpolating onto a common wavelength array, but it is difficult to visualize and manipulate such data. CHARIS’s software, therefore, returns a cube in which these microspectra are interpolated onto a common wavelength array at the spectral resolution requested by the user. The resulting data cube couples neighboring wavelengths because of this interpolation and because the extracted microspectra are convolved with the line-spread function (a PSFlet’s extent along the dispersion direction of a microspectrum).
The output of optimal extraction is a pair of data cubes: one for the spectral intensities and one for their errors. The code could easily be modified to return cubes at CHARIS’s native wavelength sampling (which differs for each lenslet). In that case, the software would return three cubes: one for the spectral intensities, one for their errors, and one for the native wavelengths.
The CHARIS software also has the ability to perform a naïve aperture extraction using unit weights perpendicular to the dispersion direction. This approach is currently used by SPHERE and GPI.22,54,55 For CHARIS, aperture extraction produces cubes with slightly more noise (depending on the aperture) and lacks an ability to treat errors and bad pixels: bad or noisy pixels must be replaced with guesses from their neighbors. We do not attempt to replace bad pixels in this case but simply set them to zero. A more careful implementation of aperture photometry would produce a result no better than that from optimal extraction and would save a negligible amount of computational cost.
CHARIS’s software implements a least-squares extraction, fitting every microspectrum with a linear combination of narrowband spots. Once the narrowband spot templates have been computed, it is relatively straightforward to extract the spectrum, i.e., the coefficients of their best-fit linear combination. Computing the templates themselves, however, is more subtle. The spectra on the detector are the spectra incident on the lenslets, convolved with both the lenslet PSFs and the pixel response function. An attempt to use monochromatic PSFlets to fit the microspectra will suffer from the fact that no real spectrum can be represented as the sum of delta functions. Attempting to overcome this by extracting a spectrum at a resolution significantly higher than CHARIS’s native resolution would cause severe problems with aliasing and covariance among neighboring wavelengths. Our solution is to fit the microspectrum of each lenslet as a sum of top-hat spectra, with the spectral resolution of this sampling chosen to be comparable to, or slightly higher than, CHARIS’s intrinsic spectral resolution.
We construct a spot diagram for a narrow spectral range using the oversampled lenslet PSFs, such as those shown in Fig. 7. We use 10 monochromatic spots to construct each narrowband spot, first scaling each monochromatic spot by the atmospheric and filter transmission appropriate to the very narrow range of wavelengths that it covers. In this way, our narrowband spectra are what we would expect for an astrophysical source with a perfectly flat spectrum, average atmospheric transmission, the filter for the given observing mode, and achromatic optics in the rest of the system. The correction is far from perfect, but it does mitigate the effect of the atmosphere and filter on our recovered microspectra.
Our calibration routine produces and saves a series of narrow, but not quite monochromatic, arrays of spots stepping through wavelength and built as described above. Together, these spots cover the full wavelength array of a given observing mode with no gaps and approximately correct for the chromatic throughput of the atmosphere and filter. Figure 10 shows these narrowband spot diagrams. They are broader than those shown in Fig. 6, but only slightly.
Finally, we produce one last set of narrowband spots where we retain an oversampling by a factor of 5 in the direction perpendicular to the dispersion. This allows us to use cross correlation to find the appropriate offset for an individual CHARIS ramp even if a calibration data set is not available from that night (or if it proves a relatively poor match). This takes a significant amount of disk space per calibration set but enables the spectra to be located and fit more accurately by our least-squares extraction routine. We do not oversample in the dispersion direction because we currently have no way of getting a sufficiently accurate wavelength solution for an individual image without a narrowband calibration flat.
Once all of the narrowband templates shown in Fig. 10 have been computed and saved, spectral extraction is relatively straightforward. We begin by cutting out a rectangle 7 pixels wide around each microspectrum. We then fit each 2-D cutout with 20 to 25 narrowband spots, minimizing the squared residuals weighted by the pixels’ inverse variance. We use the singular value decomposition for this purpose, implementing it ourselves in Cython to enable lenslet-by-lenslet parallelization using OpenMP. This approach naturally includes the errors on individual pixels and masks hot pixels, and it returns the spectral covariance matrix for each lenslet. It also avoids any interpolation onto a common wavelength array as was necessary for optimal extraction. A similar approach has been implemented for GPI,45,46 but measurements of GPI’s wavelength-dependent lenslet PSFs were insufficient for the algorithm to perform well.
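The per-lenslet fit reduces to weighted linear least squares: stack the narrowband templates as columns of a design matrix, weight by the per-pixel standard deviation, and solve. The sketch below (hypothetical interface) uses NumPy's SVD-based solver in place of the pipeline's Cython/OpenMP implementation:

```python
import numpy as np

def fit_microspectrum(cutout, ivar, templates):
    """Fit a 2-D cutout as a linear combination of narrowband templates.

    cutout    : (ny, nx) pixel cutout around one microspectrum
    ivar      : (ny, nx) inverse variance; zero for masked/bad pixels
    templates : (k, ny, nx) narrowband spot templates

    Returns the k best-fit coefficients (the spectrum) and their
    covariance matrix.
    """
    A = templates.reshape(len(templates), -1).T        # (npix, k)
    w = np.sqrt(ivar.ravel())                          # per-pixel weights
    Aw = A * w[:, None]
    coef, *_ = np.linalg.lstsq(Aw, cutout.ravel() * w, rcond=None)
    cov = np.linalg.inv(Aw.T @ Aw)                     # spectral covariance
    return coef, cov
```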
Figure 11 shows the performance of our extraction on the microspectra in two regions of a low-resolution image: one far from the star where read noise is important (a) and the other in a bright speckle where it is negligible (b). The residuals from systematic errors in the PSFlet models are typically a small fraction of the intensity. As we discuss below in Sec. 5.3, our models (center panels) include the correlated component of the read noise and the undispersed background. This is visible in the top panel, where the left few columns of microspectra are in a noisy readout channel while the rightmost columns are in a much cleaner channel.
As an optional feature, the software can use the PSFlet template file that is oversampled in the direction perpendicular to the dispersion. It uses cross correlation over subregions of the detector to fit a position-dependent subpixel shift in the locations of the spectra. We adopt a range of prospective shifts spaced by 0.2 pixels, compute the cross correlation in each case, and then fit a parabola to the three cross-correlation values nearest their minimum to obtain the exact subpixel offset. We then use bilinear interpolation to compute an offset for each lenslet from the cross-correlation offsets and interpolate the oversampled PSFlet templates onto the appropriate pixel sampling. This approach typically produces an offset of no more than a few tenths of a pixel but noticeably improves the residuals in the 2-D microspectra. The fit shown in Fig. 11 includes this subpixel offset.
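The parabolic refinement of the best shift has a closed form: given the merit values at the best grid point and its two neighbors, the vertex of the interpolating parabola follows from the standard three-point formula. A small sketch (names are illustrative):

```python
def refine_shift(y_minus, y_zero, y_plus, step=0.2):
    """Vertex of the parabola through three equally spaced merit values.

    y_zero is the best (smallest) value on the grid; y_minus and y_plus
    are its neighbors at -step and +step.  Returns the subpixel offset
    relative to the best grid point.
    """
    return step * 0.5 * (y_minus - y_plus) / (y_minus - 2.0 * y_zero + y_plus)
```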
A least-squares extraction has several features that differentiate it from optimal extraction and aperture photometry on the microspectra. Because we fit the entire 2-D spectrum, it is trivial to also fit for and remove an undispersed background that is uniform over the microspectrum. For microspectra that straddle two readout channels, we allow for the undispersed background to have different values in the two channels. This fitting of an undispersed background is an optional setting in the software’s configuration.
An important feature of the least-squares extraction is that it automatically attempts to extract the intrinsic source spectrum; it performs a deconvolution with the instrumental line-spread function. This deconvolution results in a negative covariance between neighboring spectral bins and a superficially noisier cube than that produced by optimal extraction. A small amount of smoothing in the spectral dimension, i.e., a reapplication of the convolution with the line-spread function, removes this extra apparent noise. The effect of the line-spread function is apparent in a single slice through a data cube, as shown in Fig. 12. With optimal extraction, the speckles are radially extended due to contributions from a range of wavelengths. This arises both from the line-spread function and from the fact that we interpolate all of our microspectra (each of which has a different native wavelength sampling) onto a common wavelength array. The speckles in a least-squares-extracted cube show a negligible radial extent beyond that of the monochromatic PSF incident on the lenslet array.
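The optional re-smoothing is simply a convolution of each extracted spectrum with a Gaussian line-spread function. A minimal sketch (the kernel width is a placeholder, not the pipeline's value):

```python
import numpy as np

def smooth_spectrum(spec, fwhm_bins=2.0):
    """Reconvolve a least-squares-extracted spectrum with a Gaussian LSF.

    This reintroduces the line-spread function, suppressing the
    anticorrelated noise between neighboring spectral bins.
    """
    sigma = fwhm_bins / 2.355
    x = np.arange(-4, 5)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()              # flux-conserving normalization
    return np.convolve(spec, kernel, mode="same")
```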
A least-squares extraction naturally enables the removal of CHARIS’s spectral cross talk. In the calibration step, we measured the lenslet PSFs (shown in Fig. 7) out to a radius slightly larger than the horizontal separation between the centers of neighboring microspectra. While we only fit the microspectra over a rectangle 7 pixels wide, our model microspectrum extends over a box roughly twice as wide. Subtracting all of the microspectra results in a small, negative residual as the broader boxes remove a few photons from the neighboring spectra. We iterate one time on the cube to remove cross talk. In practice, the effects of cross talk in CHARIS are extremely minor due to the spacing of our microspectra and our use of pinholes on the back of the lenslet array. The correction from an iteration to remove cross talk is generally negligible.
Residual Intensity and Read Noise Suppression
A particularly important feature of our extraction is that it produces a 2-D model of the entire detector readout. Subtracting this model from the actual ramp produces a residual image that, in the limit of perfect models of the microspectra, is pure noise. Residuals from actual CHARIS ramps do show some systematics in the centers of the microspectra but are noise-dominated over much of the detector, both between microspectra and in the less-illuminated lenslets. CHARIS’s H2RG detector has an excessive amount of read noise correlated among the readout channels, as discussed in Sec. 3.3 and shown in Fig. 5. We use our measurement of a residual intensity to model and remove the correlated component of the read noise, achieving a suppression approaching that shown in the lower-right panel of Fig. 5.
We fit for the correlated read noise as two patterns, one shared by the even readout channels and a second shared by the odd channels. We also fit for a scalar coupling between each channel and the appropriate noise pattern. We compute the correlated read noise using a trimmed mean over a user-specified fraction, by default 70%, of the pixels with the smallest fitted intensity relative to the read noise, i.e., the pixels where the residual is most dominated by the read noise. Each element of the correlated read noise pattern has 16 realizations on the detector (half of the 32 readout channels); this approach generally gives ample pixels from which to calculate the pattern. We then scale the correlated read noise by our fitted couplings and subtract it from each readout channel.
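The pattern estimate can be sketched as below. The sketch assumes 32 equal-width vertical channel stripes and, for simplicity, takes a plain mean over the faintest realizations as a stand-in for the trimmed mean; the function name and layout are illustrative, not the pipeline's internals.

```python
import numpy as np

def even_channel_pattern(resid, model, nchan=32, keep_frac=0.7):
    """Estimate the read-noise pattern shared by the even readout channels.

    resid : 2-D residual image (data minus microspectrum model)
    model : 2-D model intensity, used to select read-noise-dominated pixels
    Each element of the pattern has nchan/2 realizations; we average the
    keep_frac of realizations with the lowest model intensity.
    """
    ny, nx = resid.shape
    width = nx // nchan
    # (channel, y, x-within-channel) views; [::2] selects even channels
    r = resid.reshape(ny, nchan, width).transpose(1, 0, 2)[::2]
    m = model.reshape(ny, nchan, width).transpose(1, 0, 2)[::2]
    # keep the faintest keep_frac realizations at each pattern position
    thresh = np.quantile(m, keep_frac, axis=0)
    kept = np.where(m <= thresh, r, np.nan)
    return np.nanmean(kept, axis=0)
```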
We find that our approach provides excellent suppression of the correlated component of the read noise while avoiding the addition of systematics back into the data. Figure 13 shows an example of a slice through an extracted cube with and without suppressing the read noise and fitting out an undispersed background. The image was taken through an ND filter to prevent saturation of the central star, resulting in low SNR over much of the image. The correlated read noise was severe on the date shown, but it is mostly removed by our algorithm.
We also enable the suppression of correlated read noise when using optimal extraction or aperture photometry. In this case, we use least-squares extraction but save the read noise and (optionally) the undispersed background rather than the data cube. We then subtract the fitted read noise before performing the requested extraction algorithm. This approach has the same effectiveness at removing correlated read noise as simply using least-squares extraction.
Our approach to read noise suppression works because of CHARIS’s redundancy. While the H2RG detector has roughly four million pixels, we extract only a few tens of spectral measurements for each of our illuminated lenslets, using several pixels on average to fit each spectral measurement. The problem is sufficiently overconstrained that we can also fit two noise patterns without facing significant degeneracies.
CHARIS ramps have a background overwhelmingly composed of light leaks and thermal photons in the K band; the true dark current is negligible. The software does have the ability to subtract a background count rate from a 2-D ramp (i.e., a matched dark). This is common practice for near-infrared IFSs, including OSIRIS7 and GPI.22 However, it is only worthwhile for CHARIS if a very high SNR dark frame is available in the same instrument configuration as the science data. Shot noise and read noise from the background frame will add to each image in an observing sequence, and, because the background has a single realization of the noise, it will add coherently to all images. The SNR in the background must be substantially higher than in each science frame (i.e., the integration time must be longer) to avoid this becoming a problem. Matching exposure times would result in the background subtraction contributing as much noise to each science frame as the frame's own shot and read noise and contributing most of the noise in a stack of science frames (because the same realization of noise is added to each frame).
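The noise penalty from a matched-exposure background can be made explicit. With equal integration times, the background frame has the same per-pixel noise $\sigma$ as each science frame, so the subtracted frame has

```latex
\sigma_{\rm sub}^2 = \sigma_{\rm sci}^2 + \sigma_{\rm bg}^2 = 2\,\sigma^2
\quad\Longrightarrow\quad
\sigma_{\rm sub} = \sqrt{2}\,\sigma .
```

For a stack of $N$ frames that all share the same background realization, the science noise averages down while the background noise does not:

```latex
\sigma_{\rm stack}^2 = \frac{\sigma_{\rm sci}^2}{N} + \sigma_{\rm bg}^2
\;\longrightarrow\; \sigma_{\rm bg}^2
\quad (N \gg 1),
```

so the single realization of background noise dominates the stacked image.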
The complexity of subtracting a background image is largely due to the fact that the background contains both a dispersed and an undispersed thermal component. The dispersed component is overwhelmingly due to longer wavelengths and is orders of magnitude lower with a filter in place that blocks K-band light. Microspectra from the thermal background are clearly visible on the detector in the low-resolution and K-band modes but are nearly invisible in the J and H bands or with the ND filter in place. Figure 14 shows these backgrounds over part of the detector in the low-resolution, H, and K bands. The dispersed component dominates the background in the low-resolution and K bands, while the H band shows a relatively uniform, undispersed background. The microspectra in the K band are dispersed about four times as much as the low-resolution spectra, resulting in a correspondingly lower peak intensity.
Any change in the alignment of the lenslet array will change the locations of the thermal microspectra on the array. The CHARIS extraction script can handle shifts in the location of the microspectra by applying a subpixel offset to the template PSFlets, but it cannot use this to shift the 2-D array of thermal microspectra. Any mismatch, even if it is just a fraction of a pixel, will degrade the ability of extraction to fit the background-subtracted microspectra. Unfortunately, we lack the hardware (e.g., a mechanism to block the lenslet pinholes) to measure only the undispersed background.
It is easy to degrade data by subtracting a poor 2-D background. This is especially true of low-resolution data taken with the ND filter. Exposures with this filter are usually longer integrations to achieve good SNRs and, therefore, require long background images to perform a 2-D subtraction without adding noise. Subtracting a background taken with the broadband (rather than ND) filter results in a dispersed thermal background orders of magnitude too high and a negative inferred sky background in the K band.
In the absence of a matched, high SNR, 2-D background, the software does have the ability to subtract an undispersed background lenslet-by-lenslet as part of a least-squares extraction (Sec. 5.2) or combined with read noise suppression (Sec. 5.3). The dispersed background will still be included in the final data cube (though it can easily be removed by standard postprocessing algorithms).
The script extractcube produces a Flexible Image Transport System (FITS) file with the data cube (the intensity at each lenslet and at each wavelength) as the second Header/Data Unit (HDU 1) and the inverse variance of the data cube (the diagonal of the covariance matrix if using least-squares extraction) as HDU 2. The first HDU, HDU 0, consists of only a header with key information about the observation and the data reduction. The last HDU, HDU 3, contains the original header of the file containing the raw reads.
The cube is defined on a logarithmic wavelength array, described by the nonstandard header keywords LAM_MIN, LAM_MAX, and DLOGLAM, together with CTYPE3 = “AWAV-LOG”.56 We use a logarithmic scale because of CHARIS’s nearly wavelength-independent dispersion: each pixel in the dispersion direction corresponds to a nearly constant increment in log λ. A uniform spacing in wavelength would result in aliasing problems at the short-wavelength end of the spectrum.
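The wavelength array implied by these keywords can be reconstructed as below; the keyword names come from the header, while the function itself is illustrative.

```python
import numpy as np

def log_wavelength_grid(lam_min, lam_max, dloglam):
    """Logarithmically spaced wavelengths from LAM_MIN, LAM_MAX, DLOGLAM.

    Successive elements differ by a constant increment dloglam in
    log(wavelength), matching CHARIS's nearly constant dispersion per
    pixel in log(lambda).
    """
    n = int(round(np.log(lam_max / lam_min) / dloglam)) + 1
    return lam_min * np.exp(dloglam * np.arange(n))
```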
With least-squares extraction, the data cube is never interpolated except over bad spectral measurements (which in any case have their inverse variance set to zero). With aperture photometry or optimal extraction, each lenslet’s microspectrum has been interpolated to place it on a common wavelength array. In any of these cases, the inverse variance does not capture the full spectral covariance. Still, it does capture lenslet-to-lenslet variations in the quality of the cube and prevents bad measurements from corrupting the images in postprocessing.
Software Parameters and Settings
To reduce CHARIS data, we provide a software package that accomplishes the two required tasks: wavelength calibration (through the script buildcal, see Sec. 4) and cube reduction (the script extractcube, see Sec. 5). The wavelength calibration only needs to point to a monochromatic flat taken as close as possible to the actual date that the science data were taken, ideally the same night. The user can choose whether or not to compute oversampled PSFlets, which are required to fit for subpixel shifts in the ramp as described in Sec. 5.2.
The cube extraction script, extractcube, has several parameters that the user can set; these will change the photometry and morphology of the reduced spatiospectral datacube as described in Sec. 5 and Fig. 12. The set of parameters is presented in Table 2 and can be edited within a .ini text file that is read by the extraction software.
Extraction software parameters.
|Parameter||Default||Allowed values||Description|
|read_0||1|| ||First read in the ramp to use.|
|read_f||None||[read_0+1, ] or None||Last read to use. If None, read_f will be the final read.|
|gain||2|| ||Detector gain used to compute photon noise.|
|noisefac||0||[0, 1)||Additional fractional error in count rate to account for systematic errors in PSFlet models. Suggested values: 0.05 for least-squares extraction, 0 otherwise.|
|saveramp||False||True or False||Save the ramp as its own image?|
|calibdir||—||String file path||Directory containing the files created by buildcal.|
|bgsub||False||True or False||Subtract a background/thermal ramp? If True, buildcal must have been called with background ramp(s). Should be False for ND frames.|
|mask||True||True or False||Mask bad pixels?|
|flatfield||True||True or False||Apply the pixel and lenslet flatfields?|
|fitshift||True||True or False||Fit for a subpixel shift of the microspectra? If True, buildcal must have computed oversampled PSFlets.|
|R||30|| ||Spectral resolution of the extracted cube. Recommended: 30 for low res, 100 for high res.|
|method||lstsq||lstsq, optext, apphot3, apphot5||Method used to extract cube: least-squares extraction, optimal extraction, or aperture photometry across 3 or 5 pixels.|
|refine||True||True or False||Iterate once to remove cross talk? Only used if method is lstsq.|
|suppressrn||True||True or False||Use the method of Sec. 5.3 to remove read noise?|
|minpct||70||(0, 100)||Minimum percentage of pixels used to estimate read noise (only used if suppressrn is True).|
|fitbkgnd||True||True or False||Fit and remove an undispersed background lenslet-by-lenslet?|
|smoothandmask||True||True or False||Interpolate over bad lenslets to display a smooth cube? The inverse variance of the bad measurements remains zero.|
|saveresid||False||True or False||Save the 2-D residual from the ramp? Only used if method is lstsq.|
|maxcpus||None||int or None||Maximum number of threads to use. If None, use all threads. If negative, it is the number of threads to reserve.|
Many of the parameters in Table 2 are intended to be left at their default values. Indeed, the only parameter that a typical user will need to modify from its default value is calibdir (which must point to a local directory). Some should never be changed from their default values, while others may be modified to suit a particular reduction or a user’s particular needs. Several of these parameters allow the user to run a faster reduction, to change the treatment of the background, or to write intermediate data products.
Table 3 lists two sets of parameters, one intended for a quick reduction and the other for a more detailed reduction that removes read noise and performs a least-squares extraction. The parameters read_0, read_f, gain, mask, and flatfield are the same in both reductions; we generally recommend that the user never change these values. Changing either flatfield or mask to False, for example, will save a negligible amount of computational effort. To extract a cube with bgsub, a thermal background must have been computed by buildcal, while using fitshift requires oversampling the PSFlets in buildcal. The parameter smoothandmask is intended only for cosmetic purposes; it replaces spectral measurements that are much noisier than their neighbors with values taken from a smoothed cube. To avoid biasing the results, any modified intensities have their corresponding inverse variances set to zero. The parameters refine and minpct are ignored in the fast and rough reduction, refine because method is not lstsq and minpct because suppressrn is False. Table 3 indicates these extraneous parameters with ellipses.
Suggested parameter settings.
|Parameter||Fast and rough reduction||Slower and better reduction|
|R||30 or 100 (b)||30 or 100 (b)|
(a) See Secs. 5.4 and 6 for a discussion. Never set both bgsub and fitbkgnd to True.
(b) 30 in low-resolution mode and 100 in J, H, or K.
One of the more problematic aspects of the reduction is background subtraction, as discussed in Sec. 5.4. Subtracting a good match to the 2-D thermal background (comprising both the microspectra and undispersed background) can improve the reduction. However, it is easy to degrade the final cube by subtracting a noisy background or one that is a poor match to either the thermal microspectra or the undispersed background. For this reason, we generally recommend that a user set bgsub to False. If the user has a background image matching the observing mode and with at least a factor of a few longer integration time than any of the science frames, then setting bgsub to True may improve the reduction. If bgsub is set to False, then the undispersed background may be fit and removed by setting both fitbkgnd and suppressrn to True or by setting fitbkgnd to True with method equal to lstsq. The user should never attempt to remove the background twice by setting both bgsub and fitbkgnd equal to True.
Software Performance and Scaling
The CHARIS software is written in Python and Cython57 and uses the extensive libraries available in the NumPy, SciPy,58 and Astropy59 packages. In addition to the parallelization performed natively in these packages, we have used the multiprocessing module to parallelize the construction of model PSFlets and OpenMP to parallelize many of the Cython routines.
We have tested the performance and scaling of the software on a compute server with two 12-core 2.5-GHz Intel Xeon processors. We use maxcpus to set the maximum number of threads to allocate and test the two sets of parameters given in Table 3. We use a 41-read ramp (about a 60-s exposure); a shorter exposure would require correspondingly less time to compute the ramp.
Figure 15 shows the results. The software has a few seconds of overhead from the initial module imports and from astropy computations of important parameters of the observation (such as the parallactic angle). The computation of the ramp requires a large amount of I/O and scales with the number of reads; using only the first 10 of the 41 reads reduces the computational cost by about a factor of 4. For the fast and rough reduction, OpenMP provides only a slight performance gain, and the cube extraction is a minor component of the total computational cost. The slower and better reduction, however, sees large performance gains from parallelizing the extraction step, scaling reasonably well as more CPUs are added. This step requires the cube to be extracted three times: an initial extraction, a second extraction once a first estimate of the correlated noise has been removed (Sec. 5.3), and a third after removing updated estimates of the correlated read noise and spectral cross talk. Setting refine and suppressrn to False (which we do not recommend) would reduce the computational cost of the extraction step by nearly a factor of 3. Using optimal extraction but setting suppressrn to True would require at least two extractions (three if also computing an undispersed background). In this case, the total computational time would be almost identical to the results shown here for the slower and better reduction with least-squares extraction.
Discussion and Conclusions
In this paper, we have presented the data reduction pipeline for the CHARIS IFS. The software begins with the raw detector reads, corrects for an artifact in the first read, and then reconstructs the count rate at each pixel using either UTR weighting or the full nonlinear response. This approach handles saturation gracefully as long as there are at least a few reads before a pixel saturates, and it uses only those reads that are uncontaminated by bleeding from a neighboring saturated pixel. Fitting the full nonlinear response to pixels that exceed a threshold fraction of well capacity adds a negligible amount to the required runtime.
There are two steps after constructing the ramp: first, all of the calibration files must be built from a narrowband flatfield image, and, second, the data cube must be extracted. Our calibration step uses a range of data products that we have built from early lab data and from detailed image sequences taken with a supercontinuum source and a tunable filter. These products consist of the detector flatfield, the lenslet flatfield, a bad pixel mask, a reference wavelength solution (a wavelength-dependent mapping from lenslet to pixel coordinates), and reconstructed position- and wavelength-dependent lenslet PSFs. We supply a script buildcal with the software; buildcal takes a narrowband flatfield image from the same night as the data to be reduced and computes a perturbation to the wavelength solution and resampled PSFlets for each wavelength and position to be extracted.
Once the calibration step has been completed, the script extractcube extracts the data cube. This script operates on a sequence of reads and requires a set of input parameters that the user may set using a .ini file. It first constructs the count rate at each pixel from the sequence of reads and computes basic parameters of the observation using keywords in the FITS header. Next, extractcube optionally computes a subpixel shift in the positions of the microspectra and finally extracts the data cube.
We implement three cube extraction algorithms: aperture photometry, optimal extraction, and least-squares extraction. The GPI and SPHERE pipelines currently use aperture photometry as their primary extraction method.22,54,55 While a least-squares inversion technique has been implemented for GPI,46 it has not yet equaled the performance of simple aperture photometry. Project 1640’s pipeline49 comes closest to our approach; however, after reconstructing the lenslet PSFs, it uses a weighted sum with the PSFlet templates rather than a least-squares inversion. This decision increases the coupling among neighboring wavelengths, as it effectively performs a second convolution with the line-spread function.
Both the aperture photometry and optimal extraction techniques require interpolation to place the microspectra onto a common wavelength array. This interpolation couples neighboring wavelengths and degrades the data cube, while aperture photometry also requires bad data to be fixed. Our least-squares extraction algorithm avoids all of these drawbacks and performs a deconvolution of the microspectra with the line-spread function, resulting in much more monochromatic slices through the data cube. The algorithm produces a full 2-D residual map that allows us to estimate and remove correlated read noise, which results in a dramatic improvement to CHARIS data. This algorithm also allows us to fit out and remove spectral cross talk and an undispersed lenslet-by-lenslet background, and it produces a full covariance matrix for each lenslet’s microspectrum (though we presently save only the diagonal elements).
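The covariance matrix mentioned above follows directly from the normal equations of an inverse-variance-weighted least-squares fit. The sketch below is illustrative only (the function name and interface are not the pipeline's API, and it omits cross talk and background terms): for template matrix A and per-pixel inverse variances w, the best-fit fluxes are (Aᵀ W A)⁻¹ Aᵀ W d with covariance (Aᵀ W A)⁻¹, of which the pipeline retains the diagonal.

```python
import numpy as np

def extract_weighted(data, ivar, templates):
    """Inverse-variance-weighted least-squares extraction for one
    microspectrum (illustrative sketch, not the pipeline's API).

    templates : (nlam, npix) flattened PSFlet images
    data      : (npix,) observed microspectrum
    ivar      : (npix,) per-pixel inverse variances

    Returns the best-fit fluxes and their full covariance matrix."""
    A = np.asarray(templates, dtype=float).T            # (npix, nlam)
    w = np.asarray(ivar, dtype=float)
    AtW = A.T * w                                       # (nlam, npix)
    cov = np.linalg.inv(AtW @ A)                        # (A^T W A)^-1
    flux = cov @ (AtW @ np.asarray(data, dtype=float))
    return flux, cov
```

The off-diagonal elements of `cov` quantify exactly the coupling between neighboring wavelengths that interpolation-based methods silently introduce; saving only the diagonal trades that information for storage.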
The final data product of the CHARIS pipeline is an extracted (x, y, λ) data cube with inverse variances for each spatial and spectral measurement. This cube must subsequently be calibrated, using satellite spots induced by SCExAO26 or some other method. The pipeline currently saves an estimated transformation from lenslet to sky coordinates; we defer a full astrometric calibration to a future paper. We caution against naïvely interpolating our data cube, as doing so would mix spatial and spectral elements that could have very different uncertainties and will inevitably discard information in the cube.
The CHARIS software is written in Python and Cython and is freely available on GitHub: https://github.com/PrincetonUniversity/charis-dep. It is parallelized, taking anywhere from a couple of seconds to run on a short ramp with a rough reduction to s on a longer ramp with a detailed reduction using a single processor. With a moderately powerful computer server, the most detailed reductions may be completed in to . The CHARIS pipeline has a documentation page that we maintain at: http://princetonuniversity.github.io/charis-dep/. The data cubes that it produces are suitable inputs for high-contrast image processing, a topic that we will address in forthcoming work.
This work was performed in part under contract with the Jet Propulsion Laboratory funded by NASA through the Sagan Fellowship Program executed by the NASA Exoplanet Science Institute. This work was performed in part under a Grant-in-Aid for Scientific Research on Innovative Areas from MEXT of the Japanese government (No. 23103002). This research was based on data collected at the Subaru Telescope, which is operated by the National Astronomical Observatory of Japan. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.