Space missions designed for high precision photometric monitoring of stars often undersample the point-spread function, with much of the light landing within a single pixel. Missions such as MOST, Kepler, BRITE, and TESS do this to avoid uncertainties due to pixel-to-pixel response nonuniformity. This approach has worked remarkably well. However, individual pixels also exhibit response nonuniformity. Typically, pixels are most sensitive near their centers and less sensitive near the edges, with a difference in response of as much as 50%. The exact shape of this fall-off, and its dependence on the wavelength of light, is the intrapixel response function (IPRF). A direct measurement of the IPRF can be used to improve the photometric uncertainties, leading to improved photometry and astrometry of undersampled systems. Using the spot-scan technique, we measured the IPRF of a flight spare e2v CCD90 imaging sensor, which is used in the Kepler focal plane. Our spot scanner generates spots with a full-width at half-maximum of ≲3 μm across the range of 400 to 850 nm. We find that Kepler’s CCD shows similar IPRF behavior to other back-illuminated devices, with a decrease in responsivity near the edges of a pixel by ∼50%. The IPRF also depends on wavelength, exhibiting a large amount of diffusion at shorter wavelengths and becoming much more defined by the gate structure in the near-IR. This method can also be used to measure the IPRF of the CCDs used for TESS, which borrows much from the Kepler mission.
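To make the reduction concrete, here is a minimal sketch of how spot-scan frames can be turned into an IPRF map: the target pixel's signal at each spot position is normalized by the total flux in a small stamp, then binned by the sub-pixel phase of the spot. The function and array names are illustrative, not the actual scanner's data interface.

```python
import numpy as np

# Minimal sketch: reduce spot-scan frames to an intrapixel response map.
# `frames[i]` is the image recorded with the spot centered at (xs[i], ys[i])
# in pixel units; names are illustrative, not the scanner's actual interface.

def iprf_from_scan(frames, xs, ys, target=(50, 50), halfwidth=3, nbins=10):
    r, c = target
    iprf = np.zeros((nbins, nbins))
    counts = np.zeros((nbins, nbins))
    for frame, x, y in zip(frames, xs, ys):
        # Normalize by the total flux in a small stamp around the target
        # pixel to remove lamp intensity drifts between scan positions.
        stamp = frame[r - halfwidth:r + halfwidth + 1,
                      c - halfwidth:c + halfwidth + 1]
        response = frame[r, c] / stamp.sum()
        i = int((x % 1.0) * nbins)  # sub-pixel phase of the spot center
        j = int((y % 1.0) * nbins)
        iprf[j, i] += response
        counts[j, i] += 1
    return iprf / np.maximum(counts, 1)  # mean response per phase bin
```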
The Transiting Exoplanet Survey Satellite (TESS) will conduct a search for Earth's closest cousins starting in early 2018 and is expected to discover ∼1,000 small planets with Rp < 4 R⊕ and measure the masses of at least 50 of these small worlds. The Science Processing Operations Center (SPOC) is being developed at NASA Ames Research Center based on the Kepler science pipeline and will generate calibrated pixels and light curves on the NASA Advanced Supercomputing Division's Pleiades supercomputer. The SPOC will also search for periodic transit events and generate validation products for the transit-like features in the light curves. All TESS SPOC data products will be archived to the Mikulski Archive for Space Telescopes (MAST).
Kepler is NASA's first space mission dedicated to the study of exoplanets. The primary scientific goal is statistical: to
estimate the frequency of planetary systems associated with sun-like stars. Kepler was launched into an Earth-trailing
heliocentric "drift-away" orbit in March 2009, and is monitoring 150,000 stars. The instrument detects the faint
photometric signals of transits of those systems whose orbital planes are oriented in our line-of-sight. An Earth-Sun
analog will produce a transit depth of 80 parts per million (ppm), lasting for at most a few tens of hours, and repeating
once per "year". The instrumentation was designed to provide photometric data with a precision of 20 parts per million
in 6.5 hours for 12th magnitude stars, resulting in a signal-to-noise ratio of 4 for an Earth-Sun transit. The stability of the
flight system enables the precision of the data that reveal subtle instrumental and astrophysical effects, which in turn lead to a
deeper understanding of the performance of the hardware, to enhanced operational procedures, and to novel post-processing
of the data. The data are approaching the sensitivity needed to detect transits of terrestrial planets. Intrinsic
stellar variability is now the most significant component of the photometric error budget.
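The quoted numbers are mutually consistent; writing the single-transit detection significance as depth over noise:

```latex
\mathrm{SNR} \approx \frac{\text{transit depth}}{\sigma_{6.5\,\mathrm{h}}}
            = \frac{80~\mathrm{ppm}}{20~\mathrm{ppm}} = 4
```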
The Kepler Mission is designed to detect the 80 parts per million (ppm) signal from an Earth-Sun equivalent
transit. Such precision requires superb instrument stability on time scales up to 2 days and systematic error
removal to better than 20 ppm. The sole scientific instrument is the Photometer, a 0.95 m aperture Schmidt
telescope that feeds the 94.6 million pixel CCD detector array, which contains both Science and Fine Guidance
Sensor (FGS) CCDs. Since Kepler's launch in March 2009, we have been using the commissioning and science
operations data to characterize the instrument and monitor its performance. We find that the in-flight detector
properties of the focal plane, including bias levels, read noise, gain, linearity, saturation, FGS to Science crosstalk,
and video crosstalk between Science CCDs, are essentially unchanged from their pre-launch values. Kepler's
unprecedented sensitivity and stability in space have allowed us to measure both short- and long-term effects from
cosmic rays, see interactions of previously known image artifacts with starlight, and uncover several unexpected
systematics that affect photometric precision. Based on these results, we expect to attain Kepler's planned
photometric precision over 90% of the field of view.
The Kepler mission is designed to detect the transit of Earth-like planets around Sun-like stars by observing
100,000 stellar targets. Developing and testing the Kepler ground-segment processing system, in particular the
data analysis pipeline, requires high-fidelity simulated data. This simulated data is provided by the Kepler End-to-End
Model (ETEM). ETEM simulates the astrophysics of planetary transits and other phenomena, properties
of the Kepler spacecraft and the format of the downlinked data. Major challenges addressed by ETEM include
the rapid production of large amounts of simulated data, extensibility and maintainability.
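As a toy illustration of one ingredient such a simulator must produce, the sketch below generates a periodic box-shaped transit sampled at Kepler's long cadence; the parameter names and noise level are illustrative, not ETEM's actual interface.

```python
import numpy as np

def box_transit(t_hours, period_h, epoch_h, duration_h, depth_ppm):
    """Relative flux (1.0 out of transit) for a periodic box-shaped transit."""
    phase = (t_hours - epoch_h) % period_h
    flux = np.ones_like(t_hours)
    flux[phase < duration_h] -= depth_ppm * 1e-6
    return flux

cadence_h = 29.4 / 60.0                       # long-cadence sampling, hours
t = np.arange(0.0, 90 * 24, cadence_h)        # one ~90-day quarter
flux = box_transit(t, period_h=15 * 24, epoch_h=36.0,
                   duration_h=6.5, depth_ppm=80.0)
flux += np.random.normal(0.0, 20e-6, t.size)  # illustrative white-noise level
```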
In order for Kepler to achieve its required <20 ppm photometric precision for magnitude 12 and brighter stars,
instrument-induced variations in the CCD readout bias pattern (our "2D black image"), which are either fixed or slowly
varying in time, must be identified and the corresponding pixels either corrected or removed from further data
processing. The two principal sources of these readout bias variations are crosstalk between the 84 science CCDs and the
4 fine guidance sensor (FGS) CCDs and a high frequency amplifier oscillation on <40% of the CCD readout channels.
The crosstalk produces a synchronous pattern in the 2D black image with time-variation observed in <10% of individual
pixel bias histories. We will describe a method of removing the crosstalk signal using continuously-collected data from
masked and over-clocked image regions (our "collateral data"), and occasionally-collected full-frame images and
reverse-clocked readout signals. We use this same set to detect regions affected by the oscillating amplifiers. The
oscillations manifest as time-varying moiré patterns and rolling bands in the affected channels. Because this effect
reduces the performance in only a small fraction of the array at any given time, we have developed an approach for
flagging suspect data. The flags will provide the necessary means to resolve any potential ambiguity between
instrument-induced variations and real photometric variations in a target time series. We will also evaluate the
effectiveness of these techniques using flight data from background and selected target pixels.
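As a rough illustration of the collateral-based correction (not the flight algorithm itself), the sketch below assumes the crosstalk repeats with a known FGS phase per science row and that the over-clocked (trailing black) columns of each row see the same bias offset as the photometric columns; the array names are hypothetical.

```python
import numpy as np

def crosstalk_offsets(trailing_black, row_phase, nphases):
    """Mean over-clock level per FGS phase: one bias offset per phase."""
    return np.array([trailing_black[row_phase == p].mean()
                     for p in range(nphases)])

def remove_crosstalk(image, row_phase, offsets):
    """Subtract the phase-dependent bias offset from every image row."""
    return image - offsets[row_phase][:, None]

# trailing_black: (nrows,) mean of the over-clocked columns in each row
# row_phase:      (nrows,) integer FGS frame phase of each row, 0..nphases-1
```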
The Kepler Mission simultaneously measures the brightness of more than 160,000 stars every 29.4 minutes over a 3.5-year
mission to search for transiting planets. Detecting transits is a signal-detection problem where the signal of interest is a
periodic pulse train and the predominant noise source is a non-white, non-stationary (1/f-type) process of stellar variability.
Many stars also exhibit coherent or quasi-coherent oscillations. The detection algorithm first identifies and removes strong
oscillations, then applies an adaptive, wavelet-based matched filter. We discuss how we obtain super-resolution detection
statistics and the effectiveness of the algorithm for Kepler flight data.
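A minimal single-scale version of the idea, using first-differencing as a crude stand-in for the adaptive, wavelet-based whitening (which is not shown):

```python
import numpy as np

def detection_statistic(flux, template):
    """Matched-filter statistic after crude whitening by first-differencing."""
    y = np.diff(flux)       # differencing suppresses 1/f-like backgrounds
    s = np.diff(template)   # apply the same whitening to the pulse train
    return np.dot(y, s) / np.sqrt(np.dot(s, s))
```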
We give an overview of the operational concepts and architecture of the Kepler Science Processing Pipeline. Designed,
developed, operated, and maintained by the Kepler Science Operations Center (SOC) at NASA Ames Research Center,
the Science Processing Pipeline is a central element of the Kepler Ground Data System. The SOC consists of an office at
Ames Research Center, software development and operations departments, and a data center which hosts the computers
required to perform data analysis. The SOC's charter is to analyze stellar photometric data from the Kepler spacecraft
and report results to the Kepler Science Office for further analysis. We describe how this is accomplished via the Kepler
Science Processing Pipeline, including the hardware infrastructure, scientific algorithms, and operational procedures. We
present the high-performance, parallel computing software modules of the pipeline that perform transit photometry,
pixel-level calibration, systematic error correction, attitude determination, stellar target management, and instrument
characterization. We show how data processing environments are divided to support operational processing and test
needs. We explain the operational timelines for data processing and the data constructs that flow into the Kepler Science Processing Pipeline.
The Kepler spacecraft is in a heliocentric Earth-trailing orbit, continuously observing ~160,000 select stars over ~115
square degrees of sky using its photometer containing 42 highly sensitive CCDs. The science data from these stars,
consisting of ~6 million pixels at 29.4-minute intervals, is downlinked only every ~30 days. Additional low-rate X-band
communications contacts are conducted with the spacecraft twice a week to downlink a small subset of the science
data. This paper describes how we assess and monitor the performance of the photometer and the pointing stability of the
spacecraft using such a sparse data set.
The Kepler Science Operations Center (SOC) is responsible for several aspects of the Kepler Mission, including
managing targets, generating onboard data compression tables, monitoring photometer health and status, processing
science data, and exporting Kepler Science Processing Pipeline products to the Multi-mission Archive at Space
Telescope (MAST). We describe how the pipeline framework software developed for the Kepler
Mission is used to achieve these goals, including development of pipeline configurations for processing science data and
performing other support roles, and development of custom unit-of-work generators for controlling how Kepler data are
partitioned and distributed across the computing cluster. We describe the interface between the Java software that
manages data retrieval and storage for a given unit of work and the MATLAB algorithms that process the data. The data
for each unit of work are packaged into a single file that contains everything needed by the science algorithms, allowing
the files to be used to debug and evolve the algorithms offline.
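A minimal sketch of a custom unit-of-work generator in this spirit, partitioning targets by CCD module/output so each task processes an independent slice of the focal plane; the class and field names are illustrative, not the framework's actual Java API.

```python
from dataclasses import dataclass

@dataclass
class UnitOfWork:
    module: int
    output: int
    target_ids: list

def generate_units(targets):
    """Group (target_id, module, output) records into per-channel units."""
    units = {}
    for target_id, module, output in targets:
        units.setdefault((module, output), []).append(target_id)
    return [UnitOfWork(m, o, ids) for (m, o), ids in sorted(units.items())]
```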
The Kepler space telescope is designed to detect Earth-like planets around Sun-like stars using transit photometry by
simultaneously observing more than 100,000 stellar targets nearly continuously over a three-and-a-half year period. The
94.6-megapixel focal plane consists of 42 Charge-Coupled Devices (CCDs), each containing two 1024 × 1100 pixel
arrays. Since cross-correlations between calibrated pixels are introduced by common calibrations performed on each
CCD, downstream data processing requires access to the calibrated pixel covariance matrix to properly estimate
uncertainties. However, the prohibitively large covariance matrices corresponding to the ~75,000 calibrated pixels per
CCD preclude calculating and storing the covariance in standard lock-step fashion. We present a novel framework used
to implement standard Propagation of Uncertainties (POU) in the Kepler Science Operations Center (SOC) data
processing pipeline. The POU framework captures the variance of the raw pixel data and the kernel of each subsequent
calibration transformation, allowing the full covariance matrix of any subset of calibrated pixels to be recalled on the fly
at any step in the calibration process. Singular Value Decomposition (SVD) is used to compress and filter the raw
uncertainty data as well as any data-dependent kernels. This combination of POU framework and SVD compression
allows the downstream consumer access to the full covariance matrix of any subset of the calibrated pixels which is
traceable to the pixel-level measurement uncertainties, all without having to store, retrieve, and operate on prohibitively
large covariance matrices. We describe the POU framework and SVD compression scheme and its implementation in the
Kepler SOC pipeline.
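To make the idea concrete: if each calibration step is (to first order) a linear transformation, the calibrated covariance is C = A diag(v) Aᵀ, with A the product of the step kernels and v the raw pixel variances, so it suffices to carry a low-rank factor F with C ≈ F Fᵀ. A minimal sketch, with an illustrative truncation rule:

```python
import numpy as np

def compressed_factor(kernels, raw_var, rank):
    """Build F = A_n ... A_1 diag(sqrt(v)), then truncate via SVD."""
    F = np.diag(np.sqrt(raw_var))
    for A in kernels:                  # calibration kernels, in order applied
        F = A @ F
    U, s, _ = np.linalg.svd(F, full_matrices=False)
    return U[:, :rank] * s[:rank]      # C ≈ F_k @ F_k.T

def subset_covariance(F, idx):
    """Recall the covariance of an arbitrary pixel subset on the fly."""
    return F[idx] @ F[idx].T
```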
We present an overview of the Data Validation (DV) software component and its context within the Kepler Science
Operations Center (SOC) pipeline and overall Kepler Science mission. The SOC pipeline performs a transiting planet
search on the corrected light curves for over 150,000 targets across the focal plane array. We discuss the DV strategy for
automated validation of Threshold Crossing Events (TCEs) generated in the transiting planet search. For each TCE, a
transiting planet model is fitted to the target light curve. A multiple planet search is conducted by repeating the transiting
planet search on the residual light curve after the model flux has been removed; if an additional detection occurs, a
planet model is fitted to the new TCE. A suite of automated tests is performed after all planet candidates have been
identified. We describe a centroid motion test to determine the significance of the motion of the target photocenter
during transit and to estimate the coordinates of the transit source within the photometric aperture; a series of eclipsing
binary discrimination tests on the parameters of the planet model fits to all transits and the sequences of odd and even
transits; and a statistical bootstrap to assess the likelihood that the TCE would have been generated purely by chance
given the target light curve with all transits removed.
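As an example of one such discrimination test, the sketch below computes the significance of an odd-versus-even transit depth difference, the signature of a background eclipsing binary detected at half its true period; the fitted depths and uncertainties are assumed inputs.

```python
import numpy as np

def odd_even_significance(depth_odd, sig_odd, depth_even, sig_even):
    """Gaussian significance of the odd/even transit depth difference."""
    return abs(depth_odd - depth_even) / np.hypot(sig_odd, sig_even)
```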
The Kepler mission monitors ~165,000 stellar targets using 42 2200 × 1024 pixel CCDs. Onboard storage
and bandwidth constraints prevent the storage and downlink of all 96 million pixels per 30-minute cadence, so
the Kepler spacecraft downlinks a specified collection of pixels for each target. These pixels are selected by
considering the object brightness, the background, and the signal-to-noise in each pixel, and by maximizing the signal-to-
noise ratio of the target. This paper describes pixel selection, creation of spacecraft apertures that efficiently
capture selected pixels, and aperture assignment to a target. Engineering apertures, short-cadence targets and
custom-specified shapes are discussed.
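A minimal sketch of SNR-driven aperture construction in this spirit: rank pixels by per-pixel SNR and grow the aperture in that order until the aggregate SNR of the target peaks. The noise model is illustrative.

```python
import numpy as np

def select_pixels(signal, noise):
    """Return indices of the pixel set that maximizes aggregate SNR."""
    order = np.argsort(signal / noise)[::-1]     # best pixels first
    cum_sig = np.cumsum(signal[order])
    cum_var = np.cumsum(noise[order] ** 2)
    snr = cum_sig / np.sqrt(cum_var)
    best = int(np.argmax(snr)) + 1               # aperture size at peak SNR
    return order[:best]
```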
We present an overview of the pixel-level calibration of flight data from the Kepler Mission performed within the Kepler
Science Operations Center Science Processing Pipeline. This article describes the calibration (CAL) module, which
operates on original spacecraft data to remove instrument effects and other artifacts that pollute the data. Traditional
CCD data reduction is performed (removal of instrument/detector effects such as bias and dark current), in addition to
pixel-level calibration (correcting for cosmic rays and variations in pixel sensitivity), Kepler-specific corrections
(removing smear signals which result from the lack of a shutter on the photometer and correcting for distortions induced
by the readout electronics), and additional operations that are needed due to the complexity and large volume of flight
data. CAL operates on long (~30 min) and short (~1 min) sampled data, as well as full-frame images, and produces
calibrated pixel flux time series, uncertainties, and other metrics that are used in subsequent Pipeline modules. The raw
and calibrated data are also archived in the Multi-mission Archive at Space Telescope at the Space Telescope Science
Institute for use by the astronomical community.
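As an illustration of the smear correction (one of the Kepler-specific steps above): with no shutter, each pixel in a column accumulates flux while charge is clocked through it, adding a column-wise offset that both the masked and virtual smear rows measure. A minimal sketch with illustrative array names:

```python
import numpy as np

def remove_smear(image, masked_smear, virtual_smear):
    """Subtract a per-column smear estimate from a bias-corrected image."""
    # Average the two collateral estimates column by column.
    smear = 0.5 * (masked_smear.mean(axis=0) + virtual_smear.mean(axis=0))
    return image - smear[None, :]
```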
This paper describes the algorithms of the Photometer Performance Assessment (PPA) software component in the
Kepler Science Operations Center (SOC) Science Processing Pipeline. The PPA performs two tasks: One is to analyze
the health and performance of the Kepler photometer based on the long cadence science data downlinked via Ka-band
approximately every 30 days. The second is to determine the attitude of the Kepler spacecraft with high precision at each
long cadence. The PPA component has demonstrated the capability to work effectively with the Kepler flight data.
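As a sketch of the second task: the attitude offset at each cadence can be obtained from a least-squares fit of a small rotation plus translation that maps catalog star positions onto their measured centroids. The flight code works with spacecraft quaternions; this 2-D linearization is only illustrative.

```python
import numpy as np

def fit_attitude(catalog_xy, measured_xy):
    """Return (dx, dy, dtheta) minimizing |measured - model(catalog)|^2."""
    x, y = catalog_xy[:, 0], catalog_xy[:, 1]
    # Linearized model: x_meas ≈ x + dx - theta*y ; y_meas ≈ y + dy + theta*x
    A = np.zeros((2 * len(x), 3))
    A[0::2, 0] = 1.0          # dx
    A[1::2, 1] = 1.0          # dy
    A[0::2, 2] = -y           # small-angle rotation terms
    A[1::2, 2] = x
    b = (measured_xy - catalog_xy).ravel()
    (dx, dy, dtheta), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy, dtheta
```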
The Kepler Mission is designed to continuously monitor up to 170,000 stars at a 30-minute cadence for 3.5 years
searching for Earth-size planets. The data are processed at the Science Operations Center at NASA Ames Research
Center. Because of the large volume of data and the memory needed, as well as the CPU-intensive nature of the
analyses, significant computing hardware is required. We have developed generic pipeline framework software that is
used to distribute and synchronize processing across a cluster of CPUs and provide data accountability for the resulting
products. The framework is written in Java and is, therefore, platform-independent. The framework scales from a single,
standalone workstation (for development and research on small data sets) to a full cluster of homogeneous or
heterogeneous hardware with minimal configuration changes. A plug-in architecture provides customized, dynamic
control of the unit of work without the need to modify the framework. Distributed transaction services provide for
atomic storage of pipeline products for a unit of work across a relational database and the custom Kepler DB. Generic
parameter management and data accountability services record parameter values, software versions, and other metadata
used for each pipeline execution. A graphical user interface allows for configuration, execution, and monitoring of
pipelines. The framework was developed for the Kepler Mission based on Kepler requirements, but the framework itself
is generic and could be used for a variety of applications where these features are needed.
NASA's Kepler Mission is designed to determine the frequency of Earth-size and larger planets in the habitable zone of solar-like stars. It uses transit photometry from space to determine planet size relative to its star and orbital period. From these measurements, and those of complementary ground-based observations of planet-hosting stars, and from Kepler's third law, the actual size of the planet, its position relative to the habitable zone, and the presence of other planets can be deduced. The Kepler photometer is designed around a 0.95 m aperture wide field-of-view (FOV) Schmidt type telescope with a large array of CCD detectors to continuously monitor 100,000 stars in a single FOV for four years. To detect terrestrial planets, the photometer uses differential relative photometry to obtain a precision of 20 ppm for 12th magnitude stars. The combination of the number of stars that must be monitored to get a statistically significant estimate of the frequency of Earth-size planets, the size of Earth with respect to the Sun, the minimum number of photoelectrons required to recognize the transit signal while maintaining a low false-alarm rate, and the areal density of target stars of differing brightness are all critical to the photometer design.
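A worked example of that inference chain, under assumed solar values for the host star: the transit depth gives the planet-to-star size ratio, and Kepler's third law turns the measured period into an orbital distance.

```python
import numpy as np

depth = 80e-6                      # fractional transit depth (80 ppm)
r_star = 1.0                       # stellar radius, solar radii (assumed)
m_star = 1.0                       # stellar mass, solar masses (assumed)
period_yr = 1.0                    # orbital period from repeated transits

rp_earth = np.sqrt(depth) * r_star * 109.1     # R_sun ≈ 109.1 R_earth
a_au = (period_yr**2 * m_star) ** (1.0 / 3.0)  # Kepler's third law

print(f"planet radius ≈ {rp_earth:.2f} R_earth at a ≈ {a_au:.2f} AU")
# -> roughly 1 R_earth at 1 AU, i.e. an Earth-Sun analog
```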