Data is now produced faster than it can be meaningfully analyzed. Many modern data sets present unprecedented analytical challenges, not merely because of their size but because of their inherent complexity and information richness. Large
numbers of astronomical objects now have dozens or hundreds of useful parameters describing each one. Traditional
color-color plots using a limited number of symbols and some color-coding are clearly inadequate for finding all useful
correlations given such large numbers of parameters. To capitalize on the opportunities these data sets provide, one must be able to organize, analyze, and visualize them in fundamentally new ways. The identification and extraction of
useful information in multiparametric, high-dimensional data sets - data mining - is greatly facilitated by finding simpler,
that is, lower-dimensional abstract mathematical representations of the data sets that are more amenable to analysis.
Dimensionality reduction consists of finding a lower-dimensional representation of high-dimensional data by constructing
a set of basis functions that capture patterns intrinsic to a particular state space. Traditional methods of dimension reduction and pattern recognition often fail when applied to data sets as complex as those that now confront astronomy. We present our developments in data compression, sampling, nonlinear dimensionality reduction, and clustering, which are important steps in the analysis of large-scale, complex data sets.
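To illustrate the basic idea of constructing a lower-dimensional representation from basis functions, a minimal sketch of the linear case (principal component analysis via the singular value decomposition) follows; the nonlinear methods developed here refine this idea, and the data matrix, sample counts, and noise level below are invented for demonstration only:

```python
import numpy as np

def pca_reduce(X, k):
    """Project n samples of d-dimensional data onto the top-k principal
    components: a linear form of dimensionality reduction in which the
    rows of Vt serve as the basis functions."""
    Xc = X - X.mean(axis=0)              # center each parameter
    # SVD of the centered data matrix yields the principal axes in Vt
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                 # coordinates in the k-dim subspace

# toy example: 3-D points lying near a 2-D plane, plus small noise
rng = np.random.default_rng(0)
plane = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 3))
X = plane + 0.01 * rng.normal(size=(200, 3))
Y = pca_reduce(X, 2)                     # 200 samples, 2 coordinates each
```

The lower-dimensional coordinates `Y` preserve almost all of the structure in `X` because the data are intrinsically two-dimensional; real astronomical data sets require the nonlinear generalizations discussed in the text.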
A graphical user interface (GUI) for bandmerging is presented. The Bandmerge GUI gives astronomers an integrated, interactive interface for running the bandmerge module and its support modules. The bandmerge module identifies multi-band detections of an individual point source and merges the information from the different bands into a single record of the source. The Java application provides a front end to the downlink software, which is normally invoked on the command line. With the Bandmerge GUI, a <i>Spitzer</i> general user can select the data to be processed, specify processing parameters, and invoke the Bandmerge pipelines.
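The core operation the bandmerge module performs can be sketched as a positional cross-match: detections in different bands that fall within a small angular tolerance of one another are merged into one multi-band record. The following is an illustrative toy, not the Spitzer Science Center implementation; the band names, tolerance, and record layout are invented:

```python
import math

def merge_bands(band_lists, tol_arcsec=2.0):
    """Toy positional bandmerge (illustrative only): for each detection in
    the first band, attach the nearest detection in every other band that
    lies within tol_arcsec, producing one record per source.

    band_lists: dict mapping band name -> list of (ra_deg, dec_deg, flux)
    """
    tol_deg = tol_arcsec / 3600.0
    bands = list(band_lists)
    ref_band = bands[0]
    merged = []
    for ra, dec, flux in band_lists[ref_band]:
        record = {ref_band: (ra, dec, flux)}
        for b in bands[1:]:
            best, best_d = None, tol_deg
            for ra2, dec2, f2 in band_lists[b]:
                # small-angle separation with cos(dec) correction on RA
                d = math.hypot((ra2 - ra) * math.cos(math.radians(dec)),
                               dec2 - dec)
                if d < best_d:
                    best, best_d = (ra2, dec2, f2), d
            if best is not None:
                record[b] = best
        merged.append(record)
    return merged

# hypothetical detections of one source seen in two bands
dets = {"24um": [(10.0, -5.0, 1.2)],
        "70um": [(10.0001, -5.0001, 3.4)]}
out = merge_bands(dets)
```

The real module must additionally handle confusion (several candidates within the tolerance) and band-dependent positional uncertainties, which is why it is pipeline software rather than a few lines of matching code.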
A new nonlinear diffusion filtering scheme, based on a nonlinear diffusion equation with a variable scale parameter, is developed to preserve faint point sources while smoothing images for segmentation purposes. Application of the proposed approach to simulated images, as well as to real images obtained by the <i>Spitzer Space Telescope</i> and by the <i>Chandra</i> X-ray Observatory, reduced the Gaussian and Poisson noise successfully while preserving both point sources and diffuse structures.
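The mechanism can be sketched with a classical Perona-Malik-style explicit scheme: diffusion is suppressed where local gradients exceed a scale parameter k (edges, point sources) and proceeds freely where gradients are small (noise). This is only a fixed-k sketch; the scheme in the text varies the scale parameter, and the image size, iteration count, and noise level below are invented:

```python
import numpy as np

def diffuse(img, n_iter=20, dt=0.2, k=0.1):
    """Explicit nonlinear diffusion with an edge-stopping function.
    Gradients much larger than k (point sources) diffuse slowly;
    gradients much smaller than k (noise) are smoothed away.
    Periodic boundaries are used for simplicity."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # one-sided differences to the four neighbours
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        g = lambda d: np.exp(-(d / k) ** 2)   # edge-stopping diffusivity
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# toy image: weak Gaussian background noise plus one bright point source
rng = np.random.default_rng(1)
img = 0.01 * rng.normal(size=(32, 32))
img[16, 16] = 1.0
out = diffuse(img)
```

After filtering, the background noise is strongly suppressed while the point source survives nearly unchanged, which is the property needed for reliable segmentation of faint sources.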
The Multiband Imaging Photometer for Spitzer (MIPS) provides long-wavelength capability for the mission, with imaging bands at 24, 70, and 160 microns and measurements of spectral energy distributions between 52 and 100 microns at a spectral resolution of about 7%. By using true detector arrays in each band, it provides both critical sampling of the Spitzer point spread function and relatively large imaging fields of view, allowing substantial advances in sensitivity, angular resolution, and efficiency of areal coverage compared with previous space far-infrared capabilities. The Si:As BIB 24 micron array has excellent photometric properties, and measurements with rms relative errors of 1% or better can be obtained. The two longer-wavelength arrays use Ge:Ga detectors with poor photometric stability. However, the combination of (1) a scan mirror that modulates the signals rapidly on these arrays, (2) a system of on-board stimulators used for relative calibration approximately every two minutes, and (3) specialized reduction software results in good photometry with these arrays as well, with rms relative errors of less than 10%.
Traditional photoconductive detectors are used at 70 and 160 microns in the Multiband Imaging Photometer for SIRTF. These devices are highly sensitive to cosmic rays and have complex response characteristics, all of which must be anticipated in the data reduction pipeline. The pipeline is being developed by a team at the SIRTF Science Center, where the detailed design and coding are carried out, and at Steward Observatory, where the high-level algorithms are developed and detector tests are conducted to provide data for pipeline experiments. A number of innovations have been introduced. Burger's model is used to extrapolate to asymptotic values for the response of the detectors; this approach permits rapid fitting of the complexities in the detector response. Examples of successful and unsuccessful fits to the laboratory test data are shown.
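The idea of extrapolating a slowly stabilizing detector signal to its asymptotic value can be sketched with a simpler stand-in model. The following does not reproduce the Burger-model fit used in the pipeline; it fits a hypothetical single-exponential approach s(t) = s_inf - a*exp(-t/tau) by grid-searching tau and solving linearly for (s_inf, a), and the ramp parameters are invented:

```python
import numpy as np

def fit_asymptote(t, s, taus=np.linspace(0.5, 50.0, 200)):
    """Fit s(t) ~ s_inf - a * exp(-t / tau) and return the asymptotic
    response s_inf (a hypothetical stand-in for the pipeline's
    Burger-model extrapolation). For each trial tau the remaining
    parameters enter linearly, so a least-squares solve suffices."""
    best_sse, best_sinf = np.inf, None
    for tau in taus:
        e = np.exp(-t / tau)
        A = np.column_stack([np.ones_like(t), -e])   # columns: s_inf, a
        coef, *_ = np.linalg.lstsq(A, s, rcond=None)
        sse = np.sum((s - A @ coef) ** 2)
        if sse < best_sse:
            best_sse, best_sinf = sse, coef[0]
    return best_sinf

# synthetic response ramp: true asymptote 100, time constant 10 s
t = np.linspace(0.0, 30.0, 60)
s = 100.0 - 40.0 * np.exp(-t / 10.0)
s_inf = fit_asymptote(t, s)
```

Extrapolating to the asymptote in this way lets the pipeline report a calibrated flux without waiting for the slow Ge:Ga response to settle, at the cost of sensitivity to how well the transient model matches the data.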