A data tsunami is overwhelming astronomy. This wave is affecting all aspects of our field, revolutionizing not just the type of scientific questions being asked, but the very nature of how the answers are uncovered. In this invited proceeding, we address a particular scientific application of the forthcoming virtual observatories, which have arisen in an effort to control the effects of the data tsunami: Panchromatic Mining for Quasars. This project, in addition to serving as an important scientific driver for virtual observatory technologies, is designed to a) characterize the multi-wavelength nature of known active galaxies and quasars, especially in relation to their local environment, in order to b) quantify the clustering of these known systems in the multidimensional parameter space formed by their observables, so that new, and potentially unknown, types of systems can be optimally targeted.
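As a hedged illustration of step (b), the sketch below quantifies clustering in a multidimensional space of observables with a Gaussian mixture model and flags low-likelihood sources as candidates for unknown types of systems. The data, dimensionality, and threshold are placeholders, not the project's actual pipeline.

```python
# A minimal sketch, assuming a table of known quasars whose rows are objects
# and whose columns are multi-wavelength observables (e.g. optical/radio/
# X-ray colors and fluxes). All numbers here are synthetic placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
known = rng.normal(size=(500, 4))          # stand-in for known AGN/quasars

gmm = GaussianMixture(n_components=3, covariance_type="full").fit(known)

# Score new multi-wavelength detections against the learned clusters.
new_sources = rng.normal(size=(100, 4))
log_like = gmm.score_samples(new_sources)

# Objects far from every known cluster are outlier candidates worth targeting.
threshold = np.percentile(gmm.score_samples(known), 1)
candidates = new_sources[log_like < threshold]
print(f"{len(candidates)} outlier candidates flagged")
```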
In this paper, we describe the use of data mining techniques to search for radio-emitting galaxies with a bent-double morphology. In the past, astronomers from the FIRST (Faint Images of the Radio Sky at Twenty-cm) survey identified these galaxies through visual inspection. This was not only subjective but also tedious, as the ongoing survey now covers 8000 square degrees, with each square degree containing about 90 galaxies. We describe how data mining can be used to automate the identification of these galaxies, and discuss the challenges faced in defining meaningful features that represent the shape of a galaxy, along with our experiences with ensembles of decision trees for the classification of bent-double galaxies.
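A hedged sketch of the classification stage: a random-forest ensemble of decision trees trained on per-galaxy shape features. The feature columns below are random placeholders for the morphology features discussed in the paper.

```python
# A minimal sketch, assuming hypothetical shape descriptors (e.g. opening
# angle between radio lobes, component ellipticities, intensity ratios);
# the data here are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 1000
X = rng.normal(size=(n, 6))          # one row of shape features per galaxy
y = rng.integers(0, 2, size=n)       # 1 = visually confirmed bent-double

forest = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(forest, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```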
We review the capabilities of the NASA/IPAC Extragalactic Database (NED, http://ned.ipac.caltech.edu) for information retrieval and knowledge discovery in the context of a globally distributed virtual observatory. Since its inception in 1990, NED has provided astronomers world-wide with the results of a systematic cross-correlation of catalogs covering all wavelengths, along with thousands of extragalactic observations culled from published journal articles. NED is continuously being expanded and revised to include new catalogs and published observations, each undergoing a process of cross-identification to capture the current state of knowledge about extragalactic sources in a panchromatic fashion. In addition to assimilating data from the literature, the team is incrementally folding in millions of observations from new large-scale sky surveys such as 2MASS, NVSS, APM, and SDSS. At the time of writing the system contains over 3.3 million unique objects with 4.2 million cross-identifications. We summarize the recent evolution of NED from its initial emphasis on object name-, position-, and literature-based queries into a research environment that also assists statistical data exploration and discovery using large samples of objects. Newer capabilities enable intelligent Web mining of entries in geographically distributed astronomical archives that are indexed by object names and positions in NED, sample building using constraints on redshifts, object types, and other parameters, as well as mining of image and spectral archives for targeted or serendipitous discoveries. A pilot study demonstrates how NED is being used in conjunction with linked survey archives to characterize the properties of galaxy classes to form a training set for machine learning algorithms; an initial goal is the production of statistical likelihoods that newly discovered sources belong to known classes, represent statistical outliers, or are candidates for fundamentally new types of objects. Challenges and opportunities for tighter integration of NED capabilities into data mining tools for astronomy archives are also discussed.
In this paper, we deal with FOCA ultraviolet data and their cross-referencing with the DPOSS optical catalog through data mining techniques. While traditional cross-referencing consists of correcting catalog coordinates in order to seek the nearest candidate, non-optical surveys tend to have lower resolutions and greater coordinate uncertainties. It therefore seemed wasteful not to use the many other source parameters produced by image processing pipelines. Using a data mining approach based on decision trees (machine learning algorithms), we processed FOCA/DPOSS source pairs that could be assumed to be the same stellar entity, along with other pairs obviously too distant to match. The trees use every ultraviolet/optical parameter present in the catalogs, excluding only the coordinates. The resulting trees allow the classification of any FOCA/DPOSS pair, giving a probability that the pair matches, i.e., comes from the same source. The originality of this method is its use of non-positional parameters, which can be used for cross-referencing various catalogs at different wavelengths without the need to homogenize coordinate systems. Such methods could become tools for working with upcoming multi-wavelength catalogs.
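A minimal sketch of the pair-classification idea, assuming hypothetical pair features built only from non-positional catalog parameters; a decision tree then yields a match probability for each FOCA/DPOSS pair.

```python
# A sketch under stated assumptions: each row describes one candidate pair
# via differences/ratios of UV and optical parameters (magnitudes, extents,
# shape measures) -- coordinates deliberately excluded. Data are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=(n, 5))          # non-positional pair features
y = rng.integers(0, 2, size=n)       # 1 = pair known to be the same source

tree = DecisionTreeClassifier(max_depth=6).fit(X, y)
p_match = tree.predict_proba(X[:3])[:, 1]   # probability each pair matches
print(p_match)
```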
Like every other field of intellectual endeavor, astronomy is being revolutionized by advances in information technology. There is an ongoing exponential growth in the volume, quality, and complexity of astronomical data sets, mainly through large digital sky surveys and archives. The Virtual Observatory (VO) concept represents a scientific and technological framework needed to cope with this data flood. Systematic exploration of the observable parameter spaces, covered by large digital sky surveys spanning a range of wavelengths, will be one of the primary modes of research with a VO. This is where the truly new discoveries will be made, and new insights gained about already known astronomical objects and phenomena. We review some of the methodological challenges posed by the analysis of large and complex data sets expected in VO-based research. The challenges are driven both by the size and the complexity of the data sets (billions of data vectors in parameter spaces of tens or hundreds of dimensions), by the heterogeneity of the data and measurement errors, including differences in basic survey parameters for the federated data sets (e.g., in the positional accuracy and resolution, wavelength coverage, time baseline, etc.), various selection effects, as well as the intrinsic clustering properties (functional form, topology) of the data distributions in the parameter spaces of observed attributes. Answering these challenges will require substantial collaborative efforts and partnerships between astronomers, computer scientists, and statisticians.
The advent of large-format CCD detectors is producing a data flow which cannot be handled with traditional interactive software tools and which, to be effectively exploited, requires automatic tools for catalogue extraction and analysis. NExt (Neural Extractor) is a neural-network-based package capable of performing, in a fully automatic way, both object detection and star/galaxy classification on large-format astronomical images. In this paper we briefly summarize the main aspects of the package, stressing some innovative aspects of the procedure implemented to perform the automatic extraction of the data set used for training.
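As a stand-in for the detection step that NExt automates (NExt itself uses neural networks for both detection and classification), the sketch below shows a conventional threshold-and-label baseline; the 3-sigma rule and the synthetic frame are illustrative assumptions.

```python
# A minimal sketch: estimate the background, threshold the frame, and label
# connected pixel groups as candidate objects.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
image = rng.normal(loc=100.0, scale=5.0, size=(256, 256))  # fake sky frame
image[100:104, 50:54] += 80.0                              # one bright source

background = np.median(image)
noise = np.std(image)
mask = image > background + 3.0 * noise       # 3-sigma detection threshold

labels, n_objects = ndimage.label(mask)       # connected-component objects
centroids = ndimage.center_of_mass(image - background, labels,
                                   np.arange(1, n_objects + 1))
print(f"{n_objects} objects detected at {centroids}")
```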
Data Modeling, Background Estimation, Cosmic Ray Removal,...
Giuseppe Longo, Roberto Tagliaferri, Salvatore Sessa, Patricio F. Ortiz, Massimo Capaccioli, A. Ciaramella, Ciro Donalek, Giancarlo Raiconi, A. Staiano, et al.
The advent of large format CCD detectors and of dedicated survey telescopes is providing the astronomical community with datasets of unprecedented size and quality. These data sets cannot be effectively exploited with traditional interactive tools and require the use of innovative data mining and visualization tools resulting from a synergy between astronomy and information sciences. We discuss here some preliminary results obtained by our group in the fields of automatic clustering in multiparametric space and detection of time transients (both astrometric and photometric).
The ever-increasing quality and complexity of astronomical data underscores the need for new and powerful data analysis applications. This need has led to the development of Sherpa, a modeling and fitting program in the CIAO software package that enables the analysis of multi-dimensional, multi-wavelength data. In this paper, we present an overview of Sherpa's features, which include: support for a wide variety of input and output data formats, including the new Model Descriptor List (MDL) format; a model language which permits the construction of arbitrarily complex model expressions, including ones representing instrument characteristics; a wide variety of fit statistics and methods of optimization, model comparison, and parameter estimation; multi-dimensional visualization, provided by ChIPS; and new interactive analysis capabilities provided by embedding the S-Lang interpreted scripting language. We conclude by showing example Sherpa analysis sessions.
We propose a new satellite attitude determination method based on the Bayesian bootstrap filtering approach. The proposed method estimates the three Euler angles using vector observations obtained from a star sensor together with gyro angular-rate information. The system dynamics and measurement models of this problem are highly nonlinear functions of the Euler angles and the angular velocities. Moreover, the well-known singularity problem of the Euler angles may be encountered during random spinning of the satellite. To verify the proposed method, a simulation is performed, and the result demonstrates that our method gives better Euler angle estimates than the EKF (extended Kalman filter).
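Since the paper's attitude model is high-dimensional, the sketch below illustrates the Bayesian bootstrap (particle) filter itself on a toy scalar nonlinear state-space model: propagate particles through the dynamics, weight them by the measurement likelihood, and resample. The dynamics and noise levels are stand-ins, not the satellite model.

```python
# A minimal bootstrap filter sketch on an assumed toy model.
import numpy as np

rng = np.random.default_rng(4)
n_steps, n_particles = 50, 1000
q, r = 0.1, 0.5                  # process / measurement noise std (assumed)

# Simulate a truth trajectory and nonlinear, noisy measurements.
x_true = np.zeros(n_steps)
z = np.zeros(n_steps)
for k in range(1, n_steps):
    x_true[k] = np.sin(x_true[k - 1]) + q * rng.normal()
    z[k] = 0.5 * x_true[k] ** 2 + r * rng.normal()

particles = rng.normal(size=n_particles)
estimate = np.zeros(n_steps)
for k in range(1, n_steps):
    # 1. Propagate every particle through the nonlinear dynamics.
    particles = np.sin(particles) + q * rng.normal(size=n_particles)
    # 2. Weight by the likelihood of the new measurement.
    w = np.exp(-0.5 * ((z[k] - 0.5 * particles ** 2) / r) ** 2) + 1e-300
    w /= w.sum()
    estimate[k] = np.sum(w * particles)
    # 3. Bootstrap resampling: duplicate particles in proportion to weight.
    particles = rng.choice(particles, size=n_particles, p=w)

print(f"final estimate {estimate[-1]:+.3f}, truth {x_true[-1]:+.3f}")
```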
Measuring vector magnetic fields in the solar atmosphere using the profiles of the Stokes parameters of polarized spectral lines split by the Zeeman effect is known as Stokes inversion. This inverse problem is usually solved by least-squares fitting of the Stokes profiles. However, least-squares inversion is too slow for the new generation of solar instruments (THEMIS, SOLIS, Solar-B, ...), which will produce an ever-growing flood of spectral data. The solar community urgently requires a new approach capable of handling this information explosion, preferably in real time. We have successfully applied pattern recognition and machine learning techniques to tackle this problem. For example, we have developed PCA-inversion, a database search technique based on Principal Component Analysis of the Stokes profiles. Search is fast because it is carried out in a low-dimensional PCA feature space, rather than the high-dimensional space of the spectral signals. Such a data compression approach has been widely used for search and retrieval in many areas of data mining. PCA-inversion is the basis of a new inversion code called FATIMA (Fast Analysis Technique for the Inversion of Magnetic Atmospheres). Tests on data from HAO's Advanced Stokes Polarimeter show that FATIMA is over two orders of magnitude faster than least-squares inversion. Initial tests on an alternative code (DIANNE - Direct Inversion based on Artificial Neural NEtworks) show great promise of achieving real-time performance. In this paper we present the latest achievements of FATIMA and DIANNE, two powerful examples of how pattern recognition techniques can revolutionize data analysis in astronomy.
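A hedged sketch of the PCA-inversion search: compress a database of synthetic Stokes profiles to a handful of principal components and match an observed profile by nearest-neighbour lookup in that low-dimensional space. The database size, component count, and noise level are illustrative assumptions.

```python
# A minimal sketch: in practice each database row would be a synthetic
# Stokes profile computed for one set of atmospheric/magnetic parameters,
# stored alongside those parameters; here the rows are random stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(5)
n_models, n_wavelengths = 10000, 128
database = rng.normal(size=(n_models, n_wavelengths))

pca = PCA(n_components=10).fit(database)       # data compression step
features = pca.transform(database)             # 128-d profiles -> 10-d

index = NearestNeighbors(n_neighbors=1).fit(features)

observed = database[1234] + 0.05 * rng.normal(size=n_wavelengths)
_, match = index.kneighbors(pca.transform(observed[None, :]))
print(f"best-matching model: {match[0, 0]}")   # expect 1234
```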
Aperture phase errors due to wave propagation through random media (the troposphere in optics; the troposphere and ionosphere for radio astronomy super-large synthesis arrays) are among the main obstacles to achieving the potential angular resolution and image quality. One method of finding phase corrections is image sharpness maximization. Genetic algorithms for image sharpness maximization are proposed and tested in computer simulations in this paper. Genetic algorithms are especially well suited to this problem because they robustly find a global maximum in such a multimodal task as searching for the optimum aperture phase distribution for phase-error compensation. Results of computer simulations of image enhancement with genetic algorithms are presented, showing significant improvement in the quality of the test images. This approach permits both calibration observations of point-like objects (bright stars or calibration radio sources) and the enhancement of images of extended objects.
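A compact, hypothetical version of the approach: chromosomes are low-order phase-correction coefficients, fitness is the classical sharpness metric (the sum of squared pixel intensities), and images are formed by an FFT of the aberrated aperture. The aberration basis, population size, and GA operators are illustrative choices, not the paper's.

```python
# A minimal genetic-algorithm sketch for sharpness maximization.
import numpy as np

rng = np.random.default_rng(6)
N = 32
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
aperture = (x ** 2 + y ** 2) <= 1.0
# Low-order aberration modes (defocus and two astigmatisms); tilts are
# omitted because they only shift the image and leave sharpness unchanged.
basis = np.stack([x ** 2 + y ** 2, x * y, x ** 2 - y ** 2])

true_coeffs = rng.normal(scale=2.0, size=3)        # unknown phase errors
phase_error = np.tensordot(true_coeffs, basis, axes=1)

def sharpness(corr):
    """Image-sharpness metric sum(I^2); maximal for a flat residual phase."""
    phase = phase_error + np.tensordot(corr, basis, axes=1)
    image = np.abs(np.fft.fft2(aperture * np.exp(1j * phase))) ** 2
    return (image ** 2).sum()

pop = rng.normal(scale=2.0, size=(60, 3))          # initial population
for _ in range(200):
    fitness = np.array([sharpness(c) for c in pop])
    parents = pop[np.argsort(fitness)[-20:]]       # keep the 20 fittest
    i, j = rng.integers(0, 20, size=(2, 60))       # random parent pairs
    # Crossover by averaging plus Gaussian mutation.
    pop = (parents[i] + parents[j]) / 2 + rng.normal(scale=0.1, size=(60, 3))
    pop[:20] = parents                             # elitism

best = max(pop, key=sharpness)
print("recovered correction:", np.round(best, 2))
print("negated true errors :", np.round(-true_coeffs, 2))
```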
We study time series of the X-ray intensity of the binary XTE J1550-564 with the goal of estimating its instantaneous power spectrum. We develop a method that, starting from the initial sequence of photon arrival times, estimates the time-frequency spectrum in conjunction with noise reduction techniques. This method clearly highlights the presence of a quasi-periodic oscillation (QPO), a spectral component whose frequency changes in time. Furthermore, the QPO is extracted using signal processing methods in the time-frequency plane. The method is also validated on a synthetic signal to show the quality and reliability of its performance.
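A minimal sketch of the first stage under stated assumptions: photon events are binned into an evenly sampled light curve, and a short-time Fourier spectrogram estimates the time-frequency spectrum, in which a drifting QPO appears as a ridge. The synthetic drifting signal imitates the validation step; the sampling rate and window sizes are illustrative, and the paper's own estimator and denoising are not reproduced here.

```python
# A sliding-window (short-time Fourier) time-frequency sketch.
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(7)
fs, duration = 256.0, 64.0                 # Hz, seconds (assumed)
t = np.arange(0, duration, 1.0 / fs)
# Synthetic count rate: a QPO drifting upward from 4 Hz, plus a steady level.
rate = 100.0 + 30.0 * np.sin(2 * np.pi * (4.0 * t + 0.03 * t ** 2))
counts = rng.poisson(rate / fs)            # binned, Poisson-noisy light curve

f, tt, Sxx = spectrogram(counts - counts.mean(), fs=fs,
                         nperseg=512, noverlap=384)
band = f > 1.0                             # ignore low-frequency leakage
ridge = f[band][np.argmax(Sxx[band], axis=0)]
print("QPO frequency track (Hz):", np.round(ridge[::10], 1))
```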
Many multiscale methods have been developed during the last fifteen years, such as the bi-orthogonal wavelet transform, the à trous algorithm, the ridgelet transform, and the curvelet transform. Each of them is optimal for detecting one kind of feature. We present the Combined Transforms Method, which allows us to combine several transformations in order to benefit from the advantages of each of them.
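For concreteness, here is a minimal implementation of one of the transforms named above, the isotropic à trous algorithm with the standard B3-spline kernel; a combined-transforms method would fuse coefficients from several such decompositions, which is not shown.

```python
# The "à trous" (with holes) wavelet transform: each scale is the difference
# between successive smoothings with an increasingly dilated kernel.
import numpy as np
from scipy.ndimage import convolve1d

B3 = np.array([1, 4, 6, 4, 1]) / 16.0          # B3-spline smoothing kernel

def a_trous(image, n_scales):
    """Return the wavelet planes w_1..w_n plus the final smooth residual."""
    planes, current = [], image.astype(float)
    for j in range(n_scales):
        kernel = np.zeros(4 * 2 ** j + 1)
        kernel[:: 2 ** j] = B3                 # dilate: insert 2^j - 1 zeros
        smooth = convolve1d(convolve1d(current, kernel, axis=0),
                            kernel, axis=1)    # separable 2-D smoothing
        planes.append(current - smooth)        # detail at scale j
        current = smooth
    return planes, current

rng = np.random.default_rng(8)
img = rng.normal(size=(64, 64))
planes, residual = a_trous(img, 4)
# The transform is redundant: the planes plus residual rebuild the image.
print(np.allclose(sum(planes) + residual, img))
```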
Image Compression, Databases, Information Retrieval,...
The Centre de Données astronomiques de Strasbourg (CDS) develops a set of value-added services, widely used for information retrieval, observation preparation, data interpretation, etc. SIMBAD, VizieR, Aladin and the 'Dictionary of Nomenclature' integrate heterogeneous selected information from observatory archives, sky surveys and publications. Each service organizes information in a different way (astronomical objects, tables, images with overlays), and the CDS hub allows versatile information retrieval, e.g. looking for known information in a given region of the sky, including observations from ground- and space-based instruments, or searching by criteria in large data sets. Links among the CDS services, and with other reference on-line information systems, such as observatory and survey archives or publications, permit comprehensive searches in a wide variety of resources. Shared exchange standards and generic tools such as the GLU are essential for the building of links. XML is a key tool for further information integration, and Aladin is a precursor of an integration tool, relying on FITS and XML. New functionalities will be developed at CDS in the context of the Virtual Observatory, e.g. for data mining and management of very large catalogues.
Performance results using wavelet and other multiresolution transforms are described. The roles of information content, resolution scale, and image capture noise are discussed. Delivery systems for a range of large image repositories from areas including medicine, astronomy, and the graphic arts are described, both as standalone systems for use on portable platforms and as client-server systems for use on the web.
One major component of the VO will be catalogs measuring gigabytes and terabytes, if not more. Some mechanism like XML will be used for structuring the information. However, such mechanisms are not good for information retrieval on their own; for retrieval we use queries. Topic Maps, which have recently started becoming popular, are excellent for segregating the information that results from a query. A Topic Map is a structured network of hyperlinks above an information pool. Different Topic Maps can form different layers above the same information pool and provide us with different views of it. This makes it possible to ask exact questions, aiding us in looking for gold needles in the proverbial haystack. Here we discuss the specifics of what Topic Maps are and how they can be implemented within the VO framework.
A new data compression algorithm for encoding astronomical source lists is presented. Two experiments in combined compression and analysis (CCA) are described, the first using simulated imagery based upon a tractable source list model, and the second using images from SPIRIT III, a spaceborne infrared sensor. A CCA system consisting of the source list compressor followed by a zerotree-wavelet residual encoder is compared to alternatives based on three other astronomical image compression algorithms. CCA performance is expressed in terms of image distortion along with relevant measures of point source detection and estimation quality. Some variations of performance with compression bit rate and point source flux are characterized. While most of the compression algorithms reduce high-frequency quantum noise at certain bit rates, conclusive evidence is not found that such denoising brings an improvement in point source detection or estimation performance of the CCA systems. The proposed algorithm is a top performer in every measure of CCA performance; the computational complexity is relatively high, however.
The AVO has the potential to significantly change and improve the way astronomers can utilize data and conduct their research. In order to make this happen, the most important challenge of the AVO will be to enable astronomers to find what they need for their research. This will be more and more difficult the more data are included in the AVO. We believe that there already exists a search system that can be used as the basis for this search capability of the AVO. Properly utilized this basis will allow the AVO to much more quickly reach its goals.
The goal of the virtual observatory is to provide seamless, efficient access to astronomical data archives, catalogs, bibliographic services, and computational resources worldwide. This goal can be realized only through the development of a sophisticated information technology infrastructure. The infrastructure must accommodate the integration of diverse data types from potentially thousands of sites and services, capitalizing on emerging computational grid technologies for resource allocation and management. This paper describes the major IT challenges facing the virtual observatory and suggests a middleware architecture capable of supporting its scientific objectives.
A clear goal of the Virtual Observatory (VO) is to enable new science through analysis of integrated astronomical archives. An additional and powerful possibility of the VO is to link and integrate these new analyses with planning of new observations. By providing tools that can be used for observation planning in the VO, the VO will allow the data lifecycle to come full circle: from theory to observations to data and back around to new theories and new observations. The Scientist's Expert Assistant (SEA) Simulation Facility (SSF) is working to combine the ability to access existing archives with the ability to model and visualize new observations. Integrating the two will allow astronomers to better use the integrated archives of the VO to plan and predict the success of potential new observations more efficiently. The full circle lifecycle enabled by SEA can allow astronomers to make substantial leaps in the quality of data and science returns on new observations. Our talk examines the exciting potential of integrating archival analysis with new observation planning, such as performing data calibration analysis on archival images and using that analysis to predict the success of new observations, or performing dynamic signal-to-noise analysis combining historical results with modeling of new instruments or targets. We will also describe how the development of the SSF is progressing and what have been its successes and challenges.
In the Virtual Observatory (VO), software tools will perform the functions that have traditionally been performed by physical observatories and their instruments. These tools will not be adjuncts to VO functionality but will make up the very core of the VO. Consequently, the tradition of observatory- and system-independent tools serving a small user base is not valid for the VO. For the VO to succeed, we must improve software collaboration and code sharing between projects and groups. A significant goal of the Scientist's Expert Assistant (SEA) project has been promoting effective collaboration and code sharing among groups. During the past three years, the SEA project has been developing prototypes for new observation planning software tools and strategies. Initially funded by the Next Generation Space Telescope, parts of the SEA code have since been adopted by the Space Telescope Science Institute. SEA has also supplied code for the SIRTF planning tools and the JSky Open Source Java library. The potential benefits of sharing code are clear. The recipient gains functionality for considerably less cost. The provider gains additional developers working with their code. If enough user groups adopt a set of common code and tools, de facto standards can emerge (as demonstrated by the success of the FITS standard). Code sharing also raises a number of challenges related to the management of the code. In this talk, we review our experiences with SEA - both successes and failures - and offer some lessons learned that might promote further successes in collaboration and re-use.
Interoperability is one of the important issues in the current efforts to build the Virtual Observatory. We present here some of the tools which already contribute to the efficient exchange of information between archives and databases.
The Data Flow System is the VLT end-to-end system for handling astronomical observations from the initial observation proposal phase through the acquisition, processing, and control of the astronomical data. The VLT Data Flow System has been in place since the opening of the first VLT Unit Telescope in 1998. When completed, the VLT Interferometer will make it possible to coherently combine up to three beams coming from the four VLT 8.2-m telescopes, as well as from a set of initially three 1.8-m Auxiliary Telescopes, using a Delay Line tunnel and four interferometric instruments. The Data Flow System is now in the process of installation and adaptation for the VLT Interferometer. Observation preparation for a multi-telescope system and the handling of large data volumes of several tens of gigabytes per night are among the new challenges posed by this system. This introductory paper presents the VLTI Data Flow System installed during the initial phase of VLTI commissioning. Observation preparation, data archiving, and data pipeline processing are addressed.
The Chandra Data Model (CDM) library was developed to support data analysis for the Chandra X-ray Observatory, one of NASA's orbiting Great Observatories. The library and its associated tools are designed to be multi-mission and can be used to manipulate a wide variety of astronomical data. Much of the library's power comes from its use of virtual files, which provide a flexible command-line user interface.
The joint archive facility of the European Southern Observatory (ESO) and the Space Telescope - European Coordinating Facility (ST-ECF) has for a number of years been making a particular effort in the field of associating (grouping) Hubble Space Telescope (HST) observations. Users are now given the means to browse associations of HST images; soon the same capability will be provided for spectra as well. Associations of observations can be defined and driven either by requirements imposed by higher-level algorithms, such as co-adding and drizzling techniques, or by user-defined constraints. In any case, we consider these services an important precursor and testbed for a future virtual observatory. Two components complement an on-line interface (archive.eso.org) to such data products: on the one hand, the selection process can be greatly improved by adding preview capabilities for individual or multiple exposures; on the other hand, a request handling system is required which supports the concept of associations and which can expand a given association and compute and deliver calibrated and combined data products.
In recent years, the operation of large telescopes with wide-field detectors - such as the European Southern Observatory (ESO) Wide Field Imager (WFI) on the 2.2-m telescope at La Silla, Chile - has dramatically increased the amount of astronomical data produced each year. The next survey telescopes, such as the ESO VST, will continue this trend, producing extremely large datasets. Astronomy, therefore, has become an incredibly data-rich field requiring new tools and new strategies to efficiently handle huge archives and fully exploit their scientific content. At the Space Telescope European Coordinating Facility we are working on a new project, code-named Querator (http://archive.eso.org/querator/). Querator is an advanced multi-archive search engine built to address the needs of astronomers looking for multicolor imaging data across different astronomical data-centers. Querator returns sets of images of a given astronomical object or search region. A set contains exposures in a number of different wave bands. The user constrains the number of desired wave bands by selecting from a set of instruments and filters or by specifying actual physical units. As far as present-day data-centers are concerned, Querator points out the need for a uniform and standard description of archival data, and a uniform and standard description of how the data were acquired (i.e., instrument and observation characteristics). Clearly, these pieces of information will constitute an intermediate layer between the data itself and the data mining tools operating on it. This layered structure is a prerequisite for real data-center interoperability and, hence, for Virtual Observatories. A detailed description of Querator's design, of the required data structures, of the problems encountered so far, and of the proposed solutions is given in the following pages. Throughout this paper we favor the term data-center over archive to stress the need to look at raw-pixel archives and catalogues in a homogeneous way.
For the last 20 months, the Chandra X-Ray Observatory (Weisskopf et al. 2000) has been producing X-ray images of the universe in stunning detail. This is due in large part to the excellent post-facto pointing aspect determination for Chandra (Aldcroft et al. 2000). This aspect determination performance is achieved using elliptical Gaussian centroiding techniques. Application of point spread function (PSF) fitting using a true PSF model for the Aspect Camera Assembly (ACA) on Chandra could improve this performance. We have investigated the use of an ACA PSF model in the post-facto centroiding of stars and fiducial lights imaged by the ACA. We present the methodologies explored for determining a model of the ACA PSF and discuss the results of a comparison between PSF-fit centroiding and the current method of elliptical Gaussian centroiding as they apply to post-facto aspect reconstruction. The first method of recovering the ACA PSF uses a raytrace model of the ACA to generate simulated stellar PSFs. In this method, the MACOS raytracing software package is used to describe each element of the Chandra aspect optical system. The second method investigated is the so-called shift-and-add method, whereby we build a high-resolution image of the PSF by combining several thousand low-resolution images of a single star collected by the ACA while tracking during normal science observations. The programmed dither of the spacecraft slowly sweeps the stellar image across the ACA focal plane, and the many slightly offset images are used to effectively increase the resolution of the resultant image of the star to a fraction of an ACA pixel. In each method, a library of PSF images is built at regularly gridded intervals across the ACA focal plane. This library is then used to interpolate a PSF at any desired position on the focal plane. We have used each method to reprocess the aspect solution of a set of archived Chandra observations and compare the results to one another and to the delivered post-facto aspect solution, currently derived using elliptical Gaussian centroiding of ACA star images. Finally, we present a summary of Chandra's aspect performance achieved to date and discuss the effect of incorporating a PSF model into the post-facto aspect determination software.
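A toy illustration of the shift-and-add idea: many slightly offset low-resolution star images are registered onto a finer grid using the known dither offsets and accumulated into a super-resolved PSF. The image sizes, the upsampling factor, and the Gaussian "star" are placeholders, not ACA specifics.

```python
# A minimal shift-and-add sketch under assumed parameters.
import numpy as np

rng = np.random.default_rng(9)
UP = 4                                          # subpixel upsampling factor
fine = np.zeros((32 * UP, 32 * UP))             # high-resolution accumulator
hits = np.zeros_like(fine)

def low_res_star(dx, dy):
    """An 8x8 low-resolution image of a Gaussian star offset by (dx, dy)."""
    y, x = np.mgrid[0:8, 0:8]
    psf = np.exp(-((x - 3.5 - dx) ** 2 + (y - 3.5 - dy) ** 2) / 2.0)
    return psf + 0.01 * rng.normal(size=(8, 8))  # readout noise

for _ in range(2000):
    dx, dy = rng.uniform(-1, 1, size=2)          # known dither offset
    img = low_res_star(dx, dy)
    # Registration: shift the coarse grid by the negated dither offset so
    # the star always lands at the same high-resolution position.
    ox = int(round((8 - dx) * UP))
    oy = int(round((8 - dy) * UP))
    for j in range(8):
        for i in range(8):
            fine[oy + j * UP, ox + i * UP] += img[j, i]
            hits[oy + j * UP, ox + i * UP] += 1

psf_highres = np.where(hits > 0, fine / np.maximum(hits, 1), 0.0)
print("high-res PSF peak at", np.unravel_index(psf_highres.argmax(),
                                               psf_highres.shape))
```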
The LASCO-C2 and C3 coronagraphs aboard the SOHO solar observatory have been providing, for the last five years, an unprecedentedly long sequence of coronal images at high cadence (about 100 images/day). To build temporal sequences for movie displays as well as for science analysis purposes, we need photometry with a relative accuracy better than 0.1 percent. In this paper we address this problem, showing how image-to-image regression, as well as long-term correction of drifts induced by Wiener-Lévy stochastic processes, can meet this challenge. The use of time derivatives of synoptic maps to correct and verify the full procedure, and a comparison with more classical methods like star calibration, are discussed. Difficulties due to brightening events such as coronal mass ejections and showers of cosmic rays, as well as those due to telemetry gaps, are also addressed.
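A minimal sketch of image-to-image regression for drift correction, under simplifying assumptions (a static scene, purely multiplicative gain drift, no masking of transients): each frame is regressed against its predecessor and the fitted factors are chained into a long-term correction.

```python
# Relative photometric calibration by chained frame-to-frame regression.
import numpy as np

rng = np.random.default_rng(10)
truth = rng.uniform(1.0, 2.0, size=(128, 128))       # static corona pattern
drift = np.cumprod(1 + 0.001 * rng.normal(size=50))  # slow gain random walk
frames = [g * truth + 0.01 * rng.normal(size=truth.shape) for g in drift]

# Regress frame k against frame k-1: least-squares slope through the origin.
gains = [1.0]
for prev, cur in zip(frames[:-1], frames[1:]):
    slope = np.sum(prev * cur) / np.sum(prev * prev)
    gains.append(gains[-1] * slope)          # chain into an absolute factor

corrected = [f / g for f, g in zip(frames, gains)]
residual = np.std([c.mean() for c in corrected]) / corrected[0].mean()
print(f"relative photometric scatter after correction: {residual:.2e}")
```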
The current generation of 8-10 m optical ground-based telescopes has a symbiotic relationship with space telescopes. For direct imaging in the optical, the former can collect photons relatively cheaply, but the latter can still achieve, even in the era of adaptive optics, significantly higher spatial resolution, point-spread function stability, and astrometric fidelity over fields of a few arcminutes. The large archives of HST imaging already in place, when combined with the ease of access to ground-based data afforded by the virtual observatory currently under development, will make space-ground data fusion a powerful tool for the future. We describe a photometric image restoration method that we have developed which allows the efficient and accurate use of high-resolution space imaging of crowded fields to extract high-quality photometry from very crowded ground-based images. We illustrate the method using HST and ESO VLT/FORS imaging of a globular cluster and demonstrate quantitatively the photometric measurement quality that can be achieved using the data fusion approach instead of data from just one telescope. This method can handle most of the common difficulties encountered when attempting this problem, such as determining the geometric mapping to the requisite precision and deriving the PSF and the background.
In the so-called parallel mode, ISOCAM, the mid-infrared camera on board ESA's Infrared Space Observatory (ISO), continued to observe while other instruments were prime, thus providing a widespread high-sensitivity survey of the sky. The currently exploitable data set was taken during 7000 hours of observations and consists of over 37000 pointings. Source extraction from these images is a challenging task due to the following difficulties:
* the small number of pixels per image (32x32), resulting in a highly under-sampled point spread function
* the varying sky area, from flat background to highly structured or confused
* the varying and a priori unknown instrumental noise, and the highly varying duration of each pointing
* the high number of spurious sources due to restricted glitch rejection for observations with few readouts
* the lack of redundant pointings for the majority of cases
The algorithm developed to solve these problems consists of a combination of three detection methods:
* SExtractor, using various thresholds and convolution files
* multi-resolution detection
* flux and position determination of detected sources via modified point-source fitting
* heuristic criteria to classify the sources into point-like and extended sources
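A hedged sketch of the point-source fitting step listed above: an analytic PSF model (a 2-D Gaussian stands in for the true under-sampled ISOCAM PSF) is fitted to a small cutout to determine flux and position. The cutout size, noise level, and model are illustrative assumptions.

```python
# A minimal point-source fitting sketch with a stand-in Gaussian PSF.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(11)
y, x = np.mgrid[0:15, 0:15]

def psf_model(p):
    flux, x0, y0, sigma, bkg = p
    return bkg + flux * np.exp(-((x - x0) ** 2 + (y - y0) ** 2)
                               / (2 * sigma ** 2))

truth = (200.0, 7.3, 6.8, 1.4, 5.0)
cutout = psf_model(truth) + rng.normal(scale=2.0, size=x.shape)

fit = least_squares(lambda p: (psf_model(p) - cutout).ravel(),
                    x0=(100.0, 7.0, 7.0, 2.0, 0.0))
print("flux, x, y, sigma, background:", np.round(fit.x, 2))
```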
The recent publications of the DENIS Catalogue towards the Magellanic Clouds (MCs), with more than 1.3 million sources identified in at least two of the three DENIS filters (I J Ks), and of the incremental releases of the 2MASS point source catalogues (J H Ks) covering the same region of the sky, provide an unprecedented wealth of data related to stellar populations in the MCs. In order to build a reference catalogue of stars towards the Magellanic Clouds, we have performed a cross-identification of these two catalogues. This implied developing new tools for cross-identification and data mining. This study is partly supported by the Astrovirtel program, which aims at improving access to astronomical archives as virtual telescopes. The main goal of the present study is to validate new cross-matching procedures for very large catalogues, and to derive results concerning the astrometric and photometric accuracy of these catalogues. The cross-matching of large surveys is an essential tool to improve our understanding of their specific contents. This approach can be considered a new step towards a Virtual Observatory.
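A minimal sketch of the kind of positional cross-identification performed here, using present-day tooling (astropy is an assumption; the study developed its own procedures): nearest-neighbour matching on the sky with a selection radius. The positions, scatter, and 1-arcsecond radius are illustrative.

```python
# Nearest-neighbour sky cross-match between two synthetic catalogues.
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord

rng = np.random.default_rng(12)
n = 100000
# Fake source positions towards the Magellanic Clouds.
ra = rng.uniform(70, 90, size=n)
dec = rng.uniform(-75, -65, size=n)
denis = SkyCoord(ra=ra * u.deg, dec=dec * u.deg)
twomass = SkyCoord(ra=(ra + rng.normal(scale=2e-4, size=n)) * u.deg,
                   dec=(dec + rng.normal(scale=2e-4, size=n)) * u.deg)

# Nearest-neighbour match (internally a k-d tree, so it scales to millions).
idx, sep, _ = denis.match_to_catalog_sky(twomass)
matched = sep < 1.0 * u.arcsec
print(f"{matched.sum()} of {n} sources matched within 1 arcsec")
print(f"median separation: {np.median(sep[matched].arcsec):.2f} arcsec")
```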
Data Mining: Sky Survey Data Analysis, Detection, Classification
Moves to start building the virtual observatory are already under way in Europe, where funding has been approved for both the Astrophysical Virtual Observatory and the UK's AstroGrid project. This paper outlines some of the data challenges and discusses the merits of FITS and XML for data archiving, as well as the difficulties of finding suitable software for the management of data archives and for data mining.