There is an increasing demand for high-resolution recording of in situ underwater cultural heritage. Reflectance transformation imaging (RTI) has a proven track record in terrestrial contexts for acquiring high-resolution diagnostic data at small scales. The research presented here documents the first adaptation of RTI protocols to the subaquatic environment, with a scuba-deployable method designed around affordable off-the-shelf technologies. Underwater RTI (URTI) was used to capture detail from historic shipwrecks in both the Solent and the western Mediterranean. Results show that URTI can capture submillimeter levels of qualitative diagnostic detail from in situ archaeological material. In addition, this paper presents the results of experiments to explore the impact of turbidity on URTI. For this purpose, a prototype fixed-lighting semisubmersible RTI photography dome was constructed to allow collection of data under controlled conditions. The signal-to-noise data generated reveals that the RGB channels of underwater digital images captured in progressive turbidity degraded faster than URTI object geometry calculated from them. URTI is shown to be capable of providing analytically useful object-level detail in conditions that would render ordinary underwater photography of limited use.
This paper presents image-processing algorithms designed to analyse the CIE Lab colour histograms of high-resolution
images of paintings. Three algorithms are described: the first identifies colour clusters, the second characterises
cluster shapes due to shading, and the third identifies pigments. Using the image collection and pigment list of the
National Gallery, London, large numbers of images from a restricted period have been classified with a variety of
algorithms. The image descriptors produced were also used, with suitable comparison metrics, to provide content-based
retrieval of the images.
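The colour-cluster identification step can be illustrated with a minimal k-means clustering of CIE Lab samples. This is a sketch only: the sample data, cluster count and plain Euclidean Lab distance are illustrative assumptions, not the algorithms or data used with the National Gallery collection.

```python
import math
import random

def kmeans_lab(points, k, iters=50, seed=0):
    """Cluster CIE Lab triples (L, a, b) with plain k-means.
    Euclidean distance in Lab roughly approximates perceptual difference."""
    rnd = random.Random(seed)
    centres = rnd.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each sample to its nearest centre
            nearest = min(range(k), key=lambda c: math.dist(p, centres[c]))
            clusters[nearest].append(p)
        # move each centre to the mean of its cluster (keep it if empty)
        centres = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centres[i]
            for i, cl in enumerate(clusters)
        ]
    return centres, clusters

# two synthetic "pigment" clusters: a dark blue and a light yellow
blue = [(30 + random.Random(i).uniform(-2, 2), 10, -40) for i in range(20)]
yellow = [(80 + random.Random(i).uniform(-2, 2), 5, 60) for i in range(20)]
centres, clusters = kmeans_lab(blue + yellow, k=2)
```

In practice, shading produces clusters elongated along the lightness axis, which is why the second algorithm examines cluster shape rather than position alone.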
Users of image retrieval systems often find it frustrating that the image they are looking for is not ranked near
the top of the results they are presented with. This paper presents a computational approach for ranking keyworded
images in order of relevance to a given keyword. Our approach uses machine learning to learn which visual features
within an image are most strongly related to its keywords, and then ranks images by their similarity to a visual
aggregate. To evaluate the technique, a Web 2.0 application has been developed to collect a corpus of user-generated
ranking information for a given image collection, against which the performance of the ranking algorithm can be
measured.
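The ranking idea can be sketched as follows: form the visual aggregate as the mean feature vector of images already tagged with the keyword, then order all images by cosine similarity to it. The toy features and image ids below are invented for illustration; the paper's learned features are not reproduced here.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_by_keyword(features, tagged_ids):
    """Rank all images by similarity to the visual aggregate (the mean
    feature vector) of images already tagged with the keyword."""
    vecs = [features[i] for i in tagged_ids]
    aggregate = [sum(dim) / len(vecs) for dim in zip(*vecs)]
    return sorted(features, key=lambda i: cosine(features[i], aggregate),
                  reverse=True)

# toy 3-D features: 'a' and 'b' carry the keyword, 'c' is visually similar
features = {
    'a': [0.9, 0.1, 0.0],
    'b': [0.8, 0.2, 0.1],
    'c': [0.85, 0.15, 0.05],
    'd': [0.0, 0.1, 0.9],
}
ranking = rank_by_keyword(features, ['a', 'b'])
```

Here the untagged but visually similar image 'c' ranks above the dissimilar 'd', which is the behaviour the user-generated ground truth is used to evaluate.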
In this paper, we focus on the development of whole-scene colour appearance descriptors for classification to be used in
browsing applications. The descriptors can classify a whole-scene image into various categories of semantically-based
colour appearance. Colour appearance is an important feature and has been used extensively in image analysis, retrieval
and classification. Working from pre-existing global CIELAB colour histograms, we first develop metrics for whole-scene
colour appearance: "colour strength", "high/low lightness" and "multicoloured". Secondly, we propose methods
that use these metrics, alone or in combination, to classify whole-scene images into five categories of appearance: strong,
pastel, dark, pale and multicoloured. Experiments give positive results, showing that the global colour histogram is
indeed useful for whole-scene colour appearance classification. We have also conducted a small-scale human
evaluation of whole-scene colour appearance. The results show that, with suitable threshold settings, the proposed
methods classify the whole-scene colour appearance of images in close agreement with human classification. The descriptors were
tested on thousands of images from various scenes: paintings, natural scenes, objects, photographs and documents. The
colour appearance classifications are being integrated into an image browsing system, where they can also be used
to refine browsing.
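A minimal sketch of such a classifier, working from a global CIELAB histogram: mean chroma stands in for "colour strength", mean L for lightness, and a count of well-separated hues for "multicoloured". The thresholds and hue-binning below are illustrative assumptions, not the paper's tuned values.

```python
import math

def appearance(hist):
    """Classify whole-scene colour appearance from a global CIELAB
    histogram mapping (L, a, b) bin centres to pixel counts."""
    total = sum(hist.values())
    mean_l = sum(L * n for (L, a, b), n in hist.items()) / total
    mean_c = sum(math.hypot(a, b) * n for (L, a, b), n in hist.items()) / total
    # "multicoloured": several well-separated hues, each with significant mass
    hues = {round(math.degrees(math.atan2(b, a)) / 60)
            for (L, a, b), n in hist.items()
            if n > 0.05 * total and math.hypot(a, b) > 15}
    if len(hues) >= 4:
        return 'multicoloured'
    if mean_c > 40:                      # high mean chroma: strong colours
        return 'strong'
    if mean_c > 15:                      # moderate chroma: pastel if light
        return 'pastel' if mean_l > 60 else 'dark'
    return 'pale' if mean_l > 60 else 'dark'
```

Because only the global histogram is consulted, the classifier needs no access to the pixels themselves, which is what makes it cheap enough for browsing-time use.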
Augmented Reality (AR) requires a mapping between the camera(s) and the world so that virtual objects can be correctly registered. Current AR applications either use pre-prepared fiducial markers, rely on specialist equipment, or impose significant constraints on lighting and background. Each of these approaches has significant drawbacks. Fiducial markers are susceptible to loss or damage, can be awkward to work with, and may require significant effort to prepare an area for Augmented interaction. Use of such markers may also be an imposition on non-augmented observers, especially in environments such as museums or historical landmarks. Specialist equipment is expensive and not universally available. Lighting and background constraints are often impractical for real-world applications. This paper presents initial results in using the palm of the hand as a pseudo-fiducial marker in a natural real-world environment, through colour, feature and edge analysis. The eventual aim of this research is to allow fiducial marker cards to be dispensed with entirely in some situations, permitting more natural interaction in Augmented environments. Examples would be allowing users to "hold" virtual 3D objects in the palm of the hand or to use gestures to interact with virtual objects.
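The colour-analysis stage of palm detection is often bootstrapped with a simple skin-colour test before any feature or edge analysis is attempted. The rule below is a widely used RGB heuristic, given here as an illustrative first-pass filter; it is not the method evaluated in the paper.

```python
def is_skin(r, g, b):
    """Crude RGB skin-colour test (a common heuristic, assumed here for
    illustration): red-dominant pixels with moderate colour spread."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def skin_mask(image):
    """Boolean mask of candidate palm pixels in a row-major RGB image."""
    return [[is_skin(*px) for px in row] for row in image]

# one skin-toned pixel and one blue pixel
mask = skin_mask([[(200, 120, 90), (60, 120, 200)]])
```

A connected region of the mask would then be passed to the feature and edge analysis stages to confirm that it is actually a palm and to estimate its pose.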
The growing number of large multimedia collections has led to increased interest in content-based retrieval research. The application of content-based techniques to image retrieval is an active research area, but much less work has been reported on content-based retrieval of 3-D objects in a multimedia database context. Increasingly, such objects are being captured and added to multimedia collections, and the European project SCULPTEUR is developing a museum information system which introduces facilities for content-based retrieval of their 3-D representations.
This paper provides a comparison and evaluation of a range of 3-D shape descriptors and distance metrics which have been introduced into the SCULPTEUR project to demonstrate their use for content-based retrieval applications. Results show that while particular descriptors and distance metrics provide good overall performance, it can be more appropriate to choose different descriptors for different search tasks.
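The point that the choice of distance metric matters can be shown with a tiny example: the same query and the same descriptor vectors can be ranked differently under L1 and L2 distances. The two-dimensional "descriptors" below are invented for illustration; real 3-D shape descriptors are much higher-dimensional.

```python
import math

def l1(u, v):
    """Manhattan (city-block) distance."""
    return sum(abs(a - b) for a, b in zip(u, v))

def l2(u, v):
    """Euclidean distance."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def rank(query, descriptors, metric):
    """Order object ids by ascending distance to the query descriptor."""
    return sorted(descriptors, key=lambda k: metric(query, descriptors[k]))

# toy descriptors: the two metrics disagree about which object is closer
descriptors = {'A': [3.0, 3.0], 'B': [5.0, 0.0]}
query = [0.0, 0.0]
```

Under L1, object B is nearer (distance 5 vs 6); under L2, object A is (4.24 vs 5). This is why the evaluation matches descriptors and metrics to specific search tasks rather than seeking one overall winner.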
In this paper we present steps taken to implement a content-based analysis of crack patterns in paintings. Cracks are first detected using a morphological top-hat operator and grid-based automatic thresholding. From a one-pixel-wide representation of the crack patterns, we generate a statistical structure of global and local features from a chain-code-based representation. A well-structured model of the crack patterns allows post-processing to be performed, such as pruning and high-level feature extraction. High-level features are extracted from the structured model using information based mainly on the orientation and length of line segments. Our strategy for classifying the crack patterns uses an unsupervised approach incorporating fuzzy clustering of the patterns. We present results using the fuzzy k-means technique.
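The detection step can be sketched with a black top-hat (closing minus image), which responds strongly on thin dark structures such as craquelure. This is a minimal illustration on a toy grayscale patch, assuming dark cracks on a lighter ground; the paper's grid-based automatic thresholding, thinning and chain coding are not reproduced here.

```python
def dilate(img, r=1):
    """Grey-level dilation with a (2r+1)x(2r+1) square structuring element."""
    h, w = len(img), len(img[0])
    return [[max(img[y][x]
                 for y in range(max(0, j - r), min(h, j + r + 1))
                 for x in range(max(0, i - r), min(w, i + r + 1)))
             for i in range(w)] for j in range(h)]

def erode(img, r=1):
    """Grey-level erosion with the same square structuring element."""
    h, w = len(img), len(img[0])
    return [[min(img[y][x]
                 for y in range(max(0, j - r), min(h, j + r + 1))
                 for x in range(max(0, i - r), min(w, i + r + 1)))
             for i in range(w)] for j in range(h)]

def black_tophat(img, r=1):
    """Closing minus image: large values mark thin dark structures."""
    closed = erode(dilate(img, r), r)
    return [[closed[j][i] - img[j][i] for i in range(len(img[0]))]
            for j in range(len(img))]

# toy 5x5 patch: bright paint (200) with a one-pixel-wide dark crack (50)
patch = [[50 if i == 2 else 200 for i in range(5)] for _ in range(5)]
mask = [[v > 100 for v in row] for row in black_tophat(patch)]
```

The closing fills the narrow dark line with its bright surroundings, so subtracting the original image leaves a strong response exactly along the crack, which thresholding then turns into a binary crack map.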
The European Commission-funded MARC project ended in April 1996, with the publication of Flemish Baroque Painting, Masterpieces of the Alte Pinakothek, Muenchen (Hirmer 1996). To the best of our knowledge, this is the world's first all-digital colorimetric art catalogue. This paper will briefly introduce the MARC camera and the MARC printing technology, and then present a critical evaluation of the final book. The application of MARC results since the end of the project will be covered, and related EC imaging projects surveyed.
This paper describes VIPS (VASARI Image Processing System), an image processing system developed by the authors in the course of the EU-funded projects VASARI (1989-1992) and MARC (1992-1995). VIPS implements a fully demand-driven dataflow image IO (input-output) system. Evaluation of library functions is delayed for as long as possible. When evaluation does occur, all delayed operations evaluate together in a pipeline, requiring no space for storing intermediate images and no unnecessary disc IO. If more than one CPU is available, then VIPS operations will automatically evaluate in parallel, giving an approximately linear speed-up. The evaluation system can be controlled by the application programmer. We have implemented a user-interface for the VIPS library which uses expose events in an X window rather than disc output to drive evaluation. This makes it possible, for example, for the user to rotate an 800 MByte image by 12 degrees and immediately scroll around the result.
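The demand-driven idea can be illustrated with Python generators, even though VIPS itself is implemented in C: each operation consumes and yields one scanline at a time, so composing operations builds a pipeline and no pixel is computed until the sink asks for it. The operations below are invented toy stand-ins for VIPS library functions.

```python
def read_lines(image):
    """Source: yields one scanline at a time; nothing is buffered."""
    for row in image:
        yield row

def invert(lines, maxval=255):
    """Per-pixel invert, applied lazily scanline by scanline."""
    for row in lines:
        yield [maxval - v for v in row]

def threshold(lines, t):
    """Per-pixel threshold, also lazy."""
    for row in lines:
        yield [255 if v > t else 0 for v in row]

# composing the generators does no pixel work at all; each scanline flows
# through the whole pipeline only when the sink iterates, so intermediate
# images are never materialised
image = [[10, 200], [90, 30]]
pipeline = threshold(invert(read_lines(image)), 128)
result = list(pipeline)
```

This mirrors the property the abstract describes: delayed operations evaluate together in a pipeline, with no storage for intermediate images, and a windowed viewer can pull only the scanlines it needs to repaint.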
In the past decade, various museums and galleries around Europe have been developing digital imaging as a tool for archiving and analysis. Accurate digital images can replace conventional film archives, which, although the standard record of art, are neither stable nor accurate. Digital archives open up new research possibilities and become resources for CD-ROM production, damage analysis, research and publishing. In the VASARI project, new scanners were devised to produce colorimetric images directly from paintings using multispectral (six-band) imaging. These produce images in CIE Lab format at resolutions over 10k × 10k pels and have been installed in London, England; Munich, Germany; and Florence, Italy. They are based around a large stepper-motor-controlled scanner moving a high-resolution CCD camera to obtain patches of 3k × 2k pels which are mosaiced together. The scanners can also be used for infra-red imaging with a different camera. The MARC project produced a portable scan-back RGB camera capable of similar output, together with techniques for calibrated printing. The Narcisse project produced a fast high-resolution scanner for X-radiographs and film, and many projects have worked on networking the growing number of image databases. This paper presents a survey of some key European projects, particularly those funded by the European Union, involved in high-resolution and colorimetric imaging of art. The design of the new scanners and examples of the applications of these images are presented.
With the aim of providing a digital electronic replacement for conventional photography of paintings, a scanner has been constructed based on a 3000 × 2300 pel resolution camera which is moved precisely over a one-meter-square area. Successive patches are assembled into a mosaic covering the whole area at c. 20 pels/mm, a resolution sufficient to resolve surface textures, particularly craquelure. To provide high color accuracy, a set of seven broad-band interference filters is used to cover the visible spectrum. A calibration procedure based upon a least-mean-squares fit to the colors of patches from a Macbeth ColorChecker chart yields an average color accuracy of better than 3 units in the CMC uniform color space. This work was mainly carried out as part of the VASARI project funded by the European Commission's ESPRIT program, involving companies and galleries from around Europe. The system is being used to record images for conservation research, for archival purposes and to assist in computer-aided learning in the field of art history. The paper describes the overall system design, including the selection of the various hardware components and the design of the controlling software. The theoretical basis for the color calibration methodology is described, as well as the software for its practical implementation. The mosaic assembly procedure and some of the associated image processing routines developed are also described. Preliminary results from the research are presented.
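The shape of the least-mean-squares calibration can be sketched as an ordinary linear least-squares fit from per-patch filter responses to a colour coordinate, solved via the normal equations. The toy data and three-channel setup below are illustrative; the actual VASARI calibration maps seven filter bands to CIE colour values with its own model.

```python
def lstsq(A, b):
    """Least-squares solve of A x ~= b via the normal equations
    (A^T A) x = A^T b, using Gaussian elimination with partial pivoting."""
    m, n = len(A), len(A[0])
    ata = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
    M = [ata[i] + [atb[i]] for i in range(n)]      # augmented system
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))   # pivot row
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                 # back-substitution
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

# toy calibration: filter responses per chart patch, and targets generated
# from known weights so the fit can be checked exactly
true_w = [0.5, 0.3, 0.2]
A = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1], [2, 1, 0]]
targets = [sum(w * a for w, a in zip(true_w, row)) for row in A]
w = lstsq(A, targets)
```

Fitting one such weight vector per output colour coordinate against measured ColorChecker patches is what drives the reported sub-3-unit average error in the CMC colour space.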