A method is presented for increasing the spatial resolution of the three-dimensional (3-D) digital representation of coins by combining fine photometric detail derived from a set of photographic images with accurate geometric data from a 3-D laser scanner. 3-D reconstructions were made of the obverse and reverse sides of two ancient Roman denarii by processing sets of images captured under directional lighting in an illumination dome. Surface normal vectors were calculated by a “bounded regression” technique, excluding both shadow and specular components of reflection from the metallic surface. Because of the known difficulty in achieving geometric accuracy when integrating photometric normals to produce a digital elevation model, the low spatial frequencies were replaced by those derived from the point cloud produced by a 3-D laser scanner. The two datasets were scaled and registered by matching the outlines and correlating the surface gradients. The final result was a realistic rendering of the coins at a spatial resolution of 75 pixels/mm (13-μm spacing), in which the fine detail modulated the underlying geometric form of the surface relief. The method opens the way to obtaining high-quality 3-D representations of coins in collections, enabling interactive online viewing.
In the dome imaging system at UCL, sets of pixel-registered images can be captured, with a different direction of illumination for each image. A new method has been developed to estimate surface normals more accurately by solving the photometric normal equations as a regression over a set of illumination angles and intensities selected from the subset corresponding to the diffuse component of reflection from the object surface (the 'body colour'). The gradients are integrated to reconstruct a digital terrain map, using a Fourier transform to regularise (i.e. enforce integrability of) the gradients in the frequency domain. This yields a 3D surface that is continuous but distorted over the whole area with the height greatly amplified. The problem is that although the gradients give a good representation of the spatial frequencies in the surface, right up to the Nyquist frequency, they are not accurate for very low frequencies of a few cycles over the full object diameter. Such frequencies are represented in the Fourier plane by only a few sample points close to the (shifted) origin. Errors in these frequencies can result in 'curl' or 'heave' in the baseplane, even though the superimposed higher spatial frequencies may be accurate. The solution is to replace the inaccurate low frequencies of the photometric normals by the more accurate low frequencies of a surface constructed from a few known heights. This is conveniently achieved from the values measured by a digital height gauge by interpolating to produce a smooth ‘hump’ and then transforming into the frequency domain by an FFT.
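The two frequency-domain steps described above, integrating the gradients into a surface and substituting the low frequencies from a trusted reference, can be sketched in numpy. This is a minimal illustration, not the authors' implementation; the cutoff of a few cycles over the full extent is an assumed parameter.

```python
import numpy as np

def integrate_gradients(p, q):
    """Recover heights z from gradient fields p = dz/dx, q = dz/dy by
    least-squares integration in the Fourier domain (the standard
    Frankot-Chellappa approach), which enforces integrability."""
    h, w = p.shape
    wy = 2 * np.pi * np.fft.fftfreq(h).reshape(-1, 1)
    wx = 2 * np.pi * np.fft.fftfreq(w).reshape(1, -1)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = wx**2 + wy**2
    denom[0, 0] = 1.0                      # avoid 0/0 at the DC term
    Z = (-1j * wx * P - 1j * wy * Q) / denom
    Z[0, 0] = 0.0                          # mean height is unconstrained
    return np.real(np.fft.ifft2(Z))

def replace_low_frequencies(z_photometric, z_reference, cutoff=3):
    """Substitute the lowest spatial frequencies (up to `cutoff` cycles
    over the full extent) of the photometric surface with those of a
    trusted reference surface, e.g. the smooth 'hump' interpolated
    from height-gauge measurements."""
    Zp = np.fft.fft2(z_photometric)
    Zr = np.fft.fft2(z_reference)
    h, w = z_photometric.shape
    fy = np.fft.fftfreq(h).reshape(-1, 1) * h   # cycles over full height
    fx = np.fft.fftfreq(w).reshape(1, -1) * w   # cycles over full width
    low = fx**2 + fy**2 <= cutoff**2
    Zp[low] = Zr[low]
    return np.real(np.fft.ifft2(Zp))
```

Only the few Fourier samples near the origin are overwritten, so the high-frequency photometric detail is preserved while the curl or heave of the baseplane is corrected.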
This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive the 3D coordinates of a few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.
The UCL Dome consists of an acrylic hemisphere of nominal diameter 1030 mm, fitted with 64 flash lights, arranged in three tiers of 16, one tier of 12, and one tier of 4 lights at approximately equal intervals. A Nikon D200 digital camera is mounted on a rigid steel frame at the ‘north pole’ of the dome pointing vertically downwards with its optical axis normal to the horizontal baseboard in the ‘equatorial’ plane. It is used to capture sets of images in pixel register for visualisation and surface reconstruction. Three techniques were employed for the geometric calibration of flash light positions in the dome: (1) the shadow cast by a vertical pin onto graph paper; (2) multi-image photogrammetry with retro-reflective targets; and (3) multi-image photogrammetry using the flash lights themselves as targets. The precision of the coordinates obtained by the three techniques was analysed, and it was found that although photogrammetric methods could locate individual targets to an accuracy of 20 μm, the uncertainty of locating the centroids of the flash lights was approximately 1.5 mm. This result was considered satisfactory for the purposes of using the dome for photometric imaging, and in particular for the visualisation of object surfaces by the polynomial texture mapping (PTM) technique.
Flexible manufacturing technologies are supporting the routine production of components with freeform surfaces in a wide variety of materials and surface finishes. Such surfaces may be exploited for both aesthetic and performance criteria in a wide range of industries, for example automotive, aircraft, small consumer goods and medical components. In order to ensure conformance between the manufactured part and the digital design it is necessary to understand, validate and promote best practice of the available measurement technologies. Similar, but currently less quantifiable, measurement requirements also exist in heritage, museum and fine art recording, where objects can be individually hand crafted to extremely fine levels of detail.
Optical 3D measurement systems designed for close range applications are typified by one or more illumination sources projecting a spot, line or structured light pattern onto a surface or surfaces of interest. Reflections from the projected light are detected in one or more imaging devices and measurements made concerning the location, intensity and optionally colour of the image. Coordinates of locations on the surface may be computed either directly from an understanding of the illumination and imaging geometry or indirectly through analysis of the spatial frequencies of the projected pattern. Regardless of sensing configuration, some independent means is necessary to ensure that measurement capability will meet the requirements of a given level of object recording and is consistent for variations in surface properties and structure. As technologies mature, guidelines for best practice are emerging, the most prominent at the current time being the German VDI/VDE 2634 and ISO/DIS 10360-8 guidelines. This paper considers state-of-the-art capabilities for independent validation of optical non-contact measurement systems suited to the close range measurement of table-top sized manufactured or crafted objects.
Printing with white ink plays an important role in many printing processes, but white is difficult to integrate into colour management processes since conventional measurements are uncorrelated with the ink amount. A control method for white ink is proposed in which white is printed and measured over black. The resulting colorimetric densities can be modelled by polynomial regression, allowing accurate prediction of tonal value. The model can readily be inverted to predict the colorant amount required to match a given colorimetric density, and hence is a suitable method of measurement that can support process control and colour management.
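A sketch of the regression-and-inversion idea follows. The patch data below are invented for illustration and are not measurements from the paper; a cubic polynomial and grid-search inversion are assumed choices.

```python
import numpy as np

# Illustrative data (not from the paper): white-ink tonal values (%)
# printed over black, and the colorimetric density measured for each.
ink = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100], float)
density = np.array([2.10, 1.85, 1.60, 1.38, 1.18, 1.00,
                    0.85, 0.72, 0.62, 0.55, 0.50])

# Forward model: polynomial regression of density against ink amount.
predict_density = np.poly1d(np.polyfit(ink, density, 3))

def ink_for_density(target, lo=0.0, hi=100.0, n=10001):
    """Invert the forward model numerically: return the ink amount
    whose predicted density is closest to the target."""
    grid = np.linspace(lo, hi, n)
    return grid[np.argmin(np.abs(predict_density(grid) - target))]
```

Because the fitted curve is monotonic over a data range like this, the inversion is well defined; a proper root-finder could replace the grid search in practice.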
Colour information is not faithfully maintained by a CCTV imaging chain. Since colour can play an important role in identifying objects, it is beneficial to be able to account accurately for changes to colour introduced by components in the chain. With this information it will be possible for law enforcement agencies and others to work back along the imaging chain to extract accurate colour information from CCTV recordings. A typical CCTV system has an imaging chain that may consist of scene, camera, compression, recording media and display. The response of each of these stages to colour scene information was characterised by measuring its response to a known input. The main variables that affect colour within a scene are illumination and the colour, orientation and texture of objects. The effects of illumination on the colour appearance of a variety of test targets were tested using laboratory-based lighting, street lighting, car headlights and artificial daylight. A range of typical cameras used in CCTV applications, common compression schemes and representative displays were also characterised.
Two internet-based psychophysical experiments were conducted to investigate the performance of an image sharpness enhancement method, based on adjustment of spatial frequencies in the image according to the contrast sensitivity function and compensation for MTF losses of the display. The method was compared with the widely used unsharp mask (USM) filter in Photoshop. The experiment was performed in two locations with different groups of observers: one in the UK and the second in the USA. Three Apple LCD displays (15" Studio, 23" HD Cinema and 15" PowerBook) were used at both sites. Observers assessed the sharpness and pleasantness of the displayed images. Analysis of the results led to four major conclusions, concerning: (1) performance of the sharpening methods; (2) influence of MTF compensation; (3) image dependency; and (4) comparison between sharpness perception and preference judgement at both sites.
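For reference, the generic unsharp-mask technique (the general filter, not Photoshop's exact implementation) amounts to subtracting a blurred copy to isolate fine detail and adding the scaled detail back:

```python
import numpy as np

def unsharp_mask(image, radius=2.0, amount=1.0):
    """Generic unsharp masking on a greyscale image: subtract a
    Gaussian-blurred copy to isolate fine detail, then add the scaled
    detail back. The blur is applied in the frequency domain to keep
    the example self-contained."""
    h, w = image.shape
    fy = np.fft.fftfreq(h).reshape(-1, 1)
    fx = np.fft.fftfreq(w).reshape(1, -1)
    g = np.exp(-2.0 * (np.pi * radius)**2 * (fx**2 + fy**2))  # Gaussian MTF
    blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * g))
    return image + amount * (image - blurred)
```

With amount = 0 the image is returned unchanged; increasing amount boosts all frequencies attenuated by the blur, which is why USM output can look harsher than CSF-weighted methods.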
The European collaborative research project IST-2000-28008-VITRA ('Veridical Imaging of Transmissive and Reflective Artefacts') developed an innovative system for high-resolution digital image acquisition for conservation and heritage applications. Using a robotic platform to carry both camera and lighting, it can capture colorimetric images up to 15 metres above floor level, thus eliminating the need for scaffold towers. Potential applications include wall-paintings, tapestries, friezes and stained glass windows in historic buildings such as churches, cathedrals, palaces and monuments. Evaluation of the system was conducted at four sites in Germany and the UK. In the course of the project a number of significant technical innovations were made, including a new panoramic image viewer for the Internet.
The problem for proper rendering of spatial frequencies in digital imaging applications is to establish the relative contrast sensitivity of observers at suprathreshold contrast levels in typical viewing environments. In an experimental study, two methods of evaluating spatial contrast sensitivity were investigated, using targets of graded tonal modulation on which observers were asked to point to the perceived threshold locations. The results produced by these two methods were rather different from those of the classical methods of vision science, showing a much lower sensitivity over a broader range of spatial frequencies. They may be regarded as complementary to CSF data derived from single-frequency Gabor stimuli and prove to be better suited to the needs of practical imaging applications.
This study investigated four different image sharpness enhancement methods. Two methods applied standard sharpening filters (Sharpen and Sharpen More) in Photoshop and the other two were based on adjustment of the image power spectrum using the human visual contrast sensitivity function. A psychophysical experiment was conducted with 25 observers, the results of which are presented and discussed. Five conclusions are drawn from this experiment, concerning: (1) performance of the sharpening methods; (2) image dependence; (3) influence of two different colour spaces on sharpness manipulation; (4) correlation between perceived image sharpness and image preference; and (5) effect of image sharpness enhancement on the image power spectrum.
The measurement of gloss is conventionally made by specialised instruments that determine the ratio of reflected to incident illumination at a single fixed angle. This study investigated whether digital photography with flash illumination could be used as an alternative. Multiple exposures were combined by a high dynamic range (HDR) imaging technique to produce a two-dimensional intensity profile of the reflectance around the specular point. The method was tested for six paper samples of varying gloss, and the results were found to correlate well with instrumental measurements. The image gloss profiles, however, provided more information about the distribution of the reflection over a range of angles and also gave an indication of the surface texture.
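The HDR combination step can be sketched as below, assuming a linear camera response and pixel values normalised to 0-1; the validity thresholds are assumed parameters, and this is an illustration rather than the study's processing chain.

```python
import numpy as np

def combine_exposures(images, exposure_times, low=0.05, high=0.95):
    """Merge several exposures of the same scene into a radiance map,
    assuming a linear camera response with pixel values normalised to
    0-1: each pixel is averaged over the exposures in which it is
    neither under- nor over-exposed."""
    images = np.asarray(images, float)               # (n, h, w)
    times = np.asarray(exposure_times, float).reshape(-1, 1, 1)
    radiance = images / times                        # per-exposure estimate
    valid = (images > low) & (images < high)         # usable pixels only
    weight = valid.sum(axis=0)
    avg_valid = (radiance * valid).sum(axis=0) / np.maximum(weight, 1)
    fallback = radiance.mean(axis=0)                 # no valid exposure
    return np.where(weight > 0, avg_valid, fallback)
```

The merged map retains detail both in the bright specular peak (from short exposures) and in the dim off-specular skirt (from long exposures), which is what allows the 2-D gloss profile to be extracted.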
Experiments were conducted to investigate colour appearance under mesopic vision. Lightness, colourfulness and hue observations of 40 test colours were accumulated for eight phases with four different luminance levels covering 0.1 to 90 cd/m² and two different stimulus sizes, corresponding to viewing angles of 2° and 10°, using the magnitude estimation method. The psychophysical effects of luminance level and patch size on colour appearance were investigated and the role of the rods under mesopic vision was explored.
Several approaches have been applied to a digital image of a stained glass window in order to segment the image to match the window's physical structure of separate pieces of glass joined by strips of lead. A three-stage neural network with optimal thresholding strategy gave satisfactory results when followed by a tuned set of Gabor filters.
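A Gabor filter of the kind used in such a filter bank can be generated as follows; this is a generic sketch, and the tuned parameter values used in the paper are not reproduced here.

```python
import numpy as np

def gabor_kernel(ksize, sigma, theta, wavelength, phase=0.0):
    """Real-valued Gabor kernel: a cosine grating of the given
    wavelength (pixels) windowed by a Gaussian of width sigma, rotated
    to orientation theta (radians). Filters like this respond strongly
    to oriented structures such as the lead strips between glass pieces."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    return envelope * carrier
```

A bank of such kernels at several orientations, convolved with the image, gives orientation-selective responses that can refine a coarse segmentation.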
Two sets of color appearance data were accumulated for investigating the difference between LCD projector and LCD self-luminous colors. Psychophysical experiments were conducted using magnitude estimation methods. These colors were viewed against different neutral backgrounds. The data sets were used to test the performance of five color appearance models (CIECAM97s, Hunt94, LLAB, RLAB and CIELAB), together with the two most recently proposed revisions of CIECAM97s: Fairchild and FC.
This paper describes the characterization of cine film, by identifying the relationship between the Status A density values of positive print film and the XYZ values of conventional colorimetry. Several approaches are tried including least-squares modeling, tetrahedral interpolation, and distance weighted interpolation. The distance weighted technique has been improved by the use of the Mahalanobis distance metric in order to perform the interpolation, and this is presented as an innovation.
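The distance-weighted interpolation with a Mahalanobis metric can be sketched as follows; this is a generic illustration with assumed variable names and inverse-distance weights, not the paper's code.

```python
import numpy as np

def mahalanobis_weighted_interp(samples_in, samples_out, query, power=2.0):
    """Distance-weighted interpolation with a Mahalanobis metric:
    estimate the output at `query` as a weighted mean of the known
    outputs, weighting each sample by the inverse of its Mahalanobis
    distance (raised to `power`) from the query in the input space."""
    X = np.asarray(samples_in, float)    # (n, d), e.g. Status A densities
    Y = np.asarray(samples_out, float)   # (n, m), e.g. XYZ values
    q = np.asarray(query, float)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - q
    d = np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))
    exact = d < 1e-12                    # query coincides with a sample
    if np.any(exact):
        return Y[exact][0]
    w = 1.0 / d**power
    return (w[:, None] * Y).sum(axis=0) / w.sum()
```

Compared with a Euclidean metric, the Mahalanobis distance accounts for the different scales and correlations of the input channels when choosing how strongly each sample influences the estimate.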
A psychophysical experiment was conducted to investigate the relationship between sharpness and preferred color of images displayed on a CRT monitor screen. Blurred versions of each of four original images were generated by convolution with a low-pass Gaussian filter. Sharpened versions of these images were created through adjustment of the image power spectrum. Each test image was decomposed into a set of spatial frequency bands, defined as octaves of the pixel sampling frequency. The Fourier power spectrum was derived, then amplitudes of selected bands were adjusted to enhance the desired spatial frequencies. The experimental results indicated that: (1) sharpness was perceived to be increased when certain spatial frequency bands were enhanced; (2) weighting the frequency bands using the standard observer's contrast sensitivity function (CSF) gives better results for particular distances; and (3) preferred image color is strongly related to image sharpness.
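The band-adjustment step described above can be sketched as follows, for a greyscale image; the octave band boundaries and gains here are assumed parameters, not the experiment's settings.

```python
import numpy as np

def sharpen_by_bands(image, band_gains):
    """Scale octave-wide radial bands of the image spectrum.
    band_gains[k] multiplies frequencies in (nyquist/2**(k+1),
    nyquist/2**k] cycles/pixel, with k = 0 the highest octave."""
    F = np.fft.fft2(image)
    h, w = image.shape
    fy = np.fft.fftfreq(h).reshape(-1, 1)
    fx = np.fft.fftfreq(w).reshape(1, -1)
    r = np.hypot(fx, fy)                # radial frequency, cycles/pixel
    gain = np.ones_like(r)
    hi = 0.5                            # Nyquist frequency
    for g in band_gains:
        lo = hi / 2.0
        gain[(r > lo) & (r <= hi)] = g
        hi = lo
    return np.real(np.fft.ifft2(F * gain))
```

Weighting the per-band gains by the observer's CSF at the intended viewing distance, as the experiment suggests, concentrates the enhancement where the visual system is most sensitive.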
Evaluation of algorithms for color gamut compression was conducted using two different experimental methods. In the first, real prints in a viewing cabinet were compared side-by-side with the original images displayed on a CRT monitor. In the second, virtual prints were simulated on the monitor for comparison with the original images. The results showed that the new topographic algorithm achieved a good performance in both cases. Comparison of the two experiments, however, indicated rather different results for the other three algorithms applied to the same four test images, with the magnitude of differences between the algorithms being smaller in the case of simulated prints.
Visual communication is a key aspect of human-computer interaction, which contributes to the satisfaction of user and application needs. For effective design of presentations on computer displays, color should be used in conjunction with the other visual variables. The general needs of graphic user interfaces are discussed, followed by five specific tasks with differing criteria for display color specification - advertising, text, information, visualization and imaging.
In this study, the characterization method for a typical desktop LCD color projector is reviewed. Measurements were made with a spectroradiometer to establish the additivity of the primaries, inter-channel dependence, color gamut, tone scale, contrast, spatial non-uniformity, temporal stability and viewing angle variation. In the case of tone characterization, LCD projectors show an S-shaped curve between input digital value and output luminance, unlike the conventional CRT monitor, which is well represented by a power function. Mathematical models to predict the S-shaped electro-optical transfer function have been empirically derived. Four mathematical models, including PLCC, GOG and S-Curve Models I and II, were compared for their accuracy in predicting the colors generated by the display for arbitrary signal inputs. It is shown that the newly derived S-Curve Models I and II work successfully for an LCD projector.
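An S-shaped electro-optical transfer function of the general rational-function kind used for such models can be sketched as below; the functional form and parameter values are assumptions for illustration, not the paper's exact S-Curve Models.

```python
import numpy as np

def s_curve(d, A, alpha, beta, C):
    """Illustrative S-shaped electro-optical transfer function:
    normalised output luminance as a function of normalised digital
    value d. Unlike a pure power function, this curve starts slowly
    (convex) and saturates towards the top (concave)."""
    d = np.asarray(d, float)
    return A * d**alpha / (d**beta + C)
```

With suitable parameters the curve is monotonic with an inflection between the toe and the shoulder, which is the qualitative behaviour that a CRT-style gamma model cannot reproduce.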
A general framework and first experimental results are presented for the 'OPTimal IMage Appearance' (OPTIMA) project, which aims to develop a computational model for achieving optimal color appearance of natural images on adaptive CRT television displays. To achieve this goal we considered the perceptual constraints determining quality of displayed images and how they could be quantified. The practical value of the notion of optimal image appearance was translated from the high level of the perceptual constraints into a method for setting the display's parameters at the physical level. In general, the whole framework of quality determination includes: (1) evaluation of perceived quality; (2) evaluation of the individual perceptual attributes; and (3) correlation between the physical measurements, psychometric parameters and the subjective responses. We performed a series of psychophysical experiments, with observers viewing a series of color images on a high-end consumer television display, to investigate the relationships between Overall Image Quality and four quality-related attributes: Brightness Rendering, Chromatic Rendering, Visibility of Details and Overall Naturalness. The results of the experiments presented in this paper suggest that these attributes are highly inter-correlated.
Display color can be represented by a simple two-stage model, although the display performance is affected by aging, magnetic fields and various other factors. For optimum image rendering there should be a close match between the source and destination devices, in terms of primary chromaticities, white point, gamma, palette encoding and gamut mapping. Characterization procedures for displays include visual assessment, visual matching and measurement with a tricolorimeter or telespectroradiometer. Color may be communicated in images either by encoding the image directly in a standard color space, or by attaching a profile that allows the data values to be interpreted. Color correction of Internet images may be carried out on either the receiver side or on the server side. Evaluation of the effectiveness of color Internet image delivery should include repeatability and consistency of the display characterization procedure, color accuracy of the imagery and general usability considerations.
Digital color imaging has developed over the past twenty years from specialized scientific applications into the mainstream of computing. In addition to the phenomenal growth of computer processing power and storage capacity, great advances have been made in the capabilities and cost-effectiveness of color imaging peripherals. The majority of imaging applications, including the graphic arts, video and film have made the transition from analogue to digital production methods. Digital convergence of computing, communications and television now heralds new possibilities for multimedia publishing and mobile lifestyles. Color engineering, the application of color science to the design of imaging products, is an emerging discipline that poses exciting challenges to the international color imaging community for training, research and standards.
Digital photography was applied to the capture of images of the stained glass windows in the historic parish church in Fairford, Gloucestershire, England. Because of their size, the windows had to be photographed in 45 separate sections in order to capture all the detail present in the painting on the glass. The digital images of each section, approximately 3000 by 2300 pixels, were then mosaiced together in order to construct the very high resolution image needed for the complete window. A special backlight panel was constructed for the purpose, and techniques developed for minimizing the effects of reflected light and for calibrating the color of the images. Improvements in the technology for mounting and positioning the camera were identified as the most significant factors currently preventing the widespread adoption of this technology for virtual heritage applications.
The electronic pre-press industry has undergone a very rapid evolution over the past decade, driven by the accelerating performance of desktop computers and affordable application software for image manipulation, page layout and color separation. These have been supported by the steady development of color scanners, digital cameras, proof printers, RIPs and image setters, all of which make the process of reproducing color images in print easier than ever before. But is color print itself in decline as a medium? New channels of delivery for digital color images include CD-ROM, wideband networks and the Internet, with soft-copy screen display competing with hard-copy print for applications ranging from corporate brochures to home shopping. Present indications are that the most enduring of the graphic arts skills in the new multimedia world will be image rendering and production control rather than those related to photographic film and ink on paper.
The color fidelity of printed images depends upon the complete image reproduction chain. Factors include the spectral response of the color filters in the scanner, the color space chosen for digital encoding, image processing operations including corrections to tone curve, hue, and colorfulness, printer/press characteristics, and the color gamut of the printing inks. The criteria for fidelity of color reproduction depend on the objectives of the reproduction and the method of visual assessment.