28 February 2017

Underwater reflectance transformation imaging: a technology for in situ underwater cultural heritage object-level recording
J. of Electronic Imaging, 26(1), 011029 (2017). doi:10.1117/1.JEI.26.1.011029
There is an increasing demand for high-resolution recording of in situ underwater cultural heritage. Reflectance transformation imaging (RTI) has a proven track record in terrestrial contexts for acquiring high-resolution diagnostic data at small scales. The research presented here documents the first adaptation of RTI protocols to the subaquatic environment, with a scuba-deployable method designed around affordable off-the-shelf technologies. Underwater RTI (URTI) was used to capture detail from historic shipwrecks in both the Solent and the western Mediterranean. Results show that URTI can capture submillimeter levels of qualitative diagnostic detail from in situ archaeological material. In addition, this paper presents the results of experiments to explore the impact of turbidity on URTI. For this purpose, a prototype fixed-lighting semisubmersible RTI photography dome was constructed to allow collection of data under controlled conditions. The signal-to-noise data generated reveals that the RGB channels of underwater digital images captured in progressive turbidity degraded faster than URTI object geometry calculated from them. URTI is shown to be capable of providing analytically useful object-level detail in conditions that would render ordinary underwater photography of limited use.
Selmo, Sturt, Miles, Basford, Malzbender, Martinez, Thompson, Earl, and Bevan: Underwater reflectance transformation imaging: a technology for in situ underwater cultural heritage object-level recording



The UNESCO convention on the protection of the underwater cultural heritage (UCH) (2001) establishes within the first sentence of rule 1 that “in situ preservation shall be considered as the first option.” This has precipitated a corresponding shift in disciplinary thought, with preservation in situ widely espoused as best practice.1–4 However, as Maarleveld et al.5 make clear, rule 1 should not be misinterpreted to mean that archaeological research is being discouraged, rather that we need to focus on improving practices for engaging with a finite resource. In order to achieve this, maritime archaeology requires a suite of technologies that will permit documentation and retrieval of diagnostic information at a range of scales and resolutions.

At the macro- to mesoscale, the discipline has seen a step change in capabilities, with swath bathymetric6 and mechanical sector scanning sonar offering rapid acquisition and centimetric levels of accuracy. More recently,7 underwater laser scanning systems have emerged offering millimetric point cloud recording. While revolutionary, these systems are often expensive to acquire (and thus have restricted uptake globally) and produce data sets best suited to site level investigation. More broadly, considerable progress has been made in subaquatic photogrammetry. This has democratized site-wide metrically accurate three-dimensional (3-D) recording, with archival quality image-based color accuracy (e.g., Refs. 8–10). However, there still remains a capability gap when it comes to high-resolution recording at the object/feature level, where issues such as tool marks, cannon reliefs, pottery stamps, wood carvings, and the identification of any of a plethora of subtle textures, become diagnostically relevant.

Optical imaging (i.e., digital photo and video) remains the backbone of UCH in situ recording. Unfortunately, its usefulness is proportional to, and limited by, water clarity. Photography with a single light also has limitations in terms of capturing fine surface texture. Reflectance transformation imaging (RTI) has been proven as a technique for capturing this qualitative detail, as the viewer can manipulate the light direction when studying the final images. This paper demonstrates for the first time that underwater reflectance transformation imaging (URTI) offers a viable method to extract unprecedentedly high levels of diagnostic detail from the surface of in situ submerged objects. Significantly, this capability extends into turbid environments where conventional photography may be problematic. The method presented here is image-based, open source, diver-deployable, user-friendly, and repeatable, and generates robust results in low visibility. Perhaps its most attractive feature, however, is that URTI is affordable. A torch, a ball, a camera, a tripod, and a single dive can produce impressive results, thus providing a new avenue for groups across the world to carry out detailed investigations underwater.



Over the past decade, RTI has proven to be a robust and analytically useful surface-deployed, image-based cultural heritage recording technique. The variety of objects and features recorded are found across a wide spectrum of archaeology and conservation practices. As a sampling of this diversity in terms of material and scale, the successful use of RTI has been published in:

  • the study of ancient archaeological clay and stone writings;11

  • the paleontological illustration of subtle features in fossils;12

  • the conservation of wooden artifacts, wall paintings, and metal;13

  • the decoding of the “Antikythera mechanism”;14

  • the conservation of stone monuments;15

  • and the preservation of fine arts museum documents, paintings, and the study of Greek Attic pottery.16

RTI’s popularity stems from its ability to extract an approximation of object surface geometry based on ordinary digital photos. This geometry is then pixel-encoded and rendered within open-source viewer software to facilitate “relighting” of the object under a variety of reflectance property transformations. The result is tremendous surface detail enhancement, proving especially useful when surface details have been worn by the elements. Good examples of this are Mudge et al.’s work on the conservation of Roman and ancient Greek coins17 and Paleolithic rock art.18 The ability to see remnant impressions on worn coins and the temporal sequence of intersecting engraved lines in rock art demonstrates, across two very different genres of archaeological material, that RTI can recover eroded and low-relief data in terrestrial contexts.

The degradation of UCH assets, both past and ongoing, is a pressing challenge when considering in situ preservation. A site as a whole may appear pristine at the macrolevel; however, over a period of years individual diagnostic elements will degrade. This is not an argument against preservation in situ, more a recognition that sites will continue to change through time; biological, physical, and chemical processes will have an impact on diagnostic details. Therefore, the ability to extract this information while it is still present becomes a concern.

In the following sections, we explore the methodology developed and the testing of URTI. To assess the credibility of URTI in subaquatic archaeological applications, we were particularly interested in three questions:

  • 1. Can it produce data of requisite resolution for archaeological analysis?

  • 2. How does URTI respond to different turbidity levels?

  • 3. Is it a cost-effective and time-efficient mode of recording?


Can URTI Produce Archaeologically Useful Data Resolution for Qualitative Analysis?

Terrestrial work has demonstrated the ability of RTI to capture low relief diagnostic information under controlled conditions. Unfortunately, environmental control is not a luxury that image-based methodologies are afforded in underwater data capture. URTI is subject to the same challenges that plague all underwater photography: water turbidity, chromatic aberration from light attenuation by water, radial and barrel distortion resulting from light refraction in water, and a host of environmental obstacles such as fish, debris, thermoclines, haloclines, pycnoclines, and chemoclines, all of which can degrade underwater digital imagery. Of the factors listed above, the most significant is turbidity. Therefore, in order to establish the efficacy of URTI, experimentation with different levels of turbidity was required.


Water turbidity

Water clarity is impacted by the quantity, color, and size of suspended particulates. Silts, clays, and organic material can reduce visibility to 0 m in the worst conditions. Color changes can also result from, for example, excessive algae growth or chemoclines. These (and other) factors are cumulatively known as turbidity. When suspended particulates are extremely fine, they give a cloudy appearance (haze) to the water. When the particulates are larger, they cause light from a camera strobe to reflect back toward the lens (backscatter) and pollute the digital image with grainy spots. In either case, turbidity attenuates light penetration by absorbing light, changing its color and brightness, and backscattering it toward the source. All this results in degraded contrast in subaquatic images and the veiling of the subject’s true appearance.19 To date, there are no standard image processing methods that provide a convincing solution for the undesirable effects of backscatter and haze on underwater photographs due to turbidity.20 Although there has been considerable progress with dehazing affected images utilizing polarization techniques,19,21 the effects of turbidity continue to be the subject of research. Turbidity poses a unique problem for URTI. URTI mathematically describes how lightness changes with lighting angle and can be used to estimate surface geometry. Distortions in URTI image-sets due to underwater variables equate to an empirical error in these modeling calculations that is not present in terrestrial RTI. The experiments presented in this paper seek to show that URTI is feasible and to investigate the impact of water turbidity.


URTI Data Overview

In order to better understand the degrading effects of turbidity on URTI, some key principles need to be defined. URTI allows us to virtually reilluminate a UCH object’s surface in viewer software. To achieve relighting, knowledge of the object’s surface normal vectors is required. The surface normal is the vector orthogonal to the plane that is tangential to the surface at that point. The vast majority of UCH presents a Lambertian surface for imaging. Lambertian is defined as scattering light evenly in all directions.22 Therefore, under a given light source direction, the brightness of a Lambertian object is constant regardless of viewing angle. Conversely, for a fixed viewing angle and varying light source direction, the brightness of a Lambertian surface is at maximum when the light source is aligned with the surface normal. This means that a simple method of recovering the surface normal is to find the light source direction that maximizes the surface brightness for a given pixel in a registered image-set.

URTI does this by adopting the “highlight” method18 where in every image a reflective black sphere is set in a fixed position in the frame. As the light source moves at a fixed distance from the center of the scene, a specular highlight is generated on the sphere. A one-off calibration is not ideal (or even necessary), as the camera–lights relationship could change during use. However, given the geometric properties of the sphere it is possible to calculate the incident light vector to the scene in each photograph. The incident light vectors are then compared to the observed luminosity of each pixel for the entire image-set to determine the vector that produced the brightest response in each pixel. These observations are then fitted in light-space either to a biquadratic polynomial (PTM)11 or to a higher-order polynomial such as a hemispherical harmonic.23 The coefficients of these polynomials are encoded along with the color data for each pixel to create a “texel.” By taking the first derivative of these polynomials to find the local maximum for each texel, a fair estimation of the surface normal can thereby be calculated.
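The fitting and normal-recovery steps above can be sketched in a few lines. The following Python/NumPy example is an illustrative reconstruction, not the RTIBuilder implementation: it fits the biquadratic PTM L(lu, lv) = a0·lu² + a1·lv² + a2·lu·lv + a3·lu + a4·lv + a5 for a single pixel by least squares, then recovers the light direction of maximum brightness by setting both partial derivatives to zero.

```python
import numpy as np

def fit_ptm(light_dirs, intensities):
    """Least-squares fit of the biquadratic PTM
    L(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5
    for one pixel, given the per-image light directions (N x 2 array
    of lu, lv components) and the observed pixel intensities."""
    lu, lv = light_dirs[:, 0], light_dirs[:, 1]
    A = np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)])
    coeffs, *_ = np.linalg.lstsq(A, intensities, rcond=None)
    return coeffs

def normal_from_ptm(a):
    """Surface normal estimate from the PTM maximum: set both partial
    derivatives of L to zero, solve for (lu0, lv0), and complete the
    unit vector with lz = sqrt(1 - lu0^2 - lv0^2)."""
    a0, a1, a2, a3, a4, a5 = a
    denom = 4 * a0 * a1 - a2**2
    lu0 = (a2 * a4 - 2 * a1 * a3) / denom
    lv0 = (a2 * a3 - 2 * a0 * a4) / denom
    lz = np.sqrt(max(0.0, 1.0 - lu0**2 - lv0**2))
    return np.array([lu0, lv0, lz])
```

Applied per texel across a registered image-set, the recovered (lu0, lv0, lz) triple is the “fair estimation of the surface normal” described above.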

These surface normals can be visualized, checked, and exported by a “surface normal visualization” function in PTMViewer. This visualization translates each individual component of the surface normal vector (x,y,z) to a corresponding RGB color vector. The corresponding color is stored as 8-bit RGB values. The visualization in the viewer reveals changes in 3-D surface topography, which is why it is so valuable for researchers. It is the changes in these color values that interest us, because they equate to changes in the estimation of URTI normals. It should be noted that RTI is typically used as an accessible, qualitative photography tool, not as a replacement for 3-D scanning or photogrammetry for example, and we argue for a similar qualitative value of URTI here.
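A minimal sketch of that color encoding, assuming the common normal-map convention of scaling each component from [−1, 1] to [0, 255] (PTMViewer’s exact mapping may differ):

```python
import numpy as np

def normal_to_rgb(normal):
    """Encode a unit surface normal (x, y, z) as an 8-bit RGB triple:
    red carries x, green carries y, blue carries z, each rescaled from
    [-1, 1] to [0, 255]. This is a common normal-map convention; the
    exact scaling used by PTMViewer is an assumption here."""
    n = np.asarray(normal, dtype=np.float64)
    rgb = np.rint((n + 1.0) * 0.5 * 255.0).astype(np.uint8)
    return tuple(int(c) for c in rgb)
```

Under this encoding a flat, camera-facing surface (normal pointing straight up the z axis) renders as a saturated blue, which is why flat regions of a normal visualization appear uniformly blue-violet.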


Materials and Methods

Two methods of URTI data capture are presented in this paper. Both were achieved by adapting terrestrial RTI equipment and highlight image capture protocols (e.g., Refs. 18, 24, and 25). The first method is intended for field URTI acquisition, using a diver-deployed application of Masselus et al.’s26 “free-form” capture. Free-form assumes a camera is taking images continuously (interval shooting) and independent of a constant beam (nonflash) light source. The diver manually changes lighting incidence angle while maintaining equal radius distance throughout; i.e., the torch placement is “free-handed” hemispherically around the underwater object.

The second method incorporates URTI subaquatic adaptations into the design and construction of a prototype fixed-lighting semisubmersible terrestrial-style RTI photography dome (hereafter dome). The dome allows for a standardized and automated method of controlled URTI image capture in a laboratory environment. In our study, we deployed the dome in a tank of fresh water and generated 15 consecutive pixel-registered URTIs, each captured in progressively turbid water for comparison. All URTIs discussed in this paper were created using the PTM fitting method employed by RTIBuilder, software developed at Universidade do Minho in Braga, Portugal, led by João Barbosa, and viewed with RTIViewer, open-source software developed by Cultural Heritage Imaging.


Materials: URTI Field Capture

As noted above, these trials were designed around affordable off-the-shelf technologies and ease of capture. We utilized a common point-and-shoot digital camera (Fuji FinePix F200EXR 12 M Pixel) in a proprietary Fuji waterproof housing, at a total cost of £297. A thick rubber band was used to hold down the camera housing shutter button to activate the interval shooting capability. The remaining ancillary items consisted of a tripod (Benbo Trekker MK3), a 1000 lumen high-intensity discharge (HID) torch (Diverite) and a small specular reflective sphere. This 25.27 mm sphere is part of an RTI starter kit (available for purchase from Cultural Heritage Imaging). It was attached to a threaded metal rod 20 cm in length to facilitate positioning the sphere underwater.


Methods: URTI Field Capture

A series of multi-image capture tests (open air and underwater) totaling more than 10,000 digital images were first acquired with the compact camera and assessed to ensure:

  • 1. Sufficient focus repeatability and

  • 2. Our ability to successfully identify and remove out-of-focus images from the sets.

Within these 10,000 test images, c. 15% appeared out of focus but were easily removable from our data sets prior to processing. URTI data sets were then acquired from two wooden shipwrecks of historical importance, each located in distinctly different marine environments (see Fig. 1). The first set was from the 18th century HMS Invincible located in the Solent, UK. The second set came from the Cap del Vol, a first century BC Roman shipwreck27 in the western Mediterranean. As illustrated in Table 1, these sites were selected as representative of two end members of common conditions on maritime archaeological sites.

Fig. 1

Shipwreck site locations where URTI was field tested.


Table 1

Comparison of URTI field capture conditions.

Conditions | HMS Invincible | Cap del Vol
Depth | Shallow (7 m) | Deep (25 m)
Clarity | Turbid (1.5 m visibility) | Clear (20 m visibility)
Current | Moderate (25 cm/s) | None (0 cm/s)

At both sites, the same capture method was employed. The tripod-mounted camera was positioned above the archaeological material with a focus distance <1.0 m. The reflective sphere was placed in the field of view. With the camera aperture set to F11 (for good depth of field and reduced sensitivity), camera shutter speed was increased until surrounding ambient light from the surface had little or no exposure impact on the images. Interval shooting then took place while the HID torch was introduced into the field of view. An exposure-batch of 200 to 300 pixel-registered digital images was captured. As the camera fired, the diver moved the torch to ensure distinctly different lighting angles of incidence (Fig. 2), pausing long enough for a minimum of three exposures per incidence. As a result of free-handing the light, not every image in the exposure-batch was of equal quality. Therefore, the image-set of c. 35 to 48 images required to produce the URTIs was subselected from the available three images at each position of incidence, based on three criteria:

  • 1. best focus;

  • 2. best torch beam position (centered on the object);

  • 3. images that collectively characterized the widest distribution of lighting angles for the set. Priority was given to images with lower grazing angle lighting. This typically produces better RTI results.

Fig. 2

Selmo performing URTI on the Cap del Vol first century BC Roman shipwreck. Photo credit: Dr. Gustau Vivar.



Materials: Fixed-Lighting Dome

To facilitate control and consistency of URTI source image data capture in the turbidity experiment, we built a semisubmersible dome-shaped device capable of replicating camera optics and lighting conditions among URTI captures. The dome supports both a Nikon D70 underwater DSLR camera housing and a Fuji FinePix F200 point and shoot underwater camera housing, allowing for downscaling the system for ease of transport and field use (Fig. 3).

Fig. 3

(a) Dome CAD drawing, (b) heat-shrink wrapping of dome LED electronics, and (c) the completed dome featuring Nikon D300 camera.



Dome conceptual design and construction

The dome was designed and fabricated by Selmo at the workshop in the conservation department of the Mary Rose Museum at the Portsmouth Historic Dockyard, Portsmouth, UK, with the gracious help of its master builder, Mr. Denis Cook. The dome structure is made entirely of acrylic. All flat stock parts are 1 cm thick and cut on a flatbed laser cutter from design templates drawn in Corel Draw. The availability of a 627-l testing tank predetermined the dome dimensions to an 88 cm base diameter by 44 cm tall. These dimensions are comparable to terrestrial RTI domes produced by the University of Southampton in use at the British Museum, the Ashmolean Museum, and the Louvre Museum. The dome is designed to be modular. The cylindrical camera support, eight legs, and circle base interlock without the need for fasteners or adhesive. The dome structure can be assembled and disassembled in under 1 min. However, time constraints limited the practicality of designing accompanying modular electronics. Instead, Selmo permanently fused the dome together with acrylic glue prior to wiring it with dome lighting.


Dome lighting

The dome features thirty-two 500 lumen-rated LEDs wired in a standard array of eight strings of four tiers; one tier at the 6 deg, 17 deg, 30 deg, and 45 deg positions from the dome base. While 32 lights is a low number for an RTI dome, it is, however, representative of the number of free-hand light position image captures typically acquired in free-form RTI technique. Limiting the dome to 32 lights ultimately reduced the cost and complexity of building the dome and helped in meeting budget and time constraints associated with a student-led Masters Course project. A 42-V DC power supply and a custom designed controller circuit board of optoisolators drive the 32 LEDs. Optoisolators are light-activated microswitches that allow one independent electrical signal (i.e., 5-V DC power supply) to control another independent circuit (i.e., 42-V DC power supply and controller card). An Arduino DUE™ microcontroller sequences the lights, LED on/off duration, and synchronization with the camera shutter. We refer to the dome as “semisubmersible” because the power supply and LED driver electronics must remain out of water. However, the dome’s LEDs have been encased in waterproof heat-shrink tubing allowing the dome to be submersible up to 2-m deep. Having the LEDs entirely submerged in cool water during use allowed us to overdrive them beyond their 500 lumens (lm) rating without risk of circuit destabilization from excessive heat. The result was c. 600 lm of available light for each source image exposure. Each light string is fastened to a dome leg by cable-ties. They can easily be removed so the dome may be used without LED electronics as a camera support and lighting template for future free-form URTI applications in a field marine environment.


Dome electronics flow

The Arduino is programmed with its own open-source software. The code language is a series of basic programming “if/then, on/off, pause, go to” statements. Pressing the master switch in Fig. 4 turned our dome on and activated the following Arduino-programmed electronics flow:

  • 1. Turn on dome leg #1,

  • 2. Turn on the tier 45-deg LED [leg #1 and tier 45 deg are powered by commands to two different optoisolator switches on the control board; therefore, 12 switches in an array can control 32 separate lights: one switch for each leg (8) and one for each tier (4)],

  • 3. Pause 50 ms,

  • 4. Turn on the camera switch and hold for 100 ms (shutter snaps),

  • 5. Turn off the camera switch,

  • 6. Pause 1 s,

  • 7. Turn off the leg and the LED,

  • 8. Pause 50 ms,

  • 9. Turn on dome leg #1,

  • 10. Turn on tier 30-deg LED,

  • 11. Pause 50 ms,

  • 12. Turn on the camera switch and hold for 100 ms (shutter snaps),

  • 13. Turn off the camera switch,

  • 14. Pause 1 s,

  • 15. Turn off the leg and LED.

Fig. 4

Dome electronics flow chart pictorial.


The programming repeats in succession through all 32 LEDs. In its DSLR configuration, with the single push of a button the dome facilitates 32 underwater images from 32 different source-light incidences in c. 40 s as shown in the embedded video in Fig. 5.
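The matrix-addressing logic of the sequence above can be sketched as follows. This is a hypothetical Python simulation of the control flow, not the actual Arduino firmware; set_led and trigger_shutter are stand-ins for the optoisolator and camera-switch commands.

```python
LEGS = range(1, 9)        # eight leg switches
TIERS = (45, 30, 17, 6)   # four tier switches (degrees from the dome base)

def capture_sequence(set_led, trigger_shutter):
    """Step through all 32 leg/tier combinations. Closing one leg
    switch and one tier switch powers exactly one LED, which is how
    12 switches (8 legs + 4 tiers) address 8 x 4 = 32 lights."""
    fired = []
    for leg in LEGS:
        for tier in TIERS:
            set_led(leg, tier, True)    # steps 1-2: leg and tier switches on
            # step 3: 50 ms settle pause would go here
            trigger_shutter()           # steps 4-5: 100 ms camera-switch pulse
            # step 6: 1 s pause while the camera writes the frame
            set_led(leg, tier, False)   # step 7: leg and tier switches off
            # step 8: 50 ms pause before the next combination
            fired.append((leg, tier))
    return fired
```

Running the full sequence yields one shutter actuation per LED, matching the 32 source-light incidences captured in c. 40 s.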

Fig. 5

URTI dome video (Video 1, MP4, 14.2 Mb [URL:  http://dx.doi.org/10.1117/1.JEI.26.1.011029.1]).



Dome calibration

The LEDs performed consistently in terms of their light output (c. 600 lm) and color temperature (c. 5000 K). No camera or illumination calibration is required for RTI, as the resulting images are mainly used to investigate texture details rather than make color or geometry measurements. However, color, lens distortion, vignetting, and illumination variance can be calibrated for more uniform results among systems. The DSLRs typically used in RTI compensate for vignetting, geometry, and sensor noise. In proper RTI image capture protocol, the F number and focal length are always fixed for a capture sequence. In the photographic studios of the aforementioned museums using RTI, the dome light positions can be calibrated once per session with different objects. However, using a highlight sphere for each capture in our dome capture methodology negates this need. Source-light incidences from our URTI dome are simply determined by RTIBuilder software in postprocessing. The sphere allows our lights to be recalibrated for every scene and allows for our dome to be placed at slightly different heights conforming to the irregularities of the seabed depending on conditions.
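The highlight-to-light-direction geometry that RTIBuilder performs in postprocessing can be sketched as below. This is an illustrative simplification assuming an orthographic camera looking straight down at the sphere; the function name and interface are ours, not RTIBuilder’s.

```python
import numpy as np

def light_from_highlight(cx, cy, r, hx, hy):
    """Estimate the incident light direction from the specular
    highlight on a reflective sphere. (cx, cy) and r are the sphere's
    center and radius in image coordinates; (hx, hy) is the highlight
    position. The sphere normal under the highlight follows from the
    highlight's offset from the center, and the light vector is the
    view vector mirrored about that normal."""
    nx, ny = (hx - cx) / r, (hy - cy) / r
    nz = np.sqrt(max(0.0, 1.0 - nx**2 - ny**2))
    n = np.array([nx, ny, nz])
    v = np.array([0.0, 0.0, 1.0])          # direction toward the camera
    return 2.0 * np.dot(n, v) * n - v      # reflect v about n
```

For example, a highlight dead-center on the sphere implies the light sits directly behind the camera, while a highlight offset by r/√2 implies grazing light from the side.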


Methods: Dome Testing and Control Verification

Several tests were done to qualify the dome’s function prior to its use in the turbidity experiment presented in this paper. The final test involved imaging a small 4×6  cm electronic circuit board, first above water [Figs. 6(a) and 7] and then below [Figs. 6(b) and 8].

Fig. 6

Setting up for (a) terrestrial and (b) subaquatic dome testing.


Fig. 7

Screen-capture from RTIViewer software render of the dome terrestrial test.


Fig. 8

Screen-capture from RTIViewer software render of the dome subaquatic test.


For the submerged test, the circuit board was placed on black cloth in the bottom of the tank in 52 cm of clear fresh water. The cloth helped to create contrast in the images and simultaneously reduced the blue/green tint of light pollution caused by reflection off the tank sides. A small piece of white plastic below each sphere enabled contrast between the black spheres and black cloth. The resulting URTI can be seen as it appears in RTIViewer in Fig. 8.

Figure 9 compares terrestrial and subaquatic dome-generated RTI results by focusing on a c. 1-cm electronic component on the circuit board. Despite some discussions of metric comparison, RTI is deployed largely as a qualitative technique to clarify observations of the surface textures of archaeological objects. In doing so, the contrast of surface features can be changed by the viewer “relighting” differently for separate areas of the object and selecting the best view for interpretation. Since there is little metrological use of the technique, due in part to the normal smoothing introduced by the algorithms deployed, the cameras and lighting are seldom calibrated. However, future calibration may help with comparisons among systems. For the main use-case of the technology we present here, this uncalibrated operation is sufficient. RTI’s only “deliverable” is what end-users can see with the naked eye. RTI consistently presents more surface detail than conventional photography. We believe that the submillimeter imprint on the component is equally discernible to the naked eye in both renders in Fig. 9. The slight variance in hue between the two is attributed to the dome’s 5000 K LED light subtly contaminated by the diffusion of blue/green reflection off the tank plastic. Although a faint hue differentiation is evident, the results reveal there is no discernible qualitative difference in the resolvable diagnostic detail between the terrestrial RTI [Fig. 9(a)] and the subaquatic URTI fresh clear water control [Fig. 9(b)]. This test demonstrated that within a laboratory environment our dome was capable of achieving repetitive high-resolution RTI image captures of both terrestrial and underwater objects with no visible loss of qualitative usefulness from the subaquatic deliverable.

Fig. 9

A comparison of discernible detail in (a) terrestrial RTI and (b) subaquatic URTI.



Materials: Turbidity Experiment

The dome, two specular reflective spheres, and the black cloth-lined tank together constitute the testing apparatus used to gather 15 consecutive URTIs under varied water turbidities. A small c. 4 cm × 6 cm sherd of Roman terra sigillata from the south of Spain was selected for the test imaging (Fig. 10). It featured diagnostic surface relief typical of objects often seen in terrestrial RTI publications. Powdered bentonite was used to induce varied water turbidity. Bentonite is an absorbent impure clay (clay = particle size < 2 μm) formed by the erosion of stone containing phyllosilicates. It was selected due to:

  • 1. its pale color;

  • 2. its tendency to equally disperse in solution;

  • 3. its ability to remain in suspension for the duration of the imaging;

  • 4. and for the hazing effect it creates in water.

Fig. 10

Potsherd of Roman terra sigillata from the south of Spain. This image was sourced from the collection of subaquatic dome-generated images taken in clear water.


Bentonite is used in a plethora of household and commercial products and applications and is often a component of characteristic sediment typically found in subaquatic archaeological sites. Figure 11 shows an experiment to discern the change in clarity of 1 l of clear fresh water when increments of 0.05 g of bentonite are added. This experiment helped determine a target quantity of bentonite to be added to the 627-l testing tank.

Fig. 11

Illustration of the change in turbidity induced on 1 l of fresh water by the progressive addition of bentonite clay.



Methods: Turbidity Experiment

A single URTI image-set was captured in clear water as a control. Next, 14 URTI image-sets were captured sequentially by the dome under progressively higher water turbidity. This resulted in all 15 image-sets in pixel-registration with each other while water clarity progressively worsened. Variable turbidity was created between each image-set by adding 1 g of powdered bentonite to the tank between captures. This was performed by drawing 300 ml of tank water into a beaker and stirring in a single 1-g packet with a mechanical stirrer. The beaker solution was then reintroduced to the tank and gently stirred by hand to avoid any movement of the tank, camera, or dome. This procedure was repeated 14 times, resulting in the 15th URTI captured under 14× greater turbidity than the second URTI in the experiment.


Turbidity quantification

As previously discussed, water turbidity is the primary obstacle to underwater imaging. There is a variety of ways to quantify turbidity. In this research, we charted the progressive change as a function of mass concentration (CM) in grams of turbidity-causing grains per liter of suspension:28

CM = m/V, where m is the mass of suspended grains (g) and V is the volume of the suspension (l).

Adding 1 g of bentonite to 555 l (test volume) of fresh water induced a turbidity CM of 0.0018 g/l bentonite. By the 14th iteration, it was not possible to see the sherd through the 52 cm of water. At this point a CM of 0.0252 g/l had been reached. Work by Davies-Colley and Smith29 allows for an approximation of through-water visibility to be deduced from the CM values. Based on their data, this sees a move from c. 4 m of visibility in URTI 2, dropping to less than 50 cm by URTI 15. In “scuba diver vernacular” we would say by URTI 15 “the viz was less than half a meter,” meaning a diver would not be able to see his/her own hand extended out and the diver’s air gauge would have to be brought up to his/her mask in order to read it. Most underwater photographers would consider this an untenable condition to attempt conventional underwater photography.
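The mass-concentration bookkeeping above reduces to a one-line calculation; a minimal sketch:

```python
def mass_concentration(grams, liters):
    """Turbidity expressed as mass concentration CM: grams of
    turbidity-causing grains per liter of suspension (g/l)."""
    return grams / liters

# Values from the experiment: 555 l effective test volume,
# 1 g of bentonite added per iteration.
cm_after_first = mass_concentration(1, 555)   # c. 0.0018 g/l
cm_after_14th = mass_concentration(14, 555)   # c. 0.0252 g/l
```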


Signal-to-noise ratio quantification

As previously stated, the “normal visualization render” of a URTI is its x-y-z surface geometry represented as a corresponding RGB color; red=x, green=y, and blue=z. Therefore, when comparing one URTI normal visualization to another captured in cloudier water, the detectable changes in the RGB components equated to an empirical error in the representation of the true surface geometry. This error was the result of progressive water turbidity impeding the “truthfulness” of the calculations. To quantify this empirical error, image processing and analysis in Java (ImageJ) software was used to compare signal-to-noise ratios (SNR) in URTI datasets 2-15 against URTI 1 (the clear water control). ImageJ is open source software available online. As previously noted in Fig. 9, a clear water URTI’s diagnostic quality is commensurate with its terrestrial RTI counterpart. Therefore, URTI 1 captured in clear water was the signal control and treated as “true.” Any pixel outputs rendered in URTIs 2-15 that quantifiably deviated from URTI 1 were considered to have been affected by “noise.” Noise was defined in this project as the random unwanted digital data that visually manifested in the form of changes in color and/or brightness levels in the RGB pixel component assignments in the surface normal renders. The SNR plug-in for ImageJ, developed by Daniel Sage at Biomedical Image Group of École Polytechnique Fédérale de Lausanne, follows the Gonzalez and Woods formula:30

SNR = Σx,y r(x,y)^2 / Σx,y [r(x,y) − t(x,y)]^2,

where r(x,y) refers to the reference image, and t(x,y) is the image being compared.
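A mean-square SNR of this form can be sketched in pure Python (the function name and the flat-list stand-in for a grayscale image are illustrative assumptions, not the ImageJ plug-in's internals):

```python
def snr(reference, test):
    """Mean-square SNR of a test image against a reference image:
    the ratio of the summed squared reference signal to the summed
    squared difference between reference and test pixels."""
    signal = sum(r * r for r in reference)
    noise = sum((r - t) ** 2 for r, t in zip(reference, test))
    if noise == 0:
        return float("inf")  # identical images: no measurable noise
    return signal / noise

clear = [10, 10, 10, 10]   # stand-in for clear-water control pixels
turbid = [9, 11, 9, 11]    # stand-in for a turbid-water capture
print(snr(clear, turbid))  # 400 / 4 = 100.0
```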

In this experiment, SNR is a ratio expressing a quantitative “distance” between URTI 1 and URTIs 2-15; the lower the SNR value, the higher the noise and the greater the degradation. SNR was first calculated for a sample of the source photos used to generate URTIs 1-15, then for the URTI normal visualization renders themselves.


Signal-to-noise ratio data extraction

The following steps were used to generate source photo and URTI SNR values. URTIs 1-15 were first rendered in RTIViewer. Various transformation images from URTIs 1, 3, 5, 7, 9, 11, 13, and 15 were exported, cropped-to-subject, and aligned side-by-side for comparison. PTMViewer was then used to generate surface normal visualization renders of all 15 URTIs. (The surface normal renders of a PTM are its x-y-z geometric data converted to RGB tones for visual interpretation. Therefore, in a “surface normal render” the color hue, tone, and depth correspond to surface geometry calculations.) These were then stacked and cropped-to-subject, resulting in 15 pixel-registered BMP files of the surface normals of the potsherd. The BMP files were then split into 15 sets of RGB channel grayscale JPEG files (45 “split-files”) using ImageJ. Grayscale conversion was necessary because the ImageJ SNR plug-in cannot examine RGB, only grayscale. Split-files from URTIs 1, 3, 5, 7, 9, 11, 13, and 15 were grouped by channel and aligned side-by-side for visual comparison. ImageJ’s SNR plug-in was used to calculate split-file channel SNR of URTIs 2-15 against the URTI 1 signal. Lastly, the same procedures described above were applied to a sampling of source photos that were used (in part) to generate all 15 URTIs.
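The channel-splitting step above can be sketched as follows (pure Python, with an image represented as a list of rows of (R, G, B) tuples; this is illustrative only, since the project itself performed the split in ImageJ):

```python
def split_channels(rgb_image):
    """Split an RGB image into three single-channel grayscale images,
    mirroring the ImageJ channel split performed before per-channel
    SNR analysis."""
    channels = {"R": [], "G": [], "B": []}
    for row in rgb_image:
        channels["R"].append([px[0] for px in row])
        channels["G"].append([px[1] for px in row])
        channels["B"].append([px[2] for px in row])
    return channels

# A tiny 2x2 test image: red, green, blue, and mid-gray pixels.
image = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (128, 128, 128)]]
parts = split_channels(image)
print(parts["R"])  # [[255, 0], [0, 128]]
```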




Field Trials: Diver Free-Form URTI


HMS Invincible, 18th century shipwreck

Figure 12 shows the URTI of a c. 3.8-cm hull planking treenail from HMS Invincible, produced from 366 digital images gathered on a single SCUBA dive. The treenail was located at 7 m depth on a section of exposed planking relatively flat to the seabed. The URTI’s submillimetric resolution clearly revealed:

  • the crisp edge of the treenail bore-hole in the tangential plane of the plank;

  • the radial grain pattern in the transverse plane of the treenail face;

  • the linear grain pattern in the transverse plane of the wedge face;

  • damage in the surrounding wood from marine surface wood-boring isopods;

  • a clear representation of the overall surface condition, including degradation from 255 years on the bottom of the Solent.

Fig. 12

Screen-capture from RTIViewer software render of the first polynomial texture map (PTM) produced from UCH using highlight image capture free-form URTI methodology: a treenail from the HMS Invincible.



Cap del Vol, first century BC Roman shipwreck

Figure 13 shows an anomaly in the moulded side (face) of a portside floor timber just aft of amidships. Two triangular indentations on either 90 deg edge were of interest. To the naked eye and camera lens both, they resembled an exterior hull-plank lashing-point transverse to the frame member. This is a lashing method associated with ancient Iberian naval architecture of northern Spain.31 However, we would not expect to find this technique on the Cap del Vol which instead features a parallel (through-timber bore hole) lashing method.32 Therefore, to help inform us as to the nature of the anomaly, we were interested to see if we could use URTI to detect characteristics of impact, abrasion, or tool-marks.

Fig. 13

Screen-capture from RTIViewer software render of a URTI off the Cap del Vol first century BC Roman shipwreck.


Figure 14 is an illustration of the right-side anomaly depicted under varied degrees of specular transformation enhancement. Relighting revealed four distinct planes of “mirror-like” reflection. These planes are characteristic of marks made by a straight-bladed tool. As such, URTI revealed that this is unlikely to be a lashing point or the result of impact or abrasion, but instead an intentional modification of the timber at some point in its history.

Fig. 14

Illustration demonstrating relighting the URTI produces four distinct planes of “mirror-like” reflection under specular enhancement.



Turbidity Experiment: Dome-Generated URTIs


Dome URTI source photo comparison

Figure 15 shows a sample of source images generated by the dome under the varied turbidity levels. These images equate to the underwater photography results a diver could expect to achieve in field conditions commensurate with those recorded in the tank. As turbidity increased, image resolution and the distinction of diagnostic features predictably deteriorated. As previously discussed in Sec. 2.1, the potsherd can be approximated as a Lambertian surface that appears brightest when the incident light is closest to perpendicular (90 deg) to it. Under dome capture, this equates to images taken with the 45-deg tier of LED lights. Figure 15 verifies this Lambertian principle: the 45-deg tier images are clearly brighter and clearer than those of the 6-deg tier at all turbidity variants. By the last turbidity change, visibility had been reduced to less than c. 0.5 m. The digital images captured in these conditions (even at the 45-deg tier) no longer reveal surface detail of the potsherd with sufficient definition to be of archaeological interpretive value.
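The Lambertian principle invoked above can be illustrated numerically: reflected radiance scales with the cosine of the angle between the light direction and the surface normal, so a 45-deg elevation tier illuminates a flat surface far more strongly than a 6-deg tier (a minimal sketch under the standard Lambert cosine model; the tier elevations are the dome's):

```python
import math

def lambertian_brightness(elevation_deg):
    """Relative brightness of a flat Lambertian surface lit from a
    given elevation angle above the surface plane. The angle between
    the light and the surface normal is (90 - elevation) degrees."""
    return math.cos(math.radians(90.0 - elevation_deg))

print(round(lambertian_brightness(45), 3))  # c. 0.707
print(round(lambertian_brightness(6), 3))   # c. 0.105
```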

Fig. 15

Dome generated source image comparison under varied turbidity.



URTI default transformation comparisons

Figure 16 depicts the odd-numbered URTIs generated from the digital images discussed above and shown in Fig. 15. RTIViewer was used to render them in four commonly used reflectance transformations. Manual relighting would have dramatically enhanced the detail visible in the figure; however, for the purposes of comparison, the software’s standard defaults were used, including RTIViewer’s fixed lighting position. In default view, the URTIs appeared much like the digital photographs that produced them, showing the same predictable loss of contrast. However, specular enhancement clearly revealed diagnostic surface relief that was undetectable in the source photos of Fig. 15.

Fig. 16

Subject crops of odd numbered URTIs rendered in RTIViewer default settings of four common reflectance transformations.



Surface normal render visualizations

As previously discussed, the surface normal visualization used in RTI is an RGB representation of surface geometry. Figure 17 shows the normals render of our clear water control. The yellow/red/green hue in the object background does not reflect true geometry; it is a default calculation used by the PTM fitting algorithm in RTIBuilder software to represent inconsequential flat backgrounds. The raised surface of the potsherd displays an exceptionally clean and even polynomial texture map, as evidenced by the solid mix of RGB pixelation. This indicates that we achieved proper camera settings (exposure and focus), a good sample of light incidences (32 from the dome), and no motion during data capture. Figure 18 shows the surface normal visualization RGB renders split into the three grayscale channels that were used for SNR analysis.
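The RGB encoding of surface normals described above is conventionally a linear remapping of each unit-vector component from [-1, 1] into [0, 255] (a sketch of that common convention; the exact mapping used by PTMViewer is not specified in the text):

```python
def normal_to_rgb(nx, ny, nz):
    """Encode a unit surface normal as an RGB tone:
    red <- x, green <- y, blue <- z, each component remapped
    linearly from [-1, 1] to [0, 255]."""
    def encode(c):
        return int((c + 1.0) * 0.5 * 255.0 + 0.5)
    return encode(nx), encode(ny), encode(nz)

# A normal pointing straight at the camera is dominated by blue,
# which is why flat regions of a normals render look bluish.
print(normal_to_rgb(0.0, 0.0, 1.0))  # (128, 128, 255)
```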

Fig. 17

PTMViewer software surface normal visualization renders of URTI 1 clear water control.


Fig. 18

Surface normal visualization RGB channels converted to grayscale.



Signal-to-noise ratio regressions

It is understood that the SNR of an RGB color channel measures a radiometric change, while the SNR of an RGB normal visualization render measures a geometric change in normal vector calculation. Therefore, the SNR of source images and the SNR of URTI normal renders cannot be directly compared. However, it is the radiometric measurements taken from URTI source photo RGB channels that generate the values used by the PTM algorithm to calculate surface normal geometry in the first place. Therefore, there is a direct correlation between the degradation of one and the resulting degradation of the other. To gain insight into this correlation, we first examined induced SNR in the source photos. During the experiment, the dome produced 32 batches of source photos (eight legs, four LED lighting tiers per leg). However, time prohibited examining SNR in all 32 batches. We therefore selected two sample sets that we believe bracket the SNR limits within which all source photo batches should have fallen.

Figures 19 and 20 are scatterplots regressing the SNR generated in the RGB color channels of photos from dome LEG1, LED4 (45-deg light incidence position on the leg), and LEG4, LED1 (6-deg light incidence position on the leg, directly opposite/across from LEG1). A sample set from the LEG1, 45-deg dome light incidence position was selected because it sourced from the lighting tier expected to generate the least noise due to the high lighting angle of incidence. Conversely, the photos in the sample set from the 6-deg light position are sourced from the tier expected to generate the greatest noise due to the low lighting angle of incidence.

Fig. 19

SNR in 45-deg tier URTI source photos under varied turbidity.


Fig. 20

SNR in 6-deg tier URTI source photos under varied turbidity.


We assumed that SNR induced in photo batches from the 17-deg and 30-deg light incidence tiers is bracketed by the results shown in Figs. 19 and 20. Figure 19 shows that less than halfway into the turbidity experiment (URTI image set 7 of 15), the RGB color channels in the 45-deg tier set (the set expected to produce the “best” underwater photographs given the conditions) already exhibit high noise (SNR < 10). The photos then rapidly degrade, with SNR falling further into single digits as turbidity progresses. However, Fig. 20 shows that SNR in the 6-deg tier photos reaches single digits in all three channels within just the first third of our progressive turbidity transitions.

Next, we examined the SNR in URTI normal geometry. Figure 21 is a scatterplot regressing SNR induced in the x-y-z vector calculations of URTI surface normal visualization renders as a result of the progressively turbid water conditions in the tank (x-y-z vectors visually rendered as RGB color). At no point in the experiment does the SNR in any of the three vectors reach single digits, even at the 14th and final turbidity change (URTI 15), where the water condition has been previously described as “less than half a meter of visibility.” The regression also reveals a “very high”33 coefficient of determination (R2) for a linear increase in noise across all three vectors, which equates to a high degree of correlation between SNR in underwater-generated PTMs and turbidity increase. Finally, the scatterplot indicates the geometric z-axis vector (blue channel, i.e., “depth” of point) was the most resistant to turbidity-induced error.
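A least-squares fit and coefficient of determination of the kind reported here can be sketched in a few lines of pure Python (the data values below are placeholders, not the experiment's):

```python
def linear_fit_r2(xs, ys):
    """Ordinary least-squares line fit, returning (slope, intercept, R2)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return slope, intercept, 1.0 - ss_res / ss_tot

# Placeholder data: noise rising roughly linearly with turbidity step.
turbidity_step = [1, 2, 3, 4, 5]
noise = [2.1, 3.9, 6.2, 8.0, 9.9]
slope, intercept, r2 = linear_fit_r2(turbidity_step, noise)
print(round(r2, 3))  # close to 1.0: a "very high" linear correlation
```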

Fig. 21

SNR in the three geometric axes of URTI surface normal renders under varied turbidity.


Figure 22 shows the results of this experiment arguably of greatest interest to the maritime archaeologist: image-based data collected in a laboratory setting simulating a small submerged object, in situ and in <0.5 m visibility water conditions. The top of Fig. 22 shows URTI 15’s eight source photos, captured from all eight legs of the dome’s 45-deg light incidence tier, contrasted against URTI 15 itself. The photos represent a diving maritime archaeologist’s attempt to bring the “ideal picture” to the surface by capturing eight photographs of the object from different light incidences using an underwater DSLR camera with flash in extremely difficult (turbid) water conditions. Although surface relief in the form of a vague silhouette is detectable to the naked eye in the photos, we argue the diagnostic detail rendered in these images is limited because of the poor photography conditions in which they were taken. In contrast, the URTI of the potsherd under reflectance transformations and relighting provides significantly greater visual detail for interpretation.

Fig. 22

Comparison of surface relief detail in photos versus URTI renders in c. 0.5-m water visibility.





Free-Form Highlight Image Capture

URTI worked very well on archaeological wood in a range of conditions. In 7-m water depth in the Solent, diver mobility was restricted by the necessity of a dry suit due to cold water temperatures. Diver visibility was relatively low (c. 1.5 m). Tidal-induced cross-currents transported sand and waterborne debris across the camera field of view. Despite all this, an image-batch necessary for the successful URTI of the treenail was achieved in a single dive. Although it was selected as a target solely to illustrate URTI viability, the treenail results demonstrate that URTI is entirely feasible in the challenging dive conditions typically associated with UK coastal waters.

At 25-m water depth in the western Mediterranean, water conditions were “pristine.” However, URTI capture was challenged by the time constraint of the diver’s shorter permissible bottom-time as a result of greater depth. Again, the image-batch was successfully captured in a single dive. In this case, URTI was able to detect the faintest remnants of in situ 2000+ year-old carpenter tool marks in UCH no bigger than 2.5 cm in diameter. This level of detail was informative to archaeologists of the Centro d’Arqueologia Subaquàtica de Catalunya, and constitutes the first use of URTI in an underwater archaeological investigation. These field trials prove the utility of URTI for both documentation and analysis.


URTI in Turbidity

SNR analysis of both source images and URTI normal renders captured in progressive turbidity demonstrated that underwater images degrade faster than the URTI object geometry calculated from them. By the sixth turbidity change (less than halfway through the experiment), the noise present in all three RGB channels of fourth-tier images exceeded that of the URTI x-y-z normal vectors. As a general rule in image analysis, “most simple objects are barely visible with an SNR of 8 to 10.”34 By the ninth turbidity change (Fig. 15, second row, first image), the SNR of the fourth-tier dome images was below 10 in all three RGB channels. These further degraded to SNR values of c. 8, 7, and 2 by the 14th turbidity change. In contrast, the y (green) and z (blue) vectors of the normal calculations never dropped below 12 throughout the experiment. Furthermore, the z (blue) component appears particularly robust in turbid water and likely contributes to URTI 15’s ability to render a high level of surface relief detail despite the cloudy conditions of the water (Fig. 22).


URTI Best Practices and Future Considerations

URTI has some limitations, and there is room for future improvement in capture methodology and accuracy. Diver-deployed URTI requires the diver to free-hand a torch to provide multiple angles of light incidence in a batch capture of pixel-registered images that feature a specular sphere. Therefore, the camera must be able to fire consecutively without the need to continually depress the shutter button. We present field results achieved with an HID dive torch; however, better results may be achieved in the future using LED video flood-style torches, because these provide more even lighting and do not cast a “hot spot” in the center of their beam. Because the camera must be set up in a fixed mount such as a tripod, and because light radius management is limited by the reach of a diver’s arm, URTI in its current configuration is best suited to the recording and analysis of objects <1 m in size.

There are a couple of noteworthy adaptations that will undoubtedly improve the future accuracy of URTI renders. Our dome featured 32 fixed LEDs, and our field trials featured batch captures of 35 to 48 images; however, better results in RTI have been noted when 60 to 70 varied lighting positions are examined.35 In addition, algorithms superior to PTM, such as the one proposed in Drew et al.,36 have yet to be widely adopted in the use of RTI for cultural heritage studies. The accuracy of normal calculations for URTI could improve by adopting new algorithms evaluated with the calibrated-target procedure outlined by Giachetti et al.37
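For reference, the PTM model underlying these renders (Malzbender et al.11) represents each pixel's luminance as a biquadratic in the projected light direction (lu, lv); a minimal evaluation sketch follows (the six coefficients here are placeholders, in practice fitted per pixel by least squares over the image batch):

```python
def ptm_luminance(coeffs, lu, lv):
    """Evaluate the polynomial texture map biquadratic for one pixel:
    L(lu, lv) = a0*lu^2 + a1*lv^2 + a2*lu*lv + a3*lu + a4*lv + a5,
    where (lu, lv) are the projected light-direction components."""
    a0, a1, a2, a3, a4, a5 = coeffs
    return (a0 * lu * lu + a1 * lv * lv + a2 * lu * lv
            + a3 * lu + a4 * lv + a5)

# Placeholder coefficients: a pixel whose brightness rises with lu.
coeffs = (0.0, 0.0, 0.0, 0.4, 0.0, 0.5)
print(round(ptm_luminance(coeffs, 0.7, 0.0), 2))  # 0.4*0.7 + 0.5 = 0.78
```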

Regarding turbid water, our experiment demonstrated under controlled conditions that URTI remains effective down to c. 0.5 m visibility. Half a meter of visibility translates to a diver just barely being able to see his/her own outstretched hand. In practical terms, this implies that URTI can still be performed in turbid environments that would likely render ordinary underwater photographs of little value. We are confident that, through the use of a custom PCB and a smaller mirrorless camera, a system can eventually be made with underwater-friendly controls.



URTI has conformed to our experience of RTI in terrestrial settings. The challenges and obstacles associated with the marine environment did not impede our ability to achieve entirely usable URTI using the polynomial texture mapping algorithm. In clear water, our results were commensurate with terrestrial capability. However, URTI demonstrated itself to be exceptionally robust at rendering usable data gathered in turbid conditions: a URTI will render an object in turbid water with greater accuracy than a digital image. This is significant because underwater archaeological sites are often characterized by turbid conditions and limited time on site, both of which challenge optical image data capture.

Underwater archaeology needs new technologies for object-level digital recording of in situ UCH. The free-form highlight image capture diver-deployed methodology makes URTI immediately available, user-friendly, and affordable. There are no software costs: RTIBuilder, RTIViewer, and Hewlett-Packard’s PTMViewer and PTMFitter are all freely available. Nor is the perceived value of URTI limited to cultural heritage recording; the capability allows for diverse use across a wide spectrum of subaquatic disciplines. URTI is not the solution to every problem but, as a rapid, affordable, and easily deployable in situ recording method, it allows for the capture of subaquatic in situ detail that is not visible to the naked eye by any other means. It is poised to be a valuable asset in our archaeological recording tool-kit.


The authors wish to express thanks to the following institutions and individuals for their collective support in the development, execution, and interpretation of the results of this research: the Center for Maritime Archaeology and the Archaeological Computing Research Group of the University of Southampton; the Web & Internet Science Group of Electronics & Computer Science of the University of Southampton; the Sediment Analysis Laboratory of the Ocean & Earth Sciences Department of the University of Southampton; the Department of Classics, Queen’s University, Kingston, Ontario, Canada; Professor Mark Jones and Mr. Denis Cook of the Mary Rose Museum in Portsmouth, UK; Dr. Gustau Vivar and Rut Geli of the Centro d’Arqueologia Subaquàtica de Catalunya, Spain; Historic England for access to the HMS Invincible; Dan Pascoe and Mark James for diving logistics and support; and finally our associates at Cultural Heritage Imaging.


1. J. Holden et al., “Hydrological controls of in situ preservation of waterlogged archaeological deposits,” Earth Sci. Rev. 78, 59–83 (2006). http://dx.doi.org/10.1016/j.earscirev.2006.03.006

2. P. Palma, “A scientific strategy for in situ stabilization of wrecks: a pilot study on the Swash Channel Wreck,” in World Archaeological Congress, University College, Dublin (2008).

3. P. Palma, “Environmental study for the in situ protection and preservation of shipwrecks: the case of the Swash Channel wreck,” in Ars Nautica, Dubrovnik, Croatia (2009).

4. K. Camidge, “HMS Colossus, an experimental site stabilization,” Conserv. Manage. Archaeol. Sites 11, 161–188 (2009). http://dx.doi.org/10.1179/175355210X12670102063742

5. T. J. Maarleveld, U. Guerin and B. Egger, “Manual for activities directed at underwater cultural heritage,” in Guidelines to the Annex of the UNESCO 2001 Convention, p. 20, UNESCO, Paris (2013).

6. R. Pletts, J. Dix and R. Bates, Marine Geophysics Data Acquisition, Processing and Interpretation, p. 21, Historic England, London (2013).

7. National Oceanic and Atmospheric Administration (NOAA), “Thunder Bay National Marine Sanctuary: 2014 resource protection highlights,” 2014, http://thunderbay.noaa.gov/pdfs/science_highlights_tbnms%202014.pdf (1 January 2017).

8. S. Demesticha, “The 4th-century-BC Mazotos Shipwreck, Cyprus: a preliminary report,” Int. J. Naut. Archaeol. 40, 39–59 (2011). http://dx.doi.org/10.1111/ijna.2011.40.issue-1

9. J. Henderson et al., “Mapping submerged archaeological sites using stereo-vision photogrammetry,” Int. J. Naut. Archaeol. 42, 243–256 (2013). http://dx.doi.org/10.1111/ijna.2013.42.issue-2

10. J. McCarthy and J. Benjamin, “Multi-image photogrammetry for underwater archaeological site recording: an accessible, diver-based approach,” J. Marit. Archaeol. 9(1), 95–114 (2014). http://dx.doi.org/10.1007/s11457-014-9127-7

11. T. Malzbender, D. Gelb and H. Wolters, “Polynomial texture maps,” in Proc. of the 28th Annual Conf. on Computer Graphics and Interactive Techniques, pp. 519–528, ACM (2001).

12. Ø. Hammer et al., “Imaging fossils using reflectance transformation and interactive manipulation of virtual light sources,” Palaeontologia Electron. 5, 9 (2002).

13. G. Earl, K. Martinez and T. Malzbender, “Archaeological applications of polynomial texture mapping: analysis, conservation and representation,” J. Archaeol. Sci. 37, 2040–2050 (2010). http://dx.doi.org/10.1016/j.jas.2010.03.009

14. T. Freeth et al., “Decoding the ancient Greek astronomical calculator known as the Antikythera mechanism,” Nature 444, 587–591 (2006). http://dx.doi.org/10.1038/nature05357

15. A. Gabov and G. Bevan, “Recording the weathering of outdoor stone monuments using reflectance transformation imaging (RTI),” in The Case of the Guild of All Arts, Scarborough, Ontario (2011).

16. P. Klausmeyer, “Applications of reflectance transformation imaging (RTI) in a fine arts museum: examination, documentation, and beyond,” lecture presented at the 3D Digital Documentation Summit, July 10–12, 2012, the Presidio, San Francisco, California, National Center for Preservation Technology and Training, Northwestern State University of Louisiana, Natchitoches, Louisiana (2013).

17. M. Mudge et al., “Reflectance transformation imaging and virtual representations of coins from the hospice of the grand St. Bernard,” in Proc. of the 6th Int. Conf. on Virtual Reality, Archaeology and Intelligent Cultural Heritage, pp. 29–39, Eurographics Association (2005).

18. M. Mudge et al., “New reflection transformation imaging methods for rock art and multiple-viewpoint display,” in The 7th Int. Symp. on Virtual Reality, Archaeology and Cultural Heritage VAST, pp. 195–202, Citeseer (2006).

19. T. Treibitz and Y. Y. Schechner, “Active polarization descattering,” IEEE Trans. Pattern Anal. Mach. Intell. 31, 385–399 (2009). http://dx.doi.org/10.1109/TPAMI.2008.85

20. N. Gracias et al., “Mapping the moon: using a lightweight AUV to survey the site of the 17th century ship ‘La Lune’,” in Proc. of the MTS/IEEE OCEANS Conf., Bergen, Norway (2013).

21. Y. Y. Schechner, S. G. Narasimhan and S. K. Nayar, “Instant dehazing of images using polarization,” in Proc. of the 2001 IEEE Computer Society Conf. on Computer Vision and Pattern Recognition (CVPR ’01), Vol. 1, pp. 325–332, IEEE (2001). http://dx.doi.org/10.1109/CVPR.2001.990493

22. J. H. Lambert and E. Anding, Lamberts Photometrie: Photometria, sive De mensura et gradibus luminus, colorum et umbrae, W. Engelmann, Leipzig, Germany (1892).

23. M. Manfredi et al., “Measuring changes in cultural heritage objects with reflectance transform imaging,” in Digital Heritage Int. Congress, pp. 189–192, The Eurographics Association (2013).

24. Cultural Heritage Imaging, “Reflectance transformation imaging: guide to highlight image capture v2.0,” San Francisco, California, 2013, http://culturalheritageimaging.org/What_We_Offer/Downloads/Capture/index.html (4 July 2013).

25. S. M. Duffy et al., “Multi-light imaging for heritage applications,” English Heritage Publishing, Swindon, United Kingdom (2013).

26. V. Masselus, P. Dutré and F. Anrys, “The free-form light stage,” in Proc. of the 13th Eurographics Workshop on Rendering, pp. 247–256, Eurographics Association (2002).

27. F. Foerster, “A Roman wreck off Cap del Vol, Gerona, Spain,” Int. J. Naut. Archaeol. 9, 244–253 (1980). http://dx.doi.org/10.1111/ijna.1980.9.issue-3

28. R. Soulsby, Dynamics of Marine Sands: a Manual for Practical Applications, Thomas Telford Publications, London (1997).

29. R. J. Davies-Colley and D. G. Smith, “Turbidity, suspended sediment, and water clarity: a review,” J. Am. Water Resour. Assoc. 37(5), 1085–1101 (2001). http://dx.doi.org/10.1111/jawr.2001.37.issue-5

30. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed., Prentice Hall, Upper Saddle River, New Jersey (2008).

31. P. Pomey, Y. Kahanov and E. Reith, “Transition from shell to skeleton in Ancient Mediterranean ship-construction: analysis, problems, and future research,” Int. J. Naut. Archaeol. 41(2), 235–314 (2012). http://dx.doi.org/10.1111/ijna.2012.41.issue-2

32. G. Vivar, C. de Juan Fuertes and R. Geli, “Cap del Vol, un producto, un barco y un comercio del Conventus Tarraconensis en época de Augusto,” in Actas I Congreso de Arqueologia Naútica y Subacuática Espanola, ARQUA, Cartagena (2014).

33. A. Asuero, A. Sayago and A. Gonzalez, “The correlation coefficient: an overview,” Crit. Rev. Anal. Chem. 36, 41–59 (2006). http://dx.doi.org/10.1080/10408340500526766

34. NIST, “National Institute of Standards and Technology,” US Department of Commerce, 2013, http://www.nist.gov/lispix/imlab/histo/hist2.htm (6 September 2013).

35. M. Dellapiane et al., “High quality PTM acquisition: reflection transformation imaging for large objects,” in The 7th Int. Symp. on Virtual Reality, Archaeology and Cultural Heritage VAST, M. Ioannides et al., Eds., pp. 179–186 (2006). https://diglib.eg.org/handle/10.2312/459

36. M. S. Drew et al., “Robust estimation of surface properties and interpolation of shadow/specularity components,” Image Vision Comput. 30(4), 317–331 (2012). http://dx.doi.org/10.1016/j.imavis.2012.02.012

37. A. Giachetti et al., “Light calibration and quality assessment methods for reflectance transformation imaging applied to artworks’ analysis,” Proc. SPIE 9527, 95270B (2015). http://dx.doi.org/10.1117/12.2184761


David Selmo is the president of S&A Underwater Imaging Specialists LLC, a company specializing in acoustic and 3-D point cloud underwater imaging for the US military, government, industry, and science. He received his MSc in maritime archaeology from the University of Southampton in 2013. Underwater reflectance transformation imaging is the result of his dissertation research. He is currently involved in ongoing underwater archaeological projects in Canada, Spain, Cyprus, and the Middle East.

Fraser Sturt is an associate professor of archaeology at the University of Southampton. He is a specialist in maritime prehistory and geoarchaeology with a particular emphasis on acquisition and integration of diverse datasets through use of advanced computational systems from data capture in the field through to modeling in the laboratory.

James Miles is an archaeological PhD student at the University of Southampton. He specializes in computational processes within archaeology, with an emphasis on three-dimensional and surface-based recording, including laser scanning, computed tomography, and RTI. His PhD focuses on the use of engineering within archaeological simulation, where a number of recording processes have been used. He is also the director of Archaeovision, a UK and Estonian based company specializing in digital processes in cultural heritage.

Philip Basford was awarded his PhD in computer science in 2015 from the University of Southampton. This follows his MEng degree in computer science from the same institute in 2008. He is currently a member of the Institution of Engineering and Technology. His research interests include environmental sensor networks, the internet of things, and facilitating commercial use of reflectance transformation imaging.

Tom Malzbender is a researcher working in computer vision, imaging, and 3-D graphics. During his 31 years at Hewlett-Packard Laboratories, he developed the techniques of reflectance transformation imaging, polynomial texture mapping, and Fourier volume rendering, the first two being extensively used in archaeology. He developed the sensing technology leading to HP’s entry into the graphics tablet market and has helped organize over a dozen conferences in the fields of graphics, vision, and scientific visualization.

Kirk Martinez is a professor of electronics and computer science at the University of Southampton. His imaging and image processing research includes the VASARI and MARC projects on high-resolution colorimetric imaging. His work with the Viseum project resulted in a new system to allow web browsers to view high-resolution images (which became IIPimage). These projects led to the VIPS image-processing library. He has developed RTI imaging systems and is active in wireless environmental sensor networks.

Charlie Thompson is a senior research fellow in sediment dynamics at the University of Southampton. Her research focuses on benthic boundary layer processes, including fluid and solid-transmitted stresses during sediment transport and resuspension, and their effects on submerged archaeological artifacts. This takes the form of both laboratory and fieldwork, where she specializes in the use of laboratory and in situ annular flumes.

Graeme Earl is a professor of digital humanities at the University of Southampton. He has been deploying and researching RTI and associated methods since 2005 and has led a series of research projects focused on the application and extension of the RTI approach in a broad range of cultural heritage contexts. He has also worked on RTI visualization and annotation tools, and on the archiving of large-scale RTI datasets.

George Bevan is an associate professor in the Department of Classics at Queen’s University, Canada. After receiving his PhD in classics in 2005 from the University of Toronto, he has worked to apply different methods in computational photography, principally RTI, and photogrammetry, to problems in archaeology and epigraphy. His current research is focused on archaeological documentation at a variety of scales in the Balkan region.

© The Authors. Published by SPIE under a Creative Commons Attribution 3.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
David Selmo, Fraser Sturt, James Miles, Philip Basford, Tom Malzbender, Kirk Martinez, Charlie Thompson, Graeme Earl, George Bevan, "Underwater reflectance transformation imaging: a technology for in situ underwater cultural heritage object-level recording," Journal of Electronic Imaging 26(1), 011029 (28 February 2017). https://doi.org/10.1117/1.JEI.26.1.011029 Submission: Received 30 June 2016; Accepted 2 February 2017
