Photoacoustic Imaging (PAI) is a promising hybrid modality that derives image contrast from optical absorption while achieving tissue penetration and spatial resolution at depth superior to those of purely optical imaging methods.1 PAI has been employed in vivo to visualize blood-vessel structures,2, 3, 4 detect breast tumors,5, 6, 7 estimate oxygenation levels,8, 9, 10 perform functional imaging,3, 11 and track molecular probes.12
The technique is based on volume irradiation with a short laser pulse followed by preferential energy deposition at the optically absorbing structures within the volume. These structures heat up slightly, but so rapidly that the condition of thermal confinement is met and thermoelastic expansion leads to a transient bipolar pressure wave that propagates outward toward the surface.13 The frequency composition of the pressure transient (i.e., the “photoacoustic wave”) is typically in the ultrasonic range,14 and is detectable by wide-band ultrasound detector(s) placed in acoustic contact with the surface of the volume. Based on the time-domain surface measurements, the distribution of the photoacoustic (PA) sources (structures) inside the volume can be reconstructed using a back-projection algorithm, several of which have been suggested and adapted from medical imaging.15, 16, 17, 18, 19
The photoacoustic process is intrinsically three-dimensional. This is a consequence of optical scattering in tissue, which causes diffuse illumination of a volume rather than of a single point, line, or plane. Furthermore, the PA waves are spherical in nature and propagate in all directions from their point of generation. Several approaches have been suggested for 3-D PAI and are differentiated by detection scheme. These can be divided into scanning methods, where a single detector is scanned along the detection surface in two dimensions,2, 3, 4, 20, 21 and staring methods, where a 2-D array of detectors is used and no scanning is necessary.6, 22, 23 A combined scanning–staring approach has been suggested as well, where a linear array or combination of single detectors is mechanically scanned.7, 24, 25, 26, 27
Motivation and Approach
We were motivated to develop a method for 3-D PAI that can be used for small animal research with eventual translation to clinical applications. We based the development of the imaging system on the following design goals: (1) real-time image acquisition capability, (2) easy placement and accessibility to the scanned object, (3) simple design (i.e., low detector count), and (4) potential for transportability to permit integration with other imaging modalities for multimodality hybrid imaging and validation [e.g., magnetic resonance imaging (MRI) and x-ray computed tomography (CT)]. Review of the literature suggested that the methods developed to date fall short in one or more of these design features. Mechanically scanned approaches require long times for data acquisition, and therefore cannot be used in real time. Depending on the scanning geometry, some also lack accessibility to the subject under investigation and are too cumbersome to be considered candidates for transportability. They are, however, usually inexpensive and simple to set up, since they use only a single transducer and detection channel. Staring methods do not require mechanical scanning and hence, in principle, can acquire all the required data during a single laser pulse, which has the potential to create 3-D photoacoustic images in real time. Staring methods also have open access to the imaging region, and some have been built as transportable systems.6, 7 However, all published implementations of the staring method have incorporated a dense array of detector elements, which is difficult to manufacture. Furthermore, real-time 3-D imaging with dense detector arrays is prohibitively expensive due to the high channel count required for simultaneous data acquisition from all detector elements. A combined scanning–staring approach can reduce system cost compared to a pure staring approach; however, imaging speed is still limited by the need to scan the detector array mechanically.
Based on the design goals and the implementations described in the literature, we chose to build a system based on a staring 2-D detector array due to its inherent speed and sample accessibility. In order to reduce channel count, a sparse detector array geometry was investigated, where instead of packing detection elements as close together as possible, the elements were spaced to cover a large area but with a low packing ratio. The spread of the detection elements over a large area was necessary to provide a wide range of viewing angles of the volume to be imaged, which is known to improve the boundary reconstruction of PA objects in the field of view.28 The geometry of the sparse array was annular to accommodate backward-mode illumination of the object through the center of the annulus, which was amenable to easy object placement and accessibility.
Due to the small number of detectors, and the limited viewing angles they cover, a simple radial back-projection of the PA signals into volume space would have resulted in streaking image artifacts near the reconstructed locations of the PA sources. For this reason, we chose to adapt an iterative image reconstruction algorithm, originally developed by Paltauf,17 to handle data collected with the sparse detector array. In addition, we developed a characterization method29 to measure the 3-D sensitivity distribution of the annular detector array, which was used to constrain the iterative image reconstruction algorithm.
The 3-D Photoacoustic Imaging System
The 3-D photoacoustic imaging system [Fig. 1a ] used backward-mode illumination, where the laser beam illuminated the imaging volume from the same side as the ultrasound detectors. The ultrasound detectors were sparsely arranged around the laser entrance window and pointed toward the volume of interest. The electronic signal generated by the detectors was captured in parallel by a dedicated data acquisition (DAQ) system and was sent to the computer, where it was processed by an iterative image reconstruction algorithm based on back-projection. The imaging system was built from the hardware components described here.
The laser was a Q-switched Nd:YAG coupled to an OPO (VIBRANT, Opotek, Carlsbad, California). The laser had a pulse duration at a repetition rate of , with energy per pulse , and was wavelength tunable between .
The platform on which the sample was placed for imaging had a flat, even surface that was fully accessible from the top. The optical window for laser illumination, and the sparse array of detectors set around the window, were integrated into the platform. An electronic card for detector signal amplification was housed in a metallic box immediately below the sample platform and was modified to allow the laser beam to pass through.
Robotic 3-D gantry
A robotic gantry constructed from three linear slides (T-LLS260, Zaber Technologies, Vancouver, Canada) in an configuration was used to scan a photoacoustic point source through a multitude of locations within the imaging volume. This characterization setup, shown in Fig. 1b and described elsewhere,29 provided the 3-D sensitivity distribution maps of the detector array, which were used to constrain the image reconstruction algorithm.
Computer and data acquisition system
The imaging system was controlled by a standard Windows-based PC ( CPU and RAM). We developed a dedicated DAQ system capable of acquiring multiple signals in parallel with high temporal resolution and wide dynamic range. The system comprised a chassis holding multiple boards, each board having eight analog-to-digital converter channels. Each channel had a sampling rate of , a resolution of ( resolution with input range of ), and its own memory where the digital signal was stored after conversion. The digital signals were transferred from the DAQ memory to the computer via a USB communication bus. Data transfer was facilitated by custom control software (LabVIEW, National Instruments, Austin, Texas) that also served to control and synchronize the laser, the robotic gantry, and the DAQ system.
Sparse detector array
The sparse array concept was tested with an annular geometry of detectors and backward-mode laser illumination through the axial center of the array.29 Characterization of the sensitivity distribution revealed that the detectors produced much less signal in response to sources above the center of the annulus than to sources directly above each element. To improve the sensitivity within the region of the laser beam path, we added an annular delay line to the top of the annular array with a wedge located directly over each detector [see Fig. 1c]. The wedge angle was designed to redirect the line of sight of each detector through the center of the array at a height of above the plane of the detectors. The delay line was CNC machined from clear acrylic with a wedge angle of . The detectors themselves were manufactured in-house from -thick metallized PVdF film (P/N 3-1003702-4, Measurement Specialties, Inc., Hampton, Virginia).
Iterative Image Reconstruction Algorithm
We used an iterative image reconstruction algorithm first proposed by Paltauf.17 The main assumption of the model was that the PA pressure profile measured at each detector could be described as a linear superposition of individual PA pressure profiles generated by spherical sources located on a 3-D grid of voxels. In the forward model, each spherical source generates a pressure profile given by

p(d, t) = P0 (d − c t) / (2 d)  for |d − c t| ≤ R_s (and zero otherwise),  (1)

where P0 is the magnitude of the pressure wave propagating radially from the center of the source, c is the speed of sound, d is the distance between the center of the source and the detection point, and R_s is the radius of the spherical source.14 The pressure profile has a bipolar shape with a triangular positive compression phase followed by an inverted negative triangular tension phase.
The velocity potential (VP) is obtained by integrating the pressure profile with respect to time and has the form

φ(d, t) = [P0 / (4 ρ c d)] [R_s² − (d − c t)²]  for |d − c t| ≤ R_s,  (2)

where ρ is the density of the medium. The VP is a positive inverted parabola, peaking at t = d/c and dropping to zero when |d − c t| = R_s.
For a given detector i located at position r_i, the VP calculated for a PA source j located at r_j can be written as

VP_ij(t) = q_j s_ij φ(d_ij, t),  (3)

where q_j is the intensity of source j, d_ij = |r_i − r_j| is the source–detector separation, s_ij is the sensitivity matrix element indicating the relative sensitivity of detector i to a source at location r_j, and φ(d_ij, t) is the parabola of Eq. 2, centered in time at t = d_ij/c. When multiple sources are present, the total VP measured at detector i is obtained by summing Eq. 3 over all the sources (index j). Note that, in general, φ is inserted into VP_i at different time points for each source.
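To make the forward model concrete, the following Python sketch evaluates the parabolic velocity potential described above and sums the weighted contributions of several sources at one detector. The function names and the default values of the physical constants (speed of sound, source radius, pressure magnitude, density) are illustrative placeholders, not the system's actual parameters:

```python
import numpy as np

def velocity_potential(d, t, c=1500.0, Rs=0.5e-3, p0=1.0, rho=1000.0):
    """Velocity potential of a spherical PA source observed at distance d:
    a positive inverted parabola, nonzero only for |d - c*t| <= Rs."""
    arg = d - c * t
    phi = (p0 / (4.0 * rho * c * d)) * (Rs**2 - arg**2)
    return np.where(np.abs(arg) <= Rs, phi, 0.0)

def forward_project(sources, det_pos, t, sens, c=1500.0, Rs=0.5e-3):
    """Total VP at one detector: a linear superposition of source parabolas,
    each weighted by source intensity q_j and detector sensitivity s_ij."""
    vp = np.zeros_like(t)
    for (pos, q), s in zip(sources, sens):
        d = np.linalg.norm(det_pos - pos)   # source-detector separation
        vp += q * s * velocity_potential(d, t, c=c, Rs=Rs)
    return vp
```

For a single source 30 mm from the detector (with c = 1500 m/s), the resulting trace peaks at t = d/c = 20 μs, as expected from the model.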
The back-projection model is based on the assumption that each time point t of VP_i is a result of a linear superposition of sources that lie on a shell around detector i whose distance from the detector is in the range c t − R_s ≤ d ≤ c t + R_s.
The amount of signal back-projected into each voxel j at time point t is given by

Δq_j = [VP_i^meas(t) − VP_i^calc(t)] / [n_i(t) s_ij φ(d_ij, t)],  (4)

where n_i(t) is the number of voxels that are crossed by the shell defined by t. Note that in practice, 1/φ is set to zero when φ drops below a threshold to maintain stability in Δq_j for voxels that lie on the edge of the shell.
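The shell back-projection step can be sketched in the same spirit (illustrative Python; the normalized-parabola weighting and the threshold value are assumptions consistent with the description above, not the published implementation):

```python
import numpy as np

def back_project(residual, t, voxels, det_pos, sens, c=1500.0, Rs=0.5e-3,
                 w_thresh=0.1):
    """Distribute one detector's VP residual onto voxels lying on the shell
    c*t - Rs <= d <= c*t + Rs for each time sample.

    residual : measured-minus-calculated VP for this detector, sampled at t
    voxels   : (n_voxels, 3) voxel center coordinates
    sens     : per-voxel sensitivity of this detector (the s_ij column)
    """
    d = np.linalg.norm(voxels - det_pos, axis=1)   # voxel-detector distances
    dq = np.zeros(len(voxels))
    for k, tk in enumerate(t):
        phi = 1.0 - ((d - c * tk) / Rs) ** 2       # normalized parabola, peak 1
        on_shell = phi > w_thresh                  # 1/phi zeroed at shell edges
        n = on_shell.sum()                         # voxels crossed by this shell
        if n:
            dq[on_shell] += residual[k] * sens[on_shell] / (n * phi[on_shell])
    return dq
```

Thresholding the parabola before dividing keeps the correction finite for voxels that sit right at the edge of a shell, where the weight approaches zero.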
Experimentally, the measured PA signals were rectified and filtered with a time-domain moving average to compute the measured VPs. A first estimate of the master image was formed by setting all voxel intensities to zero. The master image was forward-projected to obtain the calculated VPs. The difference between the measured and calculated VPs was computed and back-projected into volume space. The resultant difference image was added to the master image, and the outcome was forward-projected to obtain the new calculated VPs. The iterative process was repeated until either of two stop criteria was met: (1) a set number of iterations was reached, or (2) the relative change in the difference between successive iterations dropped below a given threshold.
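The loop just described can be sketched independently of the specific projection operators (Python sketch; `forward` and `backward` are stand-ins for the forward- and back-projection steps above, and the stop-criterion details are assumptions):

```python
import numpy as np

def reconstruct(vp_meas, forward, backward, n_iter=500, tol=1e-4):
    """Iterate: forward-project the master image, back-project the residual,
    and add the correction, until the iteration budget is spent or the
    residual norm stops changing (relative change below tol)."""
    image = backward(np.zeros_like(vp_meas))   # zero residual -> zero image
    prev = None
    for _ in range(n_iter):
        resid = vp_meas - forward(image)
        err = np.linalg.norm(resid)
        if prev is not None and abs(prev - err) <= tol * max(prev, 1e-30):
            break                              # stop criterion (2)
        image = image + backward(resid)        # add the difference image
        prev = err
    return image
```

With linear projection operators where the back-projector approximates a scaled adjoint of the forward projector, this is a Landweber-style iteration that moves toward a least-squares solution.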
The sensitivity map of the detector array (see the following) was interpolated and used as the weighting function for the forward- and back-projections. This map attenuated estimates from voxels where detector sensitivity was low and enhanced estimates from voxels where detector sensitivity was high.
All images were reconstructed with the following algorithmic parameters: image volume of , voxel size of , Cartesian coordinate system as defined in Fig. 1c, and 500 iterations. The relative voxel intensities were displayed on a linear grey scale.
Sensitivity Mapping of the Detector Array
Kruger et al. have reported the use of a scanned PA point source for calibrating the spatial response of a linear detector array along two orthogonal axes.30 However, to account for the response profile of each detector element and the overall sensitivity profile of the array in the entire imaging volume, we raster-scanned a PA point source in three dimensions [Fig. 1b] to generate volume maps of detector sensitivity and coverage.29 The point source was constructed from a -core optical fiber whose tip was polished and coated with black paint. At each point of the scan, the source was motionless and the laser was fired through the fiber, which resulted in a pressure transient generated at the coated tip due to the photoacoustic effect. Each element of the detector array received the outward-propagating pressure wave with a delay time and viewing angle that depended on the source–element separation geometry. Each detector element produced an analog voltage proportional to the pressure profile, which was amplified, digitized, and stored. For each laser pulse, part of the beam was directed toward a photodiode and recorded synchronously with the PA signals to monitor fluctuations in laser power.
The scan was performed in water for acoustic coupling between the source and the detector array. The volume under investigation was a cube, extending from to . The volume was chosen to cover the area of the optical window in the plane and to cover the focal zone of the annular array in the direction [see Fig. 1c]. The acquisition points were spaced apart in the , , and directions for a total of 216 points (i.e., ) that spanned the volume. The data sets acquired from the array elements at each scan point were fed into the analysis software, where the location and strength of each PA peak were detected, and the effects of laser power variation and acoustic spherical divergence were taken into account. In a separate experiment, the angular emission profile of the point source was measured by rotating the source at a fixed point above one array element and recording the relative change in PA signal. The resulting curve was interpolated and incorporated into the calculations of the sensitivity maps. The software generated a sensitivity map for the detector array that depicted the relative strength of the PA signal detected by each element at each scan point. The sensitivity map was used as a weighting function by the image reconstruction algorithm.
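The per-point corrections described in this section might be combined as in the following sketch (all argument names are hypothetical, not the analysis software's actual interface):

```python
import numpy as np

def sensitivity_map(peaks, laser_power, dists, ang, ang_profile):
    """Relative sensitivity of one detector element over the scan grid.

    peaks       : detected PA peak amplitude at each scan point
    laser_power : photodiode reading per pulse (laser-power normalization)
    dists       : source-to-element distance per point (spherical divergence)
    ang         : source emission angle toward the element per point
    ang_profile : callable, interpolated angular emission profile of the source
    """
    corrected = peaks / laser_power            # remove pulse-energy fluctuations
    corrected = corrected * dists              # undo 1/r spherical divergence
    corrected = corrected / ang_profile(ang)   # undo source anisotropy
    return corrected / corrected.max()         # normalize: relative sensitivity
```

If the corrections are complete, a detector with uniform intrinsic sensitivity would map to a constant value over the grid regardless of pulse energy, distance, or source orientation.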
Element Coverage Mapping of the Detector Array
The analysis software also generated a coverage map, which depicted the number of elements in the array that received signals above background noise for each voxel. Due to the limited angular response of the detectors and the sparse geometry of the array, it was expected that there would be voxels where not all detectors had enough sensitivity to detect the point source. Visualization of the regions of maximum coverage gave an indication of where the best imaging quality was to be expected, since for voxels where coverage was good, the reconstruction algorithm would be able to incorporate data from a multitude of angles.
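Given per-element sensitivity maps, computing the coverage map reduces to a thresholded count per voxel (a sketch; the noise-floor threshold is an assumed parameter):

```python
import numpy as np

def coverage_map(sens_maps, noise_floor):
    """Count, for each voxel, the array elements whose sensitivity to that
    voxel exceeds the background-noise threshold.

    sens_maps : (n_elements, n_voxels) array of relative sensitivities
    """
    return (sens_maps > noise_floor).sum(axis=0)
```

Voxels with a high count are seen from many viewing angles and can therefore be expected to reconstruct best.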
To validate the accuracy of the 3-D image reconstruction, we developed a 3-D phantom that had a distribution of PA sources with known spatial locations. Instead of building a physical phantom of small absorbing objects, we used a composite of the PA signals collected during the sensitivity scan to build up a synthetic phantom of point sources. This approach had two advantages: (1) the position of each source was accurately known and referenced to the same coordinate system as the imaging volume, and (2) the PA signal intensity was constant regardless of position, since the source was illuminated through the optical fiber and the signals were corrected for variations in laser energy. This approach provided a means to generate a variety of synthetic phantoms with user-defined distributions of point sources. Although the synthetic sources were accurately localized and uniform in intensity and therefore well suited as test objects for the 3-D PAI system, they did not represent a real imaging condition with external illumination and simultaneous detection of multiple point absorbers. To test 3-D PAI on real objects with external illumination, we first used the coated fiber tip as a point absorber and illuminated it in backward mode through the laser access window instead of through the fiber directly [Fig. 1a]. We then imaged an extended object in the form of a -diameter graphite rod, which was suspended above the center of the detector ring at a height of . All phantom experiments were carried out with distilled water as the acoustic coupling medium.
Characterization of the Sparse Array
Figures 2a and 2b show typical PA signals acquired at one of the characterization scan points. Figure 2c represents the calculated coverage map. The coverage peaked at the center of each plane, as expected from symmetry, but there was also a focusing effect along the axis. The largest lateral extent was observed in the plane , and the coverage diminished toward both higher and lower values. The coverage distribution in 3-D, hence, took a form similar to an ellipsoid. The center of the ellipsoid was located directly above the array center and at a distance of from the plane of detection. Its short axis, oriented laterally, and long axis, oriented along the axis, were 15 and long, respectively. Figure 2d shows the calculated sensitivity map, which describes the relative sensitivity of each array element to each voxel in the volume under investigation. The map displays the distribution for each plane (map rows) and for each detector element (map columns). In the direction, the combined effect of the delay line and the detector response profile can be appreciated: starting at the bottom row (i.e., the highest value and the plane closest to the detectors), the most compact distribution is observed. Moving away from the detectors, the distribution becomes extended in the plane. The location of the center of the distribution, however, also shifts away from the detector location for increasing distances from the detector plane. For example, at , the distribution for detector 13 peaked at the bottom-left corner, which was closest to the detector location; at , it peaked on the opposite side, at the top-right corner. These 3-D sensitivity distributions can be explained by the angular response profile of each detector/wedge combination. We can imagine each detector’s sensitivity field as a beam of light (e.g., from a spotlight) and correlate the beam divergence with the angular profile’s FWHM and the pointing angle with the refracted angle of the delay line.
It is then easy to visualize that when the beam enters the imaged volume, it is compact, and when it leaves at the farthest plane, it is broad. Now, thinking of a multitude of light beams, it is evident that the region where they meet (the region of maximum coverage) will have the highest intensity and an ellipsoidal shape.
3-D Images of Phantoms
The reconstruction of a single synthetic point source is shown in Figs. 3a, 3b, and 3c. Figure 3a displays three of the VPs calculated from the measured PA signals. In Fig. 3b, three of the forward-projected VPs, obtained after 500 iterations ( run time on the PC), are shown. The magnitude, shape, and arrival time of the measured VP peaks are well reproduced. Figure 3c presents three orthogonal sections of the reconstructed volume, showing that the point source was reconstructed at the expected location, but with broadened dimensions. It is interesting to note that the broadening was stronger in the and directions, and the edges were better defined in the direction.
Similarly, reconstruction of a single, externally illuminated point source is presented in Figs. 3d, 3e, and 3f. In this experiment, only 10 of the 14 detector elements recorded signals above the noise level. Although the exact cause of the signal dropout could not be determined, it may have been related to asymmetry in the PA wave generated by the source. Nevertheless, the algorithm was able to reconstruct the point source at the proper position even with signals from an incomplete set of detectors. Comparison of the forward-projected VPs in Fig. 3e with the measured VPs in Fig. 3d demonstrates that some detector signals that were not present in the measurement (e.g., solid curve) were reconstructed in the forward projection.
The same VP signal that “reappeared” in Fig. 3e is also present in Fig. 3b, and at the same arrival time, indicating that the reconstruction could accurately compensate for the missing data. Image artifacts, in the form of scattered dim voxels, appear in most of the images. However, these artifacts seem to be more pronounced in images where the forward-projected VPs did not reproduce the shape and magnitude of the measured VPs well.
Next, we reconstructed an array of three synthetic sources arranged along the axis [Figs. 4a, 4b, and 4c] and three synthetic sources along the axis [Figs. 4d, 4e, and 4f]. As can be appreciated from the orthogonal slices, the sources along the axis were better defined and better separated than the sources along the axis. This was to be expected from the detector arrangement relative to the imaged volume: a separation of in the axis created a time-of-flight difference approximately twice as long as that created by the same separation along the axis. Therefore, for the sources distributed along the axis, the reconstruction algorithm was presented with well-separated peaks, which were better reconstructed and resulted in sharper edges.
In both Fig. 4c and Fig. 4f, the intensity of the reconstructed objects varied with location in the image even though the original PA source used to synthesize them was constant in intensity. The objects seemed to reconstruct brighter when closer to the plane of the detectors [Fig. 4c] and when closer to the axis of the detector ring [Fig. 4f]. A potential explanation for the inhomogeneity in voxel intensity could be related to the detection geometry. For example, objects closer to the detectors are better separated in time, and hence better reconstructed. On the other hand, on-axis objects will be covered by a greater number of detectors, which will also result in better reconstruction.
Finally, the reconstruction of the graphite rod is presented in Fig. 5. Comparing Fig. 5a with Fig. 3d, we find that the PA signals recorded on some of the detectors were substantially stronger with the rod than with the point source, while on other detectors, they were weaker. This was likely due to the different propagation of the PA waves from a point source (isotropic) versus a line source (perpendicular to the line). Detectors on the ring that view the rod perpendicularly to its length will hence detect a stronger PA signal than detectors that view the rod from a direction parallel to its length. Interestingly, we observe again that even when signals were absent at some detectors, they were generated by the reconstruction algorithm at their expected location. This is another indication of the robustness of the algorithm. The images in Fig. 5c reproduce the rod shape quite reliably. The cross section along the direction, as can be seen from the and slices, is only one voxel wide, which agrees well with the rod diameter of . The in-plane orientation of the rod, as indicated by the tilt in the slice, is also in very good agreement with the photograph in Fig. 5d. It is worthwhile to note that the PA images of the rod were obtained with a single laser shot and no averaging.
Although the image reconstruction algorithm had robust qualities, some image artifacts were found, appearing as bright voxels near the boundary of the imaging volume [lower-left corner in the slice of Fig. 5c] similar to those observed for the synthetic sources.
Photoacoustic imaging by sparse-array detection and iterative image reconstruction was investigated and found to be a viable method for generating 3-D photoacoustic images in backward mode. Distributions of synthetic sources were accurately reconstructed from the time-domain PA signals. Furthermore, with backward-mode illumination, 3-D images of a point source and an extended source were obtained.
The images obtained from a point source and an array of point sources confirmed the localization accuracy of in each dimension. This further validated the accuracy with which the positions of the array elements were registered to the imaging volume using the scanned PA source method.31 Analysis of line profiles through the point-source image (Fig. 3) along the lateral axes ( and ) and the vertical axis resulted in estimates of the point spread function (PSF) for the three coordinate directions of the imaging system. Image resolution was determined from the FWHM of the PSF. From these data, the spatial resolution for the 3-D PAI system was in and in both and . This compared well to the spatial resolution reported by other groups that have implemented staring 3-D PAI methods. For example, the Twente photoacoustic mammoscope6 was reported to have axial resolution of and lateral resolution of at imaging depths of . Using an acoustic lens and optical stereo imaging system, Niederhauser et al.22 reported results that implied axial PSF and lateral PSF , measured at depths . Using a dense 2-D micromachined detector array and delay-and-sum reconstruction, Vaithilingam et al.23 produced images that had an axial resolution of at a depth of . (No estimate of the lateral resolution was given.) Our estimated resolution of axially and laterally at a depth of compares well to these other results.
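As an illustration of the PSF estimate, the FWHM of a line profile through the reconstructed point source can be computed as follows (a sketch; linear interpolation at the half-maximum crossings is our assumption, as the interpolation method is not stated):

```python
import numpy as np

def fwhm(x, profile):
    """Full width at half maximum of a 1-D line profile, with linear
    interpolation of the half-maximum crossings on either side of the peak."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i0, i1 = above[0], above[-1]

    def cross(i, j):
        # x position where the profile crosses `half` between samples i and j
        if profile[j] == profile[i]:
            return x[i]
        return x[i] + (half - profile[i]) * (x[j] - x[i]) / (profile[j] - profile[i])

    left = x[i0] if i0 == 0 else cross(i0 - 1, i0)
    right = x[i1] if i1 == len(x) - 1 else cross(i1, i1 + 1)
    return right - left
```

For a Gaussian profile of standard deviation σ, this returns approximately 2.355 σ, the standard Gaussian FWHM.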
The 3-D sensitivity distribution of the focused annular sparse array was characterized using a scanned PA point source method. The characterization results indicated that the region of best coverage for the specific detector geometry was at distances of perpendicular to the detector surface, with lateral extent of centered directly above the array. This confirmed our specification of the wedge angle, which redirected the viewing direction of each element through the center of the detector annulus at a height above the plane of the array. The success of this approach suggests that the depth and extent of the volume coverage by such an array could be readily adjusted by varying three parameters: the wedge angle, the element diameter (which is inversely related to the extent of the angular response profile), and the annular array diameter. An increase in wedge angle would bring the focus closer to the plane of the detector surface and shorten the depth of focus. An increase in element diameter would reduce the angular extent of each element’s sensitivity profile but at the same time increase the spatial area, resulting in a mixed effect on the coverage, both in depth and in lateral dimensions. An increase in annular diameter would take the focus deeper and slightly increase the coverage. An interesting possibility would be to use several concentric detector rings, each with different combinations of geometric parameters. One could then achieve greater flexibility. In one realization, all detector rings could be designed to focus on the same spot, which would provide higher sensitivity and resolution limited to a small volume. In a second realization, construction of a set of concentric ring detector arrays with an inverse relation between the ring diameter and focal distance could achieve a “collimated” coverage field.
From a manufacturing point of view, there is a strong advantage to the wedge method compared to a curved surface array. Machining of wedged surfaces is simple with modern CNC technology. Additionally, there would be no need to modify the electronics or sensor material if a detector array of a different focal distance was desired; one would need to change only the wedge angle. This would permit a flexible and modular approach to detector design.
The images presented earlier were reconstructed from data acquired over 10 laser pulses for averaging and noise reduction. Hence, image acquisition took (laser repetition rate was ). Image reconstruction, however, took on the order of minutes ( for 500 iterations). Reconstruction computations scale linearly with the number of detectors and with the number of voxels in the image. Faster reconstruction times are possible by reducing the resolution and/or specifying a smaller image volume.
The iterative image reconstruction has shown the capacity to compensate for missing measurement data, as in the case of the point source reconstruction, where signal was measured on only 10 out of 14 detectors [Fig. 3d]. This indicates the robustness of the algorithm (i.e., as long as most detectors can detect the PA signal, the algorithm is able to compensate for those that do not record signal). This feature is beneficial when combined with the sparse detection geometry, since it provides a means to obtain image estimates with limited data sets.
We have shown that it is possible to acquire a 3-D photoacoustic image with a single laser shot and no averaging (Fig. 5). In principle, 3-D single-shot imaging should carry information at time scales comparable to the laser pulse (i.e., ), which could be useful for observing fast dynamic processes at an instant in time. Taken one step further, one could fire a sequence of hundreds or perhaps even thousands of laser pulses, and for each pulse, capture and store a 3-D PA frame on the DAQ hardware. The frames could then be reconstructed off-line to be viewed later as a 3-D photoacoustic movie. This, for example, could provide a method for 3-D visualization of optically absorbing tracers in small animals as they transit through the circulation after injection.
We have demonstrated an approach to 3-D PA imaging that uses a sparse array of ultrasound detectors and an iterative reconstruction algorithm. We have shown that the method produces 3-D images of point sources and a line source with good contrast and accurate volume localization. The resolution was estimated to be perpendicular to the detection plane and in the detection plane. However, validation with more complex 3-D absorbing structures and in the presence of tissue-mimicking optical scattering is needed. In addition, the issue of artifact suppression on the volume boundary needs to be addressed, possibly through incorporation of an apodizing function into the reconstruction software. It is also not clear yet whether, and to what extent, the small number of detectors limits the complexity and resolution of structures that can be imaged. Theoretical studies of that question would be of great value in determining the range of applications best suited for this method.
This project was funded by grants from the Academic Development Fund (ADF) at the University of Western Ontario and the Natural Sciences and Engineering Research Council of Canada (NSERC) to JJLC. PE was funded by the NSERC Canada Graduate Scholarship for Doctoral Studies (CGS-D3). In addition, partial funding was provided by MultiMagnetics, Inc. and by the Ontario Research and Development Challenge Fund (ORDCF).