18 September 2019 Single-pixel camera photoacoustic tomography
Abstract

Since it was first demonstrated more than a decade ago, the single-pixel camera concept has been used in numerous applications in which it is necessary or advantageous to reduce the channel count, cost, or data volume. Here, three-dimensional (3-D), compressed-sensing photoacoustic tomography (PAT) is demonstrated experimentally using a single-pixel camera. A large-area collimated laser beam is reflected from a planar Fabry–Pérot (FP) ultrasound sensor onto a digital micromirror device, which patterns the light using a scrambled Hadamard basis before it is collected into a single photodetector. In this way, inner products of the Hadamard patterns and the distribution of thickness changes of the FP sensor induced by the photoacoustic waves are recorded. The initial distribution of acoustic pressure giving rise to those photoacoustic waves is recovered directly from the measured signals using an accelerated proximal gradient-type algorithm to solve a model-based minimization with total variation regularization. Using this approach, it is shown that 3-D PAT images of phantoms can be obtained with compression rates as low as 10%. Compressed sensing approaches to photoacoustic imaging such as this have the potential to reduce both the data acquisition time and the volume of data that must be acquired, both of which are becoming increasingly important in the drive for faster imaging systems giving higher-resolution images with larger fields of view.

1.

Introduction

Photoacoustic tomography (PAT) is a hybrid imaging technique based on the use of laser-generated ultrasound within soft tissue that has been demonstrated in a wide variety of applications in preclinical research and clinical medicine.1,2 When a short pulse of near infrared (NIR) light is absorbed by chromophores within soft tissue, it gives rise to a pressure increase that propagates through the tissue as an ultrasound pulse and can be detected at the surface. Following the measurement of these photoacoustic signals on the tissue surface, an image of the initial pressure distribution can be reconstructed.

Photoacoustic signals are broadband, containing frequencies up to and above 10 MHz, at which the wavelength is <150  μm. The classical approach to sampling—spatial and temporal—follows the Shannon–Nyquist theorem, which states that a band-limited signal can be recovered exactly if the sampling rate is at least twice the maximum frequency present in the signal. This suggests a large number of detectors are required for PAT, because the measurement plane should subtend a large solid angle at the imaging target in order to avoid limited-view artifacts, e.g., a 2×2  cm aperture sampled at 75-μm spacing results in more than 70,000 detection points. However, a dense array of many tens of thousands of small elements can be expensive and difficult to fabricate. One alternative is to use a smaller number of detectors and scan them across the surface, but this has the drawback of reducing the imaging frame rate. Another alternative is compressed sensing (CS),3 also called compressive sampling. The idea behind CS is that, under certain conditions, it is possible to reconstruct a target image accurately from fewer samples than determined by the Nyquist rate. The first requirement is that the target is known to be of low spatial (or more generally spatiotemporal) complexity, e.g., that it is sparse in a given basis. The second requirement is that the set of measurements contains nonredundant information about the target at all relevant scales. With these two requirements satisfied, it is often possible to reconstruct an accurate image of the target with only a fraction of the data that would be acquired with Nyquist sampling. The original and classic demonstration of this CS paradigm is the single-pixel camera.4,5
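The detection-point count quoted above can be checked with a few lines. This is a hedged sketch of the arithmetic only; the 2×2 cm aperture and 75-μm spacing are the values given in the text:

```python
# Number of detection points needed to sample a 2 cm x 2 cm aperture
# at 75-um spacing (values quoted above).
aperture_m = 20e-3    # aperture side length: 2 cm
spacing_m = 75e-6     # detection-point spacing: 75 um

points_per_side = int(aperture_m / spacing_m) + 1   # include both edges
total_points = points_per_side ** 2

print(points_per_side, total_points)  # 267 71289 -> "more than 70,000"
```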

In this paper, a single-pixel camera is used to measure time-varying photoacoustic signals reaching a planar Fabry–Pérot (FP) ultrasound sensor,6 thereby facilitating compressed-sensing PAT. The FP sensor comprises a polymer film spacer sandwiched between a pair of dichroic mirrors. Any change in the optical thickness of the spacer modulates its optical reflectivity. When the wavelength of the interrogating laser light is tuned correctly, the reflected optical power is proportional to the acoustic wave modulating the spacer thickness. A focused laser beam is usually used to read out these acoustic pressure waves point-by-point. In this way, it is possible to synthesize arrays of many tens of thousands of detection points with small element sizes. This approach has been shown to give high-resolution images with high contrast.7 However, as mentioned above, the need to scan results in slow data acquisition. To acquire the data more quickly, one possibility is to use an interrogation system that can read out multiple points simultaneously,8 but this comes with high equipment cost and is technically challenging to implement. Another option is to use a camera, e.g., a CCD or CMOS camera, to record the signal at many pixels simultaneously,9,10 but this requires a separate measurement for every time point. In this paper, the FP sensor was interrogated using wide-beam illumination, and the reflected light was patterned on reflection from a digital micromirror device (DMD) before being collected into a single photodetector. With this system, it is possible to measure the spatial integral of the product of the pattern and the acoustic field at the sensor, i.e., projections of the field onto arbitrary spatial patterns. The set of patterns must be chosen to capture the data in a way that maximizes the reconstructed image quality for a given dataset size.
Here scrambled Hadamard patterns are used. This approach has previously been demonstrated for real-time ultrasound detection;6 here, that work is extended to PAT.

Several experimental demonstrations of CS in PAT have been reported since Provost and Lesage11 first proposed it. The systems used have included ring arrays or circular scanning systems,11–15 systems employing integrating line detectors,16–19 linear arrays or line scans,20–23 and two-dimensional (2-D) arrays.24–29 In all of these studies, the sensors are restricted to subsampling the acoustic field at a set of points (or lines in the case of the integrating line detectors). In contrast, this paper is concerned with measurements made with a 2-D planar sensor interrogated with patterns.6 As well as a variety of detection schemes, a number of approaches to reconstruction from sparse data have been proposed for PAT: principal component analysis,21 sparsifying transforms,18,30 deep learning,16,17,25,26,31 and variational approaches that minimize a functional,11,14,15,28,29,32 including joint motion estimation.24 Here a variational minimization approach will be taken.29 (For clarity, the term “compressed sensing” has also been used in the photoacoustic imaging literature to refer to 2-D photoacoustic imaging using patterned excitation light.33–35 This is difficult to extend to 3-D imaging and is quite a different idea from the patterned acoustic sensing described here.)

2.

Compressed Sensing Photoacoustic Tomography with Patterned Detection

In a PAT experiment, a laser pulse is used to illuminate the target volume, and where the light is absorbed it generates an initial acoustic pressure distribution $p_0(x,y,z)$. The aim is to image this initial acoustic pressure distribution. Because tissue is elastic, the initial pressure distribution excites acoustic waves, which propagate through the tissue to the sensor on the tissue surface. The acoustic field at the FP sensor at time $t$ can be denoted by $p(x,y,t) = A p_0(x,y,z)$, where $A$ is the acoustic propagation operator. To describe how the FP sensor facilitates CS, it is useful to start by considering a point-by-point interrogation scheme in which there are $N$ detection points on the sensor. A complete dataset $P \in \mathbb{R}^{N \times T}$ can be described as a collection of measurements at different times, $P = \{p_t,\; t = 0, \dots, T-1\}$, where $p_t = \{p_t^n,\; n = 1, \dots, N\} \in \mathbb{R}^N$ represents the measurements at the $N$ detection points at a single time $t$, and $p_t^n = p(x_n, y_n, t)$ is the scalar acoustic pressure amplitude at a single point with coordinates $(x_n, y_n)$ at time $t$. If, for a single time $t$, the pressure field $p_t$ can be represented sparsely in a basis $\Psi$, then we can write $p_t = \Psi a_t$, or

Eq. (1)

$$
\begin{pmatrix} p_t^1 \\ p_t^2 \\ \vdots \\ p_t^N \end{pmatrix}
=
\begin{pmatrix}
\psi_1^1 & \cdots & \psi_Q^1 \\
\psi_1^2 & \cdots & \psi_Q^2 \\
\vdots & & \vdots \\
\psi_1^N & \cdots & \psi_Q^N
\end{pmatrix}
\begin{pmatrix} a_t^1 \\ \vdots \\ a_t^Q \end{pmatrix},
$$
where the columns $\psi_q$ of $\Psi$ are basis functions and $a_t = \{a_t^q,\; q = 1, \dots, Q\}$, $Q < N$, are the corresponding coefficients. If measurements $p_t^n$ are recorded at all $N$ points, as they are with a complete point-by-point scan, it would be possible to calculate the sparse coefficients $a_t$ using an inner product, $a_t^q = \langle \psi_q, p_t \rangle$. This is analogous to the case of image compression. In the CS approach, the coefficients $a_t$ are obtained directly from $M < N$ measurements. Each measurement in the pattern-interrogation scheme is an integral of the field weighted by a pattern. In other words, the measurements are the set of amplitudes $w_t = \{w_t^m,\; m = 1, \dots, M\}$ given by

Eq. (2)

$$
w_t^m = \langle \phi_m, p_t \rangle, \qquad m = 1, \dots, M,
$$
where each $\phi_m$ is a measurement pattern. The idea behind CS is to assume that $p_t$ is sparsely represented in a basis $\Psi$ and to use measurement patterns $\Phi = \{\phi_m,\; m = 1, \dots, M\}$, $M < N$, that are incoherent with it. The incoherence of the basis $\Psi$ and the measurement patterns $\Phi$ is crucial. If one knew, ahead of time, which sparse coefficients represent the solution, it would be possible to coherently measure the relevant coefficients $a_t^q$ directly using the respective basis functions as the measurement patterns, $\phi_m = \psi_m$. In practice, however, the basis in which the data will be sparse is rarely known in advance, so CS proceeds using patterns that, through their incoherence, “equally” sense all the basis vectors $\psi_q$, and afterward uses sparse recovery to extract the sparse coefficients from these measurements.
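The patterned-measurement model of Eq. (2) can be sketched in a few lines of NumPy. The sizes, the sparse test field, and the random Bernoulli patterns below are illustrative assumptions, not the experimental values:

```python
import numpy as np

# Each measurement is the inner product of a pattern phi_m with the
# field p_t at the sensor, stacked as rows of a matrix Phi.
rng = np.random.default_rng(0)

N = 64                                   # detection points on the sensor (flattened)
M = 16                                   # number of patterns, M < N

p_t = np.zeros(N)
p_t[[5, 23, 40]] = [1.0, -0.5, 2.0]      # a field that is sparse in the pixel basis

Phi = rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(N)   # incoherent +/-1 patterns

w_t = Phi @ p_t                          # the M measurements w_t^m = <phi_m, p_t>
print(w_t.shape)                         # (16,)
```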

Once the $M$ measurements at each time step, $w_t$, have been recorded, the challenge is to reconstruct the initial acoustic pressure distribution $p_0$. If a full set of amplitudes $w = \{w_t^m,\; m = 1, \dots, M = N,\; t = 0, \dots, T-1\}$ has been recorded, one can obtain the original acoustic field at the FP sensor as $p = \Phi^{-1} w$ and use standard PAT image reconstruction techniques for this scanning geometry, e.g., time reversal.36 However, when only a subset of the data has been recorded, $M < N$, there are two options: two-step schemes first reconstruct $p$ from the compressed measurements, solving a problem akin to basis pursuit in CS (see, e.g., Ref. 27, which assumes sparsity of $p_t$ in a curvelet basis), and then use a standard photoacoustic inversion technique, whereas one-step schemes reconstruct the image directly from the compressed data. Reference 29 contains a detailed description of the one-step approach used here, namely an accelerated proximal gradient-type algorithm to solve the following minimization problem:

Eq. (3)

$$
p_0^* = \operatorname*{arg\,min}_{p_0 \geq 0} \; \| \Phi A p_0 - w \|_2^2 + \lambda \, \mathrm{TV}(p_0),
$$
where $\mathrm{TV}(p_0)$ denotes the total-variation regularization term. k-Wave37 was used to compute both the acoustic operator $A$ and its adjoint.38 The regularization parameter $\lambda$ was chosen by manual inspection. In the experiments reported here, the measurement matrix $\Phi$ was chosen to be a scrambled Hadamard matrix. This choice is both theoretically and practically appealing, as explained below. The Hadamard transform is a $2^j \times 2^j$ matrix that can be defined recursively as

Eq. (4)

$$
H_j = \frac{1}{\sqrt{2}} \begin{pmatrix} H_{j-1} & H_{j-1} \\ H_{j-1} & -H_{j-1} \end{pmatrix}, \qquad j > 0,
$$
and $H_0 = 1$ (note that this means that a Hadamard matrix can be written as a matrix with entries that are $+1$ or $-1$, multiplied by a normalization factor). The Hadamard transform is orthogonal and self-inverse, i.e., $H_j = H_j^T = H_j^{-1}$. A scrambled Hadamard matrix is formed by permuting the columns and rows of the matrix $H_j$ to give $H_j^s = P_r H_j P_c$, where $P_c$ and $P_r$ are the column and row permutation matrices, respectively. Notice that the scrambled Hadamard transform

Eq. (5)

$$
y = H_j^s x = P_r H_j P_c x
$$
amounts to applying the column permutation to the vector $x$, giving $P_c x$, applying the Hadamard transform to the permuted vector, and subsequently permuting the rows. Thus, the cost of the scrambled Hadamard transform is of the same order as that of the Hadamard transform, which in turn can be realized with a fast algorithm, $O(2^j \log 2^j)$. Similarly, the inverse scrambled Hadamard transform

Eq. (6)

$$
x = (H_j^s)^{-1} y = P_c^{-1} H_j^{-1} P_r^{-1} y = P_c^T H_j P_r^T y
$$
is equivalent to applying the inverse row permutation to $y$, giving $P_r^T y$, applying the Hadamard transform to the permuted vector, and subsequently applying the inverse column permutation, $P_c^T$. Each row of the matrix $H_j^s$ represents a measurement pattern, so for CS, where $M < N = 2^j$, the first $M$ of the $N$ rows are selected. This selection of $M$ rows yields an underdetermined matrix with desirable properties for CS, similar to those of random Gaussian or Bernoulli matrices.39
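The construction above can be sketched directly: build the normalized Hadamard matrix of Eq. (4) recursively, scramble it with random row and column permutations as in Eq. (5), and keep the first M rows as the CS measurement matrix. Explicit matrices are used here for clarity only; in practice a fast O(N log N) transform is used, and the sizes below are illustrative assumptions:

```python
import numpy as np

def hadamard(j: int) -> np.ndarray:
    """Normalized Hadamard matrix of size 2^j, built by the Eq. (4) recursion."""
    H = np.array([[1.0]])
    for _ in range(j):
        H = np.block([[H, H], [H, -H]]) / np.sqrt(2.0)
    return H

j = 4
N = 2 ** j
H = hadamard(j)
assert np.allclose(H @ H, np.eye(N))     # self-inverse: H_j = H_j^T = H_j^{-1}

rng = np.random.default_rng(0)
Pr = np.eye(N)[rng.permutation(N)]       # row permutation P_r
Pc = np.eye(N)[:, rng.permutation(N)]    # column permutation P_c
Hs = Pr @ H @ Pc                         # scrambled Hadamard H_j^s

# Eq. (6): the inverse of H_j^s is P_c^T H_j P_r^T
assert np.allclose(Hs @ (Pc.T @ H @ Pr.T), np.eye(N))

M = N // 4                               # keep M < N rows for compressed sensing
Phi = Hs[:M, :]
print(Phi.shape)                         # (4, 16)
```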

3.

Experimental Measurements

The experimental setup is shown in Fig. 1(a). The FP sensor (aluminum coatings, PPXC spacer, thickness of 40 μm) was illuminated by a 20-mm diameter expanded interrogation beam (Santec TSL-510 tunable laser source connected to an IPG Laser GmbH erbium fiber amplifier EAD-4-L), Fig. 1(b). The beam reflected from the sensor was then redirected to the DMD (ViaLUX V-7000 DLP 0.7″ XGA 1024×768 array, pitch of 13.68 μm) by a polarizing beam splitter (Thorlabs CM1-PBS254). The beam reflected from the DMD was collected by lens L2 (f=25.4 mm) and focused onto a photodetector (InGaAs Hamamatsu G8376-03 with a customized transimpedance amplifier providing DC- and AC-coupled outputs). A digitizer (NI PCI-5114) was used to acquire time series signals from the photodetector’s output. The system’s sampling rate and bandwidth were set to 50 MHz and 20 MHz, respectively. The excitation laser was a Q-switched fiber-coupled Nd:YAG laser (Continuum Minilite-20, repetition rate of 20 Hz). The excitation beam diameter was 15 mm and the total pulse energy was set to 17 mJ. Since the FP sensor used in this study was not transparent, the system was operated in forward mode.

Fig. 1

(a) Experimental setup: L1, lens 1; L2, lens 2; LP, linear polarizer; PBS, polarizing beam splitter; λ/4, quarter waveplate; PD, photodetector; DMD, digital micromirror device; (b) Fabry–Perot sensor with a wide interrogation beam; and (c) illustrations of scrambled Hadamard patterns.

JBO_24_12_121907_f001.png

Due to the periodic arrangement of the micromirrors, a DMD acts as a 2-D diffraction grating when laser light is reflected from it.40 The light reflected from the FP sensor is, therefore, diffracted at the DMD into many diffraction orders that, within the NIR range, are well separated. The detector is positioned so that it collects light from only the strongest order. The details of how to optimize the DMD arrangement have been discussed elsewhere.6 In this setup, the strongest order at 1580 nm is at about 50 deg for an incident angle of about 26 deg. With this arrangement, the DMD was used to pattern (spatially sample) the beam reflected from the sensor before it was passed through an integrating lens and onto the photodetector. The scrambled Hadamard patterns, Fig. 1(c), were sequentially displayed on the DMD, and a time series signal was recorded at the photodetector for M<N patterns. Because light intensities cannot be negative, it was not possible to implement (−1, +1) Hadamard patterns experimentally, so (0, 1) patterns were used and the mean value of all the time series was subtracted from the set. Also, to avoid saturating the photodiode, the data corresponding to the all-1 pattern were constructed from two half-1, half-0 patterns. The active area on the DMD was chosen to be 640×640 micromirrors (8.7×8.7 mm²). It was positioned to align with the most uniform region on the FP sensor so that only one wavelength was required to interrogate the sensor. A scrambled Hadamard operator for 128² pixels was used for these experiments. Each image pixel corresponded to a group of 5×5 micromirrors on the DMD, so the effective sensing element size was 68 μm × 68 μm. The DMD and the digitizer were synchronized and triggered by the Q-switched laser.
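The (0, 1)-pattern workaround can be illustrated with a short sketch. A displayable pattern is B = (H + 1)/2, so every recorded measurement carries an offset of half the total field, which is removable because it is common to all patterns (here it is taken from the all-1 measurement; in the experiment the mean of the time series set was subtracted and the all-1 pattern was split into two half-patterns). The sizes and test field are illustrative assumptions:

```python
import numpy as np

def hadamard(j: int) -> np.ndarray:
    """Unnormalized +/-1 Hadamard matrix of size 2^j (Sylvester construction)."""
    H = np.array([[1.0]])
    for _ in range(j):
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(0)

N = 16
H = hadamard(4)            # the +/-1 patterns the reconstruction assumes
B = (H + 1.0) / 2.0        # the (0, 1) patterns the DMD can actually display

p = rng.random(N)          # an arbitrary non-negative field at the sensor
w01 = B @ p                # what the photodetector records

# B = (H + 1)/2, so each (0,1) measurement is (H p + sum(p))/2. Row 0 is the
# all-1 pattern and measures sum(p) directly, so the offset can be removed
# and the scale restored:
w_pm = 2.0 * w01 - w01[0]  # recovered +/-1 Hadamard measurements

assert np.allclose(w_pm, H @ p)
```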

The two phantoms shown in Fig. 2, a knotted artificial hair and a twisted polymer ribbon, were used in this study. The phantoms were immersed in 1% Intralipid with a reduced optical scattering coefficient μs′ of 1 mm⁻¹ and were positioned 2 mm above the sensor and 4 mm below the Intralipid surface. The diameter of the hair was about 150 μm and the ribbon width was 350 μm.

Fig. 2

Phantoms for photoacoustic imaging experiments (the black bar in both images measures 1 mm): (a) artificial hair (150  μm in diameter) and (b) twisted black polymer ribbon (350  μm).

JBO_24_12_121907_f002.png

4.

Results and Discussion

Photoacoustic images of the phantoms were recovered using Eq. (3) for different degrees of compression, and the results are presented in Figs. 3 and 4. Figure 3 shows the 3-D images of the knotted hair. The images were obtained using 100%, 50%, 20%, and 10% of the scrambled Hadamard patterns. Notably, even when 50% of the data are omitted, it is still possible to recover an image of quality similar to that recovered using the full data, and the main features of the targets are recovered successfully even when only 10% of the data are used. Figure 4 shows the zy slice images of the polymer ribbon.

Fig. 3

CS-PAT reconstructions (8×8×2.5  mm) of the knotted hair with different levels of compression, visualized as maximum intensity projections from the top, i.e., (top row) through the sensor plane and (bottom row) from the side. The sensor plane is located on the top of the side view; the first six depth slices below it are set to zero to prevent sensor noise from dominating the maximum intensity projection from the top.

JBO_24_12_121907_f003.png

Fig. 4

CS-PAT reconstructions of the twisted polymer ribbon with different levels of compression visualized by a single slice through the ribbon, orthogonal to the sensor plane (located on top of the slice view).

JBO_24_12_121907_f004.png

In the accelerated proximal gradient-type algorithm29 used here for the reconstructions, the loss of data was compensated for by the total variation regularization. However, many other minimization algorithms and regularization strategies described in the literature could be employed to tackle this CS inverse problem. In particular, because the scrambled Hadamard basis is close to ideal for CS, the data obtained with this system allow an investigation into which basis is best for reconstructions with partial data. It remains a focus of future work to determine which approach is optimal for photoacoustic imaging systems such as this.
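The flavor of such one-step variational reconstructions can be conveyed with a toy sketch. The example below is a plain (non-accelerated) proximal-gradient iteration for a problem shaped like Eq. (3); purely as illustrative assumptions, the combined operator ΦA is replaced by a random matrix K and the TV regularizer by an l1 penalty, whose prox is simple soft-thresholding. The actual reconstructions used an accelerated algorithm with a TV prox and k-Wave to apply A and its adjoint:

```python
import numpy as np

rng = np.random.default_rng(1)

n, m = 128, 48                                   # image size, number of measurements
p0_true = np.zeros(n)
p0_true[[10, 50, 90]] = [2.0, 1.0, 1.5]          # sparse, non-negative "image"
K = rng.standard_normal((m, n)) / np.sqrt(m)     # stand-in for Phi @ A
w = K @ p0_true                                  # noiseless compressed data

lam = 0.05                                       # regularization parameter
step = 1.0 / np.linalg.norm(K, 2) ** 2           # 1/L, L = Lipschitz constant of the gradient

p0 = np.zeros(n)
for _ in range(500):
    z = p0 - step * K.T @ (K @ p0 - w)                         # gradient step on the data fit
    p0 = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)  # prox of lam * ||.||_1
    p0 = np.maximum(p0, 0.0)                                   # non-negativity, p0 >= 0

print(np.round(p0[[10, 50, 90]], 1))   # approximately recovers [2.0, 1.0, 1.5]
```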

Despite the advantages a CS approach has over full-data scanning systems, it still requires multiple sequential measurements. The data acquisition speed is, therefore, currently limited by the pulse repetition rate of the photoacoustic excitation laser (although the pulse repetition rates of photoacoustic lasers are gradually increasing). Devices using multichannel systems with arrays of detectors that can detect simultaneously do not suffer from this limitation. However, they are usually limited in their bandwidth, and the cost and complexity typically become prohibitive beyond a thousand or so channels.

The images obtained here are not of as high quality as published images obtained using an FP sensor with point-wise interrogation,7 owing to a lower signal-to-noise ratio. Two principal factors affect the image quality for both patterned and point-wise detection: (1) the signal-to-noise ratio and (2) the effectiveness of the CS, i.e., the degree to which the undersampling can be ameliorated in the reconstruction. It is the second point that is fundamental here. The signal-to-noise ratio, while important practically, is not a fundamental constraint, and it could in the future be enhanced, for example, with improved sensor fabrication, averaging, or the use of higher-power illumination and a photodiode with a greater dynamic range.

The sensitivity of FP sensors that are interrogated point-by-point can be maximized by using high-finesse cavities and by tuning the wavelength to the optimal bias point on the interference fringe at each point. With the wide-beam, single-wavelength illumination that this system requires, it is not possible to tune the interrogation wavelength to the optimum for each point; for this proof-of-principle experiment, a low-finesse cavity was therefore used to ensure that every point in the field of view could be interrogated with some, if not optimal, sensitivity at the one wavelength. FP sensors with more uniform thickness over cm-sized areas are being developed; these will allow higher-finesse cavities to be used in the future and, therefore, larger signal-to-noise ratios to be achieved with this system.

5.

Conclusions

3-D CS PAT using a single-pixel camera has been demonstrated experimentally. Scrambled Hadamard patterns from a DMD were used to sample the photoacoustic field as detected by a planar FP ultrasound sensor. Photoacoustic images of knot and ribbon phantoms were obtained with compression rates as low as 10%. CS will become important in PAT as the demand grows for fast, high-resolution systems with large fields of view.

Disclosures

The authors have no relevant financial interests in this article and no potential conflicts of interest to disclose.

Acknowledgments

The authors acknowledge the support from the Engineering and Physical Sciences Research Council, UK (No. EP/K009745/1), Netherlands Organization for Scientific Research (No. NWO 613.009.106/2383), ERC Advanced Grant Ref: 741149, and the European Union’s Horizon 2020 Research and Innovation Program H2020 ICT 2016-2017 under Grant Agreement No. 732411, which is an initiative of the Photonics Public Private Partnership.

References

1. P. Beard, “Biomedical photoacoustic imaging,” Interface Focus 1(4), 602–631 (2011). https://doi.org/10.1098/rsfs.2011.0028

2. W. Choi, E. Seungwan and J. Chulhong, “Clinical photoacoustic imaging platforms,” Biomed. Eng. Lett. 8, 139–155 (2018). https://doi.org/10.1007/s13534-018-0062-7

3. E. Candès and M. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008). https://doi.org/10.1109/MSP.2007.914731

4. M. Duarte et al., “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 83–91 (2008). https://doi.org/10.1109/MSP.2007.914730

5. M. Edgar, G. Gibson and M. Padgett, “Principles and prospects for single-pixel imaging,” Nat. Photonics 13, 13–20 (2019). https://doi.org/10.1038/s41566-018-0300-7

6. N. Huynh et al., “Single-pixel optical camera for video rate ultrasonic imaging,” Optica 3(1), 26–29 (2016). https://doi.org/10.1364/OPTICA.3.000026

7. A. Jathoul et al., “Deep in vivo photoacoustic imaging of mammalian tissues using a tyrosinase-based genetic reporter,” Nat. Photonics 9(4), 239–246 (2015). https://doi.org/10.1038/nphoton.2015.22

8. N. Huynh et al., “Photoacoustic imaging using an 8-beam Fabry–Perot scanner,” Proc. SPIE 9708, 97082L (2016). https://doi.org/10.1117/12.2214334

9. M. Lamont and P. Beard, “2-D imaging of ultrasound fields using a CCD array to detect the output of a Fabry–Perot polymer film sensor,” Electron. Lett. 42(3), 187–189 (2006). https://doi.org/10.1049/el:20064135

10. B. Cong et al., “A fast acoustic field mapping approach based on Fabry–Perot sensor with high-speed camera,” IEEJ Trans. Electr. Electron. Eng. 9, 477–483 (2014). https://doi.org/10.1002/tee.2014.9.issue-5

11. J. Provost and F. Lesage, “The application of compressed sensing for photo-acoustic tomography,” IEEE Trans. Med. Imaging 28(4), 585–594 (2009). https://doi.org/10.1109/TMI.2008.2007825

12. H. He et al., “Optoacoustic tomography using accelerated sparse recovery and coherence factor weighting,” Tomography 2(2), 138–145 (2016). https://doi.org/10.18383/j.tom.2016.00148

13. C. Zhang, Y. Wang and J. Wang, “Efficient block-sparse model-based algorithm for photoacoustic image reconstruction,” Biomed. Signal Process. Control 26, 11–22 (2016). https://doi.org/10.1016/j.bspc.2015.12.003

14. Z. Guo et al., “Compressed sensing in photoacoustic tomography in vivo,” J. Biomed. Opt. 15(2), 021311 (2010). https://doi.org/10.1117/1.3381187

15. Y. Zhang, Y. Wang and C. Zhang, “Total variation based gradient descent algorithm for sparse-view photoacoustic image reconstruction,” Ultrasonics 52(8), 1046–1055 (2012). https://doi.org/10.1016/j.ultras.2012.08.012

16. S. Antholzer et al., “NETT regularization for compressed sensing photoacoustic tomography,” (2019).

17. J. Schwab et al., “Real-time photoacoustic projection imaging using deep learning,” (2018).

18. M. Haltmeier et al., “A sparsification and reconstruction strategy for compressed sensing photoacoustic tomography,” J. Acoust. Soc. Am. 143(6), 3838–3848 (2018). https://doi.org/10.1121/1.5042230

19. P. Burgholzer et al., “Sparsifying transformations of photoacoustic signals enabling compressed sensing algorithms,” Proc. SPIE 9708, 970828 (2016). https://doi.org/10.1117/12.2209301

20. J. Meng et al., “Compressed-sensing photoacoustic computed tomography in vivo with partially known support,” Opt. Express 20(15), 16510–16523 (2012). https://doi.org/10.1364/OE.20.016510

21. J. Meng et al., “High-speed, sparse-sampling three-dimensional photoacoustic computed tomography in vivo based on principal component analysis,” J. Biomed. Opt. 21(7), 076007 (2016). https://doi.org/10.1117/1.JBO.21.7.076007

22. J. Meng et al., “Compressed sensing with a Gaussian scale mixture model for limited view photoacoustic computed tomography in vivo,” Technol. Cancer Res. Treat. 17, 1533033818808222 (2018). https://doi.org/10.1177/1533033818808222

23. H. Jin et al., “A compressed sensing based miniaturized photoacoustic imaging system,” in Proc. IEEE Int. Ultrason. Symp. (2018). https://doi.org/10.1109/ULTSYM.2018.8580179

24. F. Lucka et al., “Enhancing compressed sensing 4D photoacoustic tomography by simultaneous motion estimation,” SIAM J. Imaging Sci. 11(4), 2224–2253 (2018). https://doi.org/10.1137/18M1170066

25. A. Hauptmann et al., “Model-based learning for accelerated, limited-view 3-D photoacoustic tomography,” IEEE Trans. Med. Imaging 37(6), 1382–1393 (2018). https://doi.org/10.1109/TMI.42

26. A. Hauptmann et al., “Approximate k-space models and deep learning for fast photoacoustic reconstruction,” Lect. Notes Comput. Sci. 11074, 103–111 (2018). https://doi.org/10.1007/978-3-030-00129-2

27. M. Betcke et al., “Acoustic wave field reconstruction from compressed measurements with application in photoacoustic tomography,” IEEE Trans. Comput. Imaging 3(4), 710–721 (2017). https://doi.org/10.1109/TCI.6745852

28. M. Haltmeier et al., “Compressed sensing and sparsity in photoacoustic tomography,” J. Opt. 18(11), 114004 (2016). https://doi.org/10.1088/2040-8978/18/11/114004

29. S. R. Arridge et al., “Accelerated high-resolution photoacoustic tomography via compressed sensing,” Phys. Med. Biol. 61, 8908–8940 (2016). https://doi.org/10.1088/1361-6560/61/24/8908

30. M. Sandbichler et al., “A novel compressed sensing scheme for photoacoustic tomography,” SIAM J. Appl. Math. 75(6), 2475–2494 (2015). https://doi.org/10.1137/141001408

31. S. Guan et al., “Fully dense UNet for 2D sparse photoacoustic tomography artifact removal,” (2018).

32. J. Rogers et al., “Demonstration of acoustic source localization in air using single pixel compressive imaging,” J. Appl. Phys. 122(21), 214901 (2017). https://doi.org/10.1063/1.5003645

33. D. Liang, H. Zhang and L. Ying, “Compressed-sensing photoacoustic imaging based on random optical illumination,” Int. J. Funct. Inf. Pers. Med. 2(4), 394–406 (2009). https://doi.org/10.1504/IJFIPM.2009.030835

34. M. Sun et al., “Photoacoustic imaging method based on arc-direction compressed sensing and multi-angle observation,” Opt. Express 19(16), 14801–14806 (2011). https://doi.org/10.1364/OE.19.014801

35. M. Sun et al., “Photoacoustic image reconstruction based on Bayesian compressive sensing algorithm,” Chin. Opt. Lett. 9(6), 061002 (2011). https://doi.org/10.3788/COL

36. K. Wang and M. Anastasio, “Photoacoustic and thermoacoustic tomography: image formation principles,” in Handbook of Mathematical Methods in Imaging, pp. 1081–1116, Springer, New York (2015).

37. B. E. Treeby and B. T. Cox, “k-Wave: MATLAB toolbox for the simulation and reconstruction of photoacoustic wave fields,” J. Biomed. Opt. 15(2), 021314 (2010). https://doi.org/10.1117/1.3360308

38. S. Arridge et al., “On the adjoint operator in photoacoustic tomography,” Inverse Prob. 32(11), 115012 (2016). https://doi.org/10.1088/0266-5611/32/11/115012

39. L. Gan, T. Do and T. Tran, “Fast compressive imaging using scrambled block Hadamard ensemble,” in Proc. IEEE 16th Eur. Signal Process. Conf. (2008).

40. “Using lasers with DLP DMD technology,” (2008).


© The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Nam Huynh, Felix Lucka, Edward Z. Zhang, Marta M. Betcke, Simon R. Arridge, Paul C. Beard, and Benjamin T. Cox "Single-pixel camera photoacoustic tomography," Journal of Biomedical Optics 24(12), 121907 (18 September 2019). https://doi.org/10.1117/1.JBO.24.12.121907
Received: 12 June 2019; Accepted: 19 August 2019; Published: 18 September 2019