Photoacoustic computed tomography (PACT) is a rapidly developing biomedical imaging modality and has attracted substantial attention in recent years. Image reconstruction from photoacoustic projections plays a critical role in image formation in PACT. Here we review six major classes of image reconstruction approaches developed in the past three decades, including delay and sum, filtered back projection, series expansion, time reversal, iterative reconstruction, and deep-learning-based reconstruction. The principal ideas and implementations of the algorithms are summarized, and their reconstruction performances under different imaging scenarios are compared. Major challenges, future directions, and perspectives for the development of image reconstruction algorithms in PACT are also discussed. This review provides a self-contained reference guide for beginners and specialists in the photoacoustic community, to facilitate the development and application of novel photoacoustic image reconstruction algorithms.
1. Introduction

Photoacoustic tomography (PAT), also referred to as opto-acoustic tomography, is a cross-sectional or three-dimensional (3D) biomedical imaging technique. The physical foundation for PAT is the photoacoustic effect discovered in 1880 by Alexander Graham Bell[1], who found that modulated light can excite sound waves in materials. Modern PAT typically uses a short-pulsed laser to illuminate biological tissues. The laser energy is absorbed by tissue chromophores and converted into heat. Under the conditions of stress confinement and thermal confinement, acoustic signals are generated due to the thermoelastic effect and are recorded by ultrasound detectors to recover the optical absorption property of tissues[2–5]. By combining optical excitation and acoustic detection, PAT has the advantages of rich optical contrast and high ultrasonic resolution in deep tissues and has found unique applications in a range of biomedical fields[6–9]. PAT has three major embodiments: photoacoustic computed tomography (PACT), photoacoustic microscopy (PAM), and photoacoustic endoscopy (PAE). Among them, PACT uses diffused light to illuminate biological tissues and can achieve hundred-micron-level imaging resolution at centimeter-level tissue depth[10–13]. Analogous to X-ray computed tomography, high-quality image formation in PACT relies on sophisticated image reconstruction procedures, without which the details in an image may not be able to be properly resolved. In 1981, Bowen studied thermoacoustic imaging of soft tissues using ionizing radiation (high-energy electrons, X-ray photons, neutrons, and other charged particles) and non-ionizing radiation (radio waves, microwaves, and ultrasonic waves)[14–16] and presented one-dimensional (1D) depth-resolved thermoacoustic signals from a human upper arm using radio-wave excitation. However, the results are only 1D depth-resolved and have no lateral resolution.
In the mid-1990s, Oraevsky[17–19] and Kruger[20,21] independently studied laser-induced photoacoustic imaging of biological tissues and presented experimental 1D signals and two-dimensional (2D) scan images (see Fig. 1). The results are depth-resolved but poorly laterally resolved due to the lack of image reconstruction procedures. To obtain cross-sectional or 3D images resolved both axially and laterally, Kruger et al. in 1995 developed an approximate filtered back projection (FBP) image reconstruction algorithm for laser-induced PACT analogous to the FBP algorithm used in X-ray computed tomography[22]. The results suggest that the approximate FBP algorithm helps resolve imaging targets both in the axial and lateral directions (see Fig. 1). High-performance image reconstruction is critical to image formation in modern PACT imaging systems. Ever since the publication of the approximate FBP algorithm by Kruger and colleagues, several novel image reconstruction approaches have been proposed to reconstruct high-quality PACT images. According to whether the emerging deep learning technique is involved, image reconstruction approaches in PACT can be divided into two main categories: conventional image reconstruction methods and deep learning (DL)-based image reconstruction methods. Conventional image reconstruction methods do not involve deep learning and mainly include five classes of algorithms: FBP, delay and sum (DAS), series expansion (SE), time reversal (TR), and iterative reconstruction (IR). FBP is a class of analytical image reconstruction algorithms originating from X-ray computed tomography that project filtered photoacoustic signals back to target regions according to the time of flight (TOF) of photoacoustic signals. Kruger adopted this idea and proposed an approximate version of FBP for image reconstruction in PACT in 1995, as mentioned previously[22]. However, this algorithm can only achieve approximate image reconstruction. 
To reduce reconstruction error, Finch et al. developed an exact FBP algorithm for image reconstruction in a spherical detection geometry in 2004[23], and Xu and Wang developed a universal FBP algorithm for exact image reconstruction in infinite planar, cylindrical, and spherical detection geometries in 2005[24]. FBP algorithms are simple yet efficient and have been widely used for image reconstruction in PACT. DAS-based reconstruction is essentially a simplified version of the FBP algorithm, which directly projects measured photoacoustic signals back to target regions without filtering. It is one of the most widely used beamforming methods in ultrasound imaging and can potentially be adapted for image reconstruction in PACT. In 1998, Hoelen et al. first used a simple weighted DAS algorithm for image reconstruction in PACT and achieved good reconstruction results[25,26]. Several modified algorithms were subsequently developed to improve the reconstruction performance of DAS[27–30]. DAS-based reconstruction algorithms are typically fast and easy to implement. However, they are intuitive and empirical image reconstruction techniques and can only achieve approximate image reconstruction. SE is a class of image reconstruction algorithms that use mathematical series to represent the image to be reconstructed. In 2001, Köstli et al. derived an exact SE-based inverse formula in the Fourier domain for image reconstruction in a planar detection geometry[31]. Kunyansky took a step further and derived SE reconstruction formulas for image reconstruction in circular, spherical, cylindrical, and cubic geometries[32,33]. Compared with other image reconstruction algorithms in PACT, SE algorithms may be computationally efficient for certain detection geometries (e.g., planar geometry) due to the application of the fast Fourier transform (FFT) algorithm during computation[31]. 
TR algorithms recover photoacoustic images by running a forward numerical acoustic propagation model backward and re-transmitting the photoacoustic signals measured by each detector in a temporally reversed order. In 2004, Xu and Wang presented an analytical TR model for image reconstruction in arbitrary closed detection geometries[34]. Burgholzer et al. developed a numerical TR algorithm for image reconstruction in arbitrary detection geometries based on the finite difference method[35]. To improve computational efficiency and image quality, Treeby et al. further developed a numerical k-space pseudospectral TR algorithm for image reconstruction in heterogeneous media[34]. The TR algorithms can couple the acoustic properties of media (e.g., speed of sound (SOS), density, dispersion, and absorption) and can be used for image reconstruction in arbitrary closed detection geometries. Therefore, they are regarded as the “least restrictive” image reconstruction algorithms in PACT. The last class of conventional image reconstruction methods is IR. It solves the image reconstruction problem by iteratively seeking the optimal solution that minimizes the error between measured projection data and the estimate from constructed mathematical models. In 2002, Paltauf et al. proposed the first IR method to improve image reconstruction quality under non-ideal imaging conditions and achieved 3D image reconstruction with reduced artifacts[36]. After that, much research was conducted by different groups to improve the performance of IR, such as improving the computational accuracy of the system matrix in the model[37,38], compensating for the response of detectors[39,40], coupling the acoustic property of the media[41], and accelerating the reconstruction process[42,43]. Compared with other algorithms, IR algorithms are typically slow but can yield high-quality imaging results when only limited projection data are available.
Therefore, they are more suitable for non-ideal imaging scenarios, such as limited-view imaging, sparse-view imaging, and imaging in acoustically heterogeneous media. The second major category of image reconstruction approaches in PACT is fast-developing DL-based methods, which are inspired by the successful use of DL in a range of fields. DL-based methods typically train neural networks and use them to automatically transform input data into output photoacoustic images. Compared with conventional image reconstruction approaches, DL methods are more efficient and can handle more complicated scenarios. In 2018, Antholzer et al. first proposed a deep convolutional neural network (CNN)-based method for PACT image reconstruction under sparse-view sampling and opened new opportunities for intelligent image reconstruction in PACT[44]. After that, a series of DL-based methods was developed for image reconstruction using both simulated and experimental datasets[45–47]. State-of-the-art DL methods are powerful enough to achieve preprocessing in the data domain, postprocessing in the image domain, hybrid processing in both the data and image domains, learned IR, and direct image reconstruction from the data domain to the image domain[48–50]. They have been used to address a range of image reconstruction problems in PACT, such as detector bandwidth expansion[51,52], resolution enhancement[53,54], artifact removal[55], ultralow-laser-energy imaging[56], reconstruction acceleration[57], and reconstruction enhancement in sparse-view and limited-view imaging[44,46,47,58,59]. Over the past three decades, great achievements have been made in PACT image reconstruction. To date, there have been a few excellent reviews devoted to summarizing the research progress in this field[60–63]. However, most review papers in the literature are from more than 10 years ago and cannot reflect the latest research advances[60–62]. 
Moreover, some review papers cover only particular subjects of the field, such as iterative reconstruction[63,64] or DL-based reconstruction[48–50,65–68]. There is a pressing need for new work that systematically reviews the recent achievements in this field. For the above reasons, we prepared this review, which differs from other works in the following aspects. First, the review was prepared from a historical perspective. We surveyed the development of each type of reconstruction algorithm and put them into the context of the entire history of PACT. Second, the review is comprehensive. It not only contains the five classes of conventional reconstruction algorithms (FBP, DAS, SE, TR, and IR) but also covers the state-of-the-art DL-based reconstruction algorithms. Third, the review contains comparative studies. A dedicated part (Sec. 5) was prepared to compare the performance of each type of image reconstruction algorithm in terms of image reconstruction quality, reconstruction speed, and memory footprint. Finally, the review is expected to be beginner-friendly. The entire review contains 51 figures, 14 tables, 126 mathematical equations, and comparative studies of each algorithm and is easy for novices to understand. The remainder of this review is organized as follows. Section 2 describes the forward problem of PACT, including photoacoustic signal generation, propagation, detection, and the mathematical foundation (i.e., the Radon transform) for image reconstruction. Section 3 reviews the basic principles of the five classes of conventional image reconstruction algorithms, namely, FBP, DAS, SE, TR, and IR. Section 4 surveys the DL-based image reconstruction methods from the perspective of preprocessing, postprocessing, hybrid processing, learned IR, and direct image reconstruction. Section 5 provides comparative studies of major image reconstruction algorithms in PACT. Section 6 highlights the major challenges and future directions for PACT image reconstruction.
Finally, Sec. 7 offers concluding remarks. Figure 2, Table 1, and Table 2 list major topics, abbreviations, and symbols used in this review, respectively.

Table 1. Abbreviations Used in this Review
Table 2. Symbols Used in this Review
It is worth pointing out that image reconstruction in PACT involves two aspects: acoustic inversion for determining the distribution of initial acoustic pressure and optical inversion for determining the map of optical absorption[61,62,69]. This review focuses on the acoustic inversion problem, which is important for image formation in PACT. Moreover, the mathematical foundation for image reconstruction in PACT is equivalent to that in thermoacoustic tomography[70–73]. Therefore, the image reconstruction algorithms discussed in this review can in principle be used for thermoacoustic tomography.

2. The Forward Problem

2.1. Photoacoustic Signal Generation

To achieve efficient photoacoustic signal excitation and generation, two conditions, i.e., thermal confinement and stress confinement, should be satisfied. The two conditions are related to two timescales, i.e., the thermal relaxation time $\tau_{th}$ and the stress relaxation time $\tau_s$. The thermal relaxation time characterizes the diffusion of heat from the region heated by laser pulses, while the stress relaxation time describes the propagation of acoustic waves from the heated region. $\tau_{th}$ is given by[74]

$$\tau_{th} = \frac{d_c^2}{\alpha_{th}}, \quad (1)$$

where $d_c$ is the characteristic dimension of the heated region and $\alpha_{th}$ is the thermal diffusivity ($\approx 1.4 \times 10^{-3}\ \mathrm{cm^2/s}$ for soft tissue). $\tau_s$ is given by[74]

$$\tau_s = \frac{d_c}{v_s}, \quad (2)$$

where $v_s$ is the speed of sound (SOS). If the laser pulse duration $\tau_p$ is less than the thermal relaxation time, i.e., $\tau_p < \tau_{th}$, the thermal diffusion is negligible during laser heating, in which case the thermal confinement condition can be considered to be satisfied. Similarly, if the laser pulse duration is less than the stress relaxation time, i.e., $\tau_p < \tau_s$, the pressure propagation can be ignored during pulse heating, in which case the stress confinement condition can be considered to be satisfied. In PACT, the thermal and stress confinement conditions should be satisfied simultaneously.
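These two confinement conditions are easy to check numerically for a given pulse duration and heated-region size. A minimal sketch (the soft-tissue values below are representative assumptions, matching the worked example that follows):

```python
# Check thermal and stress confinement for a pulsed laser (sketch).
# Assumed representative soft-tissue values.
d_c = 50e-6          # characteristic dimension of the heated region (m)
alpha_th = 1.4e-7    # thermal diffusivity (m^2/s), i.e., 1.4e-3 cm^2/s
v_s = 1500.0         # speed of sound (m/s)
tau_p = 10e-9        # laser pulse duration (s), a typical ns pulse

tau_th = d_c**2 / alpha_th   # thermal relaxation time
tau_s = d_c / v_s            # stress relaxation time

thermal_confined = tau_p < tau_th   # heat diffusion negligible during the pulse
stress_confined = tau_p < tau_s     # pressure propagation negligible during the pulse

print(f"tau_th = {tau_th*1e3:.1f} ms, tau_s = {tau_s*1e9:.1f} ns")
print(f"thermal confinement: {thermal_confined}, stress confinement: {stress_confined}")
```

For a 50 µm heated region, the stress condition (tens of nanoseconds) is far more restrictive than the thermal one (tens of milliseconds), which is why nanosecond lasers are standard in PACT.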
For instance, assuming that in soft tissues the SOS is 1500 m/s, the thermal diffusivity is $1.4 \times 10^{-3}\ \mathrm{cm^2/s}$, and the dimension of the heated region is 50 µm, the thermal relaxation time is calculated to be 18 ms, and the stress relaxation time is 33 ns. This indicates that the laser used for imaging should have a pulse duration of less than 33 ns to achieve efficient photoacoustic signal excitation and generation.

In PACT, the local fractional volume expansion $\mathrm{d}V/V$ induced by the photoacoustic effect can be described as[74]

$$\frac{\mathrm{d}V}{V} = -\kappa\, p(\mathbf{r}) + \beta\, T(\mathbf{r}), \quad (3)$$

where $T(\mathbf{r})$ and $p(\mathbf{r})$ represent the changes in temperature and pressure, respectively, $\beta$ is the thermal coefficient of volume expansion, and $\kappa$ is the isothermal compressibility, which can be written as

$$\kappa = \frac{C_p}{\rho v_s^2 C_v}, \quad (4)$$

where $\rho$ is the mass density and $C_p$ and $C_v$ are the specific heat capacities at constant pressure and volume, respectively.

Under the conditions of thermal and stress confinement, the fractional volume expansion in Eq. (3) is negligible (i.e., $\mathrm{d}V/V \approx 0$). The initial photoacoustic pressure $p_0$ can thus be written as

$$p_0 = \frac{\beta T}{\kappa}, \quad (5)$$

where the temperature rise $T$ can be calculated by[75]

$$T = \frac{\eta_{th} A_e}{\rho C_v}. \quad (6)$$

Here $\eta_{th}$ is the percentage of the laser energy converted into heat, and $A_e$ is the heat energy deposited per unit volume, which is defined as the product of the absorption coefficient $\mu_a$ and the optical fluence $F$ (i.e., $A_e = \mu_a F$). The initial acoustic pressure can thus be rewritten as

$$p_0 = \Gamma \eta_{th} \mu_a F, \quad (7)$$

where $\Gamma = \beta/(\kappa \rho C_v) = \beta v_s^2 / C_p$ is the Grüneisen parameter, a dimensionless constant representing the efficiency of the conversion of heat to pressure. For water at body temperature (37°C), $\Gamma \approx 0.2$, which indicates that 20% of the thermal energy deposited by a laser in water couples into acoustic energy.

The generation of photoacoustic signals can be illustrated using a numerical example. Assume that the laser employed for imaging has a fluence of $F = 20\ \mathrm{mJ/cm^2}$, which is within the American National Standards Institute (ANSI) safety limit[76], and the biological tissue being imaged has the following physical parameters: $\mu_a = 0.05\ \mathrm{cm^{-1}}$, $\rho = 1\ \mathrm{g/cm^3}$, $C_v = 4\ \mathrm{J/(g \cdot K)}$, and $\Gamma = 0.2$.
The factor $\eta_{th}$ can be set to 1 because the fluorescence and nonthermal absorption of biological tissues are typically weak. In this way, the temperature rise is calculated to be 0.25 mK and the initial acoustic pressure is 200 Pa, which indicates that a 1 mK temperature rise can produce an acoustic pressure rise of 800 Pa.

2.2. Photoacoustic Signal Propagation

2.2.1. The photoacoustic wave equations

The generation and propagation of photoacoustic waves can be mathematically modeled by the photoacoustic wave equation[77]. Generally, three first-order equations, including the linearized equation of motion, the linearized equation of continuity, and the thermal elastic equation, can be used to model the properties of acoustic wave generation and propagation[78]. If the medium is lossless and the thermal conductivity can be ignored, the three equations can be written as[79]

$$\rho \frac{\partial \mathbf{u}(\mathbf{r},t)}{\partial t} = -\nabla p(\mathbf{r},t), \quad (8)$$

$$\frac{\partial p(\mathbf{r},t)}{\partial t} = -\rho v_s^2 \nabla \cdot \mathbf{u}(\mathbf{r},t) + \rho v_s^2 \beta \frac{\partial T'(\mathbf{r},t)}{\partial t}, \quad (9)$$

$$\rho C_p \frac{\partial T'(\mathbf{r},t)}{\partial t} = H(\mathbf{r},t), \quad (10)$$

where $\mathbf{u}$ is the particle velocity, $v_s$ is the SOS, $\rho$ is the mass density, $p(\mathbf{r},t)$ is the acoustic pressure at position $\mathbf{r}$ and time $t$, $T$ and $T'$ are the temperature and temperature rise, respectively, and $H$ is the heating function that represents the thermal energy deposited per unit volume and per unit time. The second-order photoacoustic wave equation can be obtained by eliminating the variables $\mathbf{u}$ and $T'$ from the three equations as[78,79]

$$\left( \nabla^2 - \frac{1}{v_s^2(\mathbf{r})} \frac{\partial^2}{\partial t^2} \right) p(\mathbf{r},t) = -\frac{\beta}{C_p} \frac{\partial H(\mathbf{r},t)}{\partial t}. \quad (11)$$

Equation (11) describes the relationship between the acoustic pressure $p(\mathbf{r},t)$ and the photoacoustic source associated with the heating function $H(\mathbf{r},t)$. The source term [right side of Eq. (11)] is proportional to the first derivative of the heating function $H$, which indicates that the heating function should be time-varying to achieve efficient photoacoustic signal generation. When the medium is acoustically homogeneous, the photoacoustic wave equation in Eq. (11) reduces to[78,79]

$$\nabla^2 p(\mathbf{r},t) - \frac{1}{v_s^2} \frac{\partial^2 p(\mathbf{r},t)}{\partial t^2} = -\frac{\beta}{C_p} \frac{\partial H(\mathbf{r},t)}{\partial t}. \quad (12)$$

2.2.2. Forward Green's function solution

The forward solution for the wave equation in a homogeneous medium in Eq.
(12) can be obtained using Green's function[75,80] as

$$p(\mathbf{r},t) = \frac{\beta}{C_p} \int \mathrm{d}\mathbf{r}' \int \mathrm{d}t'\, G(\mathbf{r},t;\mathbf{r}',t')\, \frac{\partial H(\mathbf{r}',t')}{\partial t'}, \quad (13)$$

where $p(\mathbf{r},t)$ is the acoustic pressure detected at position $\mathbf{r}$ and time $t$ and $\mathbf{r}'$ and $t'$ represent the location and time of the photoacoustic source, respectively. The Green's function in an infinite 3D space has the following form:

$$G(\mathbf{r},t;\mathbf{r}',t') = \frac{\delta(t - t' - |\mathbf{r}-\mathbf{r}'|/v_s)}{4\pi |\mathbf{r}-\mathbf{r}'|}, \quad (14)$$

where $\delta(\cdot)$ is the Dirac delta function.

Under the condition of stress confinement, the heating function can be decomposed as $H(\mathbf{r}',t') = A_e(\mathbf{r}')\,\delta(t')$, where $A_e(\mathbf{r}')$ is the heat deposited per unit volume. In this way, Eq. (13) becomes

$$p(\mathbf{r},t) = \frac{\beta}{C_p} \int \mathrm{d}\mathbf{r}' \int \mathrm{d}t'\, G(\mathbf{r},t;\mathbf{r}',t')\, A_e(\mathbf{r}')\, \delta'(t'), \quad (15)$$

where $\delta'(\cdot)$ is the derivative of the Dirac delta function. Using the property $\int f(t')\,\delta'(t')\,\mathrm{d}t' = -f'(0)$, Eq. (15) becomes

$$p(\mathbf{r},t) = \frac{\beta}{C_p} \frac{\partial}{\partial t} \int \mathrm{d}\mathbf{r}'\, G(\mathbf{r},t;\mathbf{r}',0)\, A_e(\mathbf{r}'). \quad (16)$$

Applying the relation in Eq. (7) ($\eta_{th}$ is set to 1), and substituting the Green's function [Eq. (14)] into Eq. (16), we obtain

$$p(\mathbf{r},t) = \frac{\beta}{4\pi C_p} \frac{\partial}{\partial t} \int \mathrm{d}\mathbf{r}'\, \frac{A_e(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, \delta\!\left(t - \frac{|\mathbf{r}-\mathbf{r}'|}{v_s}\right), \quad (17)$$

i.e., the general forward solution of the photoacoustic wave equation,

$$p(\mathbf{r},t) = \frac{1}{4\pi v_s^2} \frac{\partial}{\partial t} \int \mathrm{d}\mathbf{r}'\, \frac{p_0(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, \delta\!\left(t - \frac{|\mathbf{r}-\mathbf{r}'|}{v_s}\right). \quad (18)$$

The measured pressure is associated with the velocity potential $\phi(\mathbf{r},t)$ via[81]

$$p(\mathbf{r},t) = -\rho \frac{\partial \phi(\mathbf{r},t)}{\partial t}. \quad (19)$$

The velocity potential can be written as

$$\phi(\mathbf{r},t) = -\frac{1}{4\pi \rho v_s^2} \int \mathrm{d}\mathbf{r}'\, \frac{p_0(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, \delta\!\left(t - \frac{|\mathbf{r}-\mathbf{r}'|}{v_s}\right). \quad (20)$$

Equations (18) and (20) indicate that the measured pressure and velocity potential are linearly related to the initial acoustic pressure $p_0$ and are inversely proportional to the distance $|\mathbf{r}-\mathbf{r}'|$ between the detector and the source.

2.2.3. Photoacoustic signal of a spherical absorber

The concept of acoustic pressure and velocity potential can be illustrated through a simple example[81]. Suppose that the photoacoustic source is a uniform sphere and that the detector is a point detector. We examine the velocity potential and acoustic pressure received by the point detector in two situations. In the first situation, the point detector is located at the center of the photoacoustic source, as shown in Fig. 3(a). The velocity potential can be obtained as [Eq. (20)]

$$\phi(t) = \begin{cases} -\dfrac{p_0}{\rho}\, t, & 0 \le t \le R_s/v_s, \\[4pt] 0, & \text{otherwise}, \end{cases} \quad (21)$$

where $R_s$ is the radius of the spherical source. In the second situation, the point detector is located outside the photoacoustic source, as shown in Fig. 3(d). The velocity potential can be calculated as

$$\phi(t) = \begin{cases} -\dfrac{p_0}{4\rho v_s r_d} \left[ R_s^2 - (r_d - v_s t)^2 \right], & |r_d - R_s| \le v_s t \le r_d + R_s, \\[4pt] 0, & \text{otherwise}, \end{cases} \quad (22)$$

where $r_d$ is the distance between the point detector and the center of the photoacoustic source.
The acoustic pressure in these two cases can be accordingly calculated using the relationship in Eq. (19).

To quantify the velocity potential and acoustic pressure measured by the point detector, we make the following assumptions: the radius of the spherical source is $R_s = 4\ \mathrm{mm}$, the mass density of the medium is $\rho = 1000\ \mathrm{kg/m^3}$, the SOS is $v_s = 1480\ \mathrm{m/s}$, the Grüneisen parameter at room temperature is $\Gamma = 0.12$, the optical absorption coefficient is $\mu_a$, and the optical fluence is $F$. Therefore, the heat energy deposited at time zero is $A_e = \mu_a F$, and the initial acoustic pressure [Eq. (7)] is $p_0 = \Gamma \mu_a F$. Figures 3(b) and 3(c) show the velocity potential and acoustic pressure measured by the point detector in the first case (the point detector is located at the center of the source). The results show that the negative velocity potential measured by the point detector first linearly increases as the shell of the integration [Eq. (20)] increases with time and then suddenly drops to zero due to no energy deposition outside the source. Consequently, the acoustic pressure (i.e., the first derivative of the velocity potential) is a non-zero constant at the beginning and becomes zero with a negative impulse caused by the sudden drop of velocity potential to zero. Figures 3(e) and 3(f) show the velocity potential and acoustic pressure measured by the point detector in the second case (the point detector is located 10 mm away from the center of the source). The results show that the negative velocity potential measured by the point detector first increases as the incremental hemispherical shell of the integration [Eq. (20)] advances to the center of the source, and then decreases with time as the shell advances to the rear of the source. Consequently, the acoustic pressure is initially positive, passes through zero, then becomes negative, and has an N-shaped waveform. Based on the example in Fig.
3, we can also deduce that the size of a photoacoustic source impacts the characteristics of the photoacoustic signals received by a detector in the time domain. To illustrate this, Fig. 4(a) shows three time-domain photoacoustic signals measured by a point detector 3 mm away from the centers of three uniform spherical sources with diameters of 1 mm, 200 µm, and 50 µm. The time-domain photoacoustic signals all have an N-shaped waveform as expected. However, the amplitude and duration of the photoacoustic signals are distinct. The photoacoustic signals produced by sources with a smaller size have smaller amplitudes and shorter durations, which correspond to higher frequencies and broader bandwidths in the frequency domain, as shown in Fig. 4(b). The center frequency of the photoacoustic signals generated by a spherical source can be estimated by $f_c \approx 0.33\, v_s / R_s$, where $R_s$ is the radius of the spherical source[82]. This indicates that detectors with a high center frequency should be employed to image small photoacoustic sources.

2.2.4. Photoacoustic field visualization: the k-Wave toolbox

Numerical simulation of the forward propagation of photoacoustic fields helps visualize the sound fields in complex media and solve the acoustic inversion problem in PACT. Numerical simulation can be implemented using the powerful open-source k-Wave toolbox[83], which was developed by the Photoacoustic Imaging Group at University College London (UCL). In the k-Wave toolbox, the propagation of photoacoustic fields is modeled by three coupled first-order partial differential equations[84], which are fundamentally the same as the three equations in Eqs. (8)–(10). The propagation model considers the acoustic properties of media, such as acoustic speed, dispersion, and absorption, and can characterize the acoustic propagation problem in heterogeneous media.
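The size-bandwidth relationship illustrated in Fig. 4 can also be checked without a full wave simulation: the far-field signal of a uniform sphere is an N-wave of duration $2R_s/v_s$, and the peak of its magnitude spectrum scales inversely with the source radius. A minimal NumPy sketch (the sampling rate and source radius below are arbitrary choices for illustration):

```python
import numpy as np

v_s = 1500.0          # speed of sound (m/s)
R_s = 0.5e-3          # source radius (m); N-wave duration is 2*R_s/v_s
fs = 100e6            # sampling rate (Hz)

tau = 2 * R_s / v_s                       # N-wave duration (s)
n = int(round(tau * fs))                  # samples inside the N-wave
t = (np.arange(n) - (n - 1) / 2) / fs     # time axis centered on the wave
p = -t / (tau / 2)                        # N-wave: +1 at the front, -1 at the back

# Magnitude spectrum via zero-padded FFT; locate the spectral peak
nfft = 1 << 16
spec = np.abs(np.fft.rfft(p, nfft))
freqs = np.fft.rfftfreq(nfft, 1 / fs)
f_peak = freqs[np.argmax(spec)]

print(f"N-wave duration: {tau*1e9:.0f} ns, spectral peak: {f_peak/1e6:.2f} MHz")
# The peak should sit near 0.33 * v_s / R_s for these values
```

Halving `R_s` halves the N-wave duration and doubles the spectral peak, reproducing the trend shown in Fig. 4(b).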
The k-Wave toolbox solves the propagation model via a k-space pseudospectral method[83,85,86], which can perform fast and accurate computations with reduced memory. Figure 5 is an example showing the propagation of the photoacoustic fields of a 2D disk and a 3D sphere using the k-Wave toolbox. The results visualize the instant characteristics of the photoacoustic fields during propagation.

2.3. Photoacoustic Signal Detection

The photoacoustic signals propagating outward from a photoacoustic source need to be captured by ultrasound detectors for image formation. Ideally, for photoacoustic signal detection, detector arrays that can perfectly record original signals in time and space should be used. However, perfect detector arrays never exist in reality. The characteristics of a practical detector array, such as detector aperture, detector bandwidth, detector number, and view angle, impact detected photoacoustic signals and final images. Aperture and bandwidth are two fundamental characteristics of an ultrasound detector and have important impacts on measured photoacoustic signals. An ideal ultrasound detector should have a point-like aperture, in which way it has an omnidirectional response. Moreover, an ideal ultrasound detector should also have an infinite bandwidth so that it can respond to all frequency contents of a signal. Neither of the two conditions, however, is attainable in practice. A practical ultrasound detector always has a finite aperture size and a finite bandwidth. The finite aperture averages photoacoustic signals in space, resulting in a smoothed spatial impulse response (SIR), while the finite bandwidth affects the conversion of photoacoustic signals to electrical signals, leading to a degraded electrical impulse response (EIR).
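The effect of a finite detector bandwidth can be illustrated by convolving a broadband signal with a band-limited EIR. In the sketch below, the EIR is modeled as a Gaussian-modulated cosine centered at 5 MHz; this model and all numerical values are assumptions for illustration, not a measured detector response:

```python
import numpy as np

fs = 100e6                          # sampling rate (Hz)
n = 4096
t = np.arange(n) / fs

# Broadband "ideal" signal: an N-wave of 1 µs duration (assumed shape)
tau = 1e-6
k = int(tau * fs)
p_ideal = np.zeros(n)
p_ideal[:k] = 1 - 2 * np.arange(k) / k          # ramp from +1 to -1

# Band-limited EIR: Gaussian-modulated cosine centered at 5 MHz (assumed model)
f0, sigma = 5e6, 0.2e-6
te = np.arange(-1e-6, 1e-6, 1 / fs)
h_eir = np.cos(2 * np.pi * f0 * te) * np.exp(-te**2 / (2 * sigma**2))

# Measured signal: temporal convolution with the EIR (SIR omitted for simplicity)
p_meas = np.convolve(p_ideal, h_eir, mode="same")

def peak_freq(x):
    """Frequency at the maximum of the magnitude spectrum."""
    spec = np.abs(np.fft.rfft(x, 1 << 16))
    return np.fft.rfftfreq(1 << 16, 1 / fs)[np.argmax(spec)]

print(f"ideal peak: {peak_freq(p_ideal)/1e6:.2f} MHz, "
      f"measured peak: {peak_freq(p_meas)/1e6:.2f} MHz")
```

The measured spectrum is pulled toward the detector's passband: the low-frequency content of the original N-wave is suppressed, which is one reason reconstructed images from band-limited detectors lose low-frequency (large-scale) features.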
Taking the SIR and EIR of an ultrasound detector into account, the photoacoustic signal measured by a practical ultrasound detector can be mathematically formulated as[4]

$$p_d(\mathbf{r},t) = p(\mathbf{r},t) *_t h_{SIR}(t) *_t h_{EIR}(t), \quad (23)$$

where $p(\mathbf{r},t)$ is the ideal photoacoustic signal, $*_t$ represents temporal convolution, and $h_{SIR}(t)$ and $h_{EIR}(t)$ represent the SIR and EIR of a detector, respectively. The non-ideal SIR and EIR of an ultrasound detector distort photoacoustic signals measured in the time domain, as illustrated in Fig. 6, which eventually degrades the image quality of photoacoustic images.

In addition to the aperture and bandwidth of a detector, the detector number and view angle of a detector array also play important roles in photoacoustic signal detection. Ideally, the number of detectors in a detector array used for photoacoustic signal measurement should meet the spatial Nyquist sampling theorem for complete signal acquisition[4]. The view angle of a detector array should be $4\pi$ steradians (full 3D view) to record complete photoacoustic signals in 3D space. However, these two conditions are actually unattainable in practice due to the high fabrication cost of a large number of detectors and the configuration constraints of an imaging system (e.g., a separate space is required for laser illumination). Violating these conditions leads to the problems of image reconstruction from sparse-view and limited-view projections, which are mathematically challenging and will be discussed later in this review. Figure 7 showcases commonly used 2D and 3D detection geometries in PACT[87,88], which have limited view angles except for the closed spherical array.

2.4. Radon Transform

The forward signal propagation and detection processes in PACT can be described by the well-known Radon transform[89], which is the mathematical foundation of computed tomography (CT). Before discussing the Radon transform in PACT, we first introduce the linear Radon transform in X-ray CT, which is defined as the integral of a function along a straight line.
Specifically, the linear Radon transform in X-ray CT is written as[90]

$$g(\rho, \theta) = \iint f(x,y)\, \delta(x\cos\theta + y\sin\theta - \rho)\, \mathrm{d}x\, \mathrm{d}y, \quad (24)$$

where $f(x,y)$ is the original function, $g(\rho,\theta)$ is the sinogram or projection data of the function along the straight line defined in the Dirac delta function $\delta(\cdot)$, and $\rho$ and $\theta$ are the two parameters of the normal equation of the straight line in the delta function. The inverse Radon transform reverses the forward process and recovers the original function $f(x,y)$ from measured sinograms $g(\rho,\theta)$. A schematic representation of the linear Radon transform is shown in Fig. 8(a).

In PACT and thermoacoustic tomography, the Radon transform integrates a function along a circle (2D) or a sphere (3D) instead of a straight line. The 2D circular Radon transform can be written as[91]

$$g(\mathbf{r}_0, \bar{r}) = \int f(\mathbf{r}')\, \delta(\bar{r} - |\mathbf{r}' - \mathbf{r}_0|)\, \mathrm{d}\mathbf{r}', \quad (25)$$

where $\mathbf{r}_0 = (r_0, \theta_0)$ represents the position of the detector in a polar coordinate system and $\bar{r}$ is the radius of the integral circle. Similarly, the 3D spherical Radon transform can be mathematically written as[92]

$$g(\mathbf{r}_0, \bar{r}) = \int f(\mathbf{r}')\, \delta(\bar{r} - |\mathbf{r}' - \mathbf{r}_0|)\, \mathrm{d}\mathbf{r}', \quad (26)$$

where $\mathbf{r}_0 = (r_0, \theta_0, \varphi_0)$ represents the position of the detector in a spherical coordinate system and $\bar{r}$ is the radius of the spherical shell to be integrated. Schematic representations of the circular and spherical Radon transforms are shown in Figs. 8(b) and 8(c).

Using the definition in Eq. (26), the spherical Radon transform in PACT can be obtained from Eq. (18) and has the following form:

$$g(\mathbf{r}_0, \bar{r}) = \int p_0(\mathbf{r}')\, \delta(\bar{r} - |\mathbf{r}' - \mathbf{r}_0|)\, \mathrm{d}\mathbf{r}'. \quad (27)$$

Equation (27) is in the form of the spherical Radon transform [Eq. (26)], representing the integral over a spherical shell with a radius of $\bar{r} = v_s t$ centered at the detector position $\mathbf{r}_0$. The projection data are related to the measured photoacoustic signal via

$$g(\mathbf{r}_0, v_s t) = 4\pi v_s^2 t \int_0^t p(\mathbf{r}_0, t')\, \mathrm{d}t'. \quad (28)$$

The projection data $g$ have a similar definition as the velocity potential in Eq. (19) and can be calculated from the measured photoacoustic signal $p(\mathbf{r}_0, t)$. The task of image reconstruction in PACT is to find the inverse spherical Radon transform and recover the original function $p_0(\mathbf{r})$ or $A_e(\mathbf{r})$ from the measured photoacoustic signal $p(\mathbf{r}_0, t)$, which is the focus of the following sections.
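As a concrete illustration of the forward model, the circular Radon transform of a discretized 2D image can be approximated by binning pixel values according to their distance from each detector. A toy sketch (nearest-bin quadrature; the geometry values are arbitrary assumptions):

```python
import numpy as np

def circular_radon(img, dx, det_pos, n_r, dr):
    """Discrete circular Radon transform: for each detector, accumulate image
    pixel values into radius bins by their distance to the detector.
    img: 2D array; dx: pixel size (m); det_pos: (N, 2) detector coordinates."""
    ny, nx = img.shape
    ys, xs = np.mgrid[0:ny, 0:nx]
    px, py = xs * dx, ys * dx                     # pixel center coordinates
    sino = np.zeros((len(det_pos), n_r))
    for k, (x0, y0) in enumerate(det_pos):
        r = np.hypot(px - x0, py - y0)            # distance of each pixel to detector k
        idx = np.clip(np.round(r / dr).astype(int), 0, n_r - 1)
        np.add.at(sino[k], idx.ravel(), img.ravel() * dx * dx)  # area-weighted sum
    return sino

# Example: a single point source; its projection lands in the radius bin
# matching the true detector-to-source distance.
img = np.zeros((101, 101))
img[50, 70] = 1.0                                 # source at x = 7 mm, y = 5 mm
dx = 1e-4                                         # 0.1 mm pixels
dets = np.array([[0.0, 0.005], [0.01, 0.0]])      # two detector positions (m)
sino = circular_radon(img, dx, dets, n_r=200, dr=1e-4)
r_true = np.hypot(70 * dx - 0.0, 50 * dx - 0.005)
print(np.argmax(sino[0]), round(r_true / 1e-4))
```

Replacing the 2D image and circles with a 3D volume and spherical shells gives the spherical Radon transform of Eq. (26); inverting this mapping is exactly the reconstruction task discussed in the following sections.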
3. Conventional Approaches

3.1. DAS-Type Algorithms

Delay and sum (DAS)-based beamforming is a commonly used image reconstruction technique in ultrasound imaging[93]. Due to similar image formation processes, DAS-based beamforming is also widely used in PACT, where it reconstructs an image by summing the delayed raw photoacoustic signals of each detector. The time delay between each channel is calculated according to the acoustic TOF of the photoacoustic signals from the point of interest to each ultrasound detector. To yield high-quality images, preprocessing of raw photoacoustic signals and/or postprocessing of reconstructed images are usually needed. Here we review the five most commonly used DAS-type image reconstruction algorithms: DAS, delay multiply and sum (DMAS), short-lag spatial coherence (SLSC), minimum variance (MV), and coherence factor (CF), which adopt different preprocessing and/or postprocessing strategies. The basic workflows of these methods are first illustrated in Fig. 9, and detailed descriptions are presented below.

3.1.1. Delay and sum

DAS is the most basic beamforming algorithm in ultrasound imaging due to its simplicity, speed, and robustness[94–96]. Hoelen et al. introduced a DAS method for 3D PACT imaging of blood vessels based on a planar detection geometry in 1998[26,97]. Feng et al. applied a DAS method in linear-scanning thermoacoustic tomography in 2001[98]. The DAS method can be mathematically formulated as

$$b(\mathbf{r}) = \sum_{i=1}^{N} p_i\big(t_i(\mathbf{r})\big), \quad (29)$$

where $b(\mathbf{r})$ is the reconstructed image, $p_i(t)$ is the photoacoustic signal measured by the $i$th detector at time $t$, $N$ is the total number of detectors, and $\mathbf{r}$ represents the position of the reconstruction point in a coordinate system. The variable $t_i(\mathbf{r})$ denotes the TOF of the photoacoustic signals from position $\mathbf{r}$ to the $i$th detector. The workflow and principle of the DAS algorithm are illustrated in Figs.
9(a) and 10, respectively.

The performance of the DAS algorithm was evaluated using a numerical phantom with multiple point sources equally distributed along the longitudinal centerline [Fig. 11(a)]. In the evaluation, a linear detector array with a width of 200 mm and 128 elements was placed at the top of the phantom to receive the photoacoustic signals generated from the sources. Figure 11(b) shows the image reconstructed by the DAS method. The results show that DAS can successfully reconstruct the structural information of photoacoustic sources but fails to reproduce the correct amplitude. The reconstructed image has both positive and negative components and is thus bipolar. A unipolar image can be obtained by taking the envelope of the bipolar image, as shown in Fig. 11(c). Log transformation can further be used to improve the contrast of the reconstructed image, as shown in Fig. 11(d). Since DAS treats the delayed photoacoustic signals from different detectors equally and is non-adaptive, significant side lobes are present in the reconstructed image in Fig. 11(d), which degrade the spatial resolution of the image.

3.1.2. Delay multiply and sum

Delay multiply and sum (DMAS) is a variant of the DAS algorithm that can provide improved contrast, signal-to-noise ratio (SNR), and resolution. Similar to DAS, DMAS also sums photoacoustic signals measured by different detectors according to calculated time delays. However, the measured photoacoustic signals in DMAS need to be combinatorically coupled and multiplied before summation. Therefore, DMAS is essentially a nonlinear algorithm. The DMAS algorithm was first proposed by Lim et al. for confocal microwave detection of breast cancer in 2008 and showed improved identification of embedded malignant tumors in a variety of numerical breast phantoms compared with DAS[99]. In 2015, Matrone et al.
modified this method for B-mode ultrasound imaging and demonstrated that DMAS can provide higher contrast resolution than DAS[100]. Inspired by the success of DMAS in confocal microwave imaging and ultrasound imaging, several research groups worldwide have conducted in-depth studies of DMAS-based photoacoustic image reconstruction. For example, Alshaya et al. introduced DMAS to the field of PACT in 2016 and proposed a filter DMAS to improve the SNR of reconstructed images[101]. To further improve the performance of DMAS, in 2018, Mozaffarzadeh et al. developed a double-stage DMAS algorithm that can produce images with improved quality compared with DAS and DMAS but at the expense of greater computational cost[27]. In the same year, Kirchner et al. proposed a signed DMAS algorithm that can provide linear image reconstruction with increased image quality[102]. In 2022, Mulani et al. presented a high-order DMAS method, in which multi-term (e.g., three, four, or five terms) multiplication is used to replace two-term multiplication in the original DMAS algorithm[103]. Generally, the original DMAS algorithm can be mathematically written as[100]
$$p_{\mathrm{DMAS}}(\vec{r}) = \sum_{i=1}^{M-1}\sum_{j=i+1}^{M} \hat{s}_{ij}(\vec{r}),$$
where $p_{\mathrm{DMAS}}(\vec{r})$ is the reconstructed image, $M$ is the total number of detectors, and $\hat{s}_{ij}(\vec{r})$ is given by
$$\hat{s}_{ij}(\vec{r}) = \operatorname{sign}\big[s_i\big(\tau_i(\vec{r})\big)\,s_j\big(\tau_j(\vec{r})\big)\big]\sqrt{\big|s_i\big(\tau_i(\vec{r})\big)\,s_j\big(\tau_j(\vec{r})\big)\big|},$$
where $\operatorname{sign}[\cdot]$ denotes the signum function. The workflow of the DMAS algorithm is illustrated in Fig. 9(b). If the center frequency of the photoacoustic signals $s_i$ and $s_j$ is $f_0$, the multiplication operation in Eq. (31) shifts the center frequency of the resultant signal to 0 and $2f_0$. As a result, the output signal needs to be filtered by a bandpass filter centered at $2f_0$ to extract the second harmonic component while removing the direct current (DC) component. Therefore, DMAS is also called filtered-DMAS. Compared with DAS, DMAS uses both the amplitude and spatial correlation of delayed photoacoustic signals from different detectors for image reconstruction.
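To make the preceding formulas concrete, the per-pixel arithmetic of DAS and DMAS can be condensed into a short numerical sketch. The following Python example is illustrative only: the linear-array geometry, 40 MHz sampling rate, and idealized Gaussian pulse signals are assumptions for demonstration, not settings from the cited studies.

```python
import numpy as np

C = 1500.0   # assumed speed of sound (m/s)
FS = 40e6    # assumed sampling rate (Hz)

def tof_indices(point, det_pos):
    """TOF sample indices from an image point to each detector."""
    dist = np.linalg.norm(det_pos - point, axis=1)
    return np.round(dist / C * FS).astype(int)

def das_pixel(signals, point, det_pos):
    """DAS: sum each channel's sample taken at its TOF delay."""
    idx = tof_indices(point, det_pos)
    return signals[np.arange(len(det_pos)), idx].sum()

def dmas_pixel(signals, point, det_pos):
    """DMAS: signed square-root pairwise products (i < j) before summation."""
    idx = tof_indices(point, det_pos)
    d = signals[np.arange(len(det_pos)), idx]
    pij = np.outer(d, d)[np.triu_indices(len(d), k=1)]
    return np.sum(np.sign(pij) * np.sqrt(np.abs(pij)))

# Toy demo: 64-element linear array at y = 0, point source at (2 mm, 8 mm).
det_pos = np.stack([np.linspace(-10e-3, 10e-3, 64), np.zeros(64)], axis=1)
src = np.array([2e-3, 8e-3])
t = np.arange(2048) / FS
signals = np.stack([
    np.exp(-(((t - np.linalg.norm(d - src) / C)) * 4e6) ** 2)  # idealized pulse
    for d in det_pos
])

# Scan a line of pixels through the source depth; the peak marks the source.
xs = np.linspace(-5e-3, 5e-3, 101)
vals = [das_pixel(signals, np.array([x, 8e-3]), det_pos) for x in xs]
print(xs[int(np.argmax(vals))])  # close to the true source x = 2e-3 m
```

In a full filtered-DMAS pipeline, the pairwise products would additionally be bandpass filtered around $2f_0$; this step is omitted here for brevity.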
By exploiting this correlation, DMAS can partially overcome the drawbacks of DAS and reconstruct images with improved spatial resolution and reduced side lobes[100]. A downside of DMAS is that the combinational multiplication in the algorithm increases the computational complexity. To solve this problem, in 2019, Jeon et al. proposed a DMAS algorithm with a modified CF, which avoids combinatorial multiplication in DMAS and significantly reduces the total number of multiplication operations[104]. In 2022, Paul et al. proposed a simplified-delay-multiply-and-standard-deviation (SDMASD) method[105], which measures the standard deviation of the delayed and multiplied signals instead of using the delayed signals directly. Compared with the native DAS and DMAS algorithms, the SDMASD algorithm can achieve real-time imaging using graphics processing units (GPUs) and produce improved image quality.

3.1.3. Short-lag spatial coherence

Short-lag spatial coherence (SLSC) is a beamforming technique that was initially developed by Lediju et al. for ultrasound imaging in 2011[106]. SLSC reconstructs an ultrasound image by calculating the spatial coherence of measured signals, and the reconstructed image is thus independent of the amplitude of the signals. As a result, SLSC can eliminate adverse effects caused by the different strengths of scatterers in ultrasound imaging. It has been demonstrated that compared with the conventional DAS beamforming algorithm, SLSC can achieve image reconstruction with considerable improvements in terms of resolution, contrast-to-noise ratio (CNR), and SNR. In PACT, Muyinatu Bell et al. applied the SLSC algorithm to achieve imaging of prostate brachytherapy seeds[28,107] and demonstrated that the SLSC algorithm can enhance photoacoustic image quality compared with DAS, especially when the intensity of laser illumination is insufficient.
Graham and Muyinatu Bell later developed a spatial coherence theory based on the van Cittert–Zernike theorem, a classical theorem in statistical optics, to explore the strengths and limitations of the SLSC algorithm[108,109]. In 2021, Mora et al. combined SLSC with DMAS and proposed a generalized spatial coherence algorithm for PACT, which can preserve relative signal magnitudes and improve the CNR and SNR of a reconstructed image[110]. Similar to DMAS, the photoacoustic signals in SLSC are also first delayed according to the TOF from the point of interest to each ultrasound detector and then combinatorially coupled and multiplied before summation. The SLSC algorithm can be formulated as[106]
$$\hat{R}(m) = \frac{1}{M-m}\sum_{i=1}^{M-m}\frac{\sum_{n=n_1}^{n_2}s_i(n)\,s_{i+m}(n)}{\sqrt{\sum_{n=n_1}^{n_2}s_i^{2}(n)\sum_{n=n_1}^{n_2}s_{i+m}^{2}(n)}},$$
where $\hat{R}(m)$ is the normalized spatial coherence of the signals measured by the detectors over the time-sample window $[n_1, n_2]$, $M$ is the total number of detectors, and $m$ represents the number of intervals between two detectors used to calculate the spatial coherence and is called the lag. The final SLSC image is obtained by summing all $\hat{R}(m)$ terms, i.e.,
$$p_{\mathrm{SLSC}} = \sum_{m=1}^{Q}\hat{R}(m),$$
where $Q$ is the total number of lags. A large $Q$ helps improve the lateral resolution but decreases the CNR and SNR. Therefore, the value of $Q$ needs to be elaborately selected to achieve the best tradeoff between key image quality metrics, such as lateral spatial resolution, CNR, and SNR. The workflow of the SLSC algorithm is illustrated in Fig. 9(c).

3.1.4. Minimum variance

Minimum variance (MV) is a weighted DAS method that was originally devised by Capon in 1969 for narrowband signal processing applications such as sonar, radar, and wireless communication[111]. In 2002, Mann and Walker used a constrained adaptive beamformer (the MV method) for medical ultrasound imaging[112] and demonstrated its effectiveness in spatial resolution and contrast enhancement. Due to the remarkable improvement in spatial resolution, a large number of MV-based methods have been developed for ultrasound imaging[113–116]. MV-based image reconstruction has also been studied in PACT in recent years[30,117–120].
The MV reconstruction formula can be written as[114,116]
$$p_{\mathrm{MV}}(\vec{r}) = \sum_{i=1}^{M} w_i\,s_i\big(\tau_i(\vec{r})\big) = \mathbf{w}^{H}\mathbf{s}(\vec{r}),$$
where $M$ is the total number of detectors, $w_i$ is the optimal weight for the photoacoustic signal measured by the $i$th detector, $\mathbf{s}(\vec{r})$ is a vector containing the delayed photoacoustic signals from all detectors, $\mathbf{w}$ is a vector containing the optimal weights for the delayed photoacoustic signals in $\mathbf{s}(\vec{r})$, and the superscript $H$ denotes the conjugate transpose. The workflow of the MV algorithm is illustrated in Fig. 9(d). The weight vector $\mathbf{w}$ can be adaptively calculated by minimizing the variance of the output while maintaining unit signal gain at the focal imaging point. The optimization problem can be mathematically written as[114]
$$\min_{\mathbf{w}}\ \mathbf{w}^{H}\mathbf{R}\mathbf{w}\quad \text{subject to}\quad \mathbf{w}^{H}\mathbf{a} = 1,$$
where $\mathbf{R} = E[\mathbf{s}\mathbf{s}^{H}]$ is the spatial covariance matrix of the delayed photoacoustic signals $\mathbf{s}(\vec{r})$, $E[\cdot]$ denotes the expectation, and $\mathbf{a}$ is the equivalent of the steering vector in narrowband applications[114,116]. When the photoacoustic signals of each detector are delayed, $\mathbf{a}$ is a simple all-one vector. The optimization problem in Eq. (35) can be solved by the method of Lagrange multipliers[121], which gives
$$\mathbf{w} = \frac{\mathbf{R}^{-1}\mathbf{a}}{\mathbf{a}^{H}\mathbf{R}^{-1}\mathbf{a}}.$$
By substituting Eq. (36) into Eq. (34), MV-based image reconstruction can be achieved. To improve the robustness of the MV method, the covariance matrix can be calculated based on a spatial smoothing strategy, where a detector array is divided into a group of overlapping subarrays, and the covariance matrices of all subarrays are calculated and averaged to form the final covariance matrix[114]. In this way, the covariance matrix is given as[114]
$$\mathbf{R} = \frac{1}{M-L+1}\sum_{l=1}^{M-L+1}\mathbf{s}_l\,\mathbf{s}_l^{H},$$
where $L$ is the number of detectors in a subarray and $\mathbf{s}_l$ contains the delayed signals of the $l$th subarray. To estimate the covariance matrix more accurately and enhance the contrast of MV, temporal averaging of multiple samples can be used together with spatial smoothing[116]. Once the covariance matrix is estimated, the optimal weights can be obtained by Eq. (36).
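The weight computation of Eqs. (36) and (37) can be sketched in a few lines. The snippet below is an illustrative sketch, not code from the cited works; in particular, the diagonal loading term is a common robustness addition of our own choosing and is not part of the formulas above.

```python
import numpy as np

def mv_weights(delayed, L, loading=1e-3):
    """MV weights from a spatially smoothed covariance [cf. Eqs. (36)-(37)].

    delayed: (M,) vector of delayed channel samples for one image point.
    L:       subarray length for spatial smoothing.
    loading: diagonal loading (robustness addition, not in Eq. (37)).
    """
    M = len(delayed)
    subs = np.stack([delayed[l:l + L] for l in range(M - L + 1)])
    R = (subs[:, :, None] * subs[:, None, :]).mean(axis=0)   # smoothed covariance
    R = R + loading * np.trace(R) / L * np.eye(L)            # regularize inversion
    a = np.ones(L)                  # steering vector is all ones after delaying
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a @ Ri_a)        # Eq. (36)

# Coherent unit signal plus small per-channel noise across 32 detectors.
rng = np.random.default_rng(0)
delayed = 1.0 + 0.1 * rng.standard_normal(32)
w = mv_weights(delayed, L=16)
print(abs(w @ np.ones(16) - 1.0))   # distortionless constraint w^H a = 1
```

By construction, the weights satisfy the unit-gain constraint exactly, while the data-dependent covariance steers nulls toward interfering energy.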
The final image reconstructed by the MV algorithm can be written as[114]
$$p_{\mathrm{MV}}(\vec{r}) = \frac{1}{M-L+1}\sum_{l=1}^{M-L+1}\mathbf{w}^{H}\mathbf{s}_l(\vec{r}).$$
Although the MV method can produce narrower main lobes (i.e., the lobe located at the target) and higher spatial resolution than algorithms such as DAS and DMAS, its performance in terms of side lobe (i.e., the lobes adjacent to the main lobe) suppression and image contrast enhancement is limited. Therefore, many studies have been devoted to the development of MV-based hybrid beamforming algorithms. In 2008, Park et al. imposed additional CF weights on MV and achieved enhanced spatial resolution, contrast, and side lobe suppression[117]. In 2018, Mozaffarzadeh et al. developed an MV-based DMAS method[118] and an eigenspace-based MV method combined with DMAS for resolution improvement and side lobe reduction[119]. In 2021, Paul et al. proposed an adaptive-weighting-based MV to address the side lobe issue in MV beamformed images. It was demonstrated that the weighted MV approach can improve SNR while reducing the main lobe width and side lobe intensity and has the potential for use in PACT imaging systems with a limited number of ultrasound detectors[30].

3.1.5. Coherence factor

The coherence factor (CF) is a pixel-level adaptive weighting factor that can improve the performance of DAS-based beamforming methods in side lobe suppression and spatial resolution improvement. CF was originally proposed by Mallart and Fink in 1994 for the evaluation of phase aberration correction techniques in scattering media[122] and was later used as an image quality metric for ultrasound imaging by Hollman et al.[123]. In 2003, Li et al. presented a generalized CF for ultrasound beamforming in heterogeneous media and showed that the combination of generalized CF and DAS-based beamformers could yield improved image quality[124]. In PACT, Liao et al. incorporated CF into DAS in 2004 and demonstrated the superiority of the CF-weighted DAS method in improving spatial resolution and SNR compared with DAS[29].
Wang and Li considered the local SNR in the formulation of CF and developed an SNR-dependent CF for ultrasound and photoacoustic imaging in 2014[125]. Wang et al. integrated CF with a focal-line-based image formation method to improve the contrast and elevational resolution of 3D photoacoustic imaging in 2016[126]. Mozaffarzadeh et al. proposed a high-resolution CF weighting technique and achieved improved resolution, SNR, and side lobe suppression in 2019[127]. Paul et al. considered the noise level variations of raw ultrasound data in the formulation of CF and achieved improvements in image resolution, side lobe reduction, SNR, and contrast in 2021[128]. In the same year, Mukaddim and Varghese extended CF from the spatial domain to the spatiotemporal domain[129]. This extension helps cancel out signals with low spatial and temporal coherence and results in higher background noise cancellation while preserving the main features of interest in reconstructed images. Mathematically, the CF is defined as the ratio of the coherent sum of photoacoustic signals across detectors to the incoherent sum and can be formulated as[123]
$$\mathrm{CF}(\vec{r}) = \frac{\left|\sum_{i=1}^{M}s_i\big(\tau_i(\vec{r})\big)\right|^{2}}{M\sum_{i=1}^{M}\left|s_i\big(\tau_i(\vec{r})\big)\right|^{2}},$$
where $s_i(\tau_i(\vec{r}))$ is the photoacoustic signal measured by the $i$th detector at the TOF $\tau_i(\vec{r})$ and $M$ is the total number of detectors. The value of the CF ranges from zero to one. A large CF value means that the signals at that point are highly focused and that the point can be reconstructed with high quality. In contrast, a small CF value indicates that the signals are weakly focused and will result in lower image quality. The CF-weighted DAS method can be expressed as
$$p_{\mathrm{CF}}(\vec{r}) = \mathrm{CF}(\vec{r})\,p_{\mathrm{DAS}}(\vec{r}),$$
where $p_{\mathrm{CF}}(\vec{r})$ is the reconstructed image. The term $p_{\mathrm{DAS}}(\vec{r})$ can be replaced by the outputs of other beamforming methods, such as DMAS, SLSC, and MV. The workflow of the CF algorithm is illustrated in Fig. 9(e). Each beamforming method mentioned above has advantages and disadvantages.
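The CF computation above reduces to a ratio of coherent to incoherent energy per pixel. A minimal Python sketch follows, with synthetic delayed samples standing in for real channel data:

```python
import numpy as np

def coherence_factor(delayed):
    """CF = |coherent sum|^2 / (M x incoherent sum); value lies in [0, 1]."""
    M = len(delayed)
    den = M * np.sum(np.abs(delayed) ** 2)
    return (np.abs(delayed.sum()) ** 2) / den if den > 0 else 0.0

def cf_das_pixel(delayed):
    """CF-weighted DAS output for one image point."""
    return coherence_factor(delayed) * delayed.sum()

# A well-focused point: identical delayed samples on all 128 channels -> CF = 1.
focused = np.ones(128)
# A poorly focused point: random-sign samples -> CF near 0.
rng = np.random.default_rng(1)
unfocused = rng.choice([-1.0, 1.0], size=128)
print(coherence_factor(focused), coherence_factor(unfocused))
```

Because the weighting is computed pixel by pixel from the data themselves, the same function can multiply the output of DMAS, SLSC, or MV instead of DAS.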
To improve the performance of image reconstruction in PACT, these beamformers can be combined to yield hybrid beamforming methods such as DMASD plus DAS/DMAS[105], SLSC plus filter DMAS[110], MV plus DMAS[118,119], CF plus DMAS[104,130], and CF plus MV[117,127,131]. Representative work on DAS-based image reconstruction in PACT is summarized in Table 3. For completeness, Table 3 also includes relevant work in ultrasound and microwave imaging.

Table 3. Representative DAS-Type Algorithms Used for Image Reconstruction in PACT
3.2. Filtered Back Projection

DAS-type algorithms achieve only approximate photoacoustic image reconstruction and are therefore inexact reconstruction techniques. To achieve exact image reconstruction, more advanced algorithms are needed. Filtered back projection (FBP) is a class of algorithms based on the Radon transform (see Sec. 2.4). It achieves image reconstruction by first filtering the measured photoacoustic signals and then back-projecting the filtered signals into the image domain. The back-projection operation in FBP is similar to the reconstruction procedure in DAS-type algorithms. Therefore, the native DAS algorithm can be regarded as a simplified version of FBP, which achieves image reconstruction by directly back-projecting the original photoacoustic signals into the image domain.

3.2.1. Approximate filtered back projection

Early FBP algorithms were developed based on the condition of far-field approximation[35], which states that if the distance $d$ between a detector and a photoacoustic source is much greater than the size $R_s$ of the photoacoustic source itself (i.e., $d \gg R_s$), the approximation holds (see Fig. 12). Under the condition of far-field approximation, the photoacoustic integral over a spherical shell can be approximated by the integral over its tangential plane. Consequently, image reconstruction in PACT can be achieved by inverting the linear Radon transform. Kruger et al. proposed an approximate FBP algorithm in 1995[22], which is the first formal image reconstruction method in PACT. Xu and Wang developed approximate FBP algorithms from a more rigorous perspective in 2002[134,135] and 2003[136]. They deduced exact solutions to the image reconstruction problem and proposed approximate time-domain FBP algorithms for circular[134], spherical[135], planar, and cylindrical detection geometries[136]. The preceding approximate FBP algorithms were generally developed for specific detection geometries.
In 2007, Burgholzer and Matt extended the approximate FBP algorithms to image reconstruction in arbitrary closed detection geometries under the assumption of the far-field approximation. The extended FBP reconstruction formula can be written as[35]
$$p_0(\vec{r}) \approx \frac{1}{\Omega_0}\int_{S}\mathrm{d}\Omega_0\int b(\vec{r}_0, t)\,\delta\!\left(t-\frac{|\vec{r}-\vec{r}_0|}{c}\right)\mathrm{d}t,$$
where $p_0(\vec{r})$ is the reconstructed photoacoustic image, $\delta(\cdot)$ is the Dirac delta function, $b(\vec{r}_0, t)$ is the back-projection term given by
$$b(\vec{r}_0, t) = -2t\,\frac{\partial p(\vec{r}_0, t)}{\partial t},$$
and $\mathrm{d}\Omega_0$ is the solid angle subtended by the element $\mathrm{d}S$ of a detection surface and can be calculated by
$$\mathrm{d}\Omega_0 = \frac{\mathrm{d}S}{|\vec{r}-\vec{r}_0|^{2}}\,\frac{\vec{n}\cdot(\vec{r}-\vec{r}_0)}{|\vec{r}-\vec{r}_0|}.$$
Here $\vec{n}$ is the unit normal vector of the detector surface pointing to the region of interest (ROI). The time-domain first derivative in Eq. (42) can be interpreted in the frequency domain as
$$\frac{\partial p(\vec{r}_0, t)}{\partial t} = \mathcal{F}^{-1}\big\{-i\omega\,\mathcal{F}\{p(\vec{r}_0, t)\}\big\},$$
where $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the forward and inverse Fourier transforms, respectively; the factor $-i\omega$ acts as a ramp filter, which suppresses the low-frequency content of the measured photoacoustic signal and amplifies high-frequency information. Since the value of $\omega$ extends from $-\infty$ to $+\infty$, the ramp filter is not integrable, and the inverse Fourier transform in Eq. (44) is undefined. To solve this problem, the ramp filter can be band-limited by a window function, and Eq. (44) becomes
$$\frac{\partial p(\vec{r}_0, t)}{\partial t} \approx \mathcal{F}^{-1}\big\{-i\omega\,W(\omega)\,\mathcal{F}\{p(\vec{r}_0, t)\}\big\},$$
where $W(\omega)$ represents the window function. In practice, a smooth window such as the Hanning function is preferred over a box function because the latter may introduce undesirable ringing artifacts in the image domain. One benefit of the far-field approximation in FBP is that it allows for simplified calculation of the solid angle $\mathrm{d}\Omega_0$. In other words, under the far-field approximation, $\mathrm{d}\Omega_0$ in Eq. (43) reduces to
$$\mathrm{d}\Omega_0 \approx \frac{\mathrm{d}S}{|\vec{r}_0|^{2}}.$$
Compared with Eq. (43), Eq. (46) involves only the detector position $\vec{r}_0$ when calculating $\mathrm{d}\Omega_0$ and is independent of the source position $\vec{r}$, thereby reducing the computational complexity. For example, assuming that the size of a 3D image to be reconstructed is $N^3$ voxels and the number of detectors used for imaging is $M$, evaluating the solid-angle terms requires $O(MN^3)$ operations for Eq. (43) and only $O(M)$ for Eq. (46). Furthermore, if the detection geometry is spherical, $\mathrm{d}\Omega_0$ in Eq.
(46) becomes a constant ($\mathrm{d}S/R^2$, with $R$ the radius of the detection sphere), so the solid-angle weighting requires essentially no computation.

3.2.2. Exact filtered back projection

The approximate FBP algorithms are based on the condition of far-field approximation. This condition, however, may not be fully met in practice considering that photoacoustic signals attenuate with propagation distance and that signal detection in space may be restricted. To solve this problem, in 2004, Finch et al. derived an exact FBP formula for image reconstruction in spherical detection geometries with odd dimensions[23]. In 2005, Xu and Wang presented a universal FBP formula for image reconstruction in infinite planar, infinite cylindrical, and closed spherical detection geometries[24]. The reconstruction formula is given as
$$p_0(\vec{r}) = \frac{1}{\Omega_0}\int_{S} b\!\left(\vec{r}_0,\, t = \frac{|\vec{r}-\vec{r}_0|}{c}\right)\mathrm{d}\Omega_0,$$
where the back-projection term is
$$b(\vec{r}_0, t) = 2p(\vec{r}_0, t) - 2t\,\frac{\partial p(\vec{r}_0, t)}{\partial t},$$
and $\Omega_0$ is the solid angle of the detection surface with respect to the reconstruction point, which equals $2\pi$ for the infinite planar geometry and $4\pi$ for the spherical and cylindrical geometries. FBP is one of the most commonly used algorithms in PACT. By comparing Eqs. (47) and (41), we find that the exact FBP algorithm is very close to the approximate FBP algorithm in form. They both consist of three reconstruction steps: filtering, back projection (the $\delta$ function), and summation. Their major difference is in the back-projection terms. The back-projection term in the exact FBP algorithm has an extra term $2p(\vec{r}_0, t)$, which is not present in the approximate FBP algorithm. The reasons are as follows. According to the forward solution of the photoacoustic wave equation [Eq. (18)], the amplitude of the photoacoustic signals received by a detector is proportional to the size of the source and inversely proportional to the detection distance, i.e., $p \propto R_s/d$. Under the condition of the far-field approximation, i.e., $d \gg R_s$, the amplitude of $p(\vec{r}_0, t)$ is small enough compared with the derivative term $2t\,\partial p(\vec{r}_0, t)/\partial t$ in Eq. (48) to be ignored without significant loss of reconstruction accuracy.
Therefore, the back-projection term in the approximate FBP algorithm does not involve the term $2p(\vec{r}_0, t)$. Figure 13 presents an example demonstrating the principle of FBP-based image reconstruction. In this example, the photoacoustic source is a 5-mm-diameter uniform spherical absorber, and the photoacoustic signals generated from the source are recorded by a 40-mm-diameter circular detector array [Fig. 13(a)]. Figures 13(b) and 13(c) show the representative photoacoustic signal $p(\vec{r}_0, t)$ measured by a detector and the corresponding back-projection signal $b(\vec{r}_0, t)$, respectively. Figure 13(d) shows the projection images of detectors at different positions. Figures 13(e)–13(g) are the reconstruction results obtained by summing the projection images of four, 16, and 256 detectors, respectively. The results demonstrate that the FBP algorithm can effectively reconstruct the structure and amplitude information of the photoacoustic source. Note that the artifacts in the reconstructed images are not caused by the FBP algorithm but by the limited view angle of the circular detector array[137]. To evaluate the performance of the FBP algorithm for different detection geometries, a group of image reconstruction simulations was conducted. In the simulations, a multi-sphere phantom is used as the photoacoustic source. The phantom contains nine spherical absorbers with unit intensity, among which eight with diameters uniformly varying from 1 mm to 2 mm are evenly distributed on a circle with a diameter of 10 mm, while the ninth one, with a diameter of 1.6 mm, is seated at the origin. Figures 14(a)–14(c) show the relative positions of the multi-sphere phantom and a finite planar, finite cylindrical, and closed spherical detection surface. The planar detection surface is located 12 mm below the phantom plane, and the cylindrical and spherical detection surfaces are centered at the origin.
The three detection surfaces have the same number (32768) of evenly distributed point-like detectors and approximately equal detection areas (cylinder: 70 mm base diameter, 100 mm height; sphere: 84 mm diameter). The reconstructed images of two orthogonal cross sections of the phantom are displayed in Figs. 14(d)–14(f) and 14(g)–14(i), respectively. The results show that the FBP algorithm achieves stable image reconstruction for all three detection geometries. However, the image reconstructed with the closed spherical detection surface has far fewer artifacts than those reconstructed with the planar and cylindrical detection surfaces. This is because the spherical detection surface is closer to an ideal detection geometry than the finite planar and finite cylindrical surfaces. To extend the application scenarios of FBP, many in-depth studies have been carried out[138]. For example, in 2007, Kunyansky proposed a set of FBP-type formulas for image reconstruction in spherical detection geometries with arbitrary dimensions[139]. In the same year, Finch et al. also developed a set of FBP-type inverse formulas for spherical detection geometries with even dimensions[140]. In 2009, Nguyen derived a family of inverse formulas for thermoacoustic tomography, from which many previously known FBP inverse formulas can be obtained as special cases[141]. In addition, the exact FBP algorithms mentioned above are primarily used for planar, cylindrical, and spherical detection geometries with point-like detectors. In 2007, Burgholzer et al. developed a two-step FBP algorithm for integrating line detectors[142]. In 2012, Natterer proposed a novel FBP algorithm for elliptical detection geometries[143], which was further developed by Palamodov in 2012[144], Haltmeier in 2014[145,146], and Salman in 2014[147]. Moreover, the FBP algorithm is well suited for parallel computing. GPUs have been used to achieve real-time FBP image reconstruction in both 2D and 3D imaging[42,148–153].
A field-programmable gate array (FPGA) was also employed to accelerate image reconstruction in low-cost PACT systems[154]. Table 4 summarizes representative work on FBP-based image reconstruction in PACT.

Table 4. Representative Work on FBP-Based Image Reconstruction in PACT
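The universal FBP formula of Eqs. (47) and (48) is compact enough to sketch numerically. The example below is illustrative only: the forward signals are idealized Gaussian pulses rather than true photoacoustic waveforms, and the solid-angle weights are taken as uniform, which is a reasonable simplification for a full-view circular array.

```python
import numpy as np

C, FS = 1500.0, 40e6   # assumed speed of sound (m/s) and sampling rate (Hz)

def universal_bp(signals, t, det_pos, grid):
    """Universal FBP: filter b = 2p - 2t dp/dt, then back-project [Eqs. (47)-(48)].

    signals: (M, T) channel data; det_pos: (M, 2); grid: (P, 2) image points.
    Uniform solid-angle weights are assumed (full-view circular array).
    """
    dpdt = np.gradient(signals, 1.0 / FS, axis=1)
    b = 2.0 * signals - 2.0 * t[None, :] * dpdt          # Eq. (48)
    img = np.zeros(len(grid))
    for m, d in enumerate(det_pos):
        tof = np.linalg.norm(grid - d, axis=1) / C
        img += np.interp(tof, t, b[m])                   # delay via interpolation
    return img / len(det_pos)                            # Eq. (47), uniform weights

# Toy demo: 256-detector ring of radius 20 mm, point absorber at (3 mm, 0).
ang = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
det_pos = 20e-3 * np.stack([np.cos(ang), np.sin(ang)], axis=1)
src = np.array([3e-3, 0.0])
t = np.arange(2048) / FS
signals = np.stack([
    np.exp(-(((t - np.linalg.norm(d - src) / C)) * 4e6) ** 2) for d in det_pos
])

xs = np.linspace(-10e-3, 10e-3, 81)
grid = np.stack([xs, np.zeros_like(xs)], axis=1)
img = universal_bp(signals, t, det_pos, grid)
print(xs[int(np.argmax(img))])  # peak lies near the absorber at x = 3e-3 m
```

For a partial-view array, the per-element solid-angle factor of Eq. (43) would replace the uniform weighting used here.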
3.3. Series Expansion

The basic principle of series expansion (SE) algorithms is to approximate the image to be reconstructed using mathematical series. Compared with other reconstruction algorithms in PACT, SE algorithms are simple and fast for specific detection geometries, such as planar geometries, because they can be implemented using the FFT.

3.3.1. Series expansion for planar detection geometries

Norton et al. proposed exact SE methods for the reconstruction of acoustic reflectivity in circular geometries in 1980[156] and in planar, cylindrical, and spherical geometries in 1981[157]. Several groups have studied similar ideas for image reconstruction in PACT. Köstli et al. in 2001 proposed an exact SE algorithm for image reconstruction in planar detection geometries[31], and Xu et al. in 2002 reported a similar algorithm[158]. Mathematically, SE-based PACT image reconstruction for planar detection geometries builds on the dispersion relation $\omega = c\sqrt{k_x^2 + k_y^2 + k_z^2}$ and is formulated in the Fourier domain[31,83], where $p(x, y, t)$ is the photoacoustic signal measured by a planar surface at position $(x, y)$ and time $t$; $k_x$, $k_y$, and $k_z$ are the spatial wavenumber components in each Cartesian direction; $\omega$ is the temporal frequency; $c$ is the SOS; $\omega \rightarrow k_z$ represents the interpolation operation between the temporal frequency $\omega$ and the spatial frequency $k_z$; $p_0$ and $\tilde{p}_0$ are the initial photoacoustic pressure and its spatial Fourier transform; and $\mathcal{F}$ and $\mathcal{F}^{-1}$ represent the forward and inverse Fourier transforms, respectively. The reconstruction procedure involves one interpolation and two Fourier transforms and has a computational complexity of $O(N^3\log N)$ for a 3D image with a size of $N^3$ voxels by use of the FFT[159]. The numerical implementation of the SE algorithm is available in the k-Wave toolbox[83]. Figure 15 is a numerical example showing SE-based PACT image reconstruction using the k-Wave toolbox. The photoacoustic source contains nine spherical absorbers with unit intensity lying in a common plane.
Among them, eight absorbers with diameters uniformly varying from 1 mm to 2 mm are evenly distributed on a circle with a diameter of 10 mm, while the ninth absorber, with a diameter of 1.6 mm, is seated at the origin. The signal detection geometry is a planar surface 12 mm below the source plane, over which point detectors are uniformly distributed. Figures 15(b) and 15(c) are two orthogonal cross sections of the photoacoustic source, and Figs. 15(d) and 15(e) are the corresponding reconstructed images, which show that the SE algorithm can recover the major structures of the source. Note that the distortion and blurring in the reconstructed images are due to the finite size of the planar detection surface.

3.3.2. Series expansions for circular, cylindrical, and spherical detection geometries

In 2002, the Wang group reported exact SE algorithms for PACT image reconstruction in cylindrical and spherical geometries[135,160]. However, the image reconstruction procedures in these two cases involve complicated mathematical calculations, which prevents implementation using the FFT and makes them time-consuming[135,160]. Based on Norton's pioneering work on circular geometries[156], Haltmeier and Scherzer in 2007 proposed a 3D reconstruction algorithm for cylindrical detection geometries, where photoacoustic signals are recorded by linear integrating detectors[161]. For a 3D image with a size of $N^3$ voxels, the computational complexity of the proposed algorithm is higher than that of the SE algorithm in planar geometries[159] but lower than that of FBP[24]. In 2012, Kunyansky proposed fast image reconstruction algorithms suitable for circular, cylindrical, and spherical detection geometries[33]. In Kunyansky's work, image reconstruction is based on the Fourier transforms of the initial photoacoustic pressure $p_0$ and the measured photoacoustic signals $p(\vec{r}_0, t)$, as summarized below. Suppose that the detection geometry is a circle, as shown in Fig.
7(c). By expanding the Fourier transform of the measured photoacoustic signal $p(\vec{r}_0, t)$ and the initial photoacoustic pressure $p_0$ in Fourier series over the detection angle, we obtain expressions[33] in which $i$ is the imaginary unit, $\omega$ is the angular frequency, $R$ is the radius of the detection circle, $r$ is the distance from the reconstruction point to the center of the circular detection geometry, and the expansion coefficients of the image and of the data are related through the Hankel function of the first kind $H_k^{(1)}$ and the Bessel function of the first kind $J_k$ of order $k$[162]. Equation (55) is somewhat similar to an expression in Norton's work[156], in which the Bessel function rather than the Hankel function appears in the denominator, along with a term corresponding to its real part in the numerator. By discretizing Eq. (55), we can obtain the expansion coefficients and then the initial photoacoustic pressure [Eq. (53)]. However, direct discretization of Eq. (55) is computationally expensive for the whole reconstruction. To achieve fast reconstruction, Eq. (55) can be substituted into Eq. (53)[33]. Equation (56) then indicates that the initial photoacoustic pressure can be obtained by calculating the 2D inverse Fourier transform of an intermediate spectrum assembled from the measured data, which has a lower computational complexity than the direct-discretization-based reconstruction. For cylindrical detection geometries with linear integrating detectors, image reconstruction can be readily realized by combining the fast image reconstruction procedure for 2D circular geometries with the 3D inverse Fourier transform[33], which also yields fast reconstruction. For spherical detection geometries, the derivation of the image reconstruction procedure is similar to that for circular geometries [Eqs. (52)–(57)]. A prominent difference is that $p_0$ and $p(\vec{r}_0, t)$ are expanded into spherical harmonics.
In addition, the fast spherical harmonics (FSH) transform is adopted to achieve fast reconstruction[163], which further reduces the reconstruction complexity[33]. Wang and Anastasio in 2012 demonstrated that a mapping relationship exists between the spatial frequency components of the initial photoacoustic pressure $p_0$ and the temporal frequency components of the measured photoacoustic signals $p(\vec{r}_0, t)$[164]. They thus proposed a Fourier-transform-based image reconstruction algorithm for 2D image reconstruction in circular geometries and 3D image reconstruction in spherical geometries[165]. The reconstruction formula does not involve series expansion of special functions or multi-dimensional interpolation operations in Fourier space, which are commonly used in previous work.

3.3.3. Series expansion for general detection geometries

Kunyansky in 2007 presented a generalized SE method for image reconstruction in an arbitrary closed detection geometry, provided that the eigenfunctions of the Dirichlet Laplacian are explicitly known[32]. Assuming that $\lambda_k^2$ and $u_k(\vec{r})$ are the eigenvalues and normalized eigenfunctions of the Dirichlet Laplacian in the interior $\Omega$ of a closed detection surface $S$, we have[32]
$$\nabla^2 u_k(\vec{r}) + \lambda_k^2 u_k(\vec{r}) = 0, \quad \vec{r} \in \Omega \subseteq \mathbb{R}^n, \qquad u_k(\vec{r}) = 0, \quad \vec{r} \in S,$$
where $n$ denotes the spatial dimension. Since the eigenfunctions are orthogonal, the initial photoacoustic pressure can be formulated in series form as
$$p_0(\vec{r}) = \sum_{k} \alpha_k u_k(\vec{r}),$$
where the coefficients $\alpha_k$ can be obtained from the measured projection data, and $u_k(\vec{r})$ are the known eigenfunctions of the Dirichlet Laplacian determined by the detection geometry. Each $u_k(\vec{r})$ is the solution of the Dirichlet problem for the Helmholtz equation with zero boundary conditions and has a Helmholtz representation as a surface integral over $S$ involving $\Phi_{\lambda_k}$, a free-space rotationally invariant Green's function[166], the unit normal vector $\vec{n}$ of the detector surface pointing to the ROI, and the detector position $\vec{r}_0$[32]. The coefficients $\alpha_k$ can then be calculated from an auxiliary function of the measured data[32], in which $D$ denotes the diameter of $\Omega$ and $g(\vec{r}_0, t)$ is the integral over a spherical shell [Eq.
(27)] centered at the detector position $\vec{r}_0$, which can be calculated from the measured projection data $p(\vec{r}_0, t)$. With the calculated coefficients $\alpha_k$ and eigenfunctions $u_k(\vec{r})$, the initial photoacoustic pressure can be reconstructed [Eq. (59)]. If the eigenfunctions of the Dirichlet Laplacian of a detection geometry such as a sphere, a half-sphere, a finite cylinder, or a cube are explicitly known, the eigenfunction-based SE method can provide exact reconstruction for any closed detection geometry. In particular, when the detection geometry is a cube, the reconstruction procedure can be implemented using the 3D FFT, resulting in efficient computation[32]. Furthermore, unlike the FBP algorithm, this technique can provide theoretically exact reconstruction within the region enclosed by the detection geometry even if part of a photoacoustic source is outside the region. The image reconstruction procedures discussed for planar, circular, cylindrical, spherical, and arbitrarily closed geometries are based on the assumption that the acoustic medium is homogeneous[32]. For the image reconstruction problem in heterogeneous media, Agranovsky and Kuchment proposed an analytic reconstruction formula that works for any closed detection geometry and variable SOS satisfying a non-trapping condition[167]. This algorithm can be regarded as a generalization of the eigenfunction-based SE method for arbitrary closed detection geometries [Eqs. (58)–(62)][167]. Table 5 lists representative work on SE-based fast image reconstruction algorithms, and Table 6 summarizes representative work on SE-based image reconstruction in PACT.

Table 5. Representative Work on SE-Based Fast Image Reconstruction in PACT
Table 6. Representative Work on SE-Based Image Reconstruction in PACT
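The heart of these Fourier-domain methods is the mapping between temporal and spatial frequencies through the dispersion relation $\omega = c|\vec{k}|$. The snippet below isolates just that interpolation step in 2D. It is an illustrative sketch: the amplitude weighting used by full implementations (such as the reconstruction routines in the k-Wave toolbox) is omitted, and the Gaussian test spectrum is an arbitrary choice.

```python
import numpy as np

C = 1500.0  # assumed speed of sound (m/s)

def map_temporal_to_spatial(P, kx, omega, ky):
    """Resample P(kx, omega) onto a (kx, ky) grid using omega = c*sqrt(kx^2+ky^2).

    This is the interpolation step of the planar-geometry SE method; the
    amplitude weighting of complete implementations is omitted for clarity.
    """
    P0 = np.zeros((len(kx), len(ky)))
    for i, kxi in enumerate(kx):
        w_target = C * np.sqrt(kxi**2 + ky**2)       # dispersion relation
        P0[i] = np.interp(w_target, omega, P[i], left=0.0, right=0.0)
    return P0

kx = np.linspace(0.0, 2000.0, 64)                 # spatial wavenumbers (rad/m)
ky = np.linspace(0.0, 2000.0, 64)
omega = np.linspace(0.0, C * 4000.0, 4096)        # dense temporal-frequency axis
spectrum = lambda w: np.exp(-(w / 2.0e6) ** 2)    # arbitrary smooth test spectrum
P = np.tile(spectrum(omega), (len(kx), 1))
P0 = map_temporal_to_spatial(P, kx, omega, ky)

# Consistency check: the mapped value at (kx, ky) equals the spectrum at c*|k|.
k_mag = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
print(np.max(np.abs(P0 - spectrum(C * k_mag))))   # small interpolation error
```

In a complete reconstruction, this resampling sits between the forward FFT of the measured data and the inverse FFT that returns the initial pressure.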
3.4. Time Reversal

Time reversal (TR) is a type of algorithm that recovers the initial photoacoustic pressure by numerically running a forward acoustic propagation model backward, i.e., re-transmitting the photoacoustic signals measured by each detector into the image domain in temporally reversed order. TR algorithms can incorporate the acoustic properties of media, such as SOS, density, dispersion, and absorption, into the acoustic propagation model and can be used for image reconstruction in arbitrary closed detection geometries[170]. They are thus regarded as the "least restrictive" image reconstruction algorithms in PACT[170,171].

3.4.1. Standard time reversal

In 2004, Xu and Wang proposed a TR algorithm for PACT and derived an analytical expression for image reconstruction based on the Green's function with a homogeneous Dirichlet boundary condition[34]. Under the condition of the far-field approximation (see Sec. 3.2.1), the derived analytical expression in Ref. [34] reduces to the reconstruction formula of FBP [Eq. (47)]. Subsequent research on TR-based image reconstruction primarily focused on coupling the acoustic properties of heterogeneous media into the TR model to improve reconstruction performance[35,170–172]. For example, Hristova et al. studied TR-based image reconstruction in heterogeneous media, in even dimensions, and with partially closed detection surfaces[171]. Treeby et al. considered the effect of acoustic absorption in media and proposed a TR algorithm for absorbing acoustic media[170]. TR algorithms are grounded in the Huygens principle[171]. When the SOS of a medium is constant and the spatial dimension of the acoustic field is odd, the Huygens principle indicates that the acoustic wave field inside the detection surface vanishes after a finite time $T$[171], and TR algorithms enable exact image reconstruction for a sufficiently large $T$.
When the SOS is variable or the spatial dimension is even, the Huygens principle no longer holds, and TR algorithms can only provide approximate reconstructions[171]. In a lossy medium, the photoacoustic wave equation can be formulated as an initial value problem comprising three coupled acoustic equations and corresponding initial conditions[170]. The coupled acoustic equations, including the linearized equation of motion, the linearized equation of continuity, and the adiabatic equation of state, are given as[170] which are subject to the following initial conditions: Here is the acoustic pressure at time and position inside the imaging region, is the acoustic particle velocity, is the acoustic density, is the ambient density, and and are the absorption and dispersion proportionality coefficients given by where is the absorption coefficient () and is a real constant representing the power-law exponent. TR-based image reconstruction is achieved by running the same photoacoustic forward model in temporally reversed order and can thus be formulated by the same set of equations [Eqs. (63)–(67)], except that the initial conditions in Eq. (66) are replaced with Figure 16 illustrates the basic principle of TR-based image reconstruction. Generally, there is no explicit analytical solution to the partial differential equations in Eqs. (63)–(65), and numerical methods are required to solve them. Finite element and finite difference are two commonly used methods for computing numerical solutions to partial differential equations due to their flexibility, accuracy, and rigorous mathematical foundations. However, these approaches require many mesh points per wavelength and small time steps to achieve accurate calculations, which is computationally expensive, especially for high-frequency and large-scale imaging models. To solve this problem, Cox et al.
developed a k-space pseudospectral approach that can achieve accurate computations using fewer mesh points and larger time steps[173]. The k-space-pseudospectral-method-based TR algorithm has been implemented in the k-Wave toolbox[83] and is based on the discrete form of the coupled acoustic equations [Eqs. (63)–(65)][170], which can be written as where is the imaginary unit, denotes the Cartesian coordinates [ represents 1D space, represents 2D space, represents 3D space], is the spatial wavenumber component, is the time step, is a k-space adjustment to the spatial derivative[170], and and represent the forward and inverse Fourier transforms, respectively. Equations (69) and (71) are equations for gradient calculation, and Eqs. (70) and (72) are update equations. Equation (73) is an equation of state, including the absorption and dispersion of a medium, which is given by The above equations are calculated iteratively, and the entire acoustic field is updated at each time step. One of the most prominent features of the TR algorithm is that it is well suited for image reconstruction in acoustically heterogeneous media. To demonstrate this, a simulation is presented. Figure 17(a) shows an acoustically heterogeneous phantom consisting of a uniform background, several blood vessels, and a bone mimicking the cross section of a human finger. The background and blood vessels have a mass density of and an SOS of 1500 m/s. In contrast, the acoustically heterogeneous region (i.e., the bone) has a density of and an SOS of 3800 m/s. A 512-element full-ring detector array with a diameter of 50 mm enclosing the phantom is used for imaging. Figure 17(b) shows the image reconstructed by the TR algorithm under the assumption that the media are homogeneous. A typical SOS of 1505 m/s and a density of are used. In this case, extensive image artifacts are present in the reconstructed image due to the inaccurate setting of the SOS and density in the TR model. As a comparison, Fig.
17(c) presents the image reconstructed by the TR algorithm coupling the true SOS and density of the heterogeneous media. The results show that by incorporating the acoustic properties of media, TR algorithms can achieve improved image reconstruction in PACT. Another important feature of TR is that it can address the image reconstruction problem in arbitrary closed detection geometries. Figures 18(a)–18(c) show an imaging example, where a 2D multi-disk phantom is enclosed by three different closed detection geometries: a square, an octagon, and a circle. The octagon and the circle in Figs. 18(b) and 18(c) can be circumscribed by the square in Fig. 18(a). Figures 18(d)–18(f) show the corresponding images reconstructed by the TR algorithm, which demonstrate that TR can produce good results for all three detection geometries.
3.4.2.Modified time reversal based on the Neumann series
The standard TR algorithms discussed above cannot find analytical solutions to the wave equations when the SOS is variable or the spatial dimension of the acoustic field is even. To address this problem, a Neumann-series-based TR algorithm was proposed by Stefanov and Uhlmann[174] and further developed by Qian et al.[175]. The basic idea of the Neumann-series-based TR algorithm is to first modify the measurement data at using a harmonic extension operator, perform TR reconstruction, and then repeat the TR process[175]. The Neumann-series-based TR algorithm can yield exact reconstruction if the detected pressure is known on the whole boundary and the measurement time is greater than a stability threshold[175]. In the Neumann-series-based TR algorithm, the initial conditions [Eq.
(68)] for the coupled wave equations are modified to[63,175] where denotes the Poisson operator of the harmonic extension defined as , and is the solution of the elliptic boundary value problem where is the domain defined by a detection surface and is the boundary of the domain. By introducing a forward acoustic propagation operator and a modified TR reconstruction operator , the photoacoustic forward problem and the modified TR reconstruction can be expressed as The Neumann-series-based TR reconstruction can be expressed as[175] where is an identity operator. Theoretically, Eq. (80) can provide an exact reconstruction if the variable varies from zero to . However, in practice, is always finite, and exact reconstruction is impossible. If is cut off at a particular value , the reconstruction formula in Eq. (80) can be written as Equation (81) can be further formulated in the form of iterative TR reconstruction[176], i.e., The estimated initial photoacoustic pressure usually contains pixels with negative intensity due to non-ideal imaging conditions, such as finite detector bandwidth and limited view angle. To improve reconstruction results, non-negative constraints can be imposed on the solutions during iteration. Moreover, the Poisson operator of harmonic extension can be neglected in some cases, e.g., when the spatial dimension of an acoustic field is even and the medium is homogeneous or when the time is large enough to guarantee that acoustic waves inside a detection region decay sufficiently. Figure 19 shows an example illustrating Neumann-series-based TR reconstruction in limited-view imaging. Table 7 summarizes representative work on TR-based image reconstruction in PACT.
Table 7. Representative Work on TR-Based Image Reconstruction in PACT
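The convergence mechanism behind the Neumann-series iteration can be illustrated in a toy linear-algebra setting: with a forward operator A and an approximate inverse T (standing in for one TR pass), the iteration f ← f + T(y − A f) converges to the exact solution whenever the spectral radius of (I − T A) is below 1. The matrices below are hypothetical stand-ins, not the actual wave-equation operators.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))  # toy "forward propagation"
T = np.eye(n)                   # crude approximate inverse (one "TR pass")
f_true = rng.random(n)
y = A @ f_true                  # simulated measurement

# Neumann-series iteration: f_{k+1} = f_k + T (y - A f_k).
# It converges when the spectral radius of (I - T A) is below 1,
# mirroring the role of the stability threshold in the exact theory.
f = T @ y
for _ in range(200):
    f = f + T @ (y - A @ f)

print(np.max(np.abs(f - f_true)))   # residual shrinks geometrically
```

Each pass corrects the current estimate by time-reversing the data mismatch, which is exactly the structure of the iterative TR formulation cited above.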
3.5.Iterative Reconstruction
The last class of conventional image reconstruction algorithms in PACT is iterative reconstruction (IR), which achieves image reconstruction through iteration. IR algorithms construct a set of linear equations based on a discrete photoacoustic imaging model and iteratively seek numerical solutions by minimizing the error between measured projection data and their estimates (see Fig. 20). Compared with the aforementioned DAS, FBP, SE, and TR algorithms, IR algorithms can yield improved photoacoustic image quality under non-ideal imaging scenarios, such as sparse-view and limited-view imaging. They can also incorporate the physical models of an imaging system, such as transducer responses and acoustic heterogeneity, to achieve improved imaging. The downside of IR algorithms is that they are slow and computationally expensive due to the numerical optimization involved.
3.5.1.Discrete forward imaging model
The basic idea of IR algorithms is to establish a mapping relationship between the photoacoustic image to be reconstructed and the projection data using a set of linear equations and then solve them iteratively. To determine this relationship, a discrete photoacoustic imaging model is established, as illustrated in Fig. 21. In the discrete model, the 2D photoacoustic image is represented by evenly distributed grid points, and the value of the th grid point is . The photoacoustic signal measured by a detector is discretely sampled, and the sampling length is . The th sampling point of the photoacoustic signal, denoted as , relates to the spherical shell integral of the photoacoustic image over the th detection shell, whose radius equals the TOF of the photoacoustic signal (, where is the temporal sampling interval) multiplied by the SOS . The goal of IR-based image reconstruction is to solve for from the projection data .
Based on the discrete imaging model, the relationship between the unknowns and the projection data can be described by the following set of linear equations: which can be written as where is a weighting factor representing the contribution of the th grid point to the th detection shell, is the total number of grid points, and is the total number of sampling points of a detector. If we consider the projection data measured by all detectors and denote Eq. (83) can be written in matrix form as where is the system matrix that transforms the initial photoacoustic pressure to the measured projection data . To solve Eq. (86), the system matrix should be constructed first.
3.5.2.System matrix construction
A. System matrix construction for acoustically homogeneous media
If the imaging medium is acoustically homogeneous, the photoacoustic wave equations have an explicit analytical solution, as presented in Eq. (18). The solution describes how the measured photoacoustic signal relates to a photoacoustic source . Discretizing the solution [Eq. (18)], the system matrix can be constructed as where denotes the position of the th detector, is the temporal sampling interval, and is the photoacoustic signal measured by the th detector at time . To calculate Eq. (87), can be expanded using a set of basis functions as[38] where represents the position of the th grid point in the discrete image , and is the basis function at the th grid point. Substituting Eq. (88) into Eq. (87), a discrete imaging model can be obtained as Comparing Eqs. (84) and (89), the elements of the system matrix can be obtained as where denotes the element in the th row and th column of the system matrix . To construct a system matrix with sufficient accuracy, it is necessary to choose a suitable expansion function . Several expansion functions are available for this purpose.
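The discrete linear model can be made concrete with a deliberately crude sketch: each image grid point contributes to the detection "shell" whose radius matches its distance to the detector, here with nearest-sample binning as the (simplistic, assumed) weighting. All dimensions and sampling parameters are illustrative.

```python
import numpy as np

c, dt = 1500.0, 1e-7            # speed of sound and sampling interval (illustrative)
n = 51
xs = np.linspace(-10e-3, 10e-3, n)
X, Y = np.meshgrid(xs, xs, indexing="ij")
det = np.array([25e-3, 0.0])    # a single detector position
n_samples = 300

# Weight w[k, j] = 1 if grid point j lies on detection shell k, i.e., its
# distance to the detector rounds to the k-th time-of-flight sample.
dist = np.sqrt((X - det[0])**2 + (Y - det[1])**2).ravel()
k = np.round(dist / (c * dt)).astype(int)
A = np.zeros((n_samples, n * n))
A[k, np.arange(n * n)] = 1.0    # sparse-in-spirit system matrix

x = np.zeros(n * n)
x[(n // 2) * n + n // 2] = 1.0  # point source at the grid center
p = A @ x                       # projection: a spike at the source's TOF sample
print(np.argmax(p), round(25e-3 / (c * dt)))
```

Real system matrices replace the binary binning with expansion-function weights (spherical voxel, Kaiser–Bessel, or interpolation based), but the p = A x structure is the same.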
Among them, spherical voxel functions[36,177–179], Kaiser–Bessel functions[38], and linear interpolation functions[37,40,42,178,180] are the most commonly used. Figure 22 shows a schematic of the three expansion-function-based discrete grid models. The expansion function can help interpret the meaning of the system matrix . Suppose that a photoacoustic image includes only a source defined by an expansion function at grid point . The element of the system matrix is the response of the th detector at time , as illustrated in Fig. 23. This implies that to construct the system matrix , we can simply compute the photoacoustic signals measured by each detector and arrange the signals according to Fig. 23. Since the system matrix is only associated with the discrete photoacoustic imaging model (Fig. 21) and the expansion function [Eq. (90)], it can be reused across different experiments once constructed for a particular imaging system.
A1. Spherical-voxel-based system matrix
A spherical voxel function is defined as[181] where is the spacing between two grid points, and denotes the coordinate of the th grid point. In a spherical-voxel-based discrete imaging model, each grid point in the image to be reconstructed is regarded as a uniform spherical source[36,177–179]. It is possible to construct the system matrix directly from the definition of the spherical voxel function, but this may cause problems: because a spherical voxel is typically small in radius, the generated photoacoustic signals are short in time and require a high sampling rate, which leads to increased computational complexity or aliasing artifacts[181]. To address this problem, a spherical-voxel-based system matrix can be constructed in the frequency domain[181]. Define and as the discrete Fourier transforms of the measured projection data and the system matrix . The matrix form of the imaging model in Eq.
(86) can be rewritten as[38,42] and the elements of the system matrix are given by[38,42] where denotes the frequency, is the frequency sampling interval, is the Fourier transform of the SIR of the th detector and can be written as[38,42] and is the Fourier transform of the signal generated from a photoacoustic source defined by a spherical voxel and is given as[38,42] Combining Eqs. (93)–(95), the spherical-voxel-based system matrix can be constructed in the frequency domain. This frequency-domain approach can solve the sampling problem of high-frequency photoacoustic signals and makes it easier to incorporate detector responses (see Sec. 3.5.3).
A2. Kaiser–Bessel-function-based system matrix
The Kaiser–Bessel (KB) functions are radially symmetric functions defined as[182,183] where represents the modified Bessel function of the first kind of order [184], is the radius of support, and is a shape parameter governing the width of the radial profile. When , , and , the KB function reduces to the spherical voxel function [Eq. (91)]. The KB-function-based expansion function can be expressed as where denotes the coordinate of the th grid point. The elements of the KB-function-based system matrix have an analytical solution and can be calculated in the frequency domain[38]. The derivation is similar to that of the spherical-voxel-based system matrix described in Eqs. (92)–(94), except that Eq. (95) should be replaced with Here denotes the frequency, is the Fourier transform of the signal generated from the photoacoustic source defined by a KB function, and is the spherical Bessel function of the first kind of order [184]. In the spherical-voxel-based imaging model, the expansion function is a sphere with uniform intensity, which is not differentiable at the boundary. Therefore, the generated photoacoustic signals have infinite bandwidth and may induce numerical instabilities when calculating the system matrix using Eq. (90).
In the KB-function-based imaging model, the expansion function is also a sphere but with smoothly varying intensity. It is differentiable at any position and thus avoids the numerical instabilities encountered by the spherical-voxel-based imaging model. The shape of the KB function can be adjusted as needed. The parameter affects the differentiability of the KB function and is typically set to a value greater than 2 () to guarantee the continuity of derivatives at the boundary. The parameter determines the effective size of a voxel and can be selected according to the desired spatial resolution. The parameter describes the smoothness of the KB function and impacts the width of the radial profile. It typically has a value of 10.4 for image reconstruction in PACT but may vary depending on the application scenario. Figure 24 illustrates the profiles of a family of KB functions for a given set of parameters. More details about the KB functions can be found in[185].
A3. Interpolation-function-based system matrix
In addition to the spherical-voxel- and KB-function-based approaches, the system matrix can also be constructed using linear interpolation functions. Several different interpolation functions can be used as the expansion function to describe the mapping relation between a continuous image and its discrete counterpart. Generally, a high-order interpolation function helps produce high-accuracy results but involves more calculations. Considering the trade-off between accuracy and efficiency, bilinear and trilinear interpolation functions are commonly used for 2D[43] and 3D[38,40,42] IR models, respectively. Specifically, when trilinear interpolation is employed in 3D space, the expansion function can be expressed as[42] where denotes the coordinate of the th grid point, and is the spacing between two grid points. The equation implies that the non-zero values of the expansion function exist only in the -neighborhood of position .
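A minimal sketch of the interpolation-type expansion function in 2D (the helper name and grid convention are illustrative): the weights of the four neighboring grid points for an off-grid position are the standard bilinear factors, and they always sum to 1.

```python
import numpy as np

def bilinear_weights(x, y, dx):
    """Weights of the four neighboring grid points for position (x, y)
    on a grid with spacing dx (illustrative 2D expansion function)."""
    i, j = int(np.floor(x / dx)), int(np.floor(y / dx))
    u, v = x / dx - i, y / dx - j         # fractional offsets within the cell
    return {(i, j):         (1 - u) * (1 - v),
            (i + 1, j):     u * (1 - v),
            (i, j + 1):     (1 - u) * v,
            (i + 1, j + 1): u * v}

w = bilinear_weights(0.3, 0.75, 1.0)
print(sum(w.values()))   # the four weights always sum to 1
```

In the trilinear (3D) case the same construction yields eight weights; in a system matrix, these weights play the role of the contribution factors of grid points to a detection shell sample.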
The image value at any position can be interpolated from its four neighbors in 2D space or eight neighbors in 3D space. The interpolation-function-based system matrix can be constructed in two steps as[42] where the matrix represents the spherical Radon transform in 3D [Eq. (27)], and the matrix represents the differential operation in Eq. (18). The matrix can be obtained by accumulating all image grid points intersecting with an integrating spherical shell and satisfies the following relationship[42]: where is the approximation of the spherical Radon transform in Eq. (27), and and are the numbers of angular divisions over the polar and azimuthal directions, respectively, on a local spherical coordinate system centered at [42]. The differential matrix can be obtained using a difference operation and satisfies the following relationship[42]: The preceding procedure implies that the system matrix can be constructed via the two matrices and , while the matrix elements in do not need to be explicitly calculated. In addition to the implicit method discussed above, the interpolation-function-based system matrix can also be constructed by explicitly calculating its elements via analytical methods[37,40,42,43]. The analytical methods typically have better computational stability due to more accurate calculations of derivatives.
B. System matrix construction for acoustically heterogeneous media
The system matrix discussed above is constructed primarily from the analytical solution of the photoacoustic wave equations [Eq. (18)], where the acoustic medium is assumed to be homogeneous. When the acoustic media are heterogeneous, the photoacoustic discrete imaging model can still be described by the coupled photoacoustic wave equations [Eqs. (63)–(67)], which, however, have no analytical solutions. Based on the k-space pseudospectral method (see Sec. 3.4.1), Huang et al.
proposed a method for constructing the system matrix in acoustically heterogeneous media[41], which is briefly reviewed below. Define a vector containing all photoacoustic field variables at the time step as[41] where is a vector, is the total number of grid points in an image, and (, 2, 3) denote the particle velocity and acoustic density in the th direction on a 3D Cartesian grid at the th time step, is the acoustic pressure at the grid points, and the superscript denotes the matrix transpose. The update of the field variables from to is described by where is a matrix, and is the total number of time steps. The vector represents the values of the field variables at and can be calculated from the initial photoacoustic pressure as where is a matrix that maps the initial pressure to the initial values of the field variables at time [41]. Since detectors generally do not coincide with image grid points, the measured photoacoustic projection data can be related to the field quantities via interpolation as[41] where denotes the interpolation operation, such as Delaunay-triangulation-based interpolation. Combining Eqs. (104)–(106), we have By comparing Eq. (107) with Eq. (86), the system matrix can be obtained as As mentioned above, the th column of the system matrix corresponds to the response of a detector to the source defined by an expansion function at grid point . Therefore, the system matrix in heterogeneous media can be constructed by computing the response of a detector when the photoacoustic source defined by an expansion function traverses all image grid points, which can be calculated using the k-Wave toolbox[186]. In a homogeneous medium, the construction of the system matrix can be simpler. The response of a detector to an expansion-function-defined source at the first (reference) image grid point is computed and serves as the first column of .
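In a homogeneous medium, a detector's response to a source at a reference grid point can be reused for other grid points by shifting it in time and rescaling for 1/r spherical spreading. The sketch below illustrates this shortcut with a hypothetical Gaussian reference pulse; geometry, pulse shape, and names are all assumptions for illustration.

```python
import numpy as np

c, dt = 1500.0, 1e-7
det = np.array([24e-3, 0.0])          # detector position
ref = np.array([0.0, 0.0])            # reference grid point
grid = [np.array([3e-3, 0.0]), np.array([0.0, 3e-3])]   # other grid points

n_t = 512
t = np.arange(n_t) * dt
r_ref = np.linalg.norm(det - ref)

# Hypothetical reference response: a short pulse arriving at r_ref / c,
# attenuated by 1/r spherical spreading.
col_ref = np.exp(-((t - r_ref / c)**2) / (2 * (3 * dt)**2)) / r_ref

cols = []
for g in grid:
    r = np.linalg.norm(det - g)
    shift = int(round((r - r_ref) / (c * dt)))   # integer-sample time shift
    col = np.roll(col_ref, shift) * (r_ref / r)  # rescale for 1/r attenuation
    cols.append(col)

# Each shifted column peaks at that grid point's own time of flight.
print(np.argmax(cols[0]), np.argmax(cols[1]))
```

Only one full forward computation (the reference column) is needed; every other column is a cheap shift-and-scale, which is why homogeneous-medium system matrices are much faster to build.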
The remaining columns of can be obtained by time-shifting the elements in the first column while accounting for signal attenuation according to the relative positions of the current and reference grid points. This can greatly improve the construction speed of the system matrix .
3.5.3.Transducer response modeling
The IR model discussed above is established under the assumption that an ultrasound detector is point-like in shape and has an infinite bandwidth. However, a real ultrasound detector always has a finite aperture size and a limited bandwidth, which makes the IR model less accurate. To address this problem, it is necessary to incorporate the SIR and EIR of an ultrasound detector (see Sec. 2.3) to make the system matrix as realistic as possible.
A. EIR modeling
When the system matrix is constructed in the time domain, the discrete photoacoustic imaging model [Eq. (86)] incorporating the EIR of a detector can be written as[37] where the matrix denotes the Fourier transform of the EIR of a detector, and and represent the forward and inverse Fourier transforms, respectively. In contrast, when the system matrix is constructed in the frequency domain using the spherical voxel or the KB function, the EIR of a detector can be incorporated into the system matrix via[39] Here, is the Fourier transform of the acoustic pressure generated from the photoacoustic source defined by the spherical voxel or the KB function, is the frequency, is the frequency sampling interval, is the index of sampling points, and and are the Fourier transforms of the SIR and EIR of a detector, which can be modeled or measured in practice.
B. SIR modeling
B1. Far-field SIR
Several SIR models have been proposed and used for image reconstruction in PACT[187,188]. When a detector has a flat, rectangular surface with an area of [see Fig.
25(a)] and the far-field approximation condition[189] is met, the temporal Fourier transform of the SIR of the detector can be expressed as[187] where and represent the coordinates of the th grid in a local coordinate system at the th detector. Other variables have the same meanings as those defined in Eq. (90). With Eq. (111), the system matrix can be constructed according to Eq. (93).
B2. Patch-based SIR
When the far-field approximation condition does not hold, the far-field SIR model may lead to inaccurate reconstruction results. To address this problem, the surface of a detector can be divided into small patches [Fig. 25(b)], each of which approximately satisfies the far-field assumption. The SIR of each small patch can be characterized by Eq. (111), and the SIR of the whole detector is obtained by summing the SIRs of all small patches. Suppose that a detector has an area of and is divided into patches. Denoting and as the length and width of each small patch (i.e., , ), we have Here is the SIR of the th patch, which can be calculated using Eq. (111).
B3. Discrete-detection-element-based SIR
The far-field and patch-based SIR models are easy to incorporate into a system matrix based on the spherical voxel function or the KB function but are difficult to incorporate into a system matrix based on linear interpolation or the k-space pseudospectral method. Huang et al.[41] and Ding et al.[40] solved this problem by dividing the entire detector surface into a set of detection elements with equal areas, as shown in Fig. 25(c). In this case, the acoustic pressure recorded by a detector can be approximately written as[40] where is the total number of detection elements, is the area of a detection element (dimensionless), and is the position of the th detection element. is the acoustic pressure recorded at position and time and can be written in matrix form as [Eq.
(86)] where is the acoustic pressure recorded by the th detection element, and is the system matrix for the th detection element. Substituting Eq. (114) into Eq. (113), we have The system matrix in the discrete imaging model in Eq. (86) is thus replaced with a weighted summation of system matrices in the detection-element-based SIR model. To achieve an accurate representation, the size of the discrete detection elements should be small, preferably much less than the acoustic wavelength[41]. Therefore, the discrete-detection-element-based SIR model involves more computation and is generally more time-consuming than the far-field and patch-based SIR models.
3.5.4.Iterative reconstruction
Once the system matrix of a PACT imaging system is constructed, the initial photoacoustic pressure can be recovered from the measured projection data by solving Eq. (86). In principle, the linear photoacoustic imaging model in Eq. (86) can be solved by the pseudo-inverse matrix method, i.e., where is the Moore–Penrose pseudo-inverse matrix of , and the superscript denotes the conjugate transpose. However, the pseudo-inverse method is not commonly used in practice because the system matrix is generally too large to be inverted[61]. Alternatively, the imaging model can be solved iteratively. In this way, the image reconstruction problem is equivalent to solving the least-squares problem where represents the -norm, and is the approximate solution of the least-squares problem. Since the initial acoustic pressure is non-negative, a non-negativity constraint is imposed on Eq. (117). Generally, the optimization problem in Eq. (117) is ill-posed under non-ideal imaging conditions. To address this problem, Eq.
(117) is typically regularized by a penalty function and can be rewritten as where is the data fidelity term, is a regularization term, and is a regularization coefficient controlling the weight of regularization, which can be automatically optimized by methods such as generalized cross-validation (GCV)[190] and the L-curve method[191,192]. One popular regularization technique used in Eq. (118) is Tikhonov regularization, which can be expressed as[63] where is the Tikhonov matrix. In many cases, the Tikhonov matrix is chosen as the identity matrix (), giving preference to solutions with smaller norms. When the regularization term is convex and differentiable, as in Tikhonov regularization, the optimization problem in Eq. (118) can be solved by optimization methods such as stochastic gradient descent (SGD)[193,194], least-squares QR (LSQR)[195], and conjugate gradient (CG)[196]. When a gradient descent method[197] is used, the optimization problem in Eq. (118) can be solved by where is the iteration number, is the step size controlling the update rate, is the gradient of the fidelity term , and is the adjoint of the matrix . Tikhonov regularization emphasizes the smoothness of an image and tends to produce images with blurred edges. An alternative to Tikhonov regularization is sparse regularization, which is non-smooth and aims to find a solution with the minimum number of non-zero elements in certain transform domains[198–200]. ℓ1-norm-based regularization is one of the most commonly used sparse regularization methods and has the following form[63,201]: where is a transformation matrix that can be constructed using sparse basis functions, such as wavelets and finite difference operators.
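The gradient-descent update for the Tikhonov-regularized problem can be sketched on a toy system matrix and checked against the closed-form regularized solution (A^T A + λI)^(-1) A^T y. All sizes, seeds, and names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n_pix = 40, 20
A = rng.standard_normal((m, n_pix))   # toy system matrix
y = A @ rng.random(n_pix) + 0.01 * rng.standard_normal(m)   # noisy projections
lam = 0.5                             # Tikhonov coefficient (L = identity)

# Gradient descent on (1/2)||A x - y||^2 + (lam/2)||x||^2:
# each step moves against A^T(A x - y) + lam * x.
x = np.zeros(n_pix)
step = 1.0 / (np.linalg.norm(A, 2)**2 + lam)   # step below 1/Lipschitz
for _ in range(5000):
    x -= step * (A.T @ (A @ x - y) + lam * x)

x_closed = np.linalg.solve(A.T @ A + lam * np.eye(n_pix), A.T @ y)
print(np.max(np.abs(x - x_closed)))   # iterates match the closed form
```

In real PACT the closed form is unavailable because A is far too large to factor, which is exactly why the iterative update (optionally with a non-negativity projection after each step) is used instead.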
When a finite difference operator is employed, the ℓ1-norm regularization becomes total variation (TV) regularization[181], which can be written as where − 1, − 1, and − 1 are the neighboring grid points of the th grid along the -axis, -axis, and -axis, respectively, and is the total number of image grid points. Sparse regularization involves non-smooth cost functions and can be solved using proximal point algorithms, such as the iterative shrinkage thresholding algorithm (ISTA) and its accelerated version, fast ISTA (FISTA)[202], iteratively reweighted least squares (IRLS)[203], and the alternating direction method of multipliers (ADMM)[204]. Specifically, the optimization problem in Eq. (118) can be solved by a proximal-gradient-descent scheme[45]: where is a proximal operator related to the regularization term and the parameter . From Eqs. (120) and (123), we know that the update process of an IR algorithm is related to the current reconstructed image , the regularization term , and the gradient term , as shown in Fig. 20. Combining different regularization strategies may help achieve improved image quality compared with using only a single type of regularization[205]. When dual regularization is imposed, the regularization term can be written as where and represent two types of regularization, and and are the corresponding coefficients. Based on this idea, Li et al. proposed non-local-means and sparse-wavelet-based dual regularization and achieved image reconstruction with enhanced SNR and contrast[205]. Figure 26 shows an example of IR-based image reconstruction in PACT. Figure 26(a) shows a numerical blood vessel phantom. The photoacoustic signals generated from the phantom are received by a 50-mm-diameter full-ring detector array with 64 elements enclosing the phantom. Figure 26(b) is the image reconstructed by FBP [Eq. (47)]. Severe streak artifacts are present in the reconstructed image due to the limited number of ultrasound detectors.
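The proximal-gradient scheme for ℓ1 regularization can be sketched in a few lines: the proximal operator of the ℓ1 norm is element-wise soft thresholding, and alternating a gradient step with shrinkage (ISTA) recovers a sparse toy "image" from underdetermined projections. Sizes, seed, and the sparsity pattern are all illustrative.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (element-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(2)
m, n_pix = 60, 100
A = rng.standard_normal((m, n_pix)) / np.sqrt(m)   # underdetermined system
x_true = np.zeros(n_pix)
x_true[[5, 40, 77]] = [1.0, -0.5, 2.0]             # sparse "image"
y = A @ x_true

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2)**2
x = np.zeros(n_pix)
for _ in range(3000):   # ISTA: gradient step on the fidelity, then shrinkage
    x = soft_threshold(x - step * (A.T @ (A @ x - y)), step * lam)

print(np.linalg.norm(x - x_true))   # close to the sparse ground truth
```

FISTA adds a momentum term to the same two-step update; TV regularization swaps the shrinkage for the proximal operator of the finite-difference ℓ1 penalty.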
Figure 26(c) is the image reconstructed by TV-based IR. The results show that the IR algorithm can produce images with fewer artifacts, demonstrating its superior performance under non-ideal imaging scenarios. The intensity profiles of the reconstructed images indicated by the white arrow are shown in Fig. 26(d) for comparison of the results of IR and FBP. Table 8 summarizes representative work on IR-based image reconstruction in PACT.
Table 8. Representative Work on IR-Based Image Reconstruction in PACT
Note: PSTD: pseudospectral time-domain; Homog: homogeneous; Heterog: heterogeneous; Dim: dimension; GD: gradient descent; LASSO: least absolute shrinkage and selection operator.
4.Deep Learning Approaches
The aforementioned conventional approaches can achieve high-quality image reconstruction under ideal imaging conditions. However, they may face challenges under non-ideal imaging conditions, such as limited detector bandwidth, finite detector aperture, limited view angle, sparse detector arrangement, and inhomogeneous acoustic media. Inspired by the successful applications of DL in fields such as computer vision[212], natural language processing[213], and biomedical engineering[214], DL-based image reconstruction in tomography has drawn considerable attention in recent years[215–217]. DL has been successfully used for image reconstruction in CT, magnetic resonance imaging (MRI), positron emission tomography (PET), ultrasound, and other imaging modalities[215,217,218]. It can achieve tomographic image reconstruction with improved image quality and computational efficiency. In PACT, DL has been used to address a range of image reconstruction problems[48,50], such as detector bandwidth expansion[51,52], resolution enhancement[53,54], low-power/low-cost imaging[219,220], artifact removal[55], reconstruction acceleration[57], and reconstruction enhancement in sparse-view and limited-view imaging[44,46,47,58]. Specifically, the applications of DL in PACT image reconstruction fall mainly into five categories: signal preprocessing in the data domain, image postprocessing in the image domain, hybrid-domain processing in both the data and image domains, learned iterative reconstruction, and direct reconstruction.
4.1.Preprocessing in the Data Domain
Under non-ideal imaging conditions, the raw photoacoustic projection data measured by a detector array may be incomplete and contain noise.
Data-domain signal preprocessing attempts to transform incomplete photoacoustic projection data to nearly complete projection data using neural networks before image reconstruction, as shown in Fig. 27. Many related studies have emerged based on this idea[221]. Gutta et al. proposed a simple five-layer fully connected network to expand the bandwidth of measured photoacoustic projection data and achieved image reconstruction with improved contrast and quality[51]. Awasthi et al. developed a UNet-based model for super-resolution, denoising, and bandwidth enhancement of photoacoustic projection data in sparse-view imaging and achieved improved image reconstruction, as shown in Fig. 28[52]. Here, UNet is a specially designed U-shaped convolutional neural network, which has been widely used in various image processing tasks such as image denoising, image deconvolution, and image reconstruction. Zhang et al. designed a network based on a channel-wise attention mechanism to extract feature relations between sparse data channels and achieved full-channel projection data recovery from sparse input[222]. Zhang et al. proposed a UNet-based network to correct photoacoustic projection data with skull-induced aberration in brain imaging and demonstrated that the aberration could be effectively removed after preprocessing[223]. These studies show that preprocessing projection data in the data domain can help enhance image reconstruction quality. Table 9 lists representative work on DL-based signal preprocessing in PACT.

Table 9. Representative Work on DL-Based Signal Preprocessing in PACT
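The core idea of this data-domain mapping can be sketched with a deliberately simple stand-in for the networks cited above: a linear map fitted by ridge regression that restores band-limited synthetic signals. All signal models, dimensions, and the cutoff frequency below are illustrative assumptions, not values from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_train, n_test = 128, 400, 50

def make_signals(m):
    """Synthetic 'broadband' signals: random sums of Gaussian pulses."""
    t = np.arange(n)
    x = np.zeros((n, m))
    for j in range(m):
        for _ in range(3):
            c, a = rng.uniform(20, 108), rng.uniform(0.5, 1.5)
            x[:, j] += a * np.exp(-0.5 * ((t - c) / 2.5) ** 2)
    return x

def band_limit(x):
    """Simulate a limited detector bandwidth by discarding high frequencies."""
    X = np.fft.rfft(x, axis=0)
    f = np.fft.rfftfreq(n)
    X[f > 0.08, :] = 0.0
    return np.fft.irfft(X, n=n, axis=0)

Xf, Xt = make_signals(n_train), make_signals(n_test)
Yf, Yt = band_limit(Xf), band_limit(Xt)

# Ridge regression, W = Xf Yf^T (Yf Yf^T + lam I)^-1, acting as a one-layer
# linear "network" that maps band-limited data back toward broadband data
lam = 1e-3
W = Xf @ Yf.T @ np.linalg.inv(Yf @ Yf.T + lam * np.eye(n))

err_raw = np.mean((Yt - Xt) ** 2)        # band-limited vs. broadband
err_net = np.mean((W @ Yt - Xt) ** 2)    # "preprocessed" vs. broadband
print(f"MSE before: {err_raw:.4e}, after learned mapping: {err_net:.4e}")
assert err_net < err_raw
```

The linear map can only partially restore the discarded band, which is exactly why the cited studies use nonlinear networks; the sketch merely shows how a data-driven signal-to-signal mapping is trained and evaluated.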
4.2. Postprocessing in the Image Domain

In addition to signal preprocessing in the data domain, another application of DL in PACT is image postprocessing in the image domain, in which deep neural networks serve as postprocessing tools for image enhancement (see Fig. 29). This approach is especially useful for image artifact removal in non-ideal imaging scenarios. To mitigate the image artifacts that often appear in sparse-view and/or limited-view PACT images, a variety of DL-based postprocessing methods have been proposed. For example, Antholzer et al. proposed a UNet-based architecture to process images reconstructed by FBP in sparse-view imaging and demonstrated the effectiveness of the model in artifact removal[44]. Farnia et al. developed a UNet-based network to process images reconstructed by TR and achieved artifact suppression with limited projection data[224]. Davoudi et al. proposed a UNet-based framework for image quality recovery from sparse and limited projection data and demonstrated its performance with whole-body mouse imaging in vivo, as shown in Fig. 30[46]. In addition to the classic UNet, its variants are also commonly used as postprocessing tools in PACT. Guan et al. developed a fully dense UNet (FD-UNet) for image artifact removal in sparse-view PACT imaging and demonstrated its superiority over the conventional UNet[225]. Zhang et al. proposed a dual-domain UNet (DuDoUNet) with a specially designed information-sharing block, which takes both time-domain DAS images and k-space images as inputs, and verified its superiority with a public clinical database[226]. Guan et al. proposed a 3D modified UNet called dense dilation UNet (DD-UNet) and demonstrated its effectiveness in improving the imaging quality in 3D sparse-view and limited-view imaging[227]. Choi et al.
reported a 3D progressive UNet for multi-parameter dynamic volume imaging and demonstrated that it could improve imaging speed and reduce image artifacts in limited-view PACT imaging, as shown in Fig. 31[47]. In addition to UNet and its variants, generative adversarial networks (GANs) are another class of deep neural networks commonly used for image postprocessing in PACT[228]. GANs show excellent image processing capabilities and have been successfully used in many applications, such as image style conversion, image enhancement, and image restoration. A GAN consists of two sub-networks, i.e., a generator and a discriminator, which are trained adversarially until the discriminator cannot distinguish the results produced by the generator from the ground truth. Vu et al. explored the application of a Wasserstein GAN with gradient penalty (WGAN–GP) in limited-view and limited-bandwidth PACT imaging and achieved image artifact removal[229]. Lu et al. proposed a GAN-based DL approach (LV-GAN) to recover high-quality images in limited-view imaging and achieved artifact removal and high recovery accuracy under a view angle of less than 60°[58]. Shahid et al. developed a residual-network-based GAN (ResGAN) to improve image quality by denoising and removing image artifacts in sparse-view imaging[230]. The deep neural networks in the above studies generally require supervision and need a large number of paired images for training, which are often difficult to obtain, especially in experiments. To address this problem, Lu et al. proposed an unsupervised cyclic GAN network (CycleGAN) that only needs unpaired training and successfully achieved artifact removal in sparse-view and limited-view PACT images[231]. In addition to solving the image enhancement problem in sparse-view and limited-view imaging, DL-based postprocessing has also been used to address a variety of other imaging problems, such as reflection artifact removal and aberration mitigation. For example, Shan et al.
incorporated a deep neural network into conventional iterative algorithms and successfully corrected reflection artifacts caused by planar echogenic structures outside imaging tissues[232]. Jeon et al. developed a hybrid deep neural network based on SegNet and UNet and achieved SOS aberration mitigation, streak artifact removal, and temporal resolution improvement in sparse-view imaging[233]. Gao et al. proposed a modified encoder-decoder UNet to learn the mapping relationship between speckle patterns and target images in thick porous media and solved the scattering problem in transcranial photoacoustic brain imaging[234]. Shijo et al. proposed a shifted-window transformer to restore artifact-free images from artifact-heavy images reconstructed using a TR algorithm[235]. Deep neural networks have also been used to improve the spatial resolution of PACT images. For example, Rajendran and Pramanik employed a fully dense U-Net termed TARES to improve the spatially variant tangential resolution in circular scanning PACT imaging systems[53]. Zhang et al. exploited a fully dense UNet named Deep-E to improve the elevational resolution of linear-detector-array-based PACT imaging[54]. Based on this work, the same research group proposed a new Deep-E combining a 2D Deep-E and a 3D Deep-E and demonstrated its performance in elevational resolution improvement and deep vessel recovery in linear-detector-array-based PACT imaging[236]. In addition, DL can also be used to reduce noise in low-fluence light emitting diode (LED)-based PACT imaging. For example, Anas et al. designed a deep neural network consisting of a CNN and a recurrent neural network (RNN) to improve the SNR of noise-corrupted images in LED-based PACT imaging[219]. Hariri et al. proposed a multi-level wavelet CNN (MWCNN) model to enhance image contrast in low-fluence PACT imaging[220]. 
The above work demonstrates that DL-based postprocessing can enhance photoacoustic images in different aspects, including artifact removal, contrast boost, resolution improvement, and aberration mitigation. In addition to the above problems, DL-based postprocessing methods have also been used to address other problems in PACT, such as quantitative imaging[237–242], image fusion[243], image classification and segmentation[244–246], and band-frequency separation[247]. Table 10 summarizes representative work on DL-based image postprocessing in PACT.

Table 10. Representative Work on DL-Based Image Postprocessing in PACT
4.3. Hybrid-Domain Processing

Data-domain preprocessing and image-domain postprocessing can only extract feature information from one domain. To achieve improved imaging results, hybrid-domain processing, which attempts to extract feature information from both the data domain and the image domain (Fig. 32), is also commonly used. Based on this idea, Lan et al. proposed a hybrid deep neural network termed knowledge-infused GAN (Ki-GAN) for image enhancement in sparse-view PACT imaging[253]. The Ki-GAN employs raw photoacoustic signals and DAS-reconstructed photoacoustic images as input and can produce images with better quality than conventional DAS-type algorithms and a UNet-based postprocessing method for both fully and sparsely sampled data. Based on this work, the authors further developed a new hybrid-domain DL model named Y-Net for image enhancement in limited-view PACT imaging[254]. The Y-Net consists of two encoders and one decoder. The two encoders are used to separately process raw photoacoustic signals and DAS-reconstructed photoacoustic images, while the decoder is employed to fuse the outputs of the two encoders to generate final images. Davoudi et al. developed a hybrid-domain CNN model using both time-resolved photoacoustic signals and reconstructed images as input to enhance the image quality in limited-view PACT imaging[255] and achieved excellent results, as shown in Fig. 33. Moreover, Guo et al. proposed a hybrid-domain attention steered network (AS-Net) for image enhancement in sparse-view PACT imaging[256]. The AS-Net also takes raw photoacoustic signals and DAS-reconstructed images as inputs, but the photoacoustic signals are first transformed into a 3D square matrix before being fed into the network to ensure compatibility with the input of the AS-Net. The AS-Net adopts a self-attention mechanism for semantic feature extraction and fusion and is computationally efficient.
In addition to exploiting raw projection data directly, preprocessed projection data can also be used as the input of a hybrid-domain network. For example, Li et al. proposed a hybrid-domain CNN model called PRNet to improve the quality of sparse-view photoacoustic images[257]. The PRNet uses not only raw photoacoustic projection data but also their derivatives as input. Lan et al. proposed a joint feature fusion framework (JEFF-Net) to enhance photoacoustic images from limited-view projection data[258]. The JEFF-Net takes as input the sub-DAS images generated from the raw projection data of each individual channel and the whole DAS image generated from all projection data, as shown in Fig. 34. Table 11 lists representative work on DL-based hybrid-domain processing in PACT.

Table 11. Representative Work on DL-Based Hybrid-Domain Processing in PACT
4.4. Learned Iterative Reconstruction

Conventional DL approaches generally employ deep neural networks as independent modules for signal preprocessing, image postprocessing, or hybrid processing. In contrast, learned iterative reconstruction (IR) algorithms attempt to fuse deep neural networks into an IR model to improve the quality and efficiency of image reconstruction. Specifically, learned IR algorithms employ deep neural networks to learn the regularization term or regularizer[259,260] or learn an entire iteration process[45,261], as shown in Fig. 35. A regularizer is important for IR-based image reconstruction, especially for limited projection data. Conventional regularizers such as Tikhonov and TV may not necessarily be optimal. Data-driven DL-based regularizers were thus proposed to replace conventional regularizers. Li et al.[259] and Antholzer et al.[260] used a neural network to learn the Tikhonov regularizer to improve the image quality in sparse-view PACT imaging. However, the Tikhonov regularizer is trained before the IR loop and is not updated during iteration. Therefore, the learned Tikhonov regularizer may not be optimal for the image reconstruction problem in PACT. Wang et al. considered the entire IR process and proposed a learned IR model based on Eq. (120)[197], as shown in Fig. 36. The learned IR model takes the current reconstructed image and the gradient term as inputs and updates the regularizer and the hyperparameters in each iteration. The experimental results show that the learned IR model is effective at improving the quality of sparse-view PACT images and has a faster convergence speed than conventional IR algorithms. Moreover, Lan et al. proposed a novel untrained CNN-based compressed sensing regularizer for sparse-view PACT imaging and achieved improved image quality compared with conventional IR algorithms[262]. In addition to learning a regularizer, it is also possible to learn an entire iteration process [Eqs.
(120) and (123)] using deep neural networks. For example, Hauptmann et al. proposed a deep gradient descent (DGD) algorithm to reconstruct high-resolution 3D images from restricted photoacoustic projection data in limited-view PACT imaging[45]. In the DGD algorithm, a CNN model was designed to learn the proximal operator in Eq. (123), which yields high-quality images at a fast convergence speed, as shown in Fig. 37. Similarly, Yang et al. proposed using recurrent inference machines (RIMs) to learn an iteration process in IR and achieved accelerated reconstruction[261]. Lan et al. developed a CNN network combining a variational model to learn an iteration process and achieved robust and accelerated reconstruction in limited-view PACT imaging[263]. Shan et al. proposed a simultaneous reconstruction network (SR-Net) to update the initial pressure and the sound speed at each iteration of IR reconstruction for PACT imaging in acoustically heterogeneous media[264]. Moreover, Boink et al. proposed a learned primal-dual (L-PD) framework to learn an iteration process in IR and achieved simultaneous high-quality image reconstruction and segmentation in limited-view and sparse-view PACT imaging[265]. Recently, diffusion models in DL have attracted great attention in the field of biomedical image processing because of their powerful generative capabilities[266,267]. In PACT, some studies have incorporated diffusion models into iterative optimization frameworks, which show superior performance compared with conventional IR algorithms[268–271]. Generally, compared with conventional IR algorithms, learned IR methods can improve reconstruction efficiency by reducing the number of iterations. However, they are still time-consuming because the reconstruction model [Eqs. (120) and (123)] needs to be solved repeatedly. To accelerate the convergence of learned IR algorithms, Hsu et al. 
proposed a fast iterative reconstruction (FIRe) algorithm that simultaneously learns the discrete forward photoacoustic imaging model in Eq. (86) and an entire iteration process to reduce the reconstruction time[57]. Results showed that the FIRe algorithm can produce images with a quality comparable to that of learned IR and conventional IR algorithms but with reconstruction time reduced by factors of 9 and 620, respectively. Table 12 lists representative work on DL-based IR in PACT.

Table 12. Representative Work on DL-Based IR in PACT
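The update structure on which these learned IR schemes build, a data-fidelity gradient step followed by a proximal/regularization step, can be sketched with classical stand-ins: a soft-threshold in place of the learned proximal network and a random matrix in place of the PACT system matrix. All sizes and parameters are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 80, 120, 8                       # measurements, unknowns, sparsity

A = rng.normal(size=(m, n)) / np.sqrt(m)   # toy stand-in for the system matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
y = A @ x_true                             # simulated (noiseless) projection data

def prox(v, tau):
    """Soft threshold: a hand-crafted stand-in for a learned proximal network."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

eta = 1.0 / np.linalg.norm(A, 2) ** 2      # step size from the operator norm
x = np.zeros(n)
for _ in range(800):
    grad = A.T @ (A @ x - y)               # gradient of 0.5 * ||A x - y||^2
    x = prox(x - eta * grad, tau=1e-3)     # gradient step + regularization step

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3e}")
assert err < 0.2
```

Learned IR methods replace the fixed `prox` with a trained network (and may also learn the step size or the whole update), which is what allows them to converge in far fewer iterations than this hand-crafted scheme.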
4.5. Direct Reconstruction

In the aforementioned DL-based approaches, deep neural networks generally perform only certain functions, such as preprocessing, postprocessing, and regularizer learning, and cannot independently reconstruct images from raw projection data without the use of conventional algorithms. Nevertheless, deep neural networks can perform direct image reconstruction alone by learning the mapping relationship between raw photoacoustic projection data and reconstructed images (see Fig. 38). In 2018, Waibel et al. used a UNet-like model to reconstruct images from limited-view photoacoustic projection data and demonstrated the feasibility of neural-network-based direct image reconstruction through numerical simulation[272]. Lan et al. proposed a UNet-based deep neural network (DUNet) to reconstruct PACT images from multi-frequency photoacoustic projection data and demonstrated its superiority over conventional image reconstruction algorithms[273]. Feng et al. developed a residual-block-based end-to-end UNet (Res-UNet) to reconstruct PACT images from raw photoacoustic projection data and achieved high-quality image reconstruction with sharp edges and suppressed artifacts[274]. Lan et al. designed an encoder-decoder network to transform superimposed four-channel photoacoustic projection data into reconstructed images and achieved real-time PACT imaging with a single-channel data acquisition (DAQ) system[275]. Recently, Shen et al. developed a physics-driven DL-based filtered back projection (dFBP) framework for direct image reconstruction from raw projection data[276]. The dFBP network is constructed based on the physical model of the analytical FBP algorithm [Eq. (47)] and consists of a filtering module, a back-projection module, and a fusion module, as shown in Fig. 39.
The proposed dFBP is robust and flexible, can achieve direct signal-to-image transformation with enhanced accuracy, and can reconstruct high-quality, artifact-suppressed images from sparse-view, limited-view, and acoustic heterogeneity-contaminated projection data. Moreover, Lan et al. designed a self-supervised deep-learning framework for image reconstruction from extremely sparse projection data. The success of the network can be mainly attributed to the adoption of a transformer-based self-attention mechanism[277]. To reduce the complexity of network training, raw photoacoustic projection data can be preprocessed to produce high-level features before being fed into an image reconstruction network. Guan et al. proposed UNet-based pixel-wise deep learning (Pixel-DL) that uses pixel-interpolated data as input and realized direct image reconstruction in limited-view and sparse-view PACT imaging[278]. Similarly, Kim et al. developed a deep convolutional network (upgUNet), which takes multi-channel features extracted from raw photoacoustic projection data as input, and achieved direct image reconstruction in limited-view PACT imaging[279], as shown in Fig. 40(a). Inspired by the principle of FBP, Tong et al. designed a feature projection network (FPNet) that takes raw photoacoustic projection data and their first-order derivative as input [Fig. 40(b)] and achieved high reconstruction quality from limited-view data with sparse measurements[280]. Recently, Dehner et al. proposed a deep neural network named DeepMB for real-time IR image reconstruction with adjustable SOS[281,282]. DeepMB learns conventional IR algorithms using domain-transformed projection data as input and can reconstruct high-quality images approximately 3000 times faster than conventional IR algorithms (less than 10 ms per image). Table 13 lists representative work on DL-based direct image reconstruction in PACT.

Table 13. Representative Work on DL-Based Direct Signal-to-Image Reconstruction in PACT
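The essence of direct reconstruction, learning the signal-to-image mapping itself from training pairs, can be sketched on a toy linear problem: a single linear layer trained by gradient descent recovers smooth "images" from compressed projections because the training data supply the prior that the measurements alone lack. The forward operator, basis, and dimensions below are all illustrative assumptions, not any published network.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_meas, n_basis = 64, 48, 16

# Toy "images": smooth 1D profiles spanned by a low-frequency cosine basis
k = np.arange(n_basis)
B = np.cos(np.pi * np.outer(np.arange(n_pix) + 0.5, k) / n_pix)
A = rng.normal(size=(n_meas, n_pix)) / np.sqrt(n_meas)   # toy forward operator

def sample(m):
    coeff = rng.normal(size=(n_basis, m))
    x = B @ coeff                   # images
    return x, A @ x                 # (images, projection data)

X_tr, Y_tr = sample(500)
X_te, Y_te = sample(50)

# "Network": a single linear layer trained by full-batch gradient descent
W = np.zeros((n_pix, n_meas))
cov = Y_tr @ Y_tr.T / Y_tr.shape[1]
lr = 1.0 / np.linalg.norm(cov, 2)           # stable step size
for _ in range(500):
    grad = (W @ Y_tr - X_tr) @ Y_tr.T / Y_tr.shape[1]
    W -= lr * grad

err = np.linalg.norm(W @ Y_te - X_te) / np.linalg.norm(X_te)
print(f"relative test error of learned direct reconstruction: {err:.3e}")
assert err < 0.1
```

Real PACT data and images are far from this linear, noiseless setting, which is why the cited works use deep nonlinear architectures; the sketch only isolates the train-on-pairs, map-signals-to-images workflow.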
5. Performance Comparisons

Thus far, we have reviewed the principles of the six classes of image reconstruction algorithms in PACT, namely, DAS, FBP, SE, TR, IR, and DL. To choose the most appropriate algorithm in practice, it is necessary to understand the characteristics and performance of each method under different imaging scenarios. In this section, we discuss and compare the six classes of algorithms, focusing on image reconstruction quality and image reconstruction speed. Table 14 first summarizes the overall characteristics and performance of each algorithm when evaluated under different circumstances, which will be discussed in detail below.

Table 14. Comparison of Different Image Reconstruction Algorithms in PACTᵃ
ᵃThe comparisons in this table are based on typical situations. The variants of some algorithms may have distinct characteristics. For example, modified FBP algorithms may be able to incorporate the SIR of a detector[284,285], modified SE algorithms can be used for image reconstruction in heterogeneous media[167], and iterative DL may be slow[45].

5.1. Image Reconstruction Quality under Various Imaging Scenarios

5.1.1. Ideal imaging conditions

To achieve perfect image reconstruction or acoustic inversion in PACT, photoacoustic signal detection should be performed under ideal conditions: (1) the ultrasound detectors used for signal detection should have an infinite bandwidth; (2) the ultrasound detectors should have a point-like aperture; (3) the ultrasound detectors should be densely arranged in space; (4) the detector array formed by individual detectors should have a full view angle with respect to a sample; and (5) the acoustic media should be homogeneous. When these conditions are satisfied, all six classes of image reconstruction algorithms are expected to produce high-quality images. However, practical photoacoustic signal detection is unlikely to be ideal. In the following section, we discuss and compare the performances of the image reconstruction algorithms under non-ideal imaging scenarios.

5.1.2. Limited detector bandwidth

The ultrasound detectors used for photoacoustic signal detection should ideally have an infinite bandwidth so that they can respond to all frequency contents of a signal[4]. However, a practical detector always has a limited bandwidth and a non-ideal EIR, which distorts measured photoacoustic signals and blurs reconstructed images. To deal with this problem, a reconstruction algorithm should consider the non-ideal EIR. The six classes of algorithms reviewed in this paper have different characteristics in addressing this problem.
It is generally difficult for DAS, FBP, SE, and TR algorithms to incorporate the non-ideal EIR of a detector into their reconstruction models but easy for IR- and DL-based algorithms, as summarized in Table 14. Specifically, IR algorithms can easily solve the non-ideal EIR problem by incorporating it into the system matrix [Eq. (110)], and DL algorithms can solve the problem by training a reconstruction network with EIR-corrected signal-image datasets. To understand how different image reconstruction algorithms behave in the case of non-ideal EIR, an example is given in Fig. 41. In this example, the photoacoustic source is a numerical blood vessel phantom distributed in the plane [Fig. 41(a)]. The detector array used for imaging is a full ring with a diameter of 50 mm and has 512 evenly distributed elements, each of which has a point-like shape. To simulate the limited bandwidth case, the detector array is assigned a center frequency of 2 MHz and is assumed to have a Gaussian-like EIR with a fractional bandwidth of 80%. Figures 41(b)–41(d) show the images reconstructed by the FBP-, TR-, and EIR-corrected IR algorithms, respectively. Figures 41(e)–41(g) are the images of Figs. 41(b)–41(d) with negative values removed (negative values in a photoacoustic image have no physical meaning). The results show that the images reconstructed by FBP and TR have apparent artifacts, while the images reconstructed by IR have much fewer artifacts due to the correction of the detector EIR. Note that since DAS and SE typically have similar performance to FBP and are commonly adopted in PACT systems with linear and planar detector arrays, they are not compared in this example. Although DAS, FBP, SE, and TR algorithms are generally not ready to incorporate the EIR of a practical detector, their input signals can be preprocessed by deconvolution or DL to correct the non-ideal EIR. 
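A minimal sketch of deconvolution-based EIR correction is given below, with a simplified Gaussian smoothing kernel standing in for a measured (in reality band-pass) EIR and a Wiener-type regularized inverse filter; all signal and kernel parameters are illustrative assumptions.

```python
import numpy as np

n = 256
t = np.arange(n)

# Ground-truth pressure signal: two short Gaussian pulses
p = np.exp(-0.5 * ((t - 90) / 1.5) ** 2) + 0.6 * np.exp(-0.5 * ((t - 150) / 1.5) ** 2)

# Simplified "EIR": a Gaussian smoothing kernel standing in for the measured
# detector response (a real EIR is band-pass and must be measured)
tc = (t + n // 2) % n - n // 2             # kernel centered at index 0 (circular)
h = np.exp(-0.5 * (tc / 3.0) ** 2)
h /= h.sum()                                # unit DC gain

H = np.fft.rfft(h)
s = np.fft.irfft(np.fft.rfft(p) * H, n=n)   # detector output: p convolved with EIR

# Wiener-type deconvolution: S * conj(H) / (|H|^2 + eps)
eps = 1e-5
P_hat = np.fft.rfft(s) * np.conj(H) / (np.abs(H) ** 2 + eps)
p_hat = np.fft.irfft(P_hat, n=n)

err_raw = np.linalg.norm(s - p)             # distorted signal vs. ground truth
err_dec = np.linalg.norm(p_hat - p)         # deconvolved signal vs. ground truth
print(f"error before deconvolution: {err_raw:.3f}, after: {err_dec:.3f}")
assert err_dec < 0.5 * err_raw
```

The regularization constant `eps` plays the role that measurement noise statistics play in a full Wiener filter: frequencies where the EIR response is weaker than `eps` are suppressed rather than amplified, which is why deconvolution can only restore in-band content.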
In deconvolution, the EIR of a detector is first experimentally measured using a point object and then incorporated into the signal detection model [Eq. (23)] for correction. In DL-based preprocessing, the EIR of a detector can be compensated for by training a neural network using EIR-corrected photoacoustic signals. Figure 42 shows an example demonstrating that signal preprocessing helps correct the non-ideal EIR of a detector and improves image reconstruction results[51]. In this example, the photoacoustic source is a numerical blood vessel, as shown in Fig. 42(a). Photoacoustic signals were measured by a full-ring detector array consisting of 100 elements with a radius of 37.02 mm, a center frequency of 2.25 MHz, and a fractional bandwidth of 70%. Figures 42(b) and 42(c) are images reconstructed using full-bandwidth and limited-bandwidth photoacoustic signals, respectively. Figures 42(d) and 42(e) are images reconstructed using signals separately preprocessed by deconvolution and a deep neural network, which were both designed to compensate for the EIR effect of the detector array. Figures 42(f)–42(j) show the corresponding results for a Derenzo phantom. The simulation results show that deconvolution and DL-based signal preprocessing can effectively correct non-ideal EIRs and improve image reconstruction quality.

5.1.3. Finite detector aperture size

In addition to having an infinite bandwidth, an ideal ultrasound detector used for photoacoustic signal detection should also have a point-like shape so that it can respond to photoacoustic signals from all directions[4]. However, a practical detector always has a finite aperture size and a non-ideal SIR, which distorts measured photoacoustic signals and blurs reconstructed images. To address this problem, a reconstruction algorithm should consider the non-ideal SIR. The six classes of algorithms reviewed in this paper have different characteristics in addressing this problem.
It is generally difficult for DAS, FBP, SE, and TR algorithms to incorporate the non-ideal SIR of a detector into their reconstruction models but easy for IR- and DL-based algorithms, as summarized in Table 14. Specifically, IR algorithms can easily solve the non-ideal SIR problem by incorporating it into the system matrix [Eq. (93)], and DL algorithms can solve this problem by training a reconstruction neural network with SIR-corrected signal-image datasets. To understand how different image reconstruction algorithms behave in the case of non-ideal SIRs, a simulation from Ref. [286] is adapted and shown in Fig. 43. In this example, the photoacoustic source consists of six point absorbers located at different distances from the origin. Photoacoustic signals were recorded by a detector with a 6-mm-diameter aperture and a center frequency of 5 MHz. The detector rotated around the photoacoustic source to record signals at different positions. Figures 43(a) and 43(b) show the images reconstructed by a model-based algorithm and DAS, respectively. The model-based algorithm uses a similar discrete imaging model [Eq. (86)] as IR and couples the SIR of the detector in its system matrix. The results show that the image reconstructed by DAS has significant distortions while the image recovered by the model-based algorithm has far fewer distortions due to the correction of the detector SIR. Although DAS, FBP, SE, and TR algorithms are generally not ready to incorporate the SIR of a practical detector, their output images can be postprocessed by deconvolution or DL to correct the non-ideal SIR. In deconvolution, the SIR of a detector is first modeled using linear acoustics[39,287,288] and then incorporated into a non-blind image deconvolution model for SIR correction. Alternatively, the SIR can be unknown and a blind deconvolution algorithm is employed to correct the non-ideal SIR.
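The origin of the SIR distortion can be illustrated numerically by modeling a finite-aperture element as an average over sub-point receivers: for an off-axis source, the spread of arrival times across the aperture smears the received pulse. The geometry and pulse parameters below are toy assumptions, not those of the cited simulation.

```python
import numpy as np

c = 1500.0                  # assumed speed of sound (m/s)
fs = 50e6                   # assumed sampling rate (Hz)
n = 2048
t = np.arange(n) / fs

src = np.array([0.0, 20e-3])            # point source 20 mm above the detector line
pulse_sigma = 40e-9                      # 40 ns Gaussian pulse

def received(det_xs):
    """Element output modeled as the average over sub-point receivers."""
    sig = np.zeros(n)
    for x in det_xs:
        d = np.hypot(src[0] - x, src[1])        # distance source -> sub-point
        sig += np.exp(-0.5 * ((t - d / c) / pulse_sigma) ** 2)
    return sig / len(det_xs)

def temporal_spread(sig):
    """Std of the signal's energy distribution over time (a width measure)."""
    w = sig ** 2 / np.sum(sig ** 2)
    mu = np.sum(w * t)
    return np.sqrt(np.sum(w * (t - mu) ** 2))

point = received(np.array([10e-3]))                  # ideal point detector, off-axis
finite = received(np.linspace(7e-3, 13e-3, 31))      # 6-mm element, same center

print(f"temporal spread, point: {temporal_spread(point)*1e9:.1f} ns, "
      f"finite aperture: {temporal_spread(finite)*1e9:.1f} ns")
assert temporal_spread(finite) > 1.5 * temporal_spread(point)
```

Averaging delayed copies of the pulse over the aperture is exactly the integration that defines the SIR, which is why IR can correct it by building the same integration into the system matrix.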
In DL-based postprocessing, the SIR of a detector can be compensated for by training a neural network using SIR-corrected photoacoustic images. Figure 44 shows an example demonstrating that DL-based image postprocessing helps correct the non-ideal SIR of a detector and improves image quality[53]. In this example, the photoacoustic source contains four point targets. The detector used for signal detection has a finite aperture size and a center frequency of 2.25 MHz. The detector rotates around the photoacoustic source to measure signals at different positions. Figures 44(a) and 44(b) are images reconstructed using DAS without and with DL-based image postprocessing, respectively. The results show that the DAS-reconstructed image without postprocessing has significant distortions, while the image with postprocessing has far fewer distortions. This indicates that DL-based image postprocessing can effectively correct non-ideal SIRs and improve image reconstruction quality. Although here we independently discuss the EIR and SIR of a detector, in practice they are inseparable and together constitute the total impulse response (TIR) of a detector. To address the TIR of a detector, methods similar to those used for EIR and SIR correction can be used. In other words, the TIR of a detector can be experimentally measured and incorporated into IR or DL models for improved image reconstruction[289]. Alternatively, non-blind deconvolution, blind deconvolution, and DL-based image postprocessing techniques can also be adopted to correct the non-ideal TIR of a detector[290–292].

5.1.4. Sparse-view imaging

Ideally, the ultrasound detectors used for photoacoustic signal detection should be densely arranged in space to satisfy the spatial Nyquist sampling theorem[4]. However, the detectors in a practical detector array are often sparse due to high fabrication costs, leading to the ill-posed problem of image reconstruction from sparse projection data.
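The effect of spatial undersampling can be illustrated with a toy delay-and-sum reconstruction of a single point absorber from a full-ring array: both a sparse and a dense ring localize the target, but the sparse ring leaves much stronger off-target (streak-like) energy. All geometry and pulse parameters are illustrative assumptions.

```python
import numpy as np

c, fs, n_t = 1500.0, 20e6, 1500             # assumed acoustic/sampling parameters
t = np.arange(n_t) / fs
R = 25e-3                                    # ring radius: 25 mm
src = np.array([6e-3, -4e-3])                # a single point absorber
sigma = 80e-9                                # pulse width (s)

xs = np.linspace(-10e-3, 10e-3, 64)          # 64 x 64 grid, 20 mm field of view
gx, gy = np.meshgrid(xs, xs, indexing="ij")
grid = np.column_stack([gx.ravel(), gy.ravel()])

def das(n_det):
    """Delay-and-sum reconstruction from an n_det-element full ring."""
    ang = 2 * np.pi * np.arange(n_det) / n_det
    dets = R * np.column_stack([np.cos(ang), np.sin(ang)])
    tof = np.linalg.norm(dets - src, axis=1) / c             # time of flight
    sino = np.exp(-0.5 * ((t[None, :] - tof[:, None]) / sigma) ** 2)
    img = np.zeros(len(grid))
    for k in range(n_det):                                   # sum each channel
        d = np.linalg.norm(grid - dets[k], axis=1) / c       # at the pixel delay
        img += np.interp(d, t, sino[k])
    return img / n_det

img_sparse, img_dense = das(16), das(256)

far = np.linalg.norm(grid - src, axis=1) > 3e-3              # exclude the target
for img in (img_sparse, img_dense):
    assert np.linalg.norm(grid[np.argmax(img)] - src) < 1e-3  # target localized
print(img_sparse[far].max(), img_dense[far].max())
assert img_sparse[far].max() > img_dense[far].max()           # stronger artifacts
```

Each channel back-projects its signal along an arc; with few channels the individual arcs remain visible as streaks, while dense sampling averages them into a low background, which is the behavior the comparisons below quantify for full algorithms.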
The six classes of algorithms reviewed in this paper have different performances when dealing with this problem. Generally, analytical algorithms such as DAS, FBP, SE, and TR have similar performance and require more projection data than IR algorithms to reconstruct an image with reasonable quality; IR algorithms, which solve the image reconstruction problem in the sense of least squares, in turn require more projection data than DL-based algorithms, which are data-driven and can realize complicated signal-to-image mapping using sparse projection data. In other words, DL-based algorithms have better performance than IR algorithms in sparse-view PACT imaging, and IR algorithms have better performance than DAS, FBP, SE, and TR algorithms, as summarized in Table 14. To evaluate the performance of different image reconstruction algorithms in sparse-view imaging, an example is presented in Fig. 45. The simulation settings in this example are similar to those in Fig. 41 except that the circular detector array has a varying number of elements from 32 to 512 and all detector elements have infinite bandwidth. Figure 45 shows the images reconstructed by the FBP-, TR-, and TV-regularized IR algorithms using 32-, 128-, and 512-channel projection data. As expected, the image reconstruction quality of each algorithm improves as the number of detectors increases. However, FBP and TR suffer from streak artifacts more significantly than TV-regularized IR in sparse-view cases (e.g., 32 and 128 views), although the three algorithms yield similar image quality when the detection views are dense (e.g., 512). Since DAS and SE typically have similar performance to FBP and are commonly adopted in PACT systems with linear and planar detector arrays, they are not compared in this example. Compared with IR algorithms, DL-based algorithms may have better image reconstruction performance in sparse-view imaging[276,293]. To illustrate this, Fig.
46 shows an example comparing the performance of conventional algorithms such as TR, back projection, and TV-based IR, and two DL-based image reconstruction algorithms, namely, Pixel-DL[278] and model-based learning (MBLr)[45], for image reconstruction using sparsely sampled projection data[293]. In this example, the imaging target is mouse cerebral vasculature [Fig. 46(a)], and photoacoustic signals were received by an 8-mm-diameter full-ring array with 32 evenly distributed detectors. Figures 46(b)–46(f) are the images reconstructed by TR, back projection, TV-regularized IR, Pixel-DL, and MBLr, respectively. The results show that the TV-regularized IR algorithm has higher image reconstruction quality than the TR and back-projection algorithms, which suffer from severe artifacts due to the sparseness of the projection data, while the two DL-based reconstruction methods, especially MBLr, yield improved quality compared with all conventional algorithms in this case. Although conventional image reconstruction algorithms suffer from artifacts in sparse-view imaging, they can be improved[221,294–296]. For example, Sandbichler et al. achieved enhanced image reconstruction using conventional algorithms by transforming sparse projection data into dense data using compressed sensing in sparse-view imaging[297]. Meng et al. developed a principal component analysis (PCA)-based method and achieved high-quality 3D image reconstruction with sparsely sampled data without involving an iterative process[298]. Hu et al. analyzed the aliasing problem caused by spatial undersampling and proposed a temporal low-pass-filtering and spatial interpolation method for aliasing mitigation and artifact suppression[299,300]. Cai et al. analyzed the relationship between streak artifacts and sparse projection data and developed an adaptive FBP algorithm named contamination-tracing back-projection (CTBP) for image artifact suppression in sparse-view imaging[301]. Hakakzadeh et al. 
proposed a spatial-domain factor for sparse-sampling circular-view PACT and achieved artifact suppression and resolution improvement compared with conventional algorithms[302]. Moreover, Wang et al. proposed an iterative scheme combining virtually parallel projecting and spatially adaptive filtering and achieved enhanced image reconstruction in sparse-view PACT imaging[303].

5.1.5. Limited-view imaging

Ideally, the detector array used for PACT imaging should have a full view angle (4π steradians in 3D space) with respect to the sample being imaged[4]. However, a practical detector array always has a limited view angle, leading to the ill-posed problem of image reconstruction from limited-view projection data. The six classes of algorithms reviewed in this paper perform differently when dealing with this problem. Generally, analytical algorithms such as DAS, FBP, SE, and TR have similar performance and require a larger view angle than IR algorithms to reconstruct an image with reasonable quality, since IR algorithms solve the image reconstruction problem in the least-squares sense; IR algorithms in turn require a larger view angle than DL-based algorithms, which are data-driven and can realize complicated signal-to-image mapping using limited-view projection data. In other words, DL-based algorithms outperform IR algorithms in limited-view PACT imaging, and IR algorithms outperform DAS, FBP, SE, and TR algorithms, as summarized in Table 14. To evaluate the performance of different image reconstruction algorithms in limited-view imaging, an example is presented in Fig. 47. The simulation settings in this example are similar to those used in Fig. 41 except that the circular detector arrays used for imaging have limited view angles ranging from 90° (quarter circle) to 360° (full circle), and all detector elements have an infinite bandwidth.
Figure 47 shows the images reconstructed by the FBP, TR, and TV-regularized IR algorithms as the view angle of the detector array increases from 90° to 360°. As expected, the image reconstruction quality of each algorithm improves as the view angle of the detector array grows. However, FBP and TR suffer from artifacts more significantly than does TV-regularized IR in the limited-view cases, although the three algorithms yield similar image quality when the view angle reaches 360° (full circle). Compared with conventional algorithms, DL-based methods may have better image reconstruction performance in limited-view imaging[276]. To illustrate this, Fig. 48 shows a simulation comparing the performance of FBP and a DL-based FBP algorithm (dFBP) for image reconstruction using limited-view projection data[276]. In this example, the imaging target is a numerical zebrafish, and photoacoustic signals are collected by a circular detector array, which has a diameter of 80 mm and 512 evenly distributed detectors, as shown in Fig. 48(a). To simulate limited-view imaging scenarios, the shape of the detector array is reduced from a full circle to partial circles with progressively smaller central angles. Figures 48(b) and 48(c) show the corresponding images reconstructed by FBP and dFBP under different view angles. The dFBP method achieves better image reconstruction quality than FBP, which suffers from severe artifacts due to the incompleteness of the projection data. Although image reconstruction in limited-view imaging is challenging, the reconstruction procedure can be improved. First, limited-view raw projection data can potentially be augmented to achieve improved reconstruction. In 2004, Patch derived consistency conditions for the projection data and estimated missing data from measured data[304]. Gamelin et al. employed a single-stage Wiener optimal filter to augment measured projection data by interpolation between measurement locations[305].
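As a toy illustration of this data-augmentation idea, angular interpolation of a full-ring sinogram can be sketched in a few lines of NumPy. This is a generic linear-interpolation sketch, not the Wiener-filter method of Ref. [305]; the array shapes and the periodic handling of the angle axis are illustrative assumptions.

```python
import numpy as np

def densify_projections(sinogram, angles_sparse, angles_dense):
    """Linearly interpolate a sparse-view sinogram of shape
    (n_views, n_samples) onto a denser set of view angles.
    Assumes a full-ring geometry, so the angle axis is periodic."""
    n_views, n_samples = sinogram.shape
    # np.interp needs increasing sample points; append the first view
    # again at angle + 2*pi so the interpolation wraps around the ring.
    angles_ext = np.concatenate([angles_sparse, angles_sparse[:1] + 2 * np.pi])
    sino_ext = np.vstack([sinogram, sinogram[:1]])
    dense = np.empty((len(angles_dense), n_samples))
    for j in range(n_samples):  # each time sample interpolated independently
        dense[:, j] = np.interp(angles_dense, angles_ext, sino_ext[:, j])
    return dense

# Example: densify 32 views to 128 views on a full ring
views32 = np.linspace(0, 2 * np.pi, 32, endpoint=False)
views128 = np.linspace(0, 2 * np.pi, 128, endpoint=False)
sino32 = np.random.rand(32, 1024)
sino128 = densify_projections(sino32, views32, views128)
```

The densified sinogram can then be fed to a conventional algorithm such as FBP; more sophisticated interpolators, or learned ones, follow the same interface.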
Second, conventional algorithms can be modified to adapt to the limited-view image reconstruction problem. For example, Paltauf et al. proposed a modified FBP algorithm that uses a weight factor to improve the image reconstruction quality in limited-view imaging[306]. Third, DL-based methods can be used as postprocessing tools for image enhancement. For example, Lu et al. proposed a GAN-based image postprocessing method and achieved high-quality image recovery from limited-view photoacoustic images[58].

5.1.6. Heterogeneous media

Many image reconstruction algorithms in PACT, such as DAS and FBP, are derived under the assumption that the acoustic media are homogeneous, lossless, and non-dispersive. However, this assumption does not hold for biological tissues, in which strong acoustic heterogeneities, such as bones and air cavities, may be present[296,307]. To address this problem, an image reconstruction algorithm should account for the properties of the acoustic medium. The six classes of algorithms reviewed in this paper have different characteristics in this regard. DAS and FBP algorithms can employ dual SOS[308] or jointly reconstruct the initial photoacoustic pressure and the SOS[309] to reduce image artifacts caused by acoustic heterogeneity. Modified SE algorithms can incorporate variable SOS into reconstruction models to account for acoustic heterogeneity[167]. In comparison, TR-, IR-, and DL-based algorithms can more easily incorporate the acoustic properties of a medium into their reconstruction models, as summarized in Table 14. Specifically, TR algorithms can handle acoustic heterogeneity by incorporating the properties of acoustic media, such as SOS, density, dispersion, and absorption, into their acoustic propagation models (see Sec. 3.4.1). IR algorithms can do so by incorporating the properties of acoustic media into the system matrix through solving coupled photoacoustic wave equations (see Sec. 3.5.2).
DL algorithms can solve this problem by training a reconstruction network with heterogeneity-corrected signal-image datasets. Of course, incorporating the properties of acoustic media into a reconstruction model requires prior knowledge of the media. To understand how different image reconstruction algorithms behave in the presence of acoustic heterogeneity, an example is presented in Fig. 49. The simulation settings in this example are similar to those used in Fig. 41 except that the sound speeds in the background and the ROI [the white dashed box in Fig. 49(a)] are assumed to be 1500 and 1520 m/s, respectively, and all detector elements have an infinite bandwidth. Figure 49(b) shows the image reconstructed by FBP with a constant SOS of 1505 m/s, which is the optimal SOS for reconstruction. Figures 49(c) and 49(d) show the images reconstructed by TR and TV-regularized IR, with the actual SOS distribution coupled into their reconstruction models. The images reconstructed by TR and IR with the correct SOS distribution have better image quality than the FBP image, which contains splitting artifacts at the end of the vessel. Although algorithms such as DAS, FBP, and SE generally cannot readily incorporate the properties of acoustic media into their reconstruction models, their output images can be postprocessed for image enhancement. Figure 50 shows a simulation demonstrating that image postprocessing helps correct for acoustic heterogeneity and improves image reconstruction results[233]. The photoacoustic source in this example contains multiple elliptical and line absorbers, and the medium has three SOS regions with values of 1480, 1450, and 1575 m/s at different depths. The detector array used for imaging is a linear array located at the top of the image. Figure 50(a) shows the image reconstructed by a multi-stencil fast marching (MSFM) approach, which is regarded as the ideal result.
The MSFM approach estimates the acoustic TOF based on the eikonal equation and the known SOS distribution and can achieve high-quality beamforming. Figure 50(b) shows the image reconstructed by a conventional Fourier beamformer using a constant SOS of 1540 m/s. Figure 50(c) shows the image in Fig. 50(b) processed by an automatic SOS selection method, which attempts to maximize the sharpness of the photoacoustic image with an optimal SOS (1480 m/s in this case). Figure 50(d) shows the image in Fig. 50(b) processed by a deep neural network called SegUNet. The results show that the image produced by the conventional Fourier beamformer has significant distortions, which can be partially corrected by the automatic SOS selection method and fully corrected by the DL-based postprocessing method.

5.1.7. Other aspects

In addition to the qualitative reconstruction discussed above, quantitative image reconstruction is another important aspect to consider in certain imaging scenarios, such as quantitative photoacoustic imaging. Under ideal conditions, most algorithms can achieve accurate amplitude reconstruction of the initial photoacoustic pressure. However, real imaging scenarios are never ideal, which makes quantitative image reconstruction very challenging. Generally, IR[41,64] and DL[293] algorithms can achieve more accurate image reconstruction than other methods and are more suitable for quantitative photoacoustic imaging. Additionally, negative artifacts that have no physical meaning often occur in reconstructed photoacoustic images under non-ideal conditions[137]. In this case, IR algorithms with non-negativity constraints can be used to reconstruct photoacoustic images free from negative components[207].

5.2. Image Reconstruction Speed and Memory Footprint

5.2.1. Image reconstruction speed

In addition to image quality, image reconstruction speed and computational complexity are other critical indicators for measuring the performance of an algorithm.
An ideal image reconstruction algorithm should be fast enough to allow a PACT imaging system to capture transient biodynamic processes such as heartbeat and blood flow in a living subject. The computational complexity of image reconstruction algorithms in PACT varies from algorithm to algorithm. Among the non-iterative methods, i.e., DAS, FBP, SE, and TR, SE is the most computationally efficient owing to its use of the FFT. Assuming that a 3D image to be reconstructed has a size of N × N × N, with the detector number scaling correspondingly[32], SE algorithms have a computational complexity of O(N³ log N) for 3D image reconstruction and O(N² log N) for 2D image reconstruction for the specific detection geometries listed in Table 5. To perform the same reconstruction tasks, TR algorithms have computational complexities of O(N⁴) for 3D and O(N³) for 2D[310], while back-projection-type algorithms, such as DAS and FBP, have computational complexities of O(N⁵) for 3D and O(N³) for 2D[62,310]. One may notice that TR algorithms have lower complexity than back-projection-type algorithms for 3D image reconstruction but seem to be slower in practice. This is because TR algorithms need to compute an entire acoustic field [Eqs. (69)–(73)] step by step, which is time- and memory-consuming, especially for large-scale 3D imaging models. In contrast, back-projection-type algorithms can directly reconstruct the ROI instead of the whole imaging region and can be implemented with parallel computing. Compared with non-iterative methods, IR algorithms usually have a much greater computational complexity due to the repeated evaluation of the image reconstruction model in Fig. 20. The computational complexity of DL-based algorithms depends on the specific method. Generally, non-iterative DL algorithms, such as data-, image-, and hybrid-domain processing algorithms and direct reconstruction algorithms, can process an image in roughly the same amount of time as conventional non-iterative algorithms, such as FBP and TR.
Iterative DL algorithms, such as learned IR, are generally faster than conventional IR algorithms because they normally require fewer iterations and because the discrete forward imaging model [Eq. (86)] can be learned by neural networks. Figure 51(a) shows a qualitative comparison of the speeds of different image reconstruction algorithms. Image reconstruction in PACT can be accelerated by GPUs[42,152,311]. For example, Wang et al. developed a comprehensive GPU-based framework for PACT image reconstruction and achieved a substantial speed increase over a central processing unit (CPU)-based implementation when reconstructing extremely large-scale 3D images[152]. Liu et al. reported an ultrafast GPU-based FBP implementation for PACT image reconstruction[311]. The implementation requires only 0.38 ms to reconstruct a 2D image with a size of 512 × 512 pixels and 6.15 ms to reconstruct a 3D image with a size of 160 × 160 × 160 voxels. Wang et al. proposed GPU-based parallelization strategies to accelerate the FBP algorithm and a penalized least-squares and interpolation-based IR (PLS-Int-IR) algorithm[42] and improved the image reconstruction speeds of FBP and PLS-Int-IR by factors of 1000 and 125, respectively. Although most image reconstruction algorithms can, in principle, be accelerated by GPUs, the speedup ratios may vary. Back-projection-type algorithms, such as DAS and FBP, inherently support parallel computing and thus can achieve high speedup ratios. In contrast, TR and IR algorithms either need to compute an entire photoacoustic field [Eqs. (69)–(73)] or need to update the image reconstruction model (Fig. 20) step by step, and thus offer limited acceleration.

5.2.2. Memory footprint

Memory footprint is another critical indicator for measuring the performance of an algorithm.
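A back-of-envelope estimator makes the scales concrete. The sketch below assumes single-precision storage and a dense system matrix (a worst case, since the matrix is sparse in practice); the function name and argument layout are illustrative.

```python
def memory_bytes(n_im, n_d, n_t, algo="fbp", bytes_per_val=4):
    """Rough single-precision memory footprint in bytes.
    n_im : number of image pixels/voxels
    n_d, n_t : detector count and time samples per detector
    algo : 'das'/'fbp'/'se'/'tr' store the image plus the projection
           data; 'ir' additionally stores a system matrix of size
           (n_d * n_t) x n_im (dense worst case -- it is sparse in
           practice and can be compressed)."""
    total = n_im + n_d * n_t              # image + measured signals
    if algo == "ir":
        total += (n_d * n_t) * n_im       # dense system matrix
    return total * bytes_per_val

# 2D example: 512 x 512 image, 512 detectors, 2048 samples per channel
mb = memory_bytes(512 * 512, 512, 2048) / 2**20        # megabytes
ir_gb = memory_bytes(512 * 512, 512, 2048, "ir") / 2**30  # gigabytes
```

For this modest 2D case, the image-plus-data footprint is a few megabytes, whereas a dense system matrix alone would occupy on the order of a terabyte, which is why sparse or matrix-free representations are essential for IR.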
Generally, the memory footprint of an image reconstruction algorithm in PACT depends on the size N_im of the image to be reconstructed and the size N_d × N_t of the measured photoacoustic signals (N_d: detector number; N_t: signal sampling length). For DAS, FBP, SE, and TR algorithms, the memory footprint (in stored values) can be approximately estimated as

M ≈ N_im + N_d N_t. (125)

Similarly, the memory footprint of IR algorithms can be approximately estimated as

M ≈ N_im + N_d N_t + (N_d N_t) × N_im, (126)

where the last term is the size of the system matrix in Eq. (86). According to Eqs. (125) and (126), back-projection-type algorithms, such as DAS and FBP, require the least memory, while IR algorithms require the most. Moreover, back-projection-type algorithms can reconstruct only a portion of an imaging region (i.e., the ROI) rather than the whole region, which further reduces the memory needed. IR algorithms can also reconstruct ROIs directly, but their memory footprint remains high due to the extremely large size of the system matrix, which is, however, sparse and can be compactly represented to reduce the memory footprint[36]. In contrast, the memory footprint of a DL algorithm depends on its network structure. In general, the memory footprint of non-iterative DL algorithms is greater than that of conventional non-iterative algorithms but less than that of IR algorithms. The memory footprint of learned IR algorithms is comparable to that of conventional IR algorithms and is also very high. Figure 51(b) shows a qualitative comparison of the memory footprints of different image reconstruction algorithms.

6. Challenges and Discussion

The six classes of algorithms reviewed in this paper, namely, DAS, FBP, SE, TR, IR, and DL, can achieve high-quality image reconstruction under ideal imaging conditions.
However, they may face challenges under non-ideal imaging conditions, especially in the following scenarios: (1) the detectors used for imaging have limited bandwidths and finite aperture sizes; (2) the detector array has sparse elements and a limited view angle; and (3) the acoustic media are strongly heterogeneous. The bandwidth and aperture size of an individual ultrasound detector in a detector array are important for high-quality photoacoustic image reconstruction. An ideal detector should have infinite bandwidth and a point-like aperture so that it responds to all frequency content of a photoacoustic signal from all directions. However, practical ultrasound detectors always have limited bandwidths and finite aperture sizes, which distort measured photoacoustic signals and blur reconstructed images. Researchers typically first measure the impulse response of a detector and then couple it into IR algorithms[37,39] or use deconvolution approaches[291,292,312] to remove the impact of detector bandwidth and aperture size. However, these methods lead to optimization problems that are typically slow, ill-posed, and sensitive to noise. How to build more efficient models to compensate for the impact of detector bandwidth and aperture size is a subject worth studying. The element number and view angle of a detector array are also important for high-quality photoacoustic image reconstruction. Ideally, the element number of a detector array should be large enough to satisfy the spatial Nyquist sampling criterion, and the view angle should be 4π steradians (full 3D view) to record complete photoacoustic signals in 3D space. However, practical detector arrays, such as linear and planar arrays, typically have a limited number of elements and a limited view angle, which results in the challenging problem of image reconstruction from sparse-view and limited-view projections.
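The spatial Nyquist criterion can be turned into a quick estimate of the required element count. The rule of thumb below (element pitch no larger than half the shortest acoustic wavelength) is a common simplification, and the specific numbers are illustrative.

```python
import math

def min_ring_elements(radius_m, f_max_hz, sos_m_s=1500.0):
    """Rule-of-thumb element count for a full-ring array: keep the
    element pitch along the ring at or below half of the shortest
    acoustic wavelength (lambda_min = c / f_max), i.e.
    N >= 2 * pi * R / (lambda_min / 2)."""
    lambda_min = sos_m_s / f_max_hz
    return math.ceil(2 * math.pi * radius_m / (lambda_min / 2))

# Example: 50-mm-radius ring with a 5-MHz upper cutoff
n = min_ring_elements(0.05, 5e6)   # pitch must stay below 0.15 mm
```

Even this rough estimate calls for on the order of two thousand elements for a realistic 5-MHz full-ring system, so practical arrays with a few hundred elements are spatially undersampled.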
IR and DL are two classes of algorithms that are commonly used to address image reconstruction problems in these scenarios[211,276,278,313]. However, these methods typically involve intensive calculations and/or require huge amounts of training data that are difficult to obtain in practice. How to develop more accurate and efficient algorithms to better address the image reconstruction problem in sparse-view and limited-view imaging is also worth studying. The acoustic homogeneity of a medium is also important for high-quality photoacoustic image reconstruction. Most current image reconstruction algorithms used in PACT assume that the acoustic media are homogeneous. However, this does not hold in biological tissues, especially when strong heterogeneities, such as bones and air cavities, are present[307]. Current methods for image reconstruction in heterogeneous tissues include half-time[314] or partial-time[315] reconstruction, autofocus reconstruction by optimizing the SOS[316], joint reconstruction of optical absorption and SOS[317], and ultrasound-guided adaptive reconstruction[296]. However, these methods can only mitigate the acoustic heterogeneity problem to a certain extent and have limited improvements in reconstruction accuracy, especially in complex imaging scenarios such as transcranial imaging. An alternative method is full-waveform-based algorithms, such as TR[170] and IR[41], which are based on solutions to exact photoacoustic wave equations. These algorithms can consider the acoustic properties of the media such as SOS, absorption, dispersion, and density, and achieve accurate image reconstruction in acoustically heterogeneous media. However, these algorithms are not widely used thus far, especially in practical applications. 
The main reason is that they require prior information on media such as SOS maps, which can be measured using other imaging techniques such as ultrasound CT[318–320] but at the expense of introducing additional imaging facilities and computing resources. Therefore, how to better tackle the acoustic heterogeneity problem and achieve high-quality image reconstruction deserves further investigation. In addition to image reconstruction quality, image reconstruction speed is also important for an algorithm. Currently, many algorithms such as DAS, FBP, SE, and non-iterative DL algorithms can achieve real-time reconstruction when accelerated with GPUs[33,104,152,279,311], which, however, is a great challenge for IR-based algorithms. The IR algorithms in PACT need to compute a large-scale system matrix and perform image reconstruction iteratively. Therefore, they require huge amounts of computational resources, which makes real-time reconstruction difficult. This is especially true for 3D IR reconstruction, in which the computational time is typically on the order of hours[42]. DL-based IR reconstruction has been reported to help reduce the number of iterations[45] and improve computational efficiency[57]. Nevertheless, developing new strategies to improve the computational speed of IR algorithms and reduce the memory footprint still has important practical significance. DL-based image reconstruction has superior performance over conventional image reconstruction algorithms and is an emerging technique in PACT. It has been used to solve a variety of non-ideal image reconstruction problems in PACT, such as detector bandwidth expansion and IR acceleration. Nonetheless, DL-based image reconstruction faces multiple challenges. First, the performance of DL methods heavily relies on the quality and quantity of training data. The training data are usually required to be paired, and the acquisition of large amounts of paired data is difficult in practice. 
Training neural networks using simulated datasets and/or small-scale experimental datasets may limit the performance of the networks in terms of reconstruction accuracy, robustness, and generalizability in practical applications. Second, reference images such as full-bandwidth photoacoustic images for network training in DL methods are often difficult to obtain. In these situations, networks can only be trained using simulated datasets. Since experimental conditions are typically much more complex than simulations, training a network using only simulated datasets leads to degraded reconstruction performance and poor robustness. Third, there are currently few publicly available datasets for the comparison of the performances of different DL methods. Using private datasets to compare different DL methods may lead to biased conclusions. Finally, DL-based image reconstruction methods in PACT lack interpretability, which is also a long-standing problem in the field of DL. In response to the above issues, it is necessary to research the following aspects: (1) building large-scale publicly available datasets for the construction of networks with good generalization properties and performance comparisons of different networks; (2) developing unsupervised DL-based reconstruction methods that do not need paired data for training; (3) developing more powerful simulation platforms to generate reference data closely resembling real experimental data[321]; and (4) developing physics-informed networks to improve the interpretability of DL methods. One important goal of image reconstruction from photoacoustic projections is disease diagnosis. The current diagnostic route from photoacoustic signals to photoacoustic images to knowledge is straightforward but can potentially lead to information loss and distortion during image reconstruction, which can decrease the accuracy of disease diagnosis. 
Artificial intelligence (AI) can mine knowledge from vast amounts of data and offer opportunities for disease diagnosis directly from raw data[322]. The use of raw-data-based diagnostic technology instead of image-based diagnostic technology may facilitate the development of fully automated scanning and diagnostic procedures and become an important direction in the future. This review mainly discusses advanced image reconstruction algorithms in PAT for high-performance imaging. In addition to advanced algorithms, emerging technologies such as metamaterials and nanomaterials can be leveraged to enhance the imaging performance of PAT. For example, metamaterials can be appropriately designed to influence the propagation of electromagnetic waves or acoustic waves in a manner not observed in bulk materials. By tailoring the amplitude, phase, or polarization of light, optical metamaterials can emulate conventional optical lenses, waveplates, or holograms to achieve multidimensional light-field modulation in PAT[323–329]. Metamaterials can also be designed for photoacoustic signal sensing to address the sensitivity and bandwidth problem of conventional piezoelectric sensors[330–333] and provide a sensitive method for ultrasound detection in PAT. Nanomaterials that often exhibit unique optical, electronic, or physicochemical properties are another emerging technology that can be used to enhance the imaging performance of PAT. Nanoparticles are a common type of nanomaterial used as molecular contrast agents in PAT[334] and can be generally divided into two categories: inorganic nanoparticles and organic nanoparticles. Inorganic nanoparticles, such as noble metal nanoparticles, iron oxide nanoparticles, semiconductor nanoparticles, magnetic nanoparticles, and carbon nanoparticles, possess versatile properties and have been investigated for various applications in biomedical imaging and therapeutics[335,336]. 
Among them, noble metal nanoparticles are the most commonly used contrast agents in PAT due to their high optical absorption cross sections and biocompatibility[337]. Gold nanoparticles, such as gold nanospheres, nanorods, nanoshells, and nanostars, are particularly popular because of their high extinction coefficients in the near-infrared range and their high photoacoustic conversion efficiency[334,338,339]. In addition to inorganic nanoparticles, organic nanoparticles such as polymer nanoparticles and encapsulations are also increasingly being used in PAT[334,340,341]. Polymer nanoparticles usually have a molar extinction coefficient lower than that of gold nanoparticles but higher than that of small-molecule dyes, with an absorption peak in the near-infrared range. They typically possess high structural and functional flexibility, high biodegradability and biocompatibility, and high translational potential. Furthermore, polymer nanoparticles usually have stable surfaces that can be modified with specific targeting and therapeutic moieties for molecular imaging and targeted therapy, which enables the imaging of biological events and functionalities at multiple scales.

7. Conclusions

In this work, we systematically review the image formation problem in PACT over the past three decades, including the forward problem, conventional reconstruction methods, DL-based reconstruction methods, and comparisons of the different reconstruction methods. The photoacoustic forward problem involves multiple physical processes, including photoacoustic signal generation, signal propagation, and signal detection. The generation of photoacoustic signals is based on the thermoelastic effect and requires the conditions of thermal confinement and stress confinement to be satisfied. The propagation of photoacoustic signals is governed by the photoacoustic wave equation, which can be solved analytically by the method of Green's functions and simulated numerically with the k-Wave toolbox.
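As a concrete instance of such an analytical solution, the textbook N-shaped pressure signal radiated by a uniformly heated sphere in a homogeneous lossless medium can be evaluated directly; the sphere size, detector distance, and sampling below are illustrative choices.

```python
import numpy as np

def sphere_signal(t, r, a, p0=1.0, c=1500.0):
    """Textbook N-shaped pressure radiated by a uniform sphere
    (radius a, initial pressure p0) and observed by an ideal point
    detector at distance r > a in a homogeneous lossless medium:
    p = p0 * (r - c*t) / (2*r) while |r - c*t| < a, and 0 otherwise."""
    t = np.asarray(t, dtype=float)
    p = p0 * (r - c * t) / (2.0 * r)
    p[np.abs(r - c * t) >= a] = 0.0
    return p

# A 2-mm-radius sphere observed from 30 mm away
t = np.linspace(0, 40e-6, 4000)
p = sphere_signal(t, r=0.03, a=0.002)
```

The bipolar N shape (compression followed by rarefaction, with duration 2a/c) is the canonical building block that the reconstruction algorithms reviewed here attempt to invert.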
The detection of photoacoustic signals typically involves the use of ultrasound detector arrays, whose properties, such as detector bandwidth, aperture size, element number, and view angle, affect the detected photoacoustic signals and the final images. The photoacoustic forward problem can be described by the spherical Radon transform, and the task of image reconstruction in PACT can be regarded as inverting this transform. Conventional reconstruction methods in PACT typically achieve image formation via analytical derivations or numerical computations and mainly include five classes of algorithms, i.e., DAS, FBP, SE, TR, and IR. DAS-type algorithms reconstruct an image by summing the delayed photoacoustic signals of each detector and have many variants, such as native DAS, DMAS, SLSC, and MV. They are simple in principle and fast but lack mathematical rigor. FBP-type algorithms reconstruct an image by back-projecting the filtered photoacoustic signals of each detector into the image domain and can achieve accurate image reconstruction under ideal conditions. They are rigorously deduced from the photoacoustic wave equation and can be regarded as advanced versions of DAS. SE-type algorithms reconstruct an image by representing it as a mathematical series. They are computationally efficient and extremely fast for special detection geometries such as the planar geometry. TR-type algorithms reconstruct an image by running a forward numerical acoustic propagation model backward and can be used for any closed detection geometry. They are well suited for image reconstruction in heterogeneous media, since the acoustic propagation model can incorporate the acoustic properties of the media, such as SOS, density, dispersion, and absorption. IR-type algorithms reconstruct an image by minimizing the error between measured projection data and estimated projection data.
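A minimal least-squares IR loop can be sketched with the classic Landweber (projected gradient-descent) iteration. The random matrix below merely stands in for a physics-based discretized forward model, and the step-size rule and non-negativity projection are common, but not universal, choices.

```python
import numpy as np

def landweber(A, y, n_iter=500, step=None, nonneg=True):
    """Minimize ||A x - y||^2 by gradient descent (Landweber iteration).
    A : (n_measurements, n_pixels) discretized forward model, y = A x.
    nonneg : optionally project onto x >= 0, a common physical
             constraint on the initial pressure."""
    if step is None:
        # stable step size: below 2 / (largest eigenvalue of A^T A)
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x -= step * (A.T @ (A @ x - y))   # gradient of 0.5*||Ax - y||^2
        if nonneg:
            np.maximum(x, 0.0, out=x)
    return x

# Toy check: recover a sparse nonnegative "image" from noiseless data
rng = np.random.default_rng(0)
A = rng.standard_normal((400, 100))
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, 2.0, 0.5]
x_rec = landweber(A, A @ x_true)
```

Practical IR algorithms replace the toy matrix with the (often matrix-free) photoacoustic forward operator and augment the objective with regularization such as total variation.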
They can incorporate various physical factors into reconstruction models, suppress image artifacts caused by incomplete projections, and are well-suited for non-ideal imaging scenarios. DL achieves tomographic image reconstruction in PACT by learning the photoacoustic imaging model from big data using designed networks and thus is network-based, data-driven, and learning-oriented. DL techniques can be applied to photoacoustic image reconstruction from multiple aspects, including signal preprocessing in the data domain, image postprocessing in the image domain, hybrid-domain processing, direct signal-to-image reconstruction, and IR reconstruction learning. DL-based signal preprocessing employs a network to enhance raw photoacoustic projection data in the data domain and delivers the enhanced projection data to conventional reconstruction algorithms as input. DL-based image postprocessing employs a network to enhance photoacoustic images output from conventional reconstruction algorithms for artifacts and noise suppression. DL-based hybrid-domain processing employs a network to make full use of the information of raw projection data and reconstructed images and can yield images with enhanced quality. In contrast, DL-based direct reconstruction employs a network to form photoacoustic images directly from raw projection data and does not involve any conventional image reconstruction algorithms. Training such a direct transformation network, however, requires large-scale datasets. DL-based IR reconstruction employs a network to learn the reconstruction process of conventional IR algorithms and can also achieve direct signal-to-image reconstruction. This method typically has better robustness and generalizability but consumes more time and memory than other DL methods. The image reconstruction algorithms discussed in this review have distinct characteristics. 
In terms of image reconstruction quality, most image reconstruction algorithms work well under ideal imaging conditions and can yield high-quality images. However, they behave differently under non-ideal imaging scenarios, such as sparse-view imaging, limited-view imaging, imaging with bandwidth-limited and/or finite-aperture detectors, and imaging in the presence of acoustic heterogeneity. Generally, the DAS, FBP, and SE algorithms have relatively poor performance in these scenarios. The TR and IR algorithms can yield improved results because they can incorporate the physical models of an imaging system, such as the SOS of media, the EIR, and the SIR of a detector. The DL algorithms can produce surprisingly good results that are unattainable for conventional algorithms due to their network-based data-driven nature. In terms of image reconstruction speed and memory footprint, DAS, FBP, SE, TR, and non-iterative DL algorithms are generally faster than IR and iterative DL methods. TR, IR, and iterative DL algorithms generally require more memory than DAS, FBP, and SE methods. This review is expected to help general readers better understand the image formation problem in PACT, provide a self-contained reference guide for beginners as well as specialists, and facilitate further developments and applications of novel image reconstruction algorithms. AcknowledgmentsThe authors would like to thank Prof. Cheng Ma, Handi Deng, Hongzhi Zuo, and Chuhua Wu for helpful discussions, and Heren Li for proofreading the manuscript. This work was supported by the National Natural Science Foundation of China (Nos. 62122072, 12174368, 61705216, 61905112, 82171989, 62235013, 62325112, U22A2023, 81771880, and 62405306), the National Key Research and Development Program of China (No. 2022YFA1404400), the Anhui Provincial Department of Science and Technology (Nos. 202203a07020020 and 18030801138), the Anhui Provincial Department of Education (No. 
2022jyxm1836), the Anhui Provincial Natural Science Foundation (No. 2308085QA21), the Shanghai Clinical Research and Trial Center (No. 2022A0305-418-02), the National High Level Hospital Clinical Research Funding (No. 2022-PUMCH-C-009), the University of Science and Technology of China (Nos. YD2090002015 and 2022xjyxm027), and the Institute of Artificial Intelligence at Hefei Comprehensive National Science Center (No. 23YGXT005).

References

A. G. Bell,
“On the production and reproduction of sound by light,”
Am. J. Sci., s3-20 305 https://doi.org/10.2475/ajs.s3-20.118.305 AJSCAP 0002-9599
(1880).
Google Scholar
L. V. Wang,
“Multiscale photoacoustic microscopy and computed tomography,”
Nat. Photonics, 3 503 https://doi.org/10.1038/nphoton.2009.157 NPAHBY 1749-4885
(2009).
Google Scholar
L. V. Wang and S. Hu,
“Photoacoustic tomography: in vivo imaging from organelles to organs,”
Science, 335 1458 https://doi.org/10.1126/science.1216210 SCIEAS 0036-8075
(2012).
Google Scholar
C. Tian et al.,
“Spatial resolution in photoacoustic computed tomography,”
Rep. Prog. Phys., 84 036701 https://doi.org/10.1088/1361-6633/abdab9
(2021).
Google Scholar
C. Li and L. V. Wang,
“Photoacoustic tomography and sensing in biomedicine,”
Phys. Med. Biol., 54 R59 https://doi.org/10.1088/0031-9155/54/19/R01 PHMBA7 0031-9155
(2009).
Google Scholar
S.-L. Chen and C. Tian,
“Recent developments in photoacoustic imaging and sensing for nondestructive testing and evaluation,”
Vis. Comput. Ind. Biomed. Art., 4 1 https://doi.org/10.1186/s42492-020-00067-5
(2021).
Google Scholar
C. Tian et al.,
“Noninvasive chorioretinal imaging in living rabbits using integrated photoacoustic microscopy and optical coherence tomography,”
Opt. Express, 25 15947 https://doi.org/10.1364/OE.25.015947 OPEXFF 1094-4087
(2017).
Google Scholar
S. Li et al.,
“Photoacoustic imaging of peripheral vessels in extremities by large-scale synthetic matrix array,”
J. Biomed. Opt., 29 S11519 https://doi.org/10.1117/1.JBO.29.S1.S11519 JBOPFO 1083-3668
(2024).
Google Scholar
S. Liu et al.,
“Validation of photoacoustic/ultrasound dual imaging in evaluating blood oxygen saturation,”
Biomed. Opt. Express, 13 5551 https://doi.org/10.1364/BOE.469747 BOEICL 2156-7085
(2022).
Google Scholar
M. Yang et al.,
“Synovial oxygenation at photoacoustic imaging to assess rheumatoid arthritis disease activity,”
Radiology, 306 220 https://doi.org/10.1148/radiol.212257 RADLAX 0033-8419
(2023).
Google Scholar
S. Liu et al.,
“On the imaging depth limit of photoacoustic tomography in the visible and first near-infrared windows,”
Opt. Express, 32 5460 https://doi.org/10.1364/OE.513538 OPEXFF 1094-4087
(2024).
Google Scholar
S. Liu et al.,
“In vivo photoacoustic sentinel lymph node imaging using clinically-approved carbon nanoparticles,”
IEEE Trans. Biomed. Eng., 67 2033 https://doi.org/10.1109/TBME.2019.2953743 IEBEAX 0018-9294
(2019).
Google Scholar
W. Pang et al.,
“Direct monitoring of whole-brain electrodynamics via high-spatiotemporal-resolution photoacoustics with voltage-sensitive dye,”
Laser Photonics Rev., 2400165 https://doi.org/10.1002/lpor.202400165
(2024).
Google Scholar
T. Bowen,
“Radiation-induced thermoacoustic soft tissue imaging,”
in Ultrasonics Symposium,
817
(1981). Google Scholar
T. Bowen et al.,
“Some experimental results on the thermoacoustic imaging of tissue equivalent phantom materials,”
in Ultrasonics Symposium,
823
(1981). Google Scholar
A. A. Oraevsky, S. L. Jacques, and F. K. Tittel,
“Determination of tissue optical properties by piezoelectric detection of laser-induced stress waves,”
in Laser-Tissue Interaction IV,
86
(1993). Google Scholar
A. A. Oraevsky et al.,
“Laser-based optoacoustic imaging in biological tissues,”
in Laser-Tissue Interaction V and Ultraviolet Radiation Hazards,
122
(1994). Google Scholar
A. A. Oraevsky et al.,
“Lateral and z-axial resolution in laser optoacoustic imaging with ultrasonic transducers,”
in Optical Tomography, Photon Migration, and Spectroscopy of Tissue and Model Media: Theory, Human Studies, and Instrumentation,
198
(1995). Google Scholar
R. A. Kruger,
“Photoacoustic ultrasound,”
Med. Phys., 21 127 https://doi.org/10.1118/1.597367 MPHYA6 0094-2405
(1994).
Google Scholar
R. A. Kruger and P. Liu,
“Photoacoustic ultrasound: pulse production and detection in 0.5% liposyn,”
Med. Phys., 21 1179 https://doi.org/10.1118/1.597399 MPHYA6 0094-2405
(1994).
Google Scholar
R. A. Kruger et al.,
“Photoacoustic ultrasound (PAUS) reconstruction tomography,”
Med. Phys., 22 1605 https://doi.org/10.1118/1.597429 MPHYA6 0094-2405
(1995).
Google Scholar
D. Finch, S. K. Patch, and Rakesh,
“Determining a function from its mean values over a family of spheres,”
SIAM J. Math. Anal., 35 1213 https://doi.org/10.1137/S0036141002417814 SJMAAH 0036-1410
(2004).
Google Scholar
M. Xu and L. V. Wang,
“Universal back-projection algorithm for photoacoustic computed tomography,”
Phys. Rev. E, 71 016706 https://doi.org/10.1103/PhysRevE.71.016706
(2005).
Google Scholar
C. G. A. Hoelen et al.,
“Three-dimensional photoacoustic imaging of blood vessels in tissue,”
Opt. Lett., 23 648 https://doi.org/10.1364/OL.23.000648 OPLEDP 0146-9592
(1998).
Google Scholar
C. G. A. Hoelen et al.,
“Photoacoustic blood cell detection and imaging of blood vessels in phantom tissue,”
in Optical and Imaging Techniques for Biomonitoring III,
142
(1998). Google Scholar
M. Mozaffarzadeh et al.,
“Double-stage delay multiply and sum beamforming algorithm: application to linear-array photoacoustic imaging,”
IEEE Trans. Biomed. Eng., 65 31 https://doi.org/10.1109/TBME.2017.2690959 IEBEAX 0018-9294
(2018).
Google Scholar
M. A. Lediju Bell et al.,
“Short-lag spatial coherence beamforming of photoacoustic images for enhanced visualization of prostate brachytherapy seeds,”
Biomed. Opt. Express, 4 1964 https://doi.org/10.1364/BOE.4.001964 BOEICL 2156-7085
(2013).
Google Scholar
C.-K. Liao, M.-L. Li, and P.-C. Li,
“Optoacoustic imaging with synthetic aperture focusing and coherence weighting,”
Opt. Lett., 29 2506 https://doi.org/10.1364/OL.29.002506 OPLEDP 0146-9592
(2004).
Google Scholar
S. Paul, A. Thomas, and M. S. Singh,
“Delay-and-sum-to-delay-standard-deviation factor: a promising adaptive beamformer,”
Opt. Lett., 46 4662 https://doi.org/10.1364/OL.437394 OPLEDP 0146-9592
(2021).
Google Scholar
K. P. Köstli et al.,
“Temporal backward projection of optoacoustic pressure transients using Fourier transform methods,”
Phys. Med. Biol., 46 1863 https://doi.org/10.1088/0031-9155/46/7/309 PHMBA7 0031-9155
(2001).
Google Scholar
L. A. Kunyansky,
“A series solution and a fast algorithm for the inversion of the spherical mean Radon transform,”
Inverse Probl., 23 S11 https://doi.org/10.1088/0266-5611/23/6/S02 INPEEY 0266-5611
(2007).
Google Scholar
L. Kunyansky,
“Fast reconstruction algorithms for the thermoacoustic tomography in certain domains with cylindrical or spherical symmetries,”
Inverse Prob. Imaging, 6 111 https://doi.org/10.3934/ipi.2012.6.111
(2012).
Google Scholar
Y. Xu and L. V. Wang,
“Time reversal and its application to tomography with diffracting sources,”
Phys. Rev. Lett., 92 033902 https://doi.org/10.1103/PhysRevLett.92.033902 PRLTAO 0031-9007
(2004).
Google Scholar
P. Burgholzer et al.,
“Exact and approximative imaging methods for photoacoustic tomography using an arbitrary detection surface,”
Phys. Rev. E, 75 046706 https://doi.org/10.1103/PhysRevE.75.046706
(2007).
Google Scholar
G. Paltauf et al.,
“Iterative reconstruction algorithm for optoacoustic imaging,”
J. Acoust. Soc. Am., 112 1536 https://doi.org/10.1121/1.1501898 JASMAN 0001-4966
(2002).
Google Scholar
A. Rosenthal, D. Razansky, and V. Ntziachristos,
“Fast semi-analytical model-based acoustic inversion for quantitative optoacoustic tomography,”
IEEE Trans. Med. Imaging, 29 1275 https://doi.org/10.1109/TMI.2010.2044584 ITMID4 0278-0062
(2010).
Google Scholar
K. Wang et al.,
“Discrete imaging models for three-dimensional optoacoustic tomography using radially symmetric expansion functions,”
IEEE Trans. Med. Imaging, 33 1180 https://doi.org/10.1109/TMI.2014.2308478 ITMID4 0278-0062
(2014).
Google Scholar
K. Wang et al.,
“An imaging model incorporating ultrasonic transducer properties for three-dimensional optoacoustic tomography,”
IEEE Trans. Med. Imaging, 30 203 https://doi.org/10.1109/TMI.2010.2072514 ITMID4 0278-0062
(2011).
Google Scholar
L. Ding, X. L. Deán-Ben, and D. Razansky,
“Efficient 3-D model-based reconstruction scheme for arbitrary optoacoustic acquisition geometries,”
IEEE Trans. Med. Imaging, 36 1858 https://doi.org/10.1109/TMI.2017.2704019 ITMID4 0278-0062
(2017).
Google Scholar
C. Huang et al.,
“Full-wave iterative image reconstruction in photoacoustic tomography with acoustically inhomogeneous media,”
IEEE Trans. Med. Imaging, 32 1097 https://doi.org/10.1109/TMI.2013.2254496 ITMID4 0278-0062
(2013).
Google Scholar
K. Wang et al.,
“Accelerating image reconstruction in three-dimensional optoacoustic tomography on graphics processing units,”
Med. Phys., 40 023301 https://doi.org/10.1118/1.4774361 MPHYA6 0094-2405
(2013).
Google Scholar
L. Ding, X. L. Deán-Ben, and D. Razansky,
“Real-time model-based inversion in cross-sectional optoacoustic tomography,”
IEEE Trans. Med. Imaging, 35 1883 https://doi.org/10.1109/TMI.2016.2536779 ITMID4 0278-0062
(2016).
Google Scholar
S. Antholzer, M. Haltmeier, and J. Schwab,
“Deep learning for photoacoustic tomography from sparse data,”
Inverse Probl. Sci. Eng., 27 987 https://doi.org/10.1080/17415977.2018.1518444
(2019).
Google Scholar
A. Hauptmann et al.,
“Model-based learning for accelerated, limited-view 3-D photoacoustic tomography,”
IEEE Trans. Med. Imaging, 37 1382 https://doi.org/10.1109/TMI.2018.2820382 ITMID4 0278-0062
(2018).
Google Scholar
N. Davoudi, X. L. Deán-Ben, and D. Razansky,
“Deep learning optoacoustic tomography with sparse data,”
Nat. Mach. Intell., 1 453 https://doi.org/10.1038/s42256-019-0095-3
(2019).
Google Scholar
S. Choi et al.,
“Deep learning enhances multiparametric dynamic volumetric photoacoustic computed tomography in vivo (DL-PACT),”
Adv. Sci., 10 e2202089 https://doi.org/10.1002/advs.202202089
(2022).
Google Scholar
A. Hauptmann and B. Cox,
“Deep learning in photoacoustic tomography: current approaches and future directions,”
J. Biomed. Opt., 25 112903 https://doi.org/10.1117/1.JBO.25.11.112903 JBOPFO 1083-3668
(2020).
Google Scholar
H. Deng et al.,
“Deep learning in photoacoustic imaging: a review,”
J. Biomed. Opt., 26 040901 https://doi.org/10.1117/1.JBO.26.4.040901 JBOPFO 1083-3668
(2021).
Google Scholar
P. Rajendran, A. Sharma, and M. Pramanik,
“Photoacoustic imaging aided with deep learning: a review,”
Biomed. Eng. Lett., 12 155 https://doi.org/10.1007/s13534-021-00210-y
(2022).
Google Scholar
S. Gutta et al.,
“Deep neural network-based bandwidth enhancement of photoacoustic data,”
J. Biomed. Opt., 22 1 https://doi.org/10.1117/1.JBO.22.11.116001 JBOPFO 1083-3668
(2017).
Google Scholar
N. Awasthi et al.,
“Deep neural network-based sinogram super-resolution and bandwidth enhancement for limited-data photoacoustic tomography,”
IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 67 2660 https://doi.org/10.1109/TUFFC.2020.2977210 ITUCER 0885-3010
(2020).
Google Scholar
P. Rajendran and M. Pramanik,
“Deep learning approach to improve tangential resolution in photoacoustic tomography,”
Biomed. Opt. Express, 11 7311 https://doi.org/10.1364/BOE.410145 BOEICL 2156-7085
(2020).
Google Scholar
H. Zhang et al.,
“Deep-E: a fully-dense neural network for improving the elevation resolution in linear-array-based photoacoustic tomography,”
IEEE Trans. Med. Imaging, 41 1279 https://doi.org/10.1109/TMI.2021.3137060 ITMID4 0278-0062
(2021).
Google Scholar
C. Dehner et al.,
“Deep-learning-based electrical noise removal enables high spectral optoacoustic contrast in deep tissue,”
IEEE Trans. Med. Imaging, 41 3182 https://doi.org/10.1109/TMI.2022.3180115 ITMID4 0278-0062
(2022).
Google Scholar
H. Zhao et al.,
“Deep learning enables superior photoacoustic imaging at ultralow laser dosages,”
Adv. Sci., 8 2003097 https://doi.org/10.1002/advs.202003097
(2021).
Google Scholar
K. T. Hsu, S. Guan, and P. V. Chitnis,
“Fast iterative reconstruction for photoacoustic tomography using learned physical model: Theoretical validation,”
Photoacoustics, 29 100452 https://doi.org/10.1016/j.pacs.2023.100452
(2023).
Google Scholar
T. Lu et al.,
“LV-GAN: a deep learning approach for limited-view optoacoustic imaging based on hybrid datasets,”
J. Biophotonics, 14 e202000325 https://doi.org/10.1002/jbio.202000325
(2021).
Google Scholar
X. Zhang et al.,
“Sparse-sampling photoacoustic computed tomography: deep learning vs. compressed sensing,”
Biomed. Signal Process. Control, 71 103233 https://doi.org/10.1016/j.bspc.2021.103233
(2022).
Google Scholar
P. Kuchment and L. Kunyansky,
“Mathematics of thermoacoustic tomography,”
Eur. J. Appl. Math., 19 191 https://doi.org/10.1017/S0956792508007353 0956-7925
(2008).
Google Scholar
A. Rosenthal, V. Ntziachristos, and D. Razansky,
“Acoustic inversion in optoacoustic tomography: a review,”
Curr. Med. Imaging Rev., 9 318 https://doi.org/10.2174/15734056113096660006
(2013).
Google Scholar
C. Lutzweiler and D. Razansky,
“Optoacoustic imaging and tomography: reconstruction approaches and outstanding challenges in image performance and quantification,”
Sensors, 13 7345 https://doi.org/10.3390/s130607345 SNSRES 0746-9462
(2013).
Google Scholar
J. Poudel, Y. Lou, and M. A. Anastasio,
“A survey of computational frameworks for solving the acoustic inverse problem in three-dimensional photoacoustic computed tomography,”
Phys. Med. Biol., 64 14TR01 https://doi.org/10.1088/1361-6560/ab2017 PHMBA7 0031-9155
(2019).
Google Scholar
X. L. Deán-Ben and D. Razansky,
“A practical guide for model-based reconstruction in optoacoustic imaging,”
Front. Phys., 10 1057 https://doi.org/10.3389/fphy.2022.1028258
(2022).
Google Scholar
C. Yang et al.,
“Review of deep learning for photoacoustic imaging,”
Photoacoustics, 21 100215 https://doi.org/10.1016/j.pacs.2020.100215
(2021).
Google Scholar
H. B. Yedder, B. Cardoen, and G. Hamarneh,
“Deep learning for biomedical image reconstruction: a survey,”
Artif. Intell. Rev., 54 215 https://doi.org/10.1007/s10462-020-09861-2 AIREV6
(2021).
Google Scholar
A. DiSpirito III et al.,
“Sounding out the hidden data: a concise review of deep learning in photoacoustic imaging,”
Exp. Biol. Med., 246 1355 https://doi.org/10.1177/15353702211000310 EXBMAA 0071-3384
(2021).
Google Scholar
J. Gröhl et al.,
“Deep learning for biomedical photoacoustic imaging: a review,”
Photoacoustics, 22 100241 https://doi.org/10.1016/j.pacs.2021.100241
(2021).
Google Scholar
B. Cox et al.,
“Quantitative spectroscopic photoacoustic imaging: a review,”
J. Biomed. Opt., 17 061202 https://doi.org/10.1117/1.JBO.17.6.061202 JBOPFO 1083-3668
(2012).
Google Scholar
X. Tang, J. Fu, and H. Qin,
“Microwave-induced thermoacoustic imaging with functional nanoparticles,”
J. Innov. Opt. Health Sci., 16 2230014 https://doi.org/10.1142/S1793545822300142
(2023).
Google Scholar
Q. Liu et al.,
“Biomedical microwave-induced thermoacoustic imaging,”
J. Innov. Opt. Health Sci., 15 2230007 https://doi.org/10.1142/S1793545822300075
(2022).
Google Scholar
Z. Liang et al.,
“Study on response of metal wire in thermoacoustic imaging,”
J. Innov. Opt. Health Sci., 15 2250015 https://doi.org/10.1142/S1793545822500158
(2022).
Google Scholar
X. Liang et al.,
“Investigation of artifacts by mapping SAR in thermoacoustic imaging,”
J. Innov. Opt. Health Sci., 14 2150011 https://doi.org/10.1142/S1793545821500115
(2021).
Google Scholar
L. V. Wang and H. Wu, Biomedical Optics: Principles and Imaging, John Wiley & Sons,
(2007). Google Scholar
B. T. Cox and P. C. Beard,
“Fast calculation of pulsed photoacoustic fields in fluids using k-space methods,”
J. Acoust. Soc. Am., 117 3616 https://doi.org/10.1121/1.1920227 JASMAN 0001-4966
(2005).
Google Scholar
American National Standards Institute, American National Standard for Safe Use of Lasers ANSI Z136.1-2007, Laser Institute of America,
(2007). Google Scholar
A. C. Tam,
“Applications of photoacoustic sensing techniques,”
Rev. Mod. Phys., 58 381 https://doi.org/10.1103/RevModPhys.58.381 RMPHAT 0034-6861
(1986).
Google Scholar
B. Cox and P. C. Beard,
“Modeling photoacoustic propagation in tissue using k-space techniques,”
Photoacoustic Imaging and Spectroscopy, 25 CRC Press,
(2017). Google Scholar
H. Jiang,
“Fundamentals of photoacoustic tomography,”
Photoacoustic Tomography, 1 CRC Press,
(2015). Google Scholar
L. V. Wang,
“Tutorial on photoacoustic microscopy and computed tomography,”
IEEE J. Sel. Top. Quantum Electron., 14 171 https://doi.org/10.1109/JSTQE.2007.913398 IJSQEN 1077-260X
(2008).
Google Scholar
C. Guo and C. S. Singh, Handbook of Laser Technology and Applications: Laser Applications: Medical, Metrology and Communication (volume four), CRC Press,
(2021). Google Scholar
C. Tian et al.,
“Impact of system factors on the performance of photoacoustic tomography scanners,”
Phys. Rev. Appl., 13 014001 https://doi.org/10.1103/PhysRevApplied.13.014001 PRAHB2 2331-7019
(2020).
Google Scholar
B. E. Treeby and B. T. Cox,
“k-Wave: MATLAB toolbox for the simulation and reconstruction of photoacoustic wave fields,”
J. Biomed. Opt., 15 021314 https://doi.org/10.1117/1.3360308 JBOPFO 1083-3668
(2010).
Google Scholar
B. E. Treeby et al.,
“Modeling nonlinear ultrasound propagation in heterogeneous media with power law absorption using a k-space pseudospectral method,”
J. Acoust. Soc. Am., 131 4324 https://doi.org/10.1121/1.4712021 JASMAN 0001-4966
(2012).
Google Scholar
N. N. Bojarski,
“The k-space formulation of the scattering problem in the time domain,”
J. Acoust. Soc. Am., 72 570 https://doi.org/10.1121/1.388038 JASMAN 0001-4966
(1982).
Google Scholar
M. Tabei, T. D. Mast, and R. C. Waag,
“A k-space method for coupled first-order acoustic propagation equations,”
J. Acoust. Soc. Am., 111 53 https://doi.org/10.1121/1.1421344 JASMAN 0001-4966
(2002).
Google Scholar
C. Zhang et al.,
“Point spread function modeling for photoacoustic tomography–I: Three-dimensional detection geometries,”
Opt. Express, 32 1063 https://doi.org/10.1364/OE.499039 OPEXFF 1094-4087
(2024).
Google Scholar
C. Zhang, C. Chen, and C. Tian,
“Point spread function modeling for photoacoustic tomography–II: Two-dimensional detection geometries,”
Opt. Express, 32 1088 https://doi.org/10.1364/OE.499044 OPEXFF 1094-4087
(2024).
Google Scholar
S. R. Deans, The Radon Transform and Some of its Applications, Courier Corporation,
(2007). Google Scholar
A. C. Kak and M. Slaney,
“Algorithms for reconstruction with nondiffracting sources,”
Principles of Computerized Tomographic Imaging, 49 IEEE,
(2001). Google Scholar
N. J. Redding and G. N. Newsam,
“Radon transform and its inverse,”
Inverting the Circular Radon Transform, 2 DSTO Electronics and Surveillance Research Laboratory,
(2002). Google Scholar
N. J. Redding and T. M. Payne,
“Inverting the spherical Radon transform for 3D SAR image formation,”
in Proceedings of the International Conference on Radar (IEEE Cat. No. 03EX695),
466
(2003). Google Scholar
K. E. Thomenius,
“Evolution of ultrasound beamformers,”
in Ultrasonics Symposium,
1615
(1996). Google Scholar
J. C. Somer,
“Electronic sector scanning for ultrasonic diagnosis,”
Ultrasonics, 6 153 https://doi.org/10.1016/0041-624X(68)90277-1 ULTRA3 0041-624X
(1968).
Google Scholar
R. E. McKeighen and M. P. Buchin,
“New techniques for dynamically variable electronic delays for real time ultrasonic imaging,”
in Ultrasonics Symposium,
250
(1977). Google Scholar
V. Perrot et al.,
“So you think you can DAS? a viewpoint on delay-and-sum beamforming,”
Ultrasonics, 111 106309 https://doi.org/10.1016/j.ultras.2020.106309 ULTRA3 0041-624X
(2021).
Google Scholar
C. G. A. Hoelen and F. F. de Mul,
“Image reconstruction for photoacoustic scanning of tissue structures,”
Appl. Opt., 39 5872 https://doi.org/10.1364/AO.39.005872 APOPAI 0003-6935
(2000).
Google Scholar
D. Feng et al.,
“Microwave-induced thermoacoustic tomography: reconstruction by synthetic aperture,”
Med. Phys., 28 2427 https://doi.org/10.1118/1.1418015 MPHYA6 0094-2405
(2001).
Google Scholar
H. B. Lim et al.,
“Confocal microwave imaging for breast cancer detection: delay-multiply-and-sum image reconstruction algorithm,”
IEEE Trans. Biomed. Eng., 55 1697 https://doi.org/10.1109/TBME.2008.919716
(2008).
Google Scholar
G. Matrone et al.,
“The delay multiply and sum beamforming algorithm in ultrasound B-mode medical imaging,”
IEEE Trans. Med. Imaging, 34 940 https://doi.org/10.1109/TMI.2014.2371235 ITMID4 0278-0062
(2015).
Google Scholar
A. Alshaya et al.,
“Spatial resolution and contrast enhancement in photoacoustic imaging with filter delay multiply and sum beamforming technique,”
in IEEE International Ultrasonics Symposium (IUS),
1
(2016). Google Scholar
T. Kirchner et al.,
“Signed real-time delay multiply and sum beamforming for multispectral photoacoustic imaging,”
J. Imaging, 4 121 https://doi.org/10.3390/jimaging4100121
(2018).
Google Scholar
S. Mulani, S. Paul, and M. S. Singh,
“Higher-order correlation based real-time beamforming in photoacoustic imaging,”
J. Opt. Soc. Am. A, 39 1805 https://doi.org/10.1364/JOSAA.461323 JOAOD6 0740-3232
(2022).
Google Scholar
S. Jeon et al.,
“Real-time delay-multiply-and-sum beamforming with coherence factor for in vivo clinical photoacoustic imaging of humans,”
Photoacoustics, 15 100136 https://doi.org/10.1016/j.pacs.2019.100136
(2019).
Google Scholar
S. Paul et al.,
“Simplified delay multiply and sum based promising beamformer for real-time photoacoustic imaging,”
IEEE Trans. Instrum. Meas., 71 1 https://doi.org/10.1109/TIM.2022.3187734 IEIMAO 0018-9456
(2022).
Google Scholar
M. A. Lediju et al.,
“Short-lag spatial coherence of backscattered echoes: Imaging characteristics,”
IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 58 1377 https://doi.org/10.1109/TUFFC.2011.1957 ITUCER 0885-3010
(2011).
Google Scholar
M. A. Lediju Bell et al.,
“In vivo visualization of prostate brachytherapy seeds with photoacoustic imaging,”
J. Biomed. Opt., 19 126011 https://doi.org/10.1117/1.JBO.19.12.126011 JBOPFO 1083-3668
(2014).
Google Scholar
M. T. Graham and M. A. L. Bell,
“Theoretical application of short-lag spatial coherence to photoacoustic imaging,”
in IEEE International Ultrasonics Symposium (IUS),
1
(2017). Google Scholar
M. T. Graham and M. A. L. Bell,
“Photoacoustic spatial coherence theory and applications to coherence-based image contrast and resolution,”
IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 67 2069 https://doi.org/10.1109/TUFFC.2020.2999343 ITUCER 0885-3010
(2020).
Google Scholar
J. Tordera Mora et al.,
“Generalized spatial coherence reconstruction for photoacoustic computed tomography,”
J. Biomed. Opt., 26 046002 https://doi.org/10.1117/1.JBO.26.4.046002 JBOPFO 1083-3668
(2021).
Google Scholar
J. Capon,
“High-resolution frequency-wavenumber spectrum analysis,”
Proc. IEEE, 57 1408 https://doi.org/10.1109/PROC.1969.7278 IEEPAD 0018-9219
(1969).
Google Scholar
J. Mann and W. Walker,
“A constrained adaptive beamformer for medical ultrasound: Initial results,”
in Ultrasonics Symposium,
1807
(2002). Google Scholar
M. Sasso and C. Cohen-Bacrie,
“Medical ultrasound imaging using the fully adaptive beamformer,”
in IEEE International Conference on Acoustics, Speech, and Signal Processing,
ii/489
(2005). Google Scholar
J. F. Synnevag, A. Austeng, and S. Holm,
“Adaptive beamforming applied to medical ultrasound imaging,”
IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 54 1606 https://doi.org/10.1109/TUFFC.2007.431 ITUCER 0885-3010
(2007).
Google Scholar
I. K. Holfort, F. Gran, and J. A. Jensen,
“Broadband minimum variance beamforming for ultrasound imaging,”
IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 56 314 https://doi.org/10.1109/TUFFC.2009.1040 ITUCER 0885-3010
(2009).
Google Scholar
B. M. Asl and A. Mahloojifar,
“Minimum variance beamforming combined with adaptive coherence weighting applied to medical ultrasound imaging,”
IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 56 1923 https://doi.org/10.1109/TUFFC.2009.1268 ITUCER 0885-3010
(2009).
Google Scholar
S. Park et al.,
“Adaptive beamforming for photoacoustic imaging,”
Opt. Lett., 33 1291 https://doi.org/10.1364/OL.33.001291 OPLEDP 0146-9592
(2008).
Google Scholar
M. Mozaffarzadeh et al.,
“Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm,”
J. Biomed. Opt., 23 026002 https://doi.org/10.1117/1.JBO.23.2.026002 JBOPFO 1083-3668
(2018).
Google Scholar
M. Mozaffarzadeh et al.,
“Eigenspace-based minimum variance combined with delay multiply and sum beamformer: application to linear-array photoacoustic imaging,”
IEEE J. Sel. Top. Quantum Electron, 25 1 https://doi.org/10.1109/JSTQE.2018.2856584 IJSQEN 1077-260X
(2018).
Google Scholar
R. Al Mukaddim, R. Ahmed, and T. Varghese,
“Improving minimum variance beamforming with sub-aperture processing for photoacoustic imaging,”
in Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC),
2879
(2021). Google Scholar
O. L. Frost,
“An algorithm for linearly constrained adaptive array processing,”
Proc. IEEE, 60 926 https://doi.org/10.1109/PROC.1972.8817 IEEPAD 0018-9219
(1972).
Google Scholar
R. Mallart and M. Fink,
“Adaptive focusing in scattering media through sound-speed inhomogeneities: the van cittert zernike approach and focusing criterion,”
J. Acoust. Soc. Am., 96 3721 https://doi.org/10.1121/1.410562 JASMAN 0001-4966
(1994).
Google Scholar
K. W. Hollman, K. Rigby, and M. O’Donnell,
“Coherence factor of speckle from a multi-row probe,”
in Ultrasonics Symposium,
1257
(1999). Google Scholar
P.-C. Li and M.-L. Li,
“Adaptive imaging using the generalized coherence factor,”
IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 50 128 https://doi.org/10.1109/TUFFC.2003.1182117 ITUCER 0885-3010
(2003).
Google Scholar
Y.-H. Wang and P.-C. Li,
“SNR-dependent coherence-based adaptive imaging for high-frame-rate ultrasonic and photoacoustic imaging,”
IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 61 1419 https://doi.org/10.1109/TUFFC.2014.3051 ITUCER 0885-3010
(2014).
Google Scholar
D. Wang et al.,
“Coherent-weighted three-dimensional image reconstruction in linear-array-based photoacoustic tomography,”
Biomed. Opt. Express, 7 1957 https://doi.org/10.1364/BOE.7.001957 BOEICL 2156-7085
(2016).
Google Scholar
M. Mozaffarzadeh et al.,
“Image improvement in linear-array photoacoustic imaging using high resolution coherence factor weighting technique,”
BMC Biomed. Eng., 1 10 https://doi.org/10.1186/s42490-019-0009-9
(2019).
Google Scholar
S. Paul, S. Mandal, and M. S. Singh,
“Noise adaptive beamforming for linear array photoacoustic imaging,”
IEEE Trans. Instrum. Meas., 70 1 https://doi.org/10.1109/TIM.2021.3103260 IEIMAO 0018-9456
(2021).
Google Scholar
R. A. Mukaddim and T. Varghese,
“Spatiotemporal coherence weighting for in vivo cardiac photoacoustic image beamformation,”
IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 68 586 https://doi.org/10.1109/TUFFC.2020.3016900 ITUCER 0885-3010
(2021).
Google Scholar
M. Mozaffarzadeh et al.,
“Enhanced linear-array photoacoustic beamforming using modified coherence factor,”
J. Biomed. Opt., 23 026005 https://doi.org/10.1117/1.JBO.23.2.026005 JBOPFO 1083-3668
(2018).
Google Scholar
S. Shamekhi et al.,
“Eigenspace-based minimum variance beamformer combined with sign coherence factor: application to linear-array photoacoustic imaging,”
Ultrasonics, 108 106174 https://doi.org/10.1016/j.ultras.2020.106174 ULTRA3 0041-624X
(2020).
Google Scholar
X. Ma et al.,
“Multiple delay and sum with enveloping beamforming algorithm for photoacoustic imaging,”
IEEE Trans. Med. Imaging, 39 1812 https://doi.org/10.1109/TMI.2019.2958838 ITMID4 0278-0062
(2020).
Google Scholar
Q. Mao et al.,
“Improving photoacoustic imaging in low signal-to-noise ratio by using spatial and polarity coherence,”
Photoacoustics, 28 100427 https://doi.org/10.1016/j.pacs.2022.100427
(2022).
Google Scholar
M. Xu and L. V. Wang,
“Pulsed-microwave-induced thermoacoustic tomography: filtered backprojection in a circular measurement configuration,”
Med. Phys., 29 1661 https://doi.org/10.1118/1.1493778 MPHYA6 0094-2405
(2002).
Google Scholar
M. Xu and L. V. Wang,
“Time-domain reconstruction for thermoacoustic tomography in a spherical geometry,”
IEEE Trans. Med. Imaging, 21 814 https://doi.org/10.1109/TMI.2002.801176 ITMID4 0278-0062
(2002).
Google Scholar
M. Xu, Y. Xu, and L. V. Wang,
“Time-domain reconstruction algorithms and numerical simulations for thermoacoustic tomography in various geometries,”
IEEE Trans. Biomed. Eng., 50 1086 https://doi.org/10.1109/TBME.2003.816081 IEBEAX 0018-9294
(2003).
Google Scholar
K. Shen et al.,
“Negativity artifacts in back-projection based photoacoustic tomography,”
J. Phys. D: Appl. Phys., 54 074001 https://doi.org/10.1088/1361-6463/abc37d JPAPBE 0022-3727
(2020).
Google Scholar
R. Gao et al.,
“Restoring the imaging quality of circular transducer array-based PACT using synthetic aperture focusing technique integrated with 2nd-derivative-based back projection scheme,”
Photoacoustics, 32 100537 https://doi.org/10.1016/j.pacs.2023.100537
(2023).
Google Scholar
L. A. Kunyansky,
“Explicit inversion formulae for the spherical mean Radon transform,”
Inverse Probl., 23 373 https://doi.org/10.1088/0266-5611/23/1/021 INPEEY 0266-5611
(2007).
Google Scholar
D. Finch, M. Haltmeier, and Rakesh,
“Inversion of spherical means and the wave equation in even dimensions,”
SIAM J. Appl. Math., 68 392 https://doi.org/10.1137/070682137 SMJMAP 0036-1399
(2007).
Google Scholar
L. V. Nguyen,
“A family of inversion formulas in thermoacoustic tomography,”
Inverse Prob. Imaging, 3 649 https://doi.org/10.3934/ipi.2009.3.649
(2009).
Google Scholar
P. Burgholzer et al.,
“Temporal back-projection algorithms for photoacoustic tomography with integrating line detectors,”
Inverse Probl., 23 S65 https://doi.org/10.1088/0266-5611/23/6/S06 INPEEY 0266-5611
(2007).
Google Scholar
F. Natterer,
“Photo-acoustic inversion in convex domains,”
Inverse Prob. Imaging, 6 1 https://doi.org/10.3934/ipi.2012.6.1
(2012).
Google Scholar
V. P. Palamodov,
“A uniform reconstruction formula in integral geometry,”
Inverse Probl., 28 065014 https://doi.org/10.1088/0266-5611/28/6/065014 INPEEY 0266-5611
(2012).
Google Scholar
M. Haltmeier,
“Exact reconstruction formula for the spherical mean Radon transform on ellipsoids,”
Inverse Probl., 30 105006 https://doi.org/10.1088/0266-5611/30/10/105006 INPEEY 0266-5611
(2014).
Google Scholar
M. Haltmeier,
“Universal inversion formulas for recovering a function from spherical means,”
SIAM J. Math. Anal., 46 214 https://doi.org/10.1137/120881270 SJMAAH 0036-1410
(2014).
Google Scholar
Y. Salman,
“An inversion formula for the spherical mean transform with data on an ellipsoid in two and three dimensions,”
J. Math. Anal. Appl., 420 612 https://doi.org/10.1016/j.jmaa.2014.05.007 JMANAK 0022-247X
(2014).
Google Scholar
X. L. Deán-Ben, A. Ozbek, and D. Razansky,
“Volumetric real-time tracking of peripheral human vasculature with GPU-accelerated three-dimensional optoacoustic tomography,”
IEEE Trans. Med. Imaging, 32 2050 https://doi.org/10.1109/TMI.2013.2272079 ITMID4 0278-0062
(2013).
Google Scholar
J. Yuan et al.,
“Real-time photoacoustic and ultrasound dual-modality imaging system facilitated with graphics processing unit and code parallel optimization,”
J. Biomed. Opt., 18 86001 https://doi.org/10.1117/1.JBO.18.8.086001 JBOPFO 1083-3668
(2013).
Google Scholar
X. L. Deán-Ben, H. López-Schier, and D. Razansky,
“Optoacoustic micro-tomography at 100 volumes per second,”
Sci. Rep., 7 6850 https://doi.org/10.1038/s41598-017-06554-9 SRCEC3 2045-2322
(2017).
Google Scholar
Y. Zhang and L. Wang,
“Video-rate ring-array ultrasound and photoacoustic tomography,”
IEEE Trans. Med. Imaging, 39 4369 https://doi.org/10.1109/TMI.2020.3017815 ITMID4 0278-0062
(2020).
Google Scholar
Y. Wang and C. Li,
“Comprehensive framework of GPU-accelerated image reconstruction for photoacoustic computed tomography,”
J. Biomed. Opt., 29 066006 https://doi.org/10.1117/1.JBO.29.6.066006 JBOPFO 1083-3668
(2024).
Google Scholar
Y. Zhang et al.,
“Video-rate dual-modal wide-beam harmonic ultrasound and photoacoustic computed tomography,”
IEEE Trans. Med. Imaging, 41 727 https://doi.org/10.1109/TMI.2021.3122240 ITMID4 0278-0062
(2021).
Google Scholar
Z. Gao et al.,
“Implementation and comparison of three image reconstruction algorithms in FPGA towards palm-size photoacoustic tomography,”
IEEE Sens. J., 23 8605 https://doi.org/10.1109/JSEN.2023.3252814 ISJEAZ 1530-437X
(2023).
Google Scholar
M. Xu, G. Ku, and L. V. Wang,
“Microwave-induced thermoacoustic tomography using multi-sector scanning,”
Med. Phys., 28 1958 https://doi.org/10.1118/1.1395037 MPHYA6 0094-2405
(2001).
Google Scholar
S. J. Norton,
“Reconstruction of a two-dimensional reflecting medium over a circular domain: exact solution,”
J. Acoust. Soc. Am., 67 1266 https://doi.org/10.1121/1.384168 JASMAN 0001-4966
(1980).
Google Scholar
S. J. Norton and M. Linzer,
“Ultrasonic reflectivity imaging in three dimensions: exact inverse scattering solutions for plane, cylindrical, and spherical apertures,”
IEEE Trans. Biomed. Eng., BME-28 202 https://doi.org/10.1109/TBME.1981.324791 IEBEAX 0018-9294
(1981).
Google Scholar
Y. Xu, D. Feng, and L. V. Wang,
“Exact frequency-domain reconstruction for thermoacoustic tomography. I. Planar geometry,”
IEEE Trans. Med. Imaging, 21 823 https://doi.org/10.1109/TMI.2002.801172 ITMID4 0278-0062
(2002).
Google Scholar
M. Haltmeier, O. Scherzer, and G. Zangerl,
“A reconstruction algorithm for photoacoustic imaging based on the nonuniform FFT,”
IEEE Trans. Med. Imaging, 28 1727 https://doi.org/10.1109/TMI.2009.2022623 ITMID4 0278-0062
(2009).
Google Scholar
Y. Xu, M. Xu, and L. V. Wang,
“Exact frequency-domain reconstruction for thermoacoustic tomography. II. Cylindrical geometry,”
IEEE Trans. Med. Imaging, 21 829 https://doi.org/10.1109/TMI.2002.801171 ITMID4 0278-0062
(2002).
Google Scholar
M. Haltmeier et al.,
“Thermoacoustic tomography and the circular Radon transform: exact inversion formula,”
Math. Models Methods Appl. Sci., 17 635 https://doi.org/10.1142/S0218202507002054 MMMSEU 0218-2025
(2007).
Google Scholar
D. L. Colton and R. Kress, Inverse Acoustic and Electromagnetic Scattering Theory, 93 Springer,
(1998). Google Scholar
R. Suda and M. Takami,
“A fast spherical harmonics transform algorithm,”
Math. Comput., 71 703 https://doi.org/10.1090/S0025-5718-01-01386-2 MCMPAF 0025-5718
(2002).
Google Scholar
M. A. Anastasio et al.,
“Application of inverse source concepts to photoacoustic tomography,”
Inverse Probl., 23 S21 https://doi.org/10.1088/0266-5611/23/6/S03 INPEEY 0266-5611
(2007).
Google Scholar
K. Wang and M. A. Anastasio,
“A simple Fourier transform-based reconstruction formula for photoacoustic computed tomography with a circular or spherical measurement geometry,”
Phys. Med. Biol., 57 N493 https://doi.org/10.1088/0031-9155/57/23/N493 PHMBA7 0031-9155
(2012).
Google Scholar
G. Beylkin,
“On representations of the Helmholtz Green’s function,”
Appl. Comput. Harmon. Anal., 70 101633 https://doi.org/10.1016/j.acha.2024.101633 ACOHE9 1063-5203
(2024).
Google Scholar
M. Agranovsky and P. Kuchment,
“Uniqueness of reconstruction and an inversion procedure for thermoacoustic and photoacoustic tomography with variable sound speed,”
Inverse Probl., 23 2089 https://doi.org/10.1088/0266-5611/23/5/016 INPEEY 0266-5611
(2007).
Google Scholar
G. Zangerl, O. Scherzer, and M. Haltmeier,
“Circular integrating detectors in photo and thermoacoustic tomography,”
Inverse Probl. Sci. Eng., 17 133 https://doi.org/10.1080/17415970802166782
(2009).
Google Scholar
G. Zangerl, O. Scherzer, and M. Haltmeier,
“Exact series reconstruction in photoacoustic tomography with circular integrating detectors,”
Commun. Math. Sci., 7 665 1539-6746
(2009).
Google Scholar
B. E. Treeby, E. Z. Zhang, and B. T. Cox,
“Photoacoustic tomography in absorbing acoustic media using time reversal,”
Inverse Probl., 26 115003 https://doi.org/10.1088/0266-5611/26/11/115003 INPEEY 0266-5611
(2010).
Google Scholar
Y. Hristova, P. Kuchment, and L. Nguyen,
“Reconstruction and time reversal in thermoacoustic tomography in acoustically homogeneous and inhomogeneous media,”
Inverse Probl., 24 055006 https://doi.org/10.1088/0266-5611/24/5/055006 INPEEY 0266-5611
(2008).
Google Scholar
B. T. Cox and B. E. Treeby,
“Artifact trapping during time reversal photoacoustic imaging for acoustically heterogeneous media,”
IEEE Trans. Med. Imaging, 29 387 https://doi.org/10.1109/TMI.2009.2032358 ITMID4 0278-0062
(2010).
Google Scholar
B. T. Cox et al.,
“k-space propagation models for acoustically heterogeneous media: application to biomedical photoacoustics,”
J. Acoust. Soc. Am., 121 3453 https://doi.org/10.1121/1.2717409 JASMAN 0001-4966
(2007).
Google Scholar
P. Stefanov and G. Uhlmann,
“Thermoacoustic tomography with variable sound speed,”
Inverse Probl., 25 075011 https://doi.org/10.1088/0266-5611/25/7/075011 INPEEY 0266-5611
(2009).
Google Scholar
J. Qian et al.,
“An efficient Neumann series-based algorithm for thermoacoustic and photoacoustic tomography with variable sound speed,”
SIAM J. Imaging Sci., 4 850 https://doi.org/10.1137/100817280
(2011).
Google Scholar
S. R. Arridge et al.,
“On the adjoint operator in photoacoustic tomography,”
Inverse Probl., 32 115012 https://doi.org/10.1088/0266-5611/32/11/115012 INPEEY 0266-5611
(2016).
Google Scholar
A. Rosenthal, V. Ntziachristos, and D. Razansky,
“Optoacoustic methods for frequency calibration of ultrasonic sensors,”
IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 58 316 https://doi.org/10.1109/TUFFC.2011.1809 ITUCER 0885-3010
(2011).
Google Scholar
X. L. Dean-Ben et al.,
“Accurate model-based reconstruction algorithm for three-dimensional optoacoustic tomography,”
IEEE Trans. Med. Imaging, 31 1922 https://doi.org/10.1109/TMI.2012.2208471 ITMID4 0278-0062
(2012).
Google Scholar
K. Wang et al.,
“Investigation of iterative image reconstruction in optoacoustic tomography,”
in Photons Plus Ultrasound: Imaging and Sensing,
379
(2012). Google Scholar
J. Zhang et al.,
“Effects of different imaging models on least-squares image reconstruction accuracy in photoacoustic tomography,”
IEEE Trans. Med. Imaging, 28 1781 https://doi.org/10.1109/TMI.2009.2024082 ITMID4 0278-0062
(2009).
Google Scholar
K. Wang et al.,
“Investigation of iterative image reconstruction in three-dimensional optoacoustic tomography,”
Phys. Med. Biol., 57 5399 https://doi.org/10.1088/0031-9155/57/17/5399 PHMBA7 0031-9155
(2012).
Google Scholar
R. M. Lewitt,
“Multidimensional digital image representations using generalized Kaiser-Bessel window functions,”
J. Opt. Soc. Am. A, 7 1834 https://doi.org/10.1364/JOSAA.7.001834 JOAOD6 0740-3232
(1990).
Google Scholar
M. Schweiger and S. R. Arridge,
“Image reconstruction in optical tomography using local basis functions,”
J. Electron. Imaging, 12 583 https://doi.org/10.1117/1.1586919 JEIME5 1017-9909
(2003).
Google Scholar
G. B. Arfken, H. J. Weber, and F. E. Harris, Mathematical Methods for Physicists: A Comprehensive Guide, Academic Press,
(2011). Google Scholar
S. Matej and R. M. Lewitt,
“Practical considerations for 3-D image reconstruction using spherically symmetric volume elements,”
IEEE Trans. Med. Imaging, 15 68 https://doi.org/10.1109/42.481442 ITMID4 0278-0062
(1996).
Google Scholar
C. B. Shaw et al.,
“Least squares QR-based decomposition provides an efficient way of computing optimal regularization parameter in photoacoustic tomography,”
J. Biomed. Opt., 18 080501 https://doi.org/10.1117/1.JBO.18.8.080501 JBOPFO 1083-3668
(2013).
Google Scholar
P. R. Stepanishen,
“Transient radiation from pistons in an infinite planar baffle,”
J. Acoust. Soc. Am., 49 1629 https://doi.org/10.1121/1.1912541 JASMAN 0001-4966
(1971).
Google Scholar
J. C. Lockwood and J. G. Willette,
“High-speed method for computing the exact solution for the pressure variations in the nearfield of a baffled piston,”
J. Acoust. Soc. Am., 53 735 https://doi.org/10.1121/1.1913385 JASMAN 0001-4966
(1973).
Google Scholar
K. Mitsuhashi, K. Wang, and M. A. Anastasio,
“Investigation of the far-field approximation for modeling a transducer’s spatial impulse response in photoacoustic computed tomography,”
Photoacoustics, 2 21 https://doi.org/10.1016/j.pacs.2013.11.001
(2014).
Google Scholar
R. P. K. Jagannath and P. K. Yalavarthy,
“Minimal residual method provides optimal regularization parameter for diffuse optical tomography,”
J. Biomed. Opt., 17 106015 https://doi.org/10.1117/1.JBO.17.10.106015 JBOPFO 1083-3668
(2012).
Google Scholar
P. C. Hansen,
“The L-curve and its use in the numerical treatment of inverse problems,”
Computational Inverse Problems in Electrocardiology, 119 WIT Press,
(2001). Google Scholar
D. Calvetti et al.,
“Tikhonov regularization and the L-curve for large discrete ill-posed problems,”
J. Comput. Appl. Math., 123 423 https://doi.org/10.1016/S0377-0427(00)00414-3 JCAMDI 0377-0427
(2000).
Google Scholar
L. Bottou,
“Large-scale machine learning with stochastic gradient descent,”
in Proceedings of COMPSTAT’2010,
177
(2010). Google Scholar
L. Bottou,
“Stochastic gradient descent tricks,”
Neural Networks: Tricks of the Trade, 421 Springer,
(2012). Google Scholar
C. C. Paige and M. A. Saunders,
“LSQR: an algorithm for sparse linear equations and sparse least squares,”
ACM Trans. Math. Software, 8 43 https://doi.org/10.1145/355984.355989 ACMSCU 0098-3500
(1982).
Google Scholar
O. Axelsson,
“A generalized conjugate gradient, least square method,”
Numer. Math., 51 209 https://doi.org/10.1007/BF01396750 NUMMA7 0029-599X
(1987).
Google Scholar
T. Wang et al.,
“Learned regularization for image reconstruction in sparse-view photoacoustic tomography,”
Biomed. Opt. Express, 13 5721 https://doi.org/10.1364/BOE.469460 BOEICL 2156-7085
(2022).
Google Scholar
J. Provost and F. Lesage,
“The application of compressed sensing for photo-acoustic tomography,”
IEEE Trans. Med. Imaging, 28 585 https://doi.org/10.1109/TMI.2008.2007825 ITMID4 0278-0062
(2009).
Google Scholar
Z. Guo et al.,
“Compressed sensing in photoacoustic tomography in vivo,”
J. Biomed. Opt., 15 021311 https://doi.org/10.1117/1.3381187 JBOPFO 1083-3668
(2010).
Google Scholar
F. Lucka et al.,
“Enhancing compressed sensing 4D photoacoustic tomography by simultaneous motion estimation,”
SIAM J. Imaging Sci., 11 2224 https://doi.org/10.1137/18M1170066
(2018).
Google Scholar
S. Biton et al.,
“Optoacoustic model-based inversion using anisotropic adaptive total-variation regularization,”
Photoacoustics, 16 100142 https://doi.org/10.1016/j.pacs.2019.100142
(2019).
Google Scholar
A. Beck and M. Teboulle,
“A fast iterative shrinkage-thresholding algorithm for linear inverse problems,”
SIAM J. Imaging Sci., 2 183 https://doi.org/10.1137/080716542
(2009).
Google Scholar
I. Daubechies et al.,
“Iteratively reweighted least squares minimization for sparse recovery,”
Commun. Pure Appl. Math., 63 1 https://doi.org/10.1002/cpa.20303 CPMAMV 0010-3640
(2010).
Google Scholar
Y. Wang, W. Yin, and J. Zeng,
“Global convergence of ADMM in nonconvex nonsmooth optimization,”
J. Sci. Comput., 78 29 https://doi.org/10.1007/s10915-018-0757-z JSCOEB 0885-7474
(2019).
Google Scholar
X. Li et al.,
“Model-based optoacoustic tomography image reconstruction with non-local and sparsity regularizations,”
IEEE Access, 7 102136 https://doi.org/10.1109/ACCESS.2019.2930650
(2019).
Google Scholar
P. K. Yalavarthy et al.,
“Non-local means improves total-variation constrained photoacoustic image reconstruction,”
J. Biophotonics, 14 e202000191 https://doi.org/10.1002/jbio.202000191
(2021).
Google Scholar
J. Prakash et al.,
“Maximum entropy based non-negative optoacoustic tomographic image reconstruction,”
IEEE Trans. Biomed. Eng., 66 2604 https://doi.org/10.1109/TBME.2019.2892842 IEBEAX 0018-9294
(2019).
Google Scholar
H. Liu et al.,
“Curve-driven-based acoustic inversion for photoacoustic tomography,”
IEEE Trans. Med. Imaging, 35 2546 https://doi.org/10.1109/TMI.2016.2584120 ITMID4 0278-0062
(2016).
Google Scholar
A. Javaherian and S. Holman,
“A multi-grid iterative method for photoacoustic tomography,”
IEEE Trans. Med. Imaging, 36 696 https://doi.org/10.1109/TMI.2016.2625272 ITMID4 0278-0062
(2016).
Google Scholar
X. L. Dean-Ben, V. Ntziachristos, and D. Razansky,
“Acceleration of optoacoustic model-based reconstruction using angular image discretization,”
IEEE Trans. Med. Imaging, 31 1154 https://doi.org/10.1109/TMI.2012.2187460 ITMID4 0278-0062
(2012).
Google Scholar
A. Buehler et al.,
“Model-based optoacoustic inversions with incomplete projection data,”
Med. Phys., 38 1694 https://doi.org/10.1118/1.3556916 MPHYA6 0094-2405
(2011).
Google Scholar
A. Voulodimos et al.,
“Deep learning for computer vision: a brief review,”
Comput. Intell. Neurosci., 2018 7068349 https://doi.org/10.1155/2018/7068349
(2018).
Google Scholar
D. W. Otter, J. R. Medina, and J. K. Kalita,
“A survey of the usages of deep learning for natural language processing,”
IEEE Trans. Neural Netw. Learn. Syst., 32 604 https://doi.org/10.1109/TNNLS.2020.2979670
(2021).
Google Scholar
S. S. Nisha and N. Meeral,
“Applications of deep learning in biomedical engineering,”
Handbook of Deep Learning in Biomedical Engineering Techniques and Applications, 245 Academic Press,
(2020). Google Scholar
G. Wang, J. C. Ye, and B. De Man,
“Deep learning for tomographic image reconstruction,”
Nat. Mach. Intell., 2 737 https://doi.org/10.1038/s42256-020-00273-z
(2020).
Google Scholar
G. Wang,
“A perspective on deep imaging,”
IEEE Access, 4 8914 https://doi.org/10.1109/ACCESS.2016.2624938
(2016).
Google Scholar
G. Wang et al.,
“Deep tomographic image reconstruction: yesterday, today, and tomorrow—editorial for the 2nd special issue ‘Machine Learning for Image Reconstruction’,”
IEEE Trans. Med. Imaging, 40 2956 https://doi.org/10.1109/TMI.2021.3115547 ITMID4 0278-0062
(2021).
Google Scholar
D. Liang et al.,
“Deep magnetic resonance image reconstruction: inverse problems meet neural networks,”
IEEE Signal Process Mag., 37 141 https://doi.org/10.1109/MSP.2019.2950557
(2020).
Google Scholar
E. M. A. Anas et al.,
“Enabling fast and high quality LED photoacoustic imaging: a recurrent neural networks based approach,”
Biomed. Opt. Express, 9 3852 https://doi.org/10.1364/BOE.9.003852 BOEICL 2156-7085
(2018).
Google Scholar
A. Hariri et al.,
“Deep learning improves contrast in low-fluence photoacoustic imaging,”
Biomed. Opt. Express, 11 3360 https://doi.org/10.1364/BOE.395683 BOEICL 2156-7085
(2020).
Google Scholar
M. Yamakawa and T. Shiina,
“Artifact reduction in photoacoustic images by generating virtual dense array sensor from hemispheric sparse array sensor using deep learning,”
J. Med. Ultrason., 51 169 https://doi.org/10.1007/s10396-024-01413-3 JEMUEF
(2024).
Google Scholar
J. Zhang et al.,
“PAFormer: photoacoustic reconstruction via transformer with mask mechanism,”
in IEEE International Ultrasonics Symposium (IUS),
1
(2022).
Google Scholar
F. Zhang et al.,
“Photoacoustic digital brain and deep-learning-assisted image reconstruction,”
Photoacoustics, 31 100517 https://doi.org/10.1016/j.pacs.2023.100517
(2023).
Google Scholar
P. Farnia et al.,
“High-quality photoacoustic image reconstruction based on deep convolutional neural network: towards intra-operative photoacoustic imaging,”
Biomed. Phys. Eng. Express, 6 045019 https://doi.org/10.1088/2057-1976/ab9a10
(2020).
Google Scholar
S. Guan et al.,
“Fully dense UNet for 2-D sparse photoacoustic tomography artifact removal,”
IEEE J. Biomed. Health. Inf., 24 568 https://doi.org/10.1109/JBHI.2019.2912935
(2020).
Google Scholar
J. Zhang et al.,
“Limited-view photoacoustic imaging reconstruction with dual domain inputs under mutual information constraint,”
(2020). Google Scholar
S. Guan et al.,
“Dense dilated UNet: deep learning for 3D photoacoustic tomography image reconstruction,”
(2021). Google Scholar
A. Creswell et al.,
“Generative adversarial networks: an overview,”
IEEE Signal Process Mag., 35 53 https://doi.org/10.1109/MSP.2017.2765202
(2018).
Google Scholar
T. Vu et al.,
“A generative adversarial network for artifact removal in photoacoustic computed tomography with a linear-array transducer,”
Exp. Biol. Med., 245 597 https://doi.org/10.1177/1535370220914285
(2020).
Google Scholar
H. Shahid et al.,
“Feasibility of a generative adversarial network for artifact removal in experimental photoacoustic imaging,”
Ultrasound Med. Biol., 48 1628 https://doi.org/10.1016/j.ultrasmedbio.2022.04.008 USMBA3 0301-5629
(2022).
Google Scholar
M. Lu et al.,
“Artifact removal in photoacoustic tomography with an unsupervised method,”
Biomed. Opt. Express, 12 6284 https://doi.org/10.1364/BOE.434172 BOEICL 2156-7085
(2021).
Google Scholar
H. Shan, G. Wang, and Y. Yang,
“Accelerated correction of reflection artifacts by deep neural networks in photoacoustic tomography,”
Appl. Sci., 9 2615 https://doi.org/10.3390/app9132615
(2019).
Google Scholar
S. Jeon et al.,
“A deep learning-based model that reduces speed of sound aberrations for improved in vivo photoacoustic imaging,”
IEEE Trans. Image Process., 30 8773 https://doi.org/10.1109/TIP.2021.3120053 IIPRE4 1057-7149
(2021).
Google Scholar
Y. Gao et al.,
“Deep learning-based photoacoustic imaging of vascular network through thick porous media,”
IEEE Trans. Med. Imaging, 41 2191 https://doi.org/10.1109/TMI.2022.3158474 ITMID4 0278-0062
(2022).
Google Scholar
V. Shijo et al.,
“SwinIR for photoacoustic computed tomography artifact reduction,”
in IEEE International Ultrasonics Symposium (IUS),
1
(2023). Google Scholar
W. Zheng et al.,
“Deep-E enhanced photoacoustic tomography using three-dimensional reconstruction for high-quality vascular imaging,”
Sensors, 22 7725 https://doi.org/10.3390/s22207725 SNSRES 0746-9462
(2022).
Google Scholar
C. Cai et al.,
“End-to-end deep neural network for optical inversion in quantitative photoacoustic imaging,”
Opt. Lett., 43 2752 https://doi.org/10.1364/OL.43.002752 OPLEDP 0146-9592
(2018).
Google Scholar
I. Olefir et al.,
“Deep learning-based spectral unmixing for optoacoustic imaging of tissue oxygen saturation,”
IEEE Trans. Med. Imaging, 39 3643 https://doi.org/10.1109/TMI.2020.3001750 ITMID4 0278-0062
(2020).
Google Scholar
C. Bench, A. Hauptmann, and B. T. Cox,
“Toward accurate quantitative photoacoustic imaging: learning vascular blood oxygen saturation in three dimensions,”
J. Biomed. Opt., 25 085003 https://doi.org/10.1117/1.JBO.25.8.085003 JBOPFO 1083-3668
(2020).
Google Scholar
C. Yang and F. Gao,
“EDA-Net: dense aggregation of deep and shallow information achieves quantitative photoacoustic blood oxygenation imaging deep in human breast,”
in Medical Image Computing and Computer Assisted Intervention–MICCAI 2019,
246
(2019). Google Scholar
C. Yang et al.,
“Quantitative photoacoustic blood oxygenation imaging using deep residual and recurrent neural network,”
in IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019),
741
(2019). Google Scholar
Z. Wang et al.,
“Extractor-attention-predictor network for quantitative photoacoustic tomography,”
Photoacoustics, 38 100609 https://doi.org/10.1016/j.pacs.2024.100609
(2024).
Google Scholar
N. Awasthi et al.,
“PA-fuse: a deep supervised approach for fusion of photoacoustic images with distinct reconstruction characteristics,”
Biomed. Opt. Express, 10 2227 https://doi.org/10.1364/BOE.10.002227 BOEICL 2156-7085
(2019).
Google Scholar
J. Zhang et al.,
“Photoacoustic image classification and segmentation of breast cancer: a feasibility study,”
IEEE Access, 7 5457 https://doi.org/10.1109/ACCESS.2018.2888910
(2019).
Google Scholar
N.-K. Chlis et al.,
“A sparse deep learning approach for automatic segmentation of human vasculature in multispectral optoacoustic tomography,”
Photoacoustics, 20 100203 https://doi.org/10.1016/j.pacs.2020.100203
(2020).
Google Scholar
B. Lafci et al.,
“Deep learning for automatic segmentation of hybrid optoacoustic ultrasound (OPUS) images,”
IEEE Trans. Ultrason. Ferroelectr. Freq. Control, 68 688 https://doi.org/10.1109/TUFFC.2020.3022324 ITUCER 0885-3010
(2021).
Google Scholar
M. G. González, M. Vera, and L. J. R. Vega,
“Combining band-frequency separation and deep neural networks for optoacoustic imaging,”
Opt. Lasers Eng., 163 107471 https://doi.org/10.1016/j.optlaseng.2022.107471
(2023).
Google Scholar
H. Shahid et al.,
“A deep learning approach for the photoacoustic tomography recovery from undersampled measurements,”
Front. Neurosci., 15 598693 https://doi.org/10.3389/fnins.2021.598693 1662-453X
(2021).
Google Scholar
G. Godefroy, B. Arnal, and E. Bossy,
“Compensating for visibility artefacts in photoacoustic imaging with a deep learning approach providing prediction uncertainties,”
Photoacoustics, 21 100218 https://doi.org/10.1016/j.pacs.2020.100218
(2021).
Google Scholar
H. Zhang et al.,
“A new deep learning network for mitigating limited-view and under-sampling artifacts in ring-shaped photoacoustic tomography,”
Comput. Med. Imaging Graph., 84 101720 https://doi.org/10.1016/j.compmedimag.2020.101720
(2020).
Google Scholar
P. Rajendran and M. Pramanik,
“High frame rate (approximately 3 Hz) circular photoacoustic tomography using single-element ultrasound transducer aided with deep learning,”
J. Biomed. Opt., 27 066005 https://doi.org/10.1117/1.JBO.27.6.066005 JBOPFO 1083-3668
(2022).
Google Scholar
P. Rajendran and M. Pramanik,
“Deep-learning-based multi-transducer photoacoustic tomography imaging without radius calibration,”
Opt. Lett., 46 4510 https://doi.org/10.1364/OL.434513 OPLEDP 0146-9592
(2021).
Google Scholar
H. Lan et al.,
“Ki-GAN: knowledge infusion generative adversarial network for photoacoustic image reconstruction in vivo,”
in Medical Image Computing and Computer Assisted Intervention–MICCAI 2019: 22nd International Conference,
273
(2019). Google Scholar
H. Lan et al.,
“Y-Net: hybrid deep learning image reconstruction for photoacoustic tomography in vivo,”
Photoacoustics, 20 100197 https://doi.org/10.1016/j.pacs.2020.100197
(2020).
Google Scholar
N. Davoudi et al.,
“Deep learning of image and time-domain data enhances the visibility of structures in optoacoustic tomography,”
Opt. Lett., 46 3029 https://doi.org/10.1364/OL.424571 OPLEDP 0146-9592
(2021).
Google Scholar
M. Guo et al.,
“AS-Net: fast photoacoustic reconstruction with multi-feature fusion from sparse data,”
IEEE Trans. Comput. Imaging, 8 215 https://doi.org/10.1109/TCI.2022.3155379
(2022).
Google Scholar
W. Li et al.,
“Deep learning reconstruction algorithm based on sparse photoacoustic tomography system,”
in IEEE International Ultrasonics Symposium (IUS),
1
(2021). Google Scholar
H. Lan, C. Yang, and F. Gao,
“A jointed feature fusion framework for photoacoustic image reconstruction,”
Photoacoustics, 29 100442 https://doi.org/10.1016/j.pacs.2022.100442
(2023).
Google Scholar
H. Li et al.,
“NETT: solving inverse problems with deep neural networks,”
Inverse Probl., 36 065005 https://doi.org/10.1088/1361-6420/ab6d57 INPEEY 0266-5611
(2020).
Google Scholar
S. Antholzer et al.,
“NETT regularization for compressed sensing photoacoustic tomography,”
in Photons Plus Ultrasound: Imaging and Sensing,
272
(2019). Google Scholar
C. Yang, H. Lan, and F. Gao,
“Accelerated photoacoustic tomography reconstruction via recurrent inference machines,”
in 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC),
6371
(2019). Google Scholar
H. Lan et al.,
“Compressed sensing for photoacoustic computed tomography based on an untrained neural network with a shape prior,”
Biomed. Opt. Express, 12 7835 https://doi.org/10.1364/BOE.441901 BOEICL 2156-7085
(2021).
Google Scholar
H. Lan, J. Gong, and F. Gao,
“Deep learning adapted acceleration for limited-view photoacoustic image reconstruction,”
Opt. Lett., 47 1911 https://doi.org/10.1364/OL.450860 OPLEDP 0146-9592
(2022).
Google Scholar
H. Shan, G. Wang, and Y. Yang,
“Simultaneous reconstruction of the initial pressure and sound speed in photoacoustic tomography using a deep-learning approach,”
Proc. SPIE, 11105 1110504 https://doi.org/10.1117/12.2529984 PSISDG 0277-786X
(2019).
Google Scholar
Y. E. Boink, S. Manohar, and C. Brune,
“A partially-learned algorithm for joint photo-acoustic reconstruction and segmentation,”
IEEE Trans. Med. Imaging, 39 129 https://doi.org/10.1109/TMI.2019.2922026 ITMID4 0278-0062
(2020).
Google Scholar
J. Ho, A. Jain, and P. Abbeel,
“Denoising diffusion probabilistic models,”
Adv. Neural Inf. Process. Syst., 33 6840 1049-5258
(2020).
Google Scholar
Z. Luo et al.,
“Image restoration with mean-reverting stochastic differential equations,”
(2023). Google Scholar
K. Guo et al.,
“Score-based generative model-assisted information compensation for high-quality limited-view reconstruction in photoacoustic tomography,”
Photoacoustics, 38 100623 https://doi.org/10.1016/j.pacs.2024.100623
(2024).
Google Scholar
S. Tong et al.,
“Score-based generative models for photoacoustic image reconstruction with rotation consistency constraints,”
(2023). Google Scholar
S. Dey et al.,
“Score-based diffusion models for photoacoustic tomography image reconstruction,”
in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),
2470
(2024). Google Scholar
X. Song et al.,
“Sparse-view reconstruction for photoacoustic tomography combining diffusion model with model-based iteration,”
Photoacoustics, 33 100558 https://doi.org/10.1016/j.pacs.2023.100558
(2023).
Google Scholar
D. Waibel et al.,
“Reconstruction of initial pressure from limited view photoacoustic images using deep learning,”
in Photons Plus Ultrasound: Imaging and Sensing,
196
(2018). Google Scholar
H. Lan et al.,
“Reconstruct the photoacoustic image based on deep learning with multi-frequency ring-shape transducer array,”
in 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC),
7115
(2019). Google Scholar
J. Feng et al.,
“End-to-end Res-Unet based reconstruction algorithm for photoacoustic imaging,”
Biomed. Opt. Express, 11 5321 https://doi.org/10.1364/BOE.396598 BOEICL 2156-7085
(2020).
Google Scholar
H. Lan et al.,
“Deep learning enabled real-time photoacoustic tomography system via single data acquisition channel,”
Photoacoustics, 22 100270 https://doi.org/10.1016/j.pacs.2021.100270
(2021).
Google Scholar
K. Shen et al.,
“Physics-driven deep learning photoacoustic tomography,”
Fundam. Res.,
(2024).
Google Scholar
H. Lan et al.,
“Masked cross-domain self-supervised deep learning framework for photoacoustic computed tomography reconstruction,”
Neural Netw., 179 106515 https://doi.org/10.1016/j.neunet.2024.106515 NNETEB 0893-6080
(2024).
Google Scholar
S. Guan et al.,
“Limited-view and sparse photoacoustic tomography for neuroimaging with deep learning,”
Sci. Rep., 10 8510 https://doi.org/10.1038/s41598-020-65235-2 SRCEC3 2045-2322
(2020).
Google Scholar
M. Kim et al.,
“Deep-learning image reconstruction for real-time photoacoustic system,”
IEEE Trans. Med. Imaging, 39 3379 https://doi.org/10.1109/TMI.2020.2993835 ITMID4 0278-0062
(2020).
Google Scholar
T. Tong et al.,
“Domain transform network for photoacoustic tomography from limited-view and sparsely sampled data,”
Photoacoustics, 19 100190 https://doi.org/10.1016/j.pacs.2020.100190
(2020).
Google Scholar
C. Dehner et al.,
“DeepMB: deep neural network for real-time model-based optoacoustic image reconstruction with adjustable speed of sound,”
(2022). Google Scholar
C. Dehner et al.,
“Deep model-based optoacoustic image reconstruction (DeepMB),”
in Photons Plus Ultrasound: Imaging and Sensing,
66
(2024). Google Scholar
D. Allman, A. Reiter, and M. A. L. Bell,
“Photoacoustic source detection and reflection artifact removal enabled by deep learning,”
IEEE Trans. Med. Imaging, 37 1464 https://doi.org/10.1109/TMI.2018.2829662 ITMID4 0278-0062
(2018).
Google Scholar
X. Luo et al.,
“Fast correction of ‘finite aperture effect’ in photoacoustic tomography based on spatial impulse response,”
Photonics, 8 356 https://doi.org/10.3390/photonics8090356
(2021).
Google Scholar
B. Wang et al.,
“Approximate back-projection method for improving lateral resolution in circular-scanning-based photoacoustic tomography,”
Med. Phys., 48 3011 https://doi.org/10.1002/mp.14880 MPHYA6 0094-2405
(2021).
Google Scholar
M.-L. Li, Y.-C. Tseng, and C.-C. Cheng,
“Model-based correction of finite aperture effect in photoacoustic tomography,”
Opt. Express, 18 26285 https://doi.org/10.1364/OE.18.026285 OPEXFF 1094-4087
(2010).
Google Scholar
V. G. Andreev et al.,
“Detection of optoacoustic transients with a rectangular transducer of finite dimensions,”
in Biomedical Optoacoustics III,
153
(2002). Google Scholar
S. A. Ermilov et al.,
“Development of laser optoacoustic and ultrasonic imaging system for breast cancer utilizing handheld array probes,”
in Photons Plus Ultrasound: Imaging and Sensing,
28
(2009). Google Scholar
K. B. Chowdhury et al.,
“A synthetic total impulse response characterization method for correction of hand-held optoacoustic images,”
IEEE Trans. Med. Imaging, 39 3218 https://doi.org/10.1109/TMI.2020.2989236 ITMID4 0278-0062
(2020).
Google Scholar
L. Qi et al.,
“Photoacoustic tomography image restoration with measured spatially variant point spread functions,”
IEEE Trans. Med. Imaging, 40 2318 https://doi.org/10.1109/TMI.2021.3077022 ITMID4 0278-0062
(2021).
Google Scholar
D. Xie et al.,
“Spatially-variant image deconvolution for photoacoustic tomography,”
Opt. Express, 31 21641 https://doi.org/10.1364/OE.486846 OPEXFF 1094-4087
(2023).
Google Scholar
W. Dong et al.,
“Image restoration for ring-array photoacoustic tomography system based on blind spatially rotational deconvolution,”
Photoacoustics, 38 100607 https://doi.org/10.1016/j.pacs.2024.100607
(2024).
Google Scholar
K. T. Hsu, S. Guan, and P. V. Chitnis,
“Comparing deep learning frameworks for photoacoustic tomography image reconstruction,”
Photoacoustics, 23 100271 https://doi.org/10.1016/j.pacs.2021.100271
(2021).
Google Scholar
T. Wang et al.,
“Sparse-view photoacoustic image quality enhancement based on a modified U-Net,”
Laser Optoelectron. Prog., 59 0617022 https://doi.org/10.3788/LOP202259.0617022
(2022).
Google Scholar
T. Wang et al.,
“Streak artifact suppressed back projection for sparse-view photoacoustic computed tomography,”
Appl. Opt., 62 3917 https://doi.org/10.1364/AO.487957 APOPAI 0003-6935
(2023).
Google Scholar
Y. Zhao et al.,
“Ultrasound-guided adaptive photoacoustic tomography,”
Opt. Lett., 47 3960 https://doi.org/10.1364/OL.462799 OPLEDP 0146-9592
(2022).
Google Scholar
M. Sandbichler et al.,
“A novel compressed sensing scheme for photoacoustic tomography,”
SIAM J. Appl. Math., 75 2475 https://doi.org/10.1137/141001408 SMJMAP 0036-1399
(2015).
Google Scholar
J. Meng et al.,
“High-speed, sparse-sampling three-dimensional photoacoustic computed tomography in vivo based on principal component analysis,”
J. Biomed. Opt., 21 076007 https://doi.org/10.1117/1.JBO.21.7.076007 JBOPFO 1083-3668
(2016).
Google Scholar
P. Hu et al.,
“Spatiotemporal antialiasing in photoacoustic computed tomography,”
IEEE Trans. Med. Imaging, 39 3535 https://doi.org/10.1109/TMI.2020.2998509 ITMID4 0278-0062
(2020).
Google Scholar
P. Hu, L. Li, and L. V. Wang,
“Location-dependent spatiotemporal antialiasing in photoacoustic computed tomography,”
IEEE Trans. Med. Imaging, 42 1210 https://doi.org/10.1109/TMI.2022.3225565 ITMID4 0278-0062
(2022).
Google Scholar
C. Cai et al.,
“Streak artifact suppression in photoacoustic computed tomography using adaptive back projection,”
Biomed. Opt. Express, 10 4803 https://doi.org/10.1364/BOE.10.004803 BOEICL 2156-7085
(2019).
Google Scholar
S. Hakakzadeh et al.,
“A spatial-domain factor for sparse-sampling circular-view photoacoustic tomography,”
IEEE J. Sel. Top. Quantum Electron., 29 1 https://doi.org/10.1109/JSTQE.2022.3229622 IJSQEN 1077-260X
(2022).
Google Scholar
Y. Wang et al.,
“Enhancing sparse-view photoacoustic tomography with combined virtually parallel projecting and spatially adaptive filtering,”
Biomed. Opt. Express, 9 4569 https://doi.org/10.1364/BOE.9.004569 BOEICL 2156-7085
(2018).
Google Scholar
S. K. Patch,
“Thermoacoustic tomography – consistency conditions and the partial scan problem,”
Phys. Med. Biol., 49 2305 https://doi.org/10.1088/0031-9155/49/11/013 PHMBA7 0031-9155
(2004).
Google Scholar
J. K. Gamelin, A. Aguirre, and Q. Zhu,
“Fast, limited-data photoacoustic imaging for multiplexed systems using a frequency-domain estimation technique,”
Med. Phys., 38 1503 https://doi.org/10.1118/1.3533669 MPHYA6 0094-2405
(2011).
Google Scholar
G. Paltauf, R. Nuster, and P. Burgholzer,
“Weight factors for limited angle photoacoustic tomography,”
Phys. Med. Biol., 54 3303 https://doi.org/10.1088/0031-9155/54/11/002 PHMBA7 0031-9155
(2009).
Google Scholar
|