In multispectral photoacoustic imaging (PAI), the illumination spectrum inside biological tissue varies spatially, leading to poor quantification accuracy of blood oxygen saturation (SO2). The key to solving this problem is inverting light diffusion, which is extremely complicated and error-prone due to the limited information available in PAI. Despite great effort, the few methods available to date are all limited in terms of in vivo performance and physical insight. Here, we introduce an analytical Monte Carlo method, with which we prove that the light spectrum in biological tissue mathematically lies in a high-dimensional convex cone. The model offers new insight into the origin of the spectral deterioration, and we find it possible to calculate SO2 accurately using only the photoacoustic data at a single spatial location when the signal-to-noise ratio is sufficient. The method was demonstrated numerically, and our preliminary phantom experiments also confirmed its effectiveness.
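A minimal toy sketch of the underlying idea, not the paper's method: a spectrum formed as a nonnegative mixture of chromophore basis spectra lies in the convex cone generated by those bases, so the mixture weights (and hence an SO2 estimate) can be recovered from a single spatial location when noise permits. All spectra and values below are made up for illustration.

```python
import numpy as np

# Toy basis spectra on a 5-wavelength grid (values are illustrative only)
wavelengths = np.linspace(700, 900, 5)        # nm
hb  = np.array([1.0, 0.8, 0.6, 0.5, 0.45])    # toy deoxyhemoglobin spectrum
hbo = np.array([0.4, 0.5, 0.65, 0.8, 0.9])    # toy oxyhemoglobin spectrum
B = np.stack([hb, hbo], axis=1)               # 5x2 basis matrix

# A measured spectrum with nonnegative weights lies in cone(B)
c_true = np.array([0.3, 0.7])                 # [c_Hb, c_HbO2], nonnegative
spectrum = B @ c_true

# Recover the weights by least squares and compute SO2 = c_HbO2 / (c_Hb + c_HbO2)
c_hat, *_ = np.linalg.lstsq(B, spectrum, rcond=None)
so2 = c_hat[1] / c_hat.sum()
```

In this noise-free toy case the weights are recovered exactly; with real data one would constrain the fit to the cone (nonnegative weights) and account for the spatially varying fluence, which is precisely the difficulty the abstract addresses.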
As an emerging optical imaging modality, photoacoustic imaging provides optical absorption contrast and high ultrasonic resolution. Artifacts in photoacoustic computed tomography (PACT) degrade image quality and resolution and can confound biological interpretation. Based on their causes, they can be roughly classified into split artifacts and streak artifacts. Here we present an innovative Feature-Coupling (FC) method that suppresses split artifacts through joint reconstruction of the speed of sound, and we propose a new reconstruction algorithm, termed Contamination-Tracing Back-Projection (CTBP), to mitigate streak artifacts. The utility, effectiveness, and robustness of our methods were demonstrated using numerical, phantom, and in vivo experiments.
Photoacoustic imaging is an emerging optical imaging modality that provides optical absorption contrast and high resolution in the optical diffusive regime. In photoacoustic computed tomography (PACT), the detection of the photoacoustic signal often covers only a partial solid angle, less than 4π, due to experimental or economic constraints. Incomplete spatial coverage degrades image quality and resolution and results in significant artifacts and missing image features. This problem, referred to as the “limited view” problem, has remained unsolved for decades. In this work, we present a new machine-learning-based method specifically designed to compensate for the information missing due to the limited view. The robustness and effectiveness of our method were demonstrated using numerical, phantom, and in vivo experiments.
Photoacoustic imaging relies on diffused photons for optical contrast and on diffracted ultrasound for high resolution. As a tomographic imaging modality, it often requires solving an inverse problem of acoustic diffraction to reconstruct a photoacoustic image. The inverse problem is complicated by the fact that the acoustic properties, including the speed of sound distribution, in the image field of view are unknown. During reconstruction, subtle changes in the speed of sound along the acoustic ray path may accumulate and give rise to noticeable blurring in the image. Thus, in addition to the ultrasound detection bandwidth, inaccurate acoustic modeling, especially ignorance of the speed of sound, limits the image resolution and affects image quantification. Here, we propose a method termed feature coupling to jointly reconstruct the speed of sound distribution and a photoacoustic image with improved sharpness, at no additional hardware cost. In vivo experiments demonstrated the effectiveness and reliability of our method.