Photoacoustic-enabled automatic vascular navigation: accurate and naked-eye real-time visualization of deep-seated vessels

Abstract. Accurate localization of blood vessels with image navigation is a key element in vascular-related medical research and vascular surgery. However, current vascular navigation techniques cannot provide naked-eye visualization of deep vascular information noninvasively and with high resolution, resulting in inaccurate vascular anatomy and diminished surgical success rates. Here, we introduce a photoacoustic-enabled automatic vascular navigation method combining photoacoustic computed tomography with augmented and mixed reality, for the first time, to our knowledge, enabling accurate and noninvasive visualization of the deep microvascular network within tissue in real time on a real surgical surface. This approach achieves high vascular localization accuracy (<0.89 mm) and low vascular relocation latency (<1 s) through a zero-mean normalization idea-based visual tracking algorithm and a curved-surface-fitting algorithm. Further, subcutaneous vessels as small as ∼0.15 mm in diameter in the rabbit thigh and as deep as ∼7 mm in the human arm can be vividly projected on the skin surface with a computer vision-based projection tracking system to simulate preoperative and intraoperative vascular localization. This strategy thereby provides a way to visualize deep vessels on the surgical surface without damage and with precise image navigation, opening an avenue for the application of photoacoustic imaging in surgical operations.


Introduction
Accurate localization of the vascular trajectory with preoperative image navigation is essential for vascular-related surgery, especially in cases where the anatomical position of blood vessels may be affected by congenital abnormalities. 1 Identifying these abnormalities through preoperative imaging and planning vascular surgery in advance can reduce the damage to blood vessels and surrounding tissues and avoid complications. For example, in perforator flap transplantation, high variability in the vascular anatomy is a major challenge. Therefore, preoperative planning is critical to rapidly and accurately finding perforators, minimizing the sacrifice of muscle tissue around the perforators and enhancing the efficiency of the surgery. 2,3 As another example, in coronary intervention surgery, using the radial artery as the catheter entrance has the advantages of a high success rate and few complications.
As anatomical abnormalities of the radial artery will affect the success rate and operation time, 4,5 locating and identifying anatomical abnormalities of the radial artery through preoperative images is vital. However, preoperative images are often presented on a 2D display screen; the combination of preoperative digital images and intraoperative patient surgical surface requires a high degree of physician experience, resulting in inefficient and unsafe image navigation.
With the development of augmented-reality and mixed-reality technology, a series of augmented-reality and mixed-reality devices have emerged to combine preoperative images with real surgical surfaces, greatly improving the efficiency of surgery and reducing surgical risks. For example, using the Microsoft HoloLens for perforator flap transplantation, computed tomography angiography (CTA) 6,7 was used to image the complete vascular anatomy, and the HoloLens was used to combine the real surgical surface with CTA vascular images to accurately locate the perforators. [8][9][10] In another case, researchers proposed directly projecting the preoperative CTA vascular image on the surgical surface to locate perforators. [11][12][13] However, in these cases combining augmented reality or mixed reality, the preoperative imaging method using CTA cannot detect blood vessels noninvasively. CTA involves ionizing radiation and requires intravenous contrast media, which may lead to serious complications, such as an allergy to the contrast media and impaired renal function. 14,15 Some researchers have proposed using transmission-mode near-infrared (NIR) imaging, which is noninvasive, combined with augmented reality to rapidly locate blood vessels. 16,17 However, this NIR imaging method has a shallow imaging depth of 5.5 mm in phantoms 18 and 3 mm in tissue. 19 Doppler ultrasound 20 is also a common modality for locating perforators and radial arteries. 21,22 However, Doppler ultrasound has low sensitivity for imaging small vessels. 23 Photoacoustic imaging (PAI) [24][25][26][27][28] utilizes the specific absorption properties of hemoglobin to achieve direct imaging of blood vessels with high sensitivity and deep penetration. Noninvasive imaging of blood vessels can be performed by PAI to provide high-resolution vascular imaging for preoperative planning. Photoacoustic computed tomography (PACT) 29,30 is an embodiment of PAI.
Compared with ultrasound imaging, PACT has rich endogenous and exogenous optical contrast, and it has advantages in high-resolution imaging of subcutaneous microvessels. 31,32 Additionally, in contrast to CTA, PACT does not require intravenous injection of contrast agents, and the method is free of radiation. Notably, the imaging depth of PACT is much greater than that of transmission-mode NIR imaging. Wang et al. demonstrated that the imaging depth of PACT in vivo is up to 4 cm. [33][34][35] However, there is no reported use of PACT in combination with augmented reality and mixed reality for vascular localization in vascular surgery.
Based on the above, we propose a photoacoustic-enabled automatic vascular navigation method that combines PACT with augmented reality and mixed reality for noninvasive and accurate localization of blood vessels. In this navigation strategy, PACT was used to noninvasively reconstruct 2D and 3D vascular images. 3D surface reconstruction technology was used to reconstruct a 3D surface model of the surgical surface. With the assistance of 3D point cloud registration, the 3D vascular image and 3D surface model were fused to augment the interactivity between the 3D vascular image and surgical surface on the computer screen for vascular navigation. In addition, high-resolution 2D vascular images were modulated with a miniaturization-projector-based spatial light modulator (SLM). By means of a robot operation system-based visual localization and tracking technology, the 2D vascular images were precisely superimposed on the real surgical surface, enabling the deep vessels of the real surgical site to be visualized on the surgical surface in real time. Moreover, a curved surface fitting algorithm and a zero-mean normalization idea-based visual tracking algorithm were proposed to enhance the accuracy of vascular localization. This approach provides reliable assistance for locating blood vessels noninvasively and accurately by means of augmented reality and mixed reality, promising to improve the safety and success rate of vascular operations.

Photoacoustic-Enabled Automatic Vascular Navigation
As shown in Fig. 1(a), the experimental facility for photoacoustic-enabled automatic vascular navigation consists of a PAI system, a computer vision-based projection tracking system (VPTS), and a computer. The PAI system is based on our previous work, 36 which involved a PACT system with a hyperbolic-array transducer consisting of 128 elements with a central frequency of 5.4 MHz and a nominal bandwidth of 65%. The PACT system is used to perform preoperative imaging of blood vessels; then the VPTS is used to accurately overlay preoperative photoacoustic (PA) vascular images on the surgical surface in real time. The VPTS includes an RGBD (R: red, G: green, B: blue, D: depth) camera (Intel RealSense D435i, Intel, United States) and a projector, where the projector is designed and manufactured based on an SLM (PLUTO-NIR-011, HOLOEYE, Germany). The SLM has the advantages of stable phase delay, a multifocal plane, aberration calibration by software, a simple optical engine, and high optical efficiency, 37 and it has been successfully used to build miniaturized high-resolution projectors. 38,39 During projection, the preoperative images are registered with the real surgical surface using visual localization technology, and the RGBD camera monitors the target in real time during the surgery. The pose transformation can be estimated when the target moves involuntarily; the preoperative images are then transformed and reprojected onto the surgical surface so that the vascular images remain projected in situ after the target moves. All vision-based image registration and tracking calculations are performed on a computer. The specific optical path of the projector is shown in Fig. 1(b). We use a small, common light-emitting diode (LED; CL-P10, LIGITEK, China) with red, green, white, and blue colors as the light source to increase integration. The whole optical path is divided into four parts.
1. LED lighting part. A four-color array LED with an external heat dissipation device is used to provide 10 W incoherent light to suppress the zero-order diffraction of SLM modulation.
2. Beam control part. The beam sequentially passes through a 4F beam-expansion system consisting of lens L1, a pinhole, and lens L2 (f1 = 25.4 mm, f2 = 30 mm), then passes through a polarization filter and a half-wave plate (HWF) to adjust the relative phase delay of the beam on the SLM. The incident light passes through the polarizer and HWF in sequence in front of the SLM so that the light has a specified polarization state (such as 30 deg) to maximize the SLM modulation efficiency. 40
3. SLM modulation part. This part receives LED light from the beam control part. By changing the voltage of the conductive elements under the lined-up liquid crystal on silicon, the reflection direction of the incident light can be adjusted at the resolution of the SLM. 41 To make the structure of the projector smaller, a reflector is used to reflect the light source at 45 deg. Another polarizer, acting as a polarization analyzer, is placed after the SLM in the beam path to screen out stray light generated by reflections at the surfaces of the SLM.
4. Beam focusing part. Due to the space limitation of the integration, only one focusing lens L3 (f3 = 150 mm) is used in this part to focus light on the target.
The data flow diagram of the whole system is shown in Fig. 1(c). The whole process is divided into two parallel parts, with two sensors as inputs. The first input is the PAI system, which is used to perform preoperative imaging of the target and reconstruct 2D and 3D PA vascular images. The second input is the images from the RGBD camera, which are used to reconstruct the 3D surface model of the target. At the same time, the RGBD camera is also used to locate the target in real time and estimate the pose transformation when the target moves involuntarily. Once the data obtained from the two inputs are ready, the augmented-reality and mixed-reality processing begins.

Augmented Reality and Mixed Reality
The specific implementation of augmented reality and mixed reality is shown in Fig. 2. The algorithm flow chart of the whole system is shown in Fig. 2(a). The 2D and 3D PA vascular images, together with the RGBD camera images, are the two inputs of the whole system. First, the coordinate systems must be unified. Once the homography matrix H_CP between the projector and camera is solved, the calibration is complete. The specific calibration principle is shown in Fig. 2(d). To ensure that the calibration results carry the physical characteristics of the imaging system, and thus enable accurate vascular localization after the PA vascular image is projected, we directly use the imaging data of the PA imaging system for calibration. After calibration, the system enters the concrete implementation stage, which is divided into two parts: augmented reality and mixed reality.
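As a concrete illustration, a camera-projector homography such as H_CP can be estimated from point correspondences with the direct linear transform (DLT). This is a minimal sketch, not the authors' implementation; the function names and the use of NumPy are our own assumptions.

```python
import numpy as np

def solve_homography(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst (N >= 4 point pairs)
    with the direct linear transform: H is the null vector of the stacked
    constraint matrix, taken from the smallest singular value."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2, 2] = 1

def apply_homography(H, pts):
    """Map 2-D points through H using homogeneous coordinates."""
    pts = np.asarray(pts, float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

In practice, once H_CP is known, any pixel seen by the camera can be re-expressed in projector coordinates, which is the unification step described above.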
In the augmented-reality part, RGBD images are first used to reconstruct the 3D surface of the target, 42 and principal component analysis (PCA) is applied to the 3D surface model and the 3D PA vascular image to align their directions. After that, the iterative closest point (ICP) algorithm is used to fuse the 3D surface model of the target and the 3D PA vascular image, and the fused augmented-reality image is finally displayed on the screen, as shown in Fig. 2(i). The use of PCA reduces the data dimensionality and the amount of computation while improving the accuracy and success rate of ICP registration. It can be seen that the 3D PA image does not perfectly overlap with the 3D surface model because there is an error in vision-based 3D surface reconstruction.
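A PCA-based direction alignment of this kind can be sketched as a coarse pre-alignment before ICP: match the centroids and principal axes of the two point clouds. A minimal NumPy sketch, assuming the clouds have distinct principal directions; the function names and eigenvector sign convention are our own, not the authors' code.

```python
import numpy as np

def _canonical_axes(points):
    """Centroid and principal axes (eigenvectors of the covariance matrix),
    with each axis's sign fixed by its largest-magnitude component."""
    c = points.mean(axis=0)
    _, V = np.linalg.eigh(np.cov((points - c).T))
    for j in range(V.shape[1]):
        k = np.argmax(np.abs(V[:, j]))
        if V[k, j] < 0:
            V[:, j] = -V[:, j]
    return c, V

def pca_prealign(source, target):
    """Coarsely align the source cloud to the target cloud by matching
    centroids and principal directions (initialization for ICP)."""
    src = np.asarray(source, float)
    tgt = np.asarray(target, float)
    c_s, V_s = _canonical_axes(src)
    c_t, V_t = _canonical_axes(tgt)
    R = V_t @ V_s.T              # rotate source axes onto target axes
    if np.linalg.det(R) < 0:     # guard against an accidental reflection
        V_s[:, 0] = -V_s[:, 0]
        R = V_t @ V_s.T
    return (src - c_s) @ R.T + c_t
```

ICP then refines this coarse pose with point-to-point correspondences; starting from a PCA pre-alignment makes convergence to the correct basin much more likely.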
In the mixed-reality part, the goal is to precisely project the preoperative PA vascular images on the surgical surface, which is divided into two parts: (1) preoperative image registration and (2) intraoperative image tracking.
1. For preoperative image registration, we utilize the VPTS to identify any posture of the surgical surface after the preoperative imaging procedure ends. First, the camera takes a picture of the surgical surface and transforms it into the projection coordinate system using the calibration result. Then, feature points are extracted from the PA image and from the surgical surface image in the projection coordinate system. Assuming that s_{1,i} is a feature point of the PA image and s_{2,i} is the corresponding feature point of the surgical surface, if the preoperative image is successfully registered with the surgical surface, then there is the following relationship:

s_{2,i} = R s_{1,i} + t, (1)

where R ∈ SO(3) is a rotation matrix and t is a translation vector. After defining the error e_i = s_{2,i} − (R s_{1,i} + t), the extracted feature points are used to construct a least-squares problem to solve for R and t. Supposing that there are n feature points, there is the following optimization relationship:

arg min_{R,t} (1/2) Σ_{i=1}^{n} ‖s_{2,i} − (R s_{1,i} + t)‖².

After solving the transformation relationship for R and t, the preoperative PA image s_1 in the projection coordinate system can be transformed into the projected image. However, when the target is a curved surface, a projection error will occur if the 2D images are directly projected onto it; therefore, it is necessary to design an algorithm for curved-surface fitting. The proposed curved-surface-fitting algorithm is shown in Fig. 3. After curved-surface fitting, the final projected images can be projected onto the surgical surface by the projector and registered with the real surgical surface. The specific implementation steps of preoperative image registration are shown in Fig. 2.

2. Considering the involuntary movement of the patient during surgery, target pose estimation is used to transform the projected images to reregister them with the surgical surface after the patient moves. This process is called intraoperative image tracking.
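The least-squares problem for R and t in the preoperative registration step has a well-known closed-form solution via SVD (the Kabsch/Umeyama method). A minimal sketch, assuming matched feature points are already available; the names and NumPy usage are ours, not the authors' code.

```python
import numpy as np

def solve_rigid_transform(s1, s2):
    """Closed-form least-squares solution of s2_i ~ R s1_i + t
    (Kabsch/Umeyama method via SVD of the cross-covariance matrix)."""
    s1 = np.asarray(s1, float)
    s2 = np.asarray(s2, float)
    c1, c2 = s1.mean(axis=0), s2.mean(axis=0)
    H = (s1 - c1).T @ (s2 - c2)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # enforce det(R) = +1
    D = np.diag([1.0] * (H.shape[0] - 1) + [float(d)])
    R = Vt.T @ D @ U.T
    t = c2 - R @ c1
    return R, t
```

The sign correction keeps R a proper rotation (no reflection), which matters when the feature points are nearly coplanar.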
During the surgery, the camera captures the surgical surface images at a frame rate of 30 fps, and the pose transformation after the target moves is estimated. Then, the projected images can be transformed and reprojected in situ on the surgical surface to realize naked-eye real-time visualization, as shown in Fig. 2(c), which shows the detailed principle. Before explaining it, we need to clarify several symbol definitions: T_CT is the pose transformation of the target relative to the camera and includes rotation R and translation t, where T_CT ∈ SE(3); Π is the projection function from the 3D world to the camera plane; and Π^{-1} is the back-projection function from the camera plane to the 3D world.
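For a standard pinhole model, Π and Π^{-1} can be written down directly, with the RGBD camera supplying the depth needed for back-projection. A hypothetical sketch with an assumed intrinsics matrix K (the values and function names are illustrative):

```python
import numpy as np

def project(P, K):
    """Pi: 3-D point in the camera frame -> pixel, pinhole model with
    intrinsics K; divide by the third homogeneous coordinate."""
    uvw = K @ np.asarray(P, float)
    return uvw[:2] / uvw[2]

def back_project(x, depth, K):
    """Pi^{-1}: pixel + depth (e.g., from the RGBD sensor) -> 3-D point
    in the camera frame."""
    u, v = x
    return depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
```

Composing these two functions with the pose T_CT gives exactly the pixel relationship used in the tracking derivation below.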
In the camera coordinate system, suppose that x_1 is the pixel point of the 3D point P_1 before the target moves and x_2 is the pixel point of the 3D point P_2 after the target moves. p_1 and p_2 are the pixel points of the 3D points P_1 and P_2 in the projection coordinate system, respectively, and in 3D space P_1 and P_2 have the following relationship:

P_2 = T_CT · P_1.

The pixel relationship between the two images can be obtained from the camera projection function:

x_2 = Π(T_CT · Π^{-1}(x_1)).

Based on the photometric invariance principle, a point x_2 in the image I_2 after the target moves that is most similar to x_1 in I_1 before the target moves can be found. The error is defined as

e_i = I_1(x_{1,i}) − I_2(Π(T_CT · Π^{-1}(x_{1,i}))).

Assuming that the target has N points, a least-squares problem is constructed, and the following pose optimization equation is obtained:

T_CT* = arg min_{T_CT} (1/2) Σ_{i=1}^{N} ‖e_i‖².

When the textures of the surgical surface are weak, or the environmental light changes during the surgery, the pose solution will not be accurate enough or may even be impossible to solve. 43,44 To cope with these problems, the idea of traditional template matching is added. The zero-mean normalized cross-correlation (ZNCC) function is used to calculate the cross-correlation coefficient of the two points in the two images, and the coefficient is then used as a weight in the optimization function. The final optimization equation is

T_CT* = arg min_{T_CT} (1/2) Σ_{i=1}^{N} α · ZNCC(x_{1,i}, x_{2,i}) · ‖e_i‖²,

where α is the weight, which can be adjusted according to the actual situation and experience. In the actual experiments with this system, α is set to 1.2. In addition, we use numerical optimization in the program, accounting for both convex and nonconvex objectives so that the problem converges to the global minimum. After the pose transformation T_CT is solved and combined with the calibration parameter H_CP, the vascular images can be accurately reprojected.
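The ZNCC coefficient used as a weight above can be computed per point from small image patches around x_1 and x_2. A minimal sketch (the function name and patch handling are our assumptions); by subtracting the means and normalizing, the score is invariant to brightness and contrast changes, which is exactly why it helps under changing illumination.

```python
import numpy as np

def zncc(patch1, patch2):
    """Zero-mean normalized cross-correlation of two equal-size patches.
    Returns a score in [-1, 1]; 1 means identical up to an affine
    intensity change, making the score robust to illumination shifts."""
    a = np.ravel(patch1).astype(float)
    b = np.ravel(patch2).astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    return float(a @ b / denom) if denom > 0 else 0.0
```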
In other words, the relationship between p_1 and p_2 is

p_2 = H_CP · Π(T_CT · Π^{-1}(H_CP^{-1} · p_1)).

If a projected PA vascular image S_pre is transformed, then the transformed projected image S_proj is expressed as

S_proj(p_{2,i}) = S_pre(p_{1,i}),

where S_proj(i) and S_pre(i) represent a pixel point in the projected image and in the PA image before transformation, respectively. The reason for dividing the procedure into these two processes is that the weak texture of the surgical surface makes it difficult to extract enough feature points. Fortunately, the preoperative image registration process has the promise of finding feature points on the static surgical surface; even if extraction fails, the pose of the surgical surface can be changed until it succeeds. During the intraoperative image-tracking process, however, it is more difficult to extract feature points on the moving surgical surface, which would lead to image registration failure.
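The per-pixel relation S_proj(p_{2,i}) = S_pre(p_{1,i}) amounts to warping the projected image with the solved pose. A simplified 2-D sketch using nearest-neighbor inverse mapping; it assumes a pure in-plane rotation R and translation t in projector pixels (a real implementation would work through H_CP and interpolate), and the names are illustrative.

```python
import numpy as np

def reproject_image(S_pre, R, t):
    """Warp S_pre with a 2-D rigid transform p2 = R p1 + t. Each output
    pixel p2 pulls its value from p1 = R^T (p2 - t) (inverse mapping);
    pixels whose source falls outside the image stay 0."""
    h, w = S_pre.shape
    S_proj = np.zeros_like(S_pre)
    ys, xs = np.mgrid[0:h, 0:w]
    p2 = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    p1 = np.rint((p2 - t) @ R).astype(int)   # row-wise R^T (p2 - t)
    ok = (p1[:, 0] >= 0) & (p1[:, 0] < w) & (p1[:, 1] >= 0) & (p1[:, 1] < h)
    flat = S_proj.reshape(-1)                # view into S_proj
    flat[ok] = S_pre[p1[ok, 1], p1[ok, 0]]
    return S_proj
```

Inverse mapping (pulling from the source rather than pushing to the destination) avoids holes in the warped image, which is why it is the standard choice here.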

Curved-Surface-Fitting Algorithm
To solve the problem of projection error when projecting 2D PA images on a curved surface, a curved-surface-fitting algorithm was proposed. A hemisphere phantom composed of agar was used to verify the proposed algorithm. Six groups of tungsten wires with a certain radian were laid on the hemisphere surface to simulate blood vessels. The tungsten wires were ∼13 mm long, with three different diameters of 0.5, 0.4, and 0.3 mm. The 2D PA image reconstructed after imaging is shown in Fig. 3(a). This 2D PA image is the projection of the 3D image onto a single plane, as shown in Fig. 3(b). Suppose that c′_1 c′_2 is the 2D projection of a 3D PA image on the highest plane of the target; then p′_1 and p′_2 are the points of the 2D PA image in the projection plane, p_0 is the center of the projector, o is the highest point of the 3D surface, and p_1 and p_2 are the projection points of p′_1 and p′_2, respectively. In the physical world, oc′_1 = op_1 and oc′_2 = op_2. As can be seen, if a 2D image is projected onto a 3D curved surface, the projection points p_1 and p_2 do not coincide with the real points c_1 and c_2 on the real surface. To solve this problem, the camera c_0 is introduced, as shown in Fig. 3(c). The camera is used to reconstruct a 3D surface model of the target. The real physical coordinates of each point on the 3D surface can then be obtained, and the error e_c of the 3D surface reconstruction can be calibrated by calculating the point coordinates of the surface markers. The coordinates of o, c_1, and c_2 in the 3D surface model are used to calculate the constants a_1, a_2, and b. With these variables, the curved edges oc_1 and oc_2 can be fitted by an approximate ellipse-circumference equation, treating oc_1 and oc_2 as quarter arcs of ellipses with semi-axes (a_1, b) and (a_2, b), respectively. Then, c′_1 and c′_2 are transformed into c″_1 and c″_2, respectively, such that oc″_1 = oc_1 and oc″_2 = oc_2.
At the same time, p′_1 and p′_2 in the PA image on the projection plane are transformed into p″_1 and p″_2, which are obtained from c″_1 and c″_2, respectively, through the projection relationship between the projection plane and the surface. Finally, p″_1 and p″_2 are projected onto the surface as p_1 and p_2, which coincide with c_1 and c_2. At this point, curved-surface fitting is completed. The same method is used to fit the entire surface. For different skin surfaces, different regions can be divided according to their shapes, and the proposed curved-surface-fitting algorithm can then be applied to each region. Figures 3(d) and 3(e) show the results of curved-surface fitting. There was a projection error (∼1.91 mm) when the PA image was projected on the surface before fitting in Fig. 3(d), and the projection error was significantly reduced after fitting, as shown by the white arrow in Fig. 3(e). However, due to the inevitable error in 3D surface reconstruction and the approximate surface-fitting method, a small error remained after surface fitting (∼0.16 mm).
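The essence of the curved-surface correction is remapping distances measured in the flat PA image onto arc lengths along the reconstructed surface, so that projected points land on the true surface positions. Instead of the paper's closed-form ellipse approximation, a numerical equivalent can be sketched from a surface profile (the function name and sampling scheme are our own):

```python
import numpy as np

def arclength_remap(xs, zs, s_targets):
    """Given a surface profile z(x) sampled from the apex (xs[0]), return
    the planar x whose cumulative arc length from the apex equals each
    target distance s measured in the flat PA image."""
    xs = np.asarray(xs, float)
    zs = np.asarray(zs, float)
    seg = np.hypot(np.diff(xs), np.diff(zs))       # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])    # arc length per sample
    return np.interp(s_targets, s, xs)
```

For a hemisphere of radius r, a feature at distance s in the flat image should land at planar offset r·sin(s/r), which this numerical remap reproduces.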

Localization Accuracy Verification with Phantom Experiments
To verify the localization accuracy of blood vessels during mixed reality, phantom verification experiments were conducted. In the first experiment, a blood vessel-like network was designed and fabricated using tungsten wires, which were randomly placed in agar at different heights. The tungsten wires had three different diameters of 0.5, 0.3, and 0.18 mm to simulate blood vessels of different sizes, as shown in Fig. 4(a). The size of the overall vascular-like network was 30 mm × 40 mm. The imaging time of the phantom was ∼60 s using PACT; the reconstructed 2D PA image is shown in Fig. 4(b). Surface fitting was first conducted, and then localization of blood vessels for preoperative image registration was performed. The nonregistered and registered results are shown in Figs. 4(c1) and 4(c2), respectively. To keep the target within the projection area (40 mm × 45 mm) as much as possible, a movement of 6 mm in the x direction and 3 mm in the y direction was chosen. The image reprojection results after target movement during intraoperative image tracking are shown in Figs. 4(d1) and 4(d2), respectively. The above steps were repeated 10 times, and the image projection errors were quantified. As shown in Fig. 4(e), the projection errors of both the preoperative image registration and the intraoperative image tracking did not exceed 0.8 mm in the projection area.
In the second experiment, we adopted the hemispherical phantom shown in Fig. 3 and verified the accuracy of vessel localization by repeating the same procedure 10 times and calculating the projection errors in the preoperative image registration and intraoperative image-tracking processes. The final results are shown in Fig. S1 in the Supplementary Material. To better evaluate the ability to locate blood vessels in mixed reality, comprehensive statistics of the two validation experiments were compiled, and a box diagram was plotted to more intuitively represent the error distribution. As shown in Fig. 4(f), the deviation of mean projection errors between the two experiments was less than 0.1 mm, indicating stable localization performance. The maximum error of the two validation experiments did not exceed 0.75 mm.
The detailed error calculation method for the two phantom experiments is shown in Fig. S2 in the Supplementary Material. Detailed demonstration videos of phantom experiments 1 and 2 are available in Videos 1 and 2. In addition, light sources of different colors can be switched to adapt to different environments so that the best projection effect can be achieved. The projection results of the two phantom experiments after switching the light source to green are shown in Fig. S3 in the Supplementary Material.

Validation Experiments of Vascular Localization in Vivo
We further verified the vascular localization ability in vivo. An area of 30 mm × 38 mm on the thigh of a living rabbit was first selected, as shown in the white dashed box in Fig. 5(a). Two copper wire marks, P1 and P2, were randomly placed in the selected area for registration; they move simultaneously with the surgical surface, so the pose transformation of the entire surgical surface can be obtained by directly solving the pose transformation of the copper wire marks. In addition, the copper wire marks are useful for judging the accuracy of vascular localization. The marks were 7 mm in length and 0.5 mm in diameter. PACT was performed in the selected area. A 1064 nm laser (VIBRANT, OPOTEK Inc., United States) with a pulse width of 8 to 10 ns and a repetition rate of 10 Hz was used to excite PA signals. The 1064-nm wavelength was chosen because it has a deeper penetration depth than the 532-nm wavelength, which enhances the imaging depth. The laser fluence (∼20 mJ/cm²) used in the in vivo experiments was well within the American National Standards Institute safety limit for laser exposure (100 mJ/cm² at 1064 nm at a 10-Hz pulse repetition rate). 45 All in vivo experiments complied with the ethical review of South China Normal University (review number: SCNU-BIP-2022-044). To visualize smaller blood vessels, a small line spot was chosen; the size of the laser beam focus on the tissue surface was 35 mm × 2 mm. After 60 s of imaging, a 2D PA image was reconstructed, as shown in Fig. 5(b). During image reconstruction, 10 sets of acquired radio frequency (RF) data for each B-scan were averaged to reduce image artifacts due to respiratory jitter; the imaging speed can be increased by reducing the number of RF data sets acquired per B-scan, but the imaging quality may be reduced. The skin and tissue signals were removed from this PA image to exclude information that was not of interest.
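The B-scan averaging step can be sketched in one line; for zero-mean random noise, averaging N acquisitions reduces the noise amplitude by roughly 1/sqrt(N), which is the trade-off against acquisition speed noted above (the function name is ours):

```python
import numpy as np

def average_rf(rf_sets):
    """Average N repeated RF acquisitions for one B-scan position to
    suppress jitter artifacts; random noise amplitude drops ~ 1/sqrt(N)."""
    return np.mean(np.asarray(rf_sets, float), axis=0)
```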
We could calculate that the maximum imaging depth was ∼2.9 mm from the tomography image along the white dashed line in Fig. 5(b), as shown in Fig. 5(c). According to the previous section, curved-surface fitting is first required. Then, preoperative image registration can be carried out using marks P1 and P2. This process takes only 10 s if the feature extraction goes well; otherwise, the pose of the surgical surface needs to be changed until the feature extraction is successful. According to data from multiple experiments, the process time remains within 1 min. The nonregistered and registered results on the surface are shown in Figs. 5(d) and 5(e), in which the part indicated by the white arrow represents the projected PA vascular image that was accurately registered with the visible blood vessel on the rabbit thigh. When the position of the thigh moved, the PA vascular images were reprojected in real time and remained registered with the visible blood vessels and marks on the thigh, as shown in Figs. 5(f1)-5(f3). The white dashed lines show movement up, down, and left. The specific demonstration video is given in Video 3. The white dashed box in Fig. 5(f3) shows the missing blood vessels; this is due to the image loss caused by the rotation and translation of the 2D image after the target moved. The missing area can be identified as lying outside the projection area, but this did not affect the accurate visualization of blood vessels on the surgical surface once the target returned to the projection area. The projection area in the current experiment is set to 40 mm × 45 mm by the software. This projection area can be changed; its size depends mainly on how large an area corresponding to the PA image is used for calibration, but changing it requires a new camera-projector calibration.
Through this mixed-reality method, the subcutaneous blood vessels can be directly visualized on the surgical surface in real time by the naked eye, which can be very convenient for noninvasive and accurate localization of blood vessels. As shown in Fig. 5(g), approximately 2.9-mm-deep vessels and ∼0.15 mm diameter vessels could be directly visualized on the surface of the rabbit thigh.
To further quantify the localization accuracy of blood vessels in vivo, we calculated the vascular localization accuracy during the entire mixed-reality demonstration on the rabbit thigh. Since the deep blood vessels were projected on the surgical surface, we could not calculate the localization error of the deep blood vessels directly. Therefore, we used whether the marks randomly placed on the surgical surface coincided with their projection images as the error calculation standard. As shown in Fig. 5(g), four endpoints, B1, B2, B3, and B4, on the real marks and four endpoints, A1, A2, A3, and A4, on their projection images were selected to calculate the root mean square error (RMSE), which was then converted into an actual error. The demonstration video (Video 3) on the rabbit thigh is 85 s long; it was divided into 425 frames whose errors were calculated to form an error statistical graph, as shown in Fig. 5(h). As seen in the green dashed boxes of the statistical graph, there were large errors at the beginning of the experiment when the images were not yet registered. After registration, the errors dropped quickly. In the rabbit thigh demonstration, the movement occurred after 17 s; after the target moved, vascular image reprojection was performed. The pink dotted box indicates the relocalization errors after the movement. According to the statistical graph, the relocation time was within 1 s. The average RMSE in the demonstration video of the rabbit thigh was calculated to be 8.6 pixels, corresponding to an actual average error of 0.72 mm.
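The endpoint-based error metric can be sketched as follows; the pixel-to-millimeter scale (about 0.72/8.6 ≈ 0.084 mm per pixel, implied by the reported numbers) is passed in as a parameter, and the function name is illustrative.

```python
import numpy as np

def localization_error(proj_pts, real_pts, mm_per_pixel):
    """RMSE between mark endpoints in the projected image and on the real
    surface, returned in pixels and converted to millimeters."""
    d = np.asarray(proj_pts, float) - np.asarray(real_pts, float)
    rmse_px = float(np.sqrt(np.mean(np.sum(d * d, axis=1))))
    return rmse_px, rmse_px * mm_per_pixel
```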
In addition, we also demonstrated the use of mixed reality to locate blood vessels in real time on the skin surface of the human arm; the results are shown in Fig. S4 in the Supplementary Material and Video 4. After calculation, the average RMSE in the demonstration video of the human arm was 11.7 pixels, and the actual average error was 0.89 mm. To analyze the statistical significance of the results, we performed a mixed ANOVA on the localization accuracy of blood vessels in the rabbit thigh and the human arm, with a significance threshold of p = 0.005, as shown in Fig. 5(i). As can be seen, the average values of the two sets are close, indicating that our proposed method is applicable to both animals and human beings and still works at different body sites. The marked *** symbols indicate statistical significance (p < 0.001) between the experimental conditions. After statistical analysis, our p-value was 5.75 × 10⁻⁷, showing that the calculated vascular localization accuracy is statistically significant. The results show very high vascular localization accuracy, better than the 3.47-mm vascular localization error previously reported for perforator flap surgery using the Microsoft HoloLens. 9 It is also better than the minimum error of 1.35 mm in a previously reported phantom experiment verifying vascular localization error with the HoloLens, 46 and better than the 1.7 mm error reported for vascular localization using the VascuLens, 13 well within the clinically acceptable vascular localization accuracy range of 5 mm for perforator flap surgery. 10 The specific vascular localization accuracy comparison is shown in Table 1, where cases 1 and 2 are cases of vascular localization using the combination of CTA and augmented reality, and case 3 is a case of vascular localization using the combination of CTA and mixed reality. This table shows that our method has high performance in vascular localization.

Ability of Augmented Reality to Assist Vascular Localization
In this work, we proposed combining augmented reality and mixed reality to provide reliable help for rapid and accurate localization of blood vessels. The experiments above all verified the vascular localization performance in mixed reality, but the ability of augmented reality to assist vascular localization cannot be ignored. Therefore, a region of 30 mm × 38 mm on a human arm was selected, shown as region of interest (ROI) A in Fig. 6(b). Two copper wire marks, F1 and F2, were randomly placed in the area for registration; the marks were 6 mm long and 0.5 mm in diameter. PACT was first performed on the selected area, with an imaging time of 60 s. Different from the previous experiment on the rabbit thigh, the laser beam focus on the tissue surface was 35 mm × 5 mm to achieve a greater imaging depth. The reconstructed 3D vascular image is shown in Fig. 6(c); blood vessels 7 mm under the skin can be clearly visualized. Then the RGBD camera was used to reconstruct the 3D surface model of the arm, which was fused with the 3D PA vascular image into an augmented-reality model in a 3D point cloud space. The fused model was displayed on the computer screen as a dense point cloud using the PCL libraries and C++ programming, as shown in Fig. 6(a). The final augmented-reality model can be rotated and scaled in 3D space; a detailed demonstration is shown in Video 5, and ROI B was selected to visualize the results of rotation and scaling, as shown in Figs. 6(d)-6(g). Moreover, the coordinates of each point of the augmented-reality model in the 3D point cloud space can be easily obtained, so the position and structure relationships between the 3D vascular image and the 3D surface model can be calculated from the 3D coordinate information. As shown in Fig. 6(e), the coordinates of points in 3D space can be used to measure these relationships.
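The actual pipeline builds the fused model with the PCL libraries in C++; as a language-agnostic toy sketch of the coordinate-based measurement it enables, the fused model can be treated as two registered point clouds, and a vessel point's depth under the skin approximated by its distance to the nearest surface point (all clouds and values below are illustrative, not the paper's data):

```python
import math

def nearest_distance(point, cloud):
    """Euclidean distance from one point to its nearest neighbor in a cloud."""
    return min(math.dist(point, q) for q in cloud)

def vessel_depths(vessel_cloud, surface_cloud):
    """Approximate per-point depth of vessels beneath the reconstructed skin
    surface as the distance to the nearest surface point."""
    return [nearest_distance(p, surface_cloud) for p in vessel_cloud]

# Toy clouds already registered into one coordinate frame (units: mm).
surface = [(x, y, 0.0) for x in range(0, 30, 5) for y in range(0, 38, 5)]
vessel = [(10.0, 10.0, -7.0), (15.0, 20.0, -2.9)]

fused = surface + vessel               # the "augmented-reality" model: one cloud
print(vessel_depths(vessel, surface))  # [7.0, 2.9]
```

A real implementation would use a k-d tree (e.g., PCL's nearest-neighbor search) instead of this brute-force scan.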
Moreover, the subcutaneous vessels can be directly visualized through the skin by enlarging the augmented-reality model to provide a see-through perspective, which conveniently provides reliable information for preoperative planning. Note that vascular localization accuracy was quantified in the mixed-reality experiments rather than in the augmented-reality visualization.

Discussion
This work verified the accuracy of vessel localization in phantoms and in vivo. The localization error was <0.75 mm in the phantoms, and the average localization error was <0.89 mm in vivo, which demonstrates the excellent performance of our vascular localization strategy. The imaging area for the in vivo experiments was 30 mm × 38 mm, but this area is not fixed: a larger imaging area can be selected to cover the clinically required region. Our PACT system was limited by the laser energy, so the maximum imaging depth in this work was 7 mm, whereas PACT has been demonstrated to reach an imaging depth of 4 cm. 33-35 If the hardware is improved so that deeper and richer vascular information can be visualized on the surgical surface, the proposed method will be even more helpful for rapidly and accurately locating blood vessels in clinical surgery. The proposed method showed excellent vascular localization performance in the demonstrations: even when the living tissue produced slight nonrigid deformation, it could still stably and accurately visualize the vascular images on the body surface in real time. However, when the nonrigid deformation of the tissue on the surgical surface increased, the projection error also increased. As shown in the error statistics in Fig. 5(h), the error rose from 34 to 38 s because of nonrigid tissue deformation; nevertheless, localization accuracy recovered once the deformation subsided. Introducing a mechanical model or using deep learning to predict nonrigid deformation and compensate for the increased errors is a direction for future study. In addition, vision-based 3D reconstruction has inevitable errors arising from depth measurement or depth estimation; measured against the real size, this error was ∼1.3 mm.
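The tracking's stability under changing conditions plausibly stems from the zero-mean normalization idea named in this work: zero-mean normalized cross-correlation (ZNCC) is invariant to brightness offsets and contrast gains between a template and a scene patch. The paper does not spell out its tracking algorithm at this level, so the following is only a minimal 1D illustration of the ZNCC principle itself:

```python
import math

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equal-length patches.
    Returns a value in [-1, 1]; 1 means a perfect (affine-invariant) match."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [x - ma for x in a]              # subtract the mean (the "zero-mean" step)
    db = [x - mb for x in b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

template = [10, 50, 30, 80, 20]
# The same intensity pattern under a contrast gain and brightness offset:
relit = [v * 1.5 + 40 for v in template]
print(zncc(template, relit))  # 1.0 (up to float rounding): illumination-invariant match
```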
This error exists throughout the system, even though some error corrections were made in engineering, such as calibration of the reconstructed 3D surface model. The error was also considered in constructing the approximate elliptical model to improve the accuracy of curved-surface fitting in the 2D projection images. However, a projection error (∼0.16 mm) remained after surface fitting, so more accurate and robust algorithms need to be designed in the future to further reduce this error and improve the accuracy of vascular localization. In this work, we adopted the scheme of imaging before real-time projection instead of real-time PA imaging, a deliberate choice aimed at clinical application: PA imaging requires acoustic coupling, real-time PA imaging would block the projection, and the clinical workflow must leave time for physicians to evaluate the images. Therefore, imaging before real-time projection is the solution that fits the application scenario. Because the SLM is a wavelength-sensitive device, it cannot modulate different wavelengths of light simultaneously, so we cannot project a depth-encoded image to carry the depth information of the blood vessels. With time-division multiplexing, a vascular image with depth information could be projected, which may be implemented in future work.
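The paper does not detail its approximate elliptical model, but the underlying geometry can be illustrated: when a flat vascular image is projected from above onto a limb whose cross-section is roughly elliptical, distance measured along the curved skin exceeds the planar offset, so a vessel at a given on-skin arc length must be projected at a smaller planar coordinate. The sketch below is a hypothetical numerical illustration of that correction (function names, semi-axes, and values are assumptions, not the paper's algorithm):

```python
import math

def ellipse_arc_length(a, b, x_end, n=1000):
    """Arc length along the skin profile z = b*sqrt(1 - (x/a)^2)
    from x = 0 (apex) to x = x_end, by piecewise-linear summation."""
    s, x_prev, z_prev = 0.0, 0.0, b
    for i in range(1, n + 1):
        x = x_end * i / n
        z = b * math.sqrt(max(0.0, 1 - (x / a) ** 2))
        s += math.hypot(x - x_prev, z - z_prev)
        x_prev, z_prev = x, z
    return s

def flat_to_curved_x(a, b, s, tol=1e-6):
    """Planar projector x-coordinate whose on-surface arc length equals s,
    found by bisection (arc length grows monotonically with x)."""
    lo, hi = 0.0, a
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if ellipse_arc_length(a, b, mid) < s:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# A vessel 10 mm along the skin of a limb modeled with semi-axes 20 mm x 15 mm
# lands at a planar offset smaller than 10 mm:
print(flat_to_curved_x(20.0, 15.0, 10.0))
```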

Conclusion
Our proposed and experimentally demonstrated photoacoustic-enabled automatic vascular navigation method can noninvasively and accurately locate blood vessels under the navigation of PA images on the surgical surface. This is the first study to utilize PACT in conjunction with augmented and mixed reality for accurate and naked-eye real-time visualization of deep-seated vessels. The PACT used in this system can specifically identify blood vessels with high resolution and sufficient depth. In this work, we used PACT to noninvasively image vessels in the rabbit thigh with a minimum diameter of ∼0.15 mm and a maximum depth of ∼2.9 mm, which is not possible with ultrasound, CTA, or NIR imaging. In addition, we proposed a curved-surface-fitting algorithm and a zero-mean normalization idea-based visual-tracking algorithm to achieve high-precision and low-latency vessel localization; with these two algorithms, the average vessel localization error was within 0.89 mm, and the vessel relocation latency was within 1 s. Moreover, augmented reality gives doctors a see-through view in the constructed 3D space, providing reliable information, such as the position and structure relationships between blood vessels and the surgical surface, for preoperative planning. Finally, mixed reality gives doctors a see-through view in the real world by directly projecting deep vessels onto the surgical surface.