This PDF file contains the front matter associated with SPIE Proceedings Volume 11319, including the Title Page, Copyright information, Table of Contents, Author and Conference Committee lists.
Photoacoustic imaging (PAI) is a non-ionizing modality that provides high spatiotemporal resolution and resolves optical absorbers at depth, enabling quantitative molecular imaging with the potential for simultaneous therapy monitoring. Liposome-encapsulated J-aggregates of indocyanine green (Lipo-JICGs) are a PAI agent that can be antibody-targeted to provide high-contrast imaging. Mice bearing ovarian tumors were injected with either targeted or untargeted Lipo-JICGs and imaged on the MSOT inVision PAI system before injection and immediately, 30 minutes, and 1 hour post-injection. Lipo-JICG contrast in the tumor was statistically significantly higher in the targeted mice, indicating successful molecular targeting and accurate unmixing of Lipo-JICGs in vivo.
In cases of child abuse, the leading cause of death is head trauma. Because the skull and brain are still developing, the vast majority of victims are less than one year old. An estimated 4 cases of abusive head trauma (AHT) occur for every 10,000 babies born each year, with a 25% mortality rate. In standard practice, infants undergo X-ray computed tomography (CT) or magnetic resonance imaging (MRI) when AHT is suspected. Repeated imaging of these patients is desirable, since the symptoms associated with AHT change over time; by measuring the rate of change, the precise time and likely cause of the trauma can be more easily identified. CT and MRI are not attractive options for such repeated imaging due to radiation exposure, high cost, and unavailability at the bedside. Imaging-based diagnosis of head trauma rests on several factors, including the presence of skull fractures, subdural hematoma, and hydrocephalus. Thermoacoustic combined tomography (TACT), which synergistically integrates ultrasound tomography (UT), photoacoustic tomography (PAT), and radio frequency acoustic tomography (RAT), is uniquely suited to imaging these symptoms at the point of care. To verify this hypothesis, in silico experiments were performed on clinical MRI data of infants using our custom-developed TACT simulation platform. Speed-of-sound measurements made with ultrasound can identify skull fractures; photoacoustics is sensitive to the molecular contrast of blood and can image subdural hematomas; and radio frequency acoustic tomography is most sensitive to cerebrospinal fluid, which is necessary for imaging hydrocephalus. Since this technology may be used for repeated imaging at the bedside, multimodal TACT images can provide crucial information on the timing, and therefore the cause, of infant head trauma, as well as monitor the effect of treatment.
Protoacoustics, the measurement of the pressure waves emitted by thermal expansion resulting from proton dose deposition, may be used to obtain the depth of the Bragg peak (BP) by measuring the time-of-flight of the pressure wave. Clinical use of the method has a drawback, however: numerous measured signals must be averaged to distinguish the true signal peak from the noise. We propose a wavelet-based denoising method that significantly reduces noise in collected protoacoustic signals and improves BP identification accuracy with fewer signal averages. The 1024-average signal, as used in a previously published study, served as the reference for identifying the true acoustic peak location. We used a Daubechies-4 (db4) wavelet transform to decompose the collected signals and recover the underlying protoacoustic signal. Our approach was able to identify the BP signal with as few as 8 averages, corresponding to a dose of <1.5 Gy. This denoising technique would be useful for future 2D/3D protoacoustic imaging and could make protoacoustics clinically practical for proton range verification.
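As a rough sketch of the idea, the fragment below denoises a synthetic pulse by soft-thresholding wavelet detail coefficients. It is a toy illustration, not the authors' pipeline: it uses a single-level Haar transform in place of the db4 decomposition, and the pulse shape, noise level, and threshold are invented for the example.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar wavelet transform: returns (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def haar_idwt(a, d):
    """Inverse one-level Haar transform."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def denoise(signal, threshold):
    """Soft-threshold the detail coefficients and reconstruct."""
    a, d = haar_dwt(signal)
    d = np.sign(d) * np.maximum(np.abs(d) - threshold, 0.0)  # soft threshold
    return haar_idwt(a, d)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
clean = np.exp(-((t - 0.5) ** 2) / 0.001)          # stand-in for a BP acoustic pulse
noisy = clean + 0.2 * rng.standard_normal(t.size)  # invented noise level
denoised = denoise(noisy, threshold=0.25)
```

A deeper, db4-based decomposition (e.g. via PyWavelets) would remove noise more aggressively; the single-level Haar version keeps the sketch dependency-free.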
Doppler-ultrasound-based methods are widely used to investigate vascular hemodynamics and quantify blood flow dynamics. However, conventional Doppler techniques are limited to the investigation of axial flow, as they are not sensitive to the transverse flow component. In non-invasive imaging of the human vasculature, most blood vessels run parallel to the skin surface, so the flow is transverse and conventional Doppler estimates are not reliable. To tackle this, various methods such as cross-beam Doppler, cross-correlation methods, and multi-transmit multi-receive schemes have been developed in the recent past. Most of these strategies use plane waves electronically steered at different angles for insonification. In this work, the performance of a triangulation-based algorithm with a non-steered plane-wave transmit is investigated for transverse flows. The algorithm makes a best fit of the triangulation estimates over different receive angles, so that the variability of the vector estimate with receive angle is reduced. The performance of the developed algorithm is evaluated with extensive simulations using Field II. The algorithm is tested with different flow profiles at different velocities; the estimates are promising, with accuracy comparable to that of other vector flow imaging methods.
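The best-fit step over receive angles can be sketched as a least-squares problem. This is a hypothetical illustration, not the authors' implementation: the receive angles, the transmit/receive-bisector model for the Doppler-sensitive direction, and the noiseless projected velocities are all assumptions made for the example.

```python
import numpy as np

# Receive steering angles (radians); transmit is an unsteered plane wave (0 rad).
rx_angles = np.deg2rad([-15, -10, -5, 5, 10, 15])

# True velocity (vx, vz) in m/s: purely transverse flow.
v_true = np.array([0.5, 0.0])

# For a 0-rad transmit and receive angle theta, the Doppler-sensitive direction
# is the bisector of the transmit and receive directions.
U = np.column_stack([np.sin(rx_angles / 2), np.cos(rx_angles / 2)])

# Simulated projected (Doppler) velocities measured at each receive angle.
m = U @ v_true

# Best-fit velocity vector over all receive angles (least squares), which
# averages out the per-angle variability of individual triangulations.
v_est, *_ = np.linalg.lstsq(U, m, rcond=None)
```

With noisy per-angle measurements, the same `lstsq` call would return the variance-reducing best fit rather than the exact vector.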
B-mode ultrasound displays hyperechoic and hypoechoic targets as larger and smaller, respectively, than the true structure. A method to correct for this distortion would enable B-mode imaging to better represent the true structure. In this work, we investigated training deep neural network (DNN) beamformers to reduce this B-mode sizing distortion. Aperture-domain DNN beamformers were trained on data generated from simulated anechoic cysts, with the objective of suppressing signals originating from inside the cyst while preserving signals originating from outside the cyst. The results suggest that DNN beamformers can be trained to reduce B-mode sizing distortions.
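The training-target construction described above can be sketched as below. This is a hypothetical illustration of the labeling rule only (no network or training loop): the data shapes and cyst geometry are invented, and the simple in-plane scatterer/cyst model is an assumption.

```python
import numpy as np

def make_training_pair(channel_data, scatterer_xz, cyst_center, cyst_radius):
    """Return (input, target) for one aperture-domain training sample.

    channel_data : (n_channels,) aperture-domain signal for one pixel/depth
    scatterer_xz : (x, z) origin of the signal
    If the signal originates inside the anechoic cyst, the DNN target is
    all-zeros (suppress); otherwise the target equals the input (preserve).
    """
    inside = np.hypot(scatterer_xz[0] - cyst_center[0],
                      scatterer_xz[1] - cyst_center[1]) < cyst_radius
    target = np.zeros_like(channel_data) if inside else channel_data.copy()
    return channel_data, target

rng = np.random.default_rng(1)
sig = rng.standard_normal(64)              # stand-in for aperture-domain data
x_in,  y_in  = make_training_pair(sig, (0.0, 20.0), (0.0, 20.0), 3.0)   # inside cyst
x_out, y_out = make_training_pair(sig, (10.0, 20.0), (0.0, 20.0), 3.0)  # outside cyst
```

A regression network trained on many such pairs learns to attenuate in-cyst signal while passing surrounding speckle, which is what shrinks the apparent cyst distortion.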
In this work, we present a machine learning method to guide an ultrasound operator toward a selected area of interest. Unlike most other medical imaging modalities, ultrasound imaging is one in which the operator's skill and training are critical to obtaining high-quality images. Additionally, recent advances in the affordability and portability of ultrasound technology have increased its use by non-experts. Thus, there is a growing need for intelligent systems that can assist ultrasound operators in both clinical and non-clinical scenarios. We propose a system that leverages machine learning to map real-time ultrasound scans to transformation vectors that can guide a user to a target organ or anatomical structure. We present a unique training system that passively collects supervised training data from an expert sonographer and uses this data to train a deep regression network. Our results show that we are able to recognize anatomical structures in ultrasound images and guide the user toward obtaining an ideal image.
Background: The differential diagnosis of benign and malignant thyroid nodules from ultrasound (US) images remains challenging in clinical practice. We aimed to develop and validate a highly automatic and objective diagnostic model, named deep learning Radiomics of thyroid (DLRT), for this task. Methods: We retrospectively enrolled US images and corresponding fine-needle aspiration biopsies from 1645 thyroid nodules. A basic convolutional neural network (CNN) model, a transfer learning model, and the newly designed DLRT model were investigated. Their diagnostic accuracy was further compared with that of human observers (one senior and one junior US radiologist). Results: For the differential diagnosis of benign and malignant thyroid nodules, the AUCs of DLRT were 0.96 (95% confidence interval [CI]: 0.94-0.98) and 0.95 (95% CI: 0.93-0.97) in the training and validation cohorts, respectively, significantly better than those of the other deep learning models (P < 0.05) and the human observers (P < 0.05). Conclusions: DLRT shows the best overall performance compared with other deep learning models and human observers. It holds great promise for improving the differential diagnosis of benign and malignant thyroid nodules.
Accurate modeling of the right ventricle (RV) of the human heart is important for both diagnosis and treatment planning. The RV has a compound convex-concave shape with several sharp edges. While the RV has previously been modeled using the Doo-Sabin method, such models require several extra control nodes to accurately reproduce the relatively sharp edges. This paper proposes a modified Doo-Sabin method that controls sharpness by weighting vertices and edges rather than by adding extra nodes. We compare standard and sharp Doo-Sabin models of the RV on 16 3D ultrasound scans against ground-truth mesh models manually drawn by a cardiologist. The modified, sharp Doo-Sabin method came closer to the ground truth in 11 of 16 cases and showed an average improvement of 11.54%.
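To give a flavor of sharpness weighting without implementing full Doo-Sabin subdivision, the sketch below applies a weighted Chaikin-style corner-cutting rule to a 2D polygon, a one-dimension-lower analogue of weighting vertices instead of adding control nodes. The scheme and weights are illustrative assumptions, not the paper's actual surface rule.

```python
import numpy as np

def subdivide(poly, sharpness):
    """One round of Chaikin-style corner cutting on a closed 2D polygon.

    sharpness[i] in [0, 1]: 0 gives the standard smoothing rule, 1 pins the
    new points onto vertex i, preserving a sharp corner (a 2D stand-in for
    vertex/edge weights in a modified Doo-Sabin scheme).
    """
    n = len(poly)
    out = []
    for i in range(n):
        p, q = poly[i], poly[(i + 1) % n]
        w_p, w_q = sharpness[i], sharpness[(i + 1) % n]
        # Standard Chaikin points sit 1/4 and 3/4 along each edge;
        # sharpness pulls each new point back toward its parent vertex.
        out.append(p + (1 - w_p) * 0.25 * (q - p))
        out.append(q + (1 - w_q) * 0.25 * (p - q))
    return np.array(out)

square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
smooth = subdivide(square, sharpness=[0, 0, 0, 0])
sharp  = subdivide(square, sharpness=[1, 0, 0, 0])   # keep the corner at (0, 0)
```

With all-zero weights every corner is rounded; a weight of 1 at a vertex keeps that corner exact after subdivision, mirroring how the modified Doo-Sabin scheme preserves the RV's sharp edges.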
We developed deep learning Radiomics of elastography (DLRE) as a noninvasive method to assess liver fibrosis stage, which is essential for the prognosis and surveillance of chronic hepatitis B (CHB) patients. Methods: 297 patients were prospectively enrolled from 4 hospitals, and 1485 images were randomly selected for analysis. DLRE adopts a convolutional neural network (CNN) based on transfer learning, one of the deep learning radiomic techniques, for the automatic analysis of 2D-SWE images. This study assessed the accuracy of DLRE in comparison with 2D-SWE, transient elastography (TE), the aspartate aminotransferase-to-platelet ratio index (APRI), and the fibrosis index based on four factors (FIB-4), using liver biopsy as the gold standard. Results: AUCs of DLRE were 0.98 for both cirrhosis (95% confidence interval [CI]: 0.95-0.99) and advanced fibrosis (95% CI: 0.94-0.99), significantly better than the other methods, and 0.76 (95% CI: 0.72-0.81) for significant fibrosis (significantly better than APRI and FIB-4). Conclusions: DLRE shows the best overall performance in predicting liver fibrosis stage compared with 2D-SWE, TE, and serological examinations.
A routine 3D transrectal ultrasound (TRUS) volume is usually captured with a large slice thickness (e.g., 2-5 mm). Such ultrasound images with low out-of-slice resolution affect contouring and needle/seed detection in prostate brachytherapy. The purpose of this study was to develop a deep-learning-based method to construct high-resolution images from routinely captured prostate ultrasound images for brachytherapy. We propose to integrate a deeply supervised attention model into a Generative Adversarial Network (GAN)-based framework to improve ultrasound image resolution. Deep attention GANs are introduced to enable end-to-end encoding-and-decoding learning. Next, an attention model is used to retrieve the most relevant information from the encoder. A residual network is used to learn the difference between low- and high-resolution images. This technique was validated with 20 patients, using leave-one-out cross-validation to evaluate the proposed algorithm. Our reconstructed high-resolution TRUS images, generated from down-sampled images, were compared with the originals to evaluate performance quantitatively. The mean absolute error (MAE) and peak signal-to-noise ratio (PSNR) of image intensity profiles between reconstructed and original images were 6.5 ± 0.5 and 38.0 ± 2.4 dB, respectively.
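The reported evaluation metrics can be computed as below. This is a generic sketch of MAE and PSNR on hypothetical images, not the authors' evaluation code; the peak value and noise level are assumptions.

```python
import numpy as np

def mae(ref, img):
    """Mean absolute error between two equally shaped images."""
    return np.mean(np.abs(ref.astype(float) - img.astype(float)))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB for a given dynamic-range peak."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(2)
original = rng.integers(0, 256, size=(64, 64))                     # stand-in TRUS slice
degraded = np.clip(original + rng.normal(0, 5, size=(64, 64)), 0, 255)
```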
3D transmission ultrasound with 360-degree compounded reflection has been shown to be an effective basis for quantitative assessment of breast density on a continuous scale compatible with existing FDA-approved methods. Breast density is an important risk factor in several breast cancer risk models. Unfortunately, methods based on projections (e.g., mammography) or even tomosynthesis do not fully represent the true topological diversity and complexity of the human breast. At present, the reflection image is important for delineating the breast volume from the water bath. However, the reflection data and/or image may be unavailable in some scenarios due to scanner design or equipment malfunction, and other data may be missing or not collected for specific, perhaps economic, reasons. The Spearman rank coefficient for correlation of the 3D transmission-and-reflection-ultrasound-based quantitative breast density (QBD) was 93%, which decreased to 91.5% when the reflection image/data were removed, and increased again to 95% when smoothing was applied to the speed and attenuation images. The results indicate that even without the reflection data, 3D transmission ultrasound characterization of the tissue yields QBD values commensurate with FDA-approved methods. This may make the construction of certain quantitative breast density estimator devices more economical and useful.
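A Spearman rank coefficient of the kind reported can be computed as follows. The paired QBD values are invented for illustration, and the simple ranking ignores ties (adequate for distinct values).

```python
import numpy as np

def spearman_r(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks (no tie handling)."""
    def ranks(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(len(v))
        return r
    return np.corrcoef(ranks(np.asarray(x)), ranks(np.asarray(y)))[0, 1]

# Hypothetical paired density estimates: full transmission+reflection QBD
# versus QBD computed with the reflection data removed.
qbd_full  = [0.12, 0.25, 0.31, 0.44, 0.58, 0.63, 0.71, 0.85]
qbd_noref = [0.10, 0.30, 0.27, 0.47, 0.55, 0.66, 0.69, 0.88]
r = spearman_r(qbd_full, qbd_noref)
```

For data with ties, `scipy.stats.spearmanr` applies the proper mid-rank correction.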
Left ventricular ejection fraction (LVEF) assessment is instrumental for cardiac health diagnosis, patient management, and determining patient eligibility for clinical studies. Due to its non-invasiveness and low operational cost, ultrasound (US) imaging is the most commonly used modality to image the heart and assess LVEF. Even though 3D US imaging technology is becoming more available, cardiologists predominantly use 2D US imaging to visualize the LV blood pool and interpret its area changes between end-systole and end-diastole. Our previous work showed that LVEF estimates based on area changes are significantly lower than the true volume-based estimates, by as much as 13% [1], which could lead to unnecessary and costly therapeutic decisions. Acquiring volumetric information about the LV blood pool requires either time-consuming 3D reconstruction or 3D US image acquisition. Here, we propose a method that leverages a statistical shape model (SSM), constructed from 13 landmarks depicting the LV endocardial border, to estimate a new patient's LV volume and LVEF. Two variants for estimating the 3D LV geometry were employed, with and without size normalization. The SSM was built using the 13 landmarks from 50 training patient image datasets. The Mahalanobis distance (with size normalization) or the vector distance (without size normalization) between an incoming patient's LV landmarks and each shape in the SSM was then used to determine the weight each training patient contributes to describing the new patient's LV geometry and associated blood pool volume. We tested the proposed method by estimating the LV volumes and LVEF for 16 new test patients. The estimated LVEFs based on the Mahalanobis distance and the vector distance were within 2.9% and 1.1%, respectively, of the ground-truth LVEFs calculated from the 3D reconstructed LV volumes.
Furthermore, we explored the viability of using fewer principal components (PCs) by reducing the number of PCs retained when projecting landmarks onto the PCA space. LVEFs estimated with 3, 5, and 10 PCs are within 6.6%, 5.4%, and 3.3%, respectively, of the estimates using the full set of 39 PCs.
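The distance-weighted volume estimate can be sketched as follows. Everything here is synthetic: the training shapes, the toy volume-shape coupling, the number of retained PCs, and the inverse-distance weighting rule are assumptions standing in for the paper's actual SSM.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical training set: 50 patients, 13 landmarks (x, y, z) flattened
# to 39 values, plus a known LV blood-pool volume (mL) per patient.
train_shapes = rng.normal(0, 1, size=(50, 39))
train_volumes = 80 + 10 * train_shapes[:, 0]       # toy volume-shape coupling

# Build the SSM: mean shape and PCA basis via SVD of the centered shapes.
mean_shape = train_shapes.mean(axis=0)
X = train_shapes - mean_shape
U, s, Vt = np.linalg.svd(X, full_matrices=False)
n_pc = 10
basis = Vt[:n_pc]                                   # retained principal axes
var = (s[:n_pc] ** 2) / (len(train_shapes) - 1)     # per-mode variance

def estimate_volume(new_shape):
    """Weight each training patient by inverse Mahalanobis distance in PCA space."""
    b_new = basis @ (new_shape - mean_shape)        # project incoming landmarks
    b_train = X @ basis.T                           # training shapes in PCA space
    d = np.sqrt((((b_train - b_new) ** 2) / var).sum(axis=1))
    w = 1.0 / (d + 1e-6)                            # assumed weighting rule
    w /= w.sum()
    return w @ train_volumes                        # convex combination of volumes

vol = estimate_volume(rng.normal(0, 1, size=39))
```

Because the weights are non-negative and sum to one, the estimate is always a convex combination of the training volumes; dropping PCs simply shortens `basis` and `var`.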
We present a refraction-corrected sound speed reconstruction technique for layered media based on the angular coherence of plane waves. Previous work has shown that sound speed estimation and refraction-corrected image reconstruction can be achieved using the coherence of full-synthetic-aperture channel data. However, acquiring a full-synthetic-aperture dataset requires a large number of transmissions, and scatterer motion between transmit events can confound the sound speed estimate, especially in vivo. Furthermore, sound speed estimation requires producing a full-synthetic-aperture coherence image for each trial sound speed, making the overall computational cost quite burdensome. The angular coherence beamformer, initially devised as a faster alternative to the conventional spatial coherence beamformer, measures coherence between fully beamformed I/Q channel data for each plane wave rather than between receive channel data prior to receive beamforming. As a result, angular coherence beamforming can significantly reduce the computation time needed to reconstruct a coherence image by taking advantage of receive beamforming. By replacing spatial coherence with angular coherence, we apply the coherence-maximization methodology of previous work to plane-wave channel data, significantly reducing the computational cost of sound speed estimation. The method is validated with both simulated and experimental plane-wave channel data.
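The core idea, that coherence across received signals peaks at the correct trial sound speed, can be illustrated with a toy model: analytic Gaussian echoes from a single point target after a 0-degree plane-wave transmit, focused with a range of trial speeds. The geometry, pulse width, and speed grid are invented for the example; the real method operates on beamformed I/Q data, not closed-form echoes.

```python
import numpy as np

c_true = 1540.0                       # true medium sound speed (m/s)
x = np.linspace(-0.01, 0.01, 32)      # element positions (m)
x0, z0 = 0.0, 0.03                    # point-target location (m)
sigma = 5e-8                          # echo pulse width (s)

# Two-way arrival time at each element: plane wave down, spherical echo back.
t_arrival = (z0 + np.sqrt(z0**2 + (x - x0)**2)) / c_true

def coherence(c_trial):
    """Summed echo amplitude at the focal delays computed with a trial speed.

    When c_trial matches c_true, all focal delays align with the true
    arrival times and the Gaussian echoes add coherently."""
    tau = (z0 + np.sqrt(z0**2 + (x - x0)**2)) / c_trial
    return np.sum(np.exp(-((tau - t_arrival) ** 2) / (2 * sigma**2)))

trial_speeds = np.arange(1440.0, 1641.0, 10.0)
scores = np.array([coherence(c) for c in trial_speeds])
c_est = trial_speeds[np.argmax(scores)]
```

The coherence curve is sharply peaked at the true speed; mis-set speeds defocus the echoes and the summed amplitude collapses.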
Breast density is now recognized as one of the most important independent risk factors for breast cancer. Current means of assessing breast density primarily rely on mammograms, which are projection images, making it difficult to estimate the true volume of fibroglandular tissue. We present 3D transmission ultrasound as a method to visualize and differentiate fibroglandular tissue within the breast and use an unsupervised learning method to quantitatively assess breast density. The method first separates the breast from the surrounding water bath and then segments the whole breast into fibroglandular tissue and fat using fuzzy C-means (FCM) classification. We apply these methods to both tissue phantoms (in vitro) and clinical breast images (in vivo). For the tissue phantoms, agreement between the theoretical (geometric) density and the experimentally calculated values was better than 90%. For density calculation in a sample of 50 clinical cases, the results correlate well (Spearman r = 0.93, 95% CI: 0.88-0.96, p < 0.0001) with an FDA-cleared breast density assessment software, VolparaDensity. We also discuss the advantage of FCM-based tissue classification over threshold-based segmentation within the paradigm of iterative image inversion/reconstruction, showing that the former is less sensitive to variation in the breast density estimate as a function of iteration count and thus less dependent on convergence criteria. These results imply that breast density assessed by 3D transmission ultrasound can be of significant clinical utility and play an important role in breast cancer risk assessment.
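The FCM step can be sketched with a minimal 1-D implementation. The intensity populations, fuzzifier `m`, and iteration count are assumptions for illustration; the actual segmentation operates on 3D image volumes.

```python
import numpy as np

def fuzzy_c_means(values, n_clusters=2, m=2.0, n_iter=50, seed=0):
    """Minimal fuzzy C-means on 1-D intensities; returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    v = np.asarray(values, dtype=float)
    u = rng.random((len(v), n_clusters))
    u /= u.sum(axis=1, keepdims=True)              # fuzzy memberships sum to 1
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ v) / um.sum(axis=0)      # membership-weighted means
        d = np.abs(v[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))             # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return centers, u

# Hypothetical intensity samples: a darker fat-like population and a
# brighter fibroglandular-like population.
vals = np.concatenate([np.full(100, 0.2), np.full(100, 0.8)])
vals = vals + np.random.default_rng(4).normal(0, 0.02, 200)
centers, u = fuzzy_c_means(vals)
```

Unlike a hard threshold, each sample retains graded memberships in both classes, which is what makes the density estimate less sensitive to small intensity shifts across reconstruction iterations.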
Ultrasound Imaging and Image Guidance: Joint Session with Conferences 11315 and 11319
Accurate and automatic multi-needle detection in three-dimensional (3D) ultrasound (US) is a key step of treatment planning for US-guided brachytherapy. However, most current studies concentrate on single-needle detection using only a small number of images containing a needle, disregarding the massive database of US images without needles. In this paper, we propose a multi-needle detection workflow that treats images without needles as auxiliary data. Specifically, we train position-specific dictionaries on 3D overlapping patches of the auxiliary images, using an enhanced sparse dictionary learning method that integrates the spatial continuity of 3D US, dubbed order-graph regularized dictionary learning (ORDL). Using the learned dictionaries, target images are reconstructed to obtain residual pixels, which are then clustered in every slice to determine candidate centers. From these centers, regions of interest (ROIs) are constructed by searching for cylindrical regions. Finally, we detect the needle in each ROI with the random sample consensus (RANSAC) algorithm and locate each tip by finding the sharp intensity drop along the detected axis. Extensive experiments were conducted on a prostate dataset of 70/21 patients without/with needles. Visual and quantitative results show the effectiveness of the proposed workflow: our approach correctly detects 95% of needles with a tip location error of 1.01 mm. This technique could provide accurate needle detection for US-guided high-dose-rate prostate brachytherapy and facilitate the clinical workflow.
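The per-ROI RANSAC line fit can be sketched as below on synthetic 3D points. The needle geometry, noise, and clutter are invented; the real workflow runs this on residual voxels within each cylindrical ROI.

```python
import numpy as np

def ransac_line_3d(points, n_iter=200, tol=0.5, seed=0):
    """Fit a 3D line (point, unit direction) to points with RANSAC."""
    rng = np.random.default_rng(seed)
    best_inliers, best = 0, None
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        d = points[j] - points[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # Perpendicular distance of every point to the candidate line.
        diff = points - points[i]
        dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
        inliers = np.sum(dist < tol)
        if inliers > best_inliers:
            best_inliers, best = inliers, (points[i], d)
    return best

rng = np.random.default_rng(5)
t = np.linspace(0, 40, 60)[:, None]
axis = np.array([0.05, 0.02, 1.0]) / np.linalg.norm([0.05, 0.02, 1.0])
needle = t * axis + rng.normal(0, 0.1, (60, 3))     # noisy needle voxels
clutter = rng.uniform(-10, 50, (40, 3))             # residual-noise outliers
pts = np.vstack([needle, clutter])
p0, direction = ransac_line_3d(pts)
```

The winning hypothesis must capture inliers along the full needle length, which forces its direction to align closely with the true axis despite 40% clutter.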
We developed a reliable and repeatable process for creating hyper-realistic kidney phantoms with tunable image visibility under ultrasound (US) and CT imaging. A methodology was defined for producing phantoms suitable for renal biopsy evaluation. The final complex kidney phantom contained the critical structures of a kidney: cortex, medulla, and ureter. Lesions were also integrated into the phantom to mimic the presence of tumors during biopsy. The phantoms were scanned with ultrasound and CT to verify the visibility of the complex internal structures and to observe the interactions between material properties. The result is an advance in knowledge of materials with the acoustic and impedance properties needed to replicate human organs for image-guided interventions.
Current standard workflows for ultrasound (US)-guided needle insertion require physicians to use both hands: the non-dominant hand holds the US probe to locate the area of interest while the dominant hand holds the needle. This follows from the separation of the localization and insertion functions. The requirement not only makes the procedure cumbersome, but also limits the reliability of guidance, since the positional relationship between the needle and the US images must be interpreted from the physician's experience and assumptions. Although US-guided needle insertion may be assisted by navigation systems, recovering the positional relationship between the needle and the US images requires external tracking systems and image-based tracking algorithms that can introduce registration inaccuracy. There is therefore an unmet need for a solution that provides simple and intuitive needle localization and insertion to improve the conventional US-guided procedure. In this work, we propose a new device concept based on a ring-arrayed forward-viewing (RAF) ultrasound imaging system. The proposed system comprises ring-arrayed transducers with an open hole inside the ring through which the needle is inserted. The ring array provides forward-viewing US images in which the needle appears at the center of the reconstructed image without any registration. As a proof of concept, we designed several ring-array configurations and visualized point targets with forward-viewing US imaging in simulations and phantom experiments. The results demonstrate successful target visualization and indicate that ring-arrayed US imaging has the potential to make US-guided needle insertion simpler and more intuitive.
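A toy version of forward-viewing delay-and-sum for a ring array is sketched below: analytic echoes from an on-axis point target are focused at trial depths, and the summed amplitude peaks at the true depth. The element count, ring radius, and pulse width are invented parameters, not the configurations evaluated in the paper.

```python
import numpy as np

c = 1540.0                                   # assumed sound speed (m/s)
n_el, r_ring = 16, 0.005                     # 16 elements on a 5 mm ring
phi = 2 * np.pi * np.arange(n_el) / n_el
elems = np.column_stack([r_ring * np.cos(phi),
                         r_ring * np.sin(phi),
                         np.zeros(n_el)])    # ring in the z = 0 plane
target = np.array([0.0, 0.0, 0.02])          # point target 20 mm ahead, on axis
sigma = 2e-8                                 # echo pulse width (s)

# Two-way arrival time per element (transmit and receive on the same element).
t_echo = 2 * np.linalg.norm(elems - target, axis=1) / c

def das(z):
    """Delay-and-sum amplitude for an on-axis focal point at depth z."""
    tau = 2 * np.linalg.norm(elems - [0.0, 0.0, z], axis=1) / c
    return np.sum(np.exp(-((tau - t_echo) ** 2) / (2 * sigma**2)))

depths = np.arange(0.010, 0.0301, 0.001)     # trial focal depths along the axis
z_est = depths[np.argmax([das(z) for z in depths])]
```

The needle path coincides with the ring axis, so the target's depth along that axis is exactly what the reconstructed forward-viewing image localizes, with no registration step.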
The performance of a reconstruction method in ultrasound computed tomography (CT) should ideally be evaluated using a variety of phantoms whose inclusions span a wide range of speeds of sound. However, fabricating real phantoms is far more time consuming than generating simulated ones. In our previous study, we developed an oil-gel-based phantom with water or salt-water inclusions. In this study, we designed an evaluation method that produces various contrast conditions from a single oil-gel-based phantom by changing the inclusion liquid and the temperature. The phantom, with water or salt water in 10-, 7-, 5-, or 3-mm holes, was measured on our prototype ultrasound CT system at temperatures of 15, 17.5, 20, 22.5, 25, 27.5, and 30 °C, for a total of 14 measurements. Across these conditions, the difference (i.e., contrast) in speed of sound between the inclusions and the oil gel ranged from −37 to 92 m/s. Filtered back projection (FBP) and full waveform inversion (FWI) were evaluated. The mean errors in the reconstructed speed of sound within the inclusions were 17.1 ± 14.9 m/s for FBP and 8.8 ± 10.1 m/s for FWI. The mean percentage errors in the sizes of the phantom (51 mm) and inclusions were 22.5 ± 22.5% for FBP and 3.9 ± 4.3% for FWI. A single oil-gel-based phantom thus provided various contrast conditions as the temperature and liquid were changed, enabling comprehensive quantitative evaluation of reconstruction methods.
Ultrasound transmission tomography is a promising modality for breast cancer diagnosis. For image reconstruction, approximations to the acoustic wave equation such as straight or bent rays are commonly used due to their low computational complexity. For sparse apertures the coverage of the volume by rays is very limited, requiring strong regularization in the inversion process. The concept of fat rays reduces this sparseness by including the contributions to the measured signal originating from the first Fresnel zone. In this work we investigate the application of the fat-ray concept to ultrasound transmission tomography. We implement straight-ray, bent-ray, and fat-ray forward models. For the inversion process, a least-squares solver (LSQR), a simultaneous algebraic reconstruction technique (SART), and a compressive-sensing-based total variation minimization (TVAL3) are applied. The combinations of forward models and inversion processes were evaluated on synthetic data. TVAL3 outperforms SART and LSQR, especially for sparse apertures. The fat-ray concept decreases the error with respect to the ground truth compared to the bent-ray method, particularly for SART and LSQR inversion and for very sparse apertures.
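As an illustration of the ray-based inversion pipeline described above, the sketch below builds a straight-ray system matrix on a small 2-D grid and inverts simulated times of flight with SciPy's LSQR. The grid size, the two crossed ray fans, and the sampling-based path-length approximation are illustrative choices, not the paper's setup.

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import lsqr

def straight_ray_matrix(n, rays):
    """Sparse system matrix: entry (i, j) approximates the length of ray i
    inside pixel j of an n x n grid (unit pixels), via dense sampling."""
    A = lil_matrix((len(rays), n * n))
    for i, (src, dst) in enumerate(rays):
        npts = 8 * n
        t = np.linspace(0.0, 1.0, npts)
        xs = src[0] + t * (dst[0] - src[0])
        ys = src[1] + t * (dst[1] - src[1])
        seg = np.hypot(dst[0] - src[0], dst[1] - src[1]) / (npts - 1)
        for x, y in zip(xs, ys):
            ix, iy = int(x), int(y)
            if 0 <= ix < n and 0 <= iy < n:
                A[i, iy * n + ix] += seg
    return csr_matrix(A)

# Slowness map (reciprocal sound speed): ~1500 m/s background with a
# faster 1550 m/s inclusion; units are arbitrary but consistent.
n = 16
slowness = np.full((n, n), 1.0 / 1.50)
slowness[6:10, 6:10] = 1.0 / 1.55
pts = np.linspace(0.5, n - 0.5, 12)
rays = ([((0.0, a), (float(n), b)) for a in pts for b in pts] +   # horizontal fan
        [((a, 0.0), (b, float(n))) for a in pts for b in pts])    # vertical fan
A = straight_ray_matrix(n, rays)
tof = A @ slowness.ravel()                 # simulated times of flight
recon = lsqr(A, tof)[0].reshape(n, n)      # straight-ray inversion
```

Swapping LSQR for SART or TVAL3, or replacing the row-building routine with a bent-ray or fat-ray model, changes only the matrix assembly and solver call; the data layout stays the same.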
We are performing clinical studies on breast cancer examinations at Hokkaido University Hospital with an ultrasound computed tomography (USCT) system. Our studies have revealed that some reflection images exhibit intensity inhomogeneity because the ultrasound waves emitted by the 1-D ring-array transducer strike the object surface non-vertically. This significantly increases the burden of interpretation. We therefore developed a calibration method that removes this inhomogeneity based on the distribution of incident angles, estimated from the slope of the subject surface morphologically extracted from multi-slice reflection images. Results showed that applying this correction to clinical images successfully recovered image contrast and uniformity.
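A minimal sketch of the idea, assuming the incident angle is estimated from the slope of the extracted surface contour and that a simple capped cosine gain compensates the oblique-incidence intensity loss; the cosine weighting and gain cap are illustrative models, not the authors' calibration:

```python
import numpy as np

def incidence_angles(surface_y, dx=1.0):
    """Incidence angle of vertically travelling waves, estimated from the
    local slope of the extracted surface contour y(x); 0 = normal incidence."""
    slope = np.gradient(surface_y, dx)
    return np.arctan(np.abs(slope))

def compensate(intensity, theta, max_gain=4.0):
    """Boost echoes received at oblique incidence; the gain is capped to
    avoid amplifying noise at grazing angles."""
    gain = np.minimum(1.0 / np.cos(theta), max_gain)
    return intensity * gain

# A 45-degree surface slope attenuates the echo; compensation restores it.
theta = incidence_angles(np.array([0.0, 1.0, 2.0, 3.0]))
corrected = compensate(np.ones(4), theta)
```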
In recent years, ultrasound computed tomography (USCT) has shown important clinical prospects for breast cancer screening and early diagnosis. In this paper, a biomedical image denoising technique for USCT based on the variational mode decomposition (VMD) method is investigated. The VMD method decomposes data into a finite number of intrinsic mode functions (IMFs) after a sifting pre-process. After removing the noise components and refactoring the remaining IMFs, the processed data can be used for USCT image reconstruction, providing images with less noise, higher resolution, and better contrast than the traditional B-mode imaging method. The VMD method is validated for USCT through a breast phantom experiment. The radio-frequency (RF) data of the breast phantom were captured by the USCT system developed in the Medical Ultrasound Laboratory, whose main components are a data acquisition module and a 1024-element ring array with a center frequency of 2.5 MHz. Graphics processing units (GPUs) were applied to the image reconstruction for their high parallel-computation ability. Experimental results show that the reconstructed image of the breast phantom with the VMD method has a higher signal-to-noise ratio (SNR) and a more homogeneous background than the delay-and-sum (DAS) method. The contrast ratio (CR) was enhanced from 0.96 dB to 1.01 dB and from 88.38 dB to 99.53 dB at different regions of interest (ROIs). The contrast-to-noise ratio (CNR) was enhanced from 0.09 dB to 0.13 dB in the hypoechoic area and from 8.01 to 13.03 in the hyperechoic area.
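VMD itself is beyond a short sketch, but the downstream step — discarding the noise modes and refactoring the rest — can be illustrated as follows. The correlation-based mode-selection rule, the 0.6 threshold, and the hand-built "modes" are assumptions for illustration, not the paper's procedure:

```python
import numpy as np

def refactor(signal, imfs, rho_min=0.6):
    """Keep only the modes that correlate strongly with the raw RF trace
    and sum them into the denoised signal."""
    kept = [m for m in imfs if abs(np.corrcoef(signal, m)[0, 1]) >= rho_min]
    return np.sum(kept, axis=0)

# Toy stand-in: a 2.5 MHz tone plus noise, with the 'modes' given directly
# instead of being produced by VMD.
t = np.linspace(0.0, 1e-5, 500)
tone = np.sin(2 * np.pi * 2.5e6 * t)
noise = 0.3 * np.random.default_rng(0).standard_normal(t.size)
denoised = refactor(tone + noise, [tone, noise])
```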
Intense pulsed light (IPL) is a high-intensity treatment for skin disorders and ageing. As this treatment regime is often poorly regulated and inadequately studied, we investigate IPL as a cosmetic device and its effects on the dermal collagen components of the skin. Biopsies from the back-neck folds of a 4-week-old, 25 kg Large White pig were irradiated with IPL (λ = 584 nm) at an elevated radiation dose of 40 J/cm² once, three times, and ten times. Samples were cryo-sectioned (10 μm) and stained with picrosirius red. Ex vivo biopsies were assessed with polarized light microscopy (PLM), atomic force microscopy (AFM), and scanning acoustic microscopy. Customized software was used to map the sound speed and attenuation in the ultrasonic images. Differences in collagen structure were observed between all three levels of irradiation, progressing depth-wise into the epidermis. Ex vivo porcine tissue demonstrated loss of D-banding and gelatinization with increasing dermal depth at higher intensities. Acoustic microscopy demonstrated a significant decrease in sound speed and attenuation related to the number of exposures, with sound speed decreasing at a much faster rate than attenuation.
Current molecular imaging modalities face barriers to clinical implementation, creating a need for an improved clinical molecular imaging approach. We hypothesize that EGFR-targeted perfluorocarbon nanodroplets can label metastatic cells and then be activated (i.e., converted to microbubbles) and imaged using ultrasound, providing a molecular contrast agent that can inform treatment. Pulse sequences were developed for the Verasonics Vantage 128 system to activate and image dodecafluoropentane and dodecafluorohexane nanodroplets. Dodecafluoropentane nanodroplets provided 28-dB enhancement when imaged with pulse-inversion US in a tissue-mimicking environment, while dodecafluorohexane nanodroplets showed activation and subsequent recondensation, allowing for super-resolution imaging.
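A dB enhancement figure like the one quoted above can be computed from mean ROI intensities as sketched below, assuming an intensity (power) ratio and hence a 10·log10 scale; the ROI values are synthetic stand-ins:

```python
import numpy as np

def enhancement_db(roi_after, roi_before):
    """Contrast enhancement in dB between mean ROI echo intensities before
    and after droplet activation (intensity ratio, hence 10*log10)."""
    return 10.0 * np.log10(np.mean(roi_after) / np.mean(roi_before))

# A ~631x increase in mean intensity corresponds to roughly 28 dB.
gain_db = enhancement_db(np.full(16, 631.0), np.ones(16))
```

If amplitude (not intensity) data were used instead, the factor would be 20 rather than 10.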
We developed a methodology for 3D assessment of the ciliary body of the eye, an important but understudied tissue, using our new 3D ultrasound biomicroscopy (3D-UBM) imaging system. The ciliary body produces the aqueous humor which, if not drained properly, can lead to increased intraocular pressure and glaucoma, a leading cause of blindness. Most medications and some surgical procedures for glaucoma target the ciliary body, which is also responsible for accommodation through muscle contraction and relaxation. UBM is the only imaging modality that can visualize structures behind the opaque iris, such as the ciliary body. Our 3D-UBM acquires several hundred high-resolution (50 MHz) 2D-UBM images and creates a 3D volume, enabling heretofore unavailable en face visualizations and quantifications. In this study, we calculated unique 3D biometrics from automated segmentation using deep learning (U-Net). Our results show an accuracy of 0.93 ± 0.01, a sensitivity of 0.79 ± 0.07, and a Dice score of 0.72 ± 0.07 for deep-learning segmentation of the ciliary muscle. For one eye, the volume of the ciliary body was 67.87 mm³, and single ciliary-process volumes were 0.234 ± 0.093 mm³ with surface areas adjacent to the aqueous humor of 3.02 ± 1.07 mm². Automated and manual measurements of ciliary muscle volume and cross-sectional area were compared, showing overestimation in the volume measurement but better agreement in the cross-sectional area measurements.
Female pelvic floor dysfunction may manifest as pelvic organ prolapse (POP), urinary or fecal incontinence, pelvic pain, or chronic constipation. POP is the descent of the pelvic organs into the vaginal cavity and affects up to 50% of the female population. Diagnostic evaluation of POP is often performed via clinical examination (i.e., palpation). However, clinical examination is insufficient for assessing structural abnormalities. There has been increasing interest in applying ultrasound to pelvic floor imaging to better understand the pathophysiology of pelvic floor dysfunction and POP, due in part to recent developments in three-dimensional (3D) and 4D ultrasound imaging. Despite its wide application in research, however, pelvic floor 3D ultrasound has not been employed in the clinic for the assessment of POP, likely because of the high cost of 3D ultrasound imaging systems. In this work, a cost-effective technique for acquiring 3D pelvic floor ultrasound images using a conventional 2D curvilinear probe is presented and compared against commercial 3D probes. This is achieved with a hand-held, mechanically assisted 3D ultrasound scanner. The system has the potential to decrease the cost of 3D pelvic floor ultrasound imaging and increase its application for POP assessment.
The ability to conduct 'partial field of view' scans in QT transmission ultrasound is investigated. Standard tomographic data acquisition is typically conducted over a full 360-degree aperture, which may limit the possibility of real-time intervention and/or patient positioning for medical imaging. Transmission ultrasound has many advantages over other imaging modalities: it does not emit ionizing radiation, does not require a contrast agent, etc. A partial-field-of-view acquisition in this context is therefore attractive. Three scenarios are investigated: breast, whole-body, and orthopedic imaging. A full suite of 180 views at 2-degree intervals and 192 mm in the vertical direction was collected from the QT ultrasound™ scanner in the orthopedic and whole-body scenarios; the vertical extent of the breast image varied with breast size. Subsequent reconstructions were carried out with a subset of the views incorporated, with the contiguous sector of missing angles varied over 8, 16, 32, 40, 60, and 92 degrees. The differences between the corresponding images were quantitatively and qualitatively analyzed and compared. It was found that a large lacuna (gap) of contiguous data did not significantly (clinically) degrade image quality or, where relevant, the quantitative values. The open acquisition scenario allows medical intervention to be carried out and may decrease patient anxiety. The quantitative nature of the degradation is noted and correlated with the missing sector angles for clinical scenarios, and the implications are discussed.
Duchenne muscular dystrophy and osteoarthritis are important diseases requiring quantitative tissue measurement for accurate monitoring. MRI with Dixon sequences, Nakagami statistics, elastography, and radiography have been used in semi-quantitative modes to infer myopathy, articular cartilage damage, and exercise-induced muscle damage. We establish the high-resolution quantitative accuracy of transmission ultrasound imaging using both fresh and cadaveric knee tissue. The use of fresh tissue rules out the possibility that cadaveric tissue has different speed-of-sound (SOS) values due to fixation; we also show that the fixation procedure changes the SOS values by only about 0.1% to 0.4% for fibroglandular tissue, fat, and skin. The use of multiple transverse sections at varying distances from the tibiofemoral space ameliorates bias. Using a 6-mm-diameter ROI at 20 successive levels, the SOS measured for the vastus medialis was 1573 m/s with a 95% CI of ±1.8 m/s; the average standard deviation (SD) across all ROIs was 25.9 m/s. The accepted SOS value for muscle from the IT'IS Foundation, Zurich, is 1588.4 m/s, SD = 21.6 m/s. Similar results for 6-mm-diameter ROIs in fat at successive transverse sections yield an average of 1438.7 m/s, with ROI SDs averaging 20.5 m/s; the IT'IS values are 1440 m/s with SD 21.4 m/s. Cartilage gives similar agreement: the literature value is 1660 m/s, and we measure 1655.4 m/s (SD 14.6) in a 4-mm-diameter ROI. We also show regions that have no MR response yet are quantitatively imaged in 3D transmission ultrasound.
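ROI statistics of the kind quoted above (mean SOS with a 95% CI across successive levels) can be reproduced as sketched below; the level data here are synthetic stand-ins, and the CI uses a normal approximation:

```python
import numpy as np

def roi_stats(roi_means):
    """Mean SOS over repeated ROI measurements and the 95% CI half-width
    on that mean (normal approximation, 1.96 x standard error)."""
    m = float(np.mean(roi_means))
    half = 1.96 * np.std(roi_means, ddof=1) / np.sqrt(len(roi_means))
    return m, float(half)

# Synthetic stand-in: 20 transverse-level ROI means near the reported 1573 m/s.
rng = np.random.default_rng(1)
levels = 1573.0 + rng.normal(0.0, 4.0, 20)
mean_sos, ci95 = roi_stats(levels)
```

Note the distinction the abstract relies on: the CI on the mean shrinks with the number of levels, while the per-ROI SD (~26 m/s) reflects tissue and measurement variability itself.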
Diagnostic ultrasound is ubiquitous in clinical practice because it is safe, portable, inexpensive, real-time, and offers high spatial resolution. Improving the capabilities of diagnostic ultrasound is therefore highly significant clinically. In this talk we will discuss different applications of quantitative ultrasound (QUS) imaging and how QUS approaches have evolved over time. Specifically, we will discuss the use of spectral-based approaches to estimate the backscatter coefficient (BSC) and attenuation slope, and the use of envelope statistics to describe underlying tissue microstructure. These QUS approaches have been successful at classifying tissue state, monitoring focused ultrasound therapy, detecting early response of breast cancer to neoadjuvant chemotherapy, and automatically detecting nerves in the imaging field. We will demonstrate how QUS approaches can be incorporated on breast tomography machines, which relax the tradeoff between spatial resolution and the variance of QUS estimates. One ongoing issue with QUS is the inability to properly account for losses in tissue that affect estimates of the backscatter coefficient; we will demonstrate new calibration procedures that improve the ability to account for these losses. Finally, we will discuss how machine learning approaches can further improve QUS techniques by eliminating the need for models and, in some cases, for a reference scan.
Image Analysis in Ultrasound and OCT: Joint Session with Conferences 11313 and 11319
Automated 3D breast ultrasound (ABUS) has substantial potential in breast imaging. ABUS is attractive because of its outstanding reproducibility and reliability, especially for screening women with dense breasts. However, the high number of slices in 3D ABUS requires lengthy reading time from radiologists, who may miss small and subtle lesions. In this work, we propose a 3D Mask R-CNN method to automatically detect the location of a tumor and simultaneously segment its contour. The performance of the proposed algorithm was evaluated using 25 patients' ABUS images with ground-truth contours. To further assess the performance of the proposed method, we quantified the intersection over union (IoU), Dice similarity coefficient (DSC), and center-of-mass distance (CMD) between the ground truth and the segmentation. The resultant IoU, DSC, and CMD were 96% ± 2%, 84% ± 3%, and 1.95 ± 0.89 mm, respectively, demonstrating the high accuracy of tumor detection and 3D volume segmentation by the proposed Mask R-CNN method. We have developed a novel deep-learning-based method and demonstrated its capability as a useful tool for computer-aided diagnosis and treatment.
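The three reported metrics can be computed from binary masks as sketched below; this is a generic formulation (shown in 2-D for brevity, but dimension-agnostic), not the authors' code:

```python
import numpy as np

def iou_dsc_cmd(gt, seg, spacing=1.0):
    """Intersection-over-union, Dice coefficient, and center-of-mass
    distance between two binary masks of any dimensionality."""
    gt, seg = gt.astype(bool), seg.astype(bool)
    inter = np.logical_and(gt, seg).sum()
    union = np.logical_or(gt, seg).sum()
    iou = inter / union
    dsc = 2.0 * inter / (gt.sum() + seg.sum())
    com = lambda m: np.mean(np.argwhere(m), axis=0)   # center of mass (voxels)
    cmd = spacing * np.linalg.norm(com(gt) - com(seg))
    return iou, dsc, cmd

# Toy 2-D masks: an 8x8 square vs. the same square shifted by one pixel.
gt = np.zeros((20, 20)); gt[4:12, 4:12] = 1
seg = np.zeros((20, 20)); seg[5:13, 4:12] = 1
iou, dsc, cmd = iou_dsc_cmd(gt, seg)
```

Note that DSC ≥ IoU holds for any pair of masks, which is a useful sanity check when reporting both.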
Accurate motion tracking of the left ventricle is critical for detecting wall-motion abnormalities in the heart after an injury such as myocardial infarction. We propose an unsupervised motion-tracking framework with physiological constraints to learn dense displacement fields between sequential pairs of 2-D B-mode echocardiography images. Current deep-learning motion-tracking algorithms either require large amounts of ground-truth data, which is difficult to obtain for in vivo datasets (such as patient data and animal studies), or fail to track motion between echocardiographic images due to inherent ultrasound properties (such as low signal-to-noise ratio and various image artifacts). We design a U-Net-inspired convolutional neural network that uses manually traced segmentations as a guide to learn displacement estimations between a source and a target image without ground-truth displacement fields, by minimizing the difference between the transformed source frame and the original target frame. We then penalize divergence in the displacement field to enforce incompressibility within the left ventricle. We demonstrate the performance of our model on synthetic and in vivo canine 2-D echocardiography datasets by comparing it against a non-rigid registration algorithm and a shape-tracking algorithm. Our results show favorable performance of our model against both methods.
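The incompressibility term can be sketched with finite differences; in the actual network this penalty would be expressed with differentiable framework operations alongside the image-similarity loss, so the NumPy version below is only a stand-in:

```python
import numpy as np

def divergence_penalty(u, v):
    """Mean squared divergence of a 2-D displacement field (u, v) via
    central finite differences; zero for incompressible motion."""
    div = np.gradient(u, axis=1) + np.gradient(v, axis=0)
    return float(np.mean(div ** 2))

# A rigid translation is divergence-free; an isotropic expansion is not.
n = 32
y, x = np.mgrid[0:n, 0:n].astype(float)
rigid = divergence_penalty(np.ones((n, n)), np.ones((n, n)))   # 0.0
expand = divergence_penalty(x - n / 2, y - n / 2)              # 4.0
```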
We propose an approach based on a weakly supervised method for MR-TRUS image registration. Inspired by the viscous-fluid physical model, we make a first attempt at combining a convolutional neural network (CNN) and a long short-term memory (LSTM) network to perform deep-learning-based dense deformation field prediction. Through the integration of a convolutional long short-term memory (ConvLSTM) network and a weakly supervised approach, we achieved accurate results in terms of Dice similarity coefficient (DSC) and target registration error (TRE) without using conventional intensity-based image similarity measures. Thirty-six sets of patient data were used in the study. Experimental results showed that our proposed ConvLSTM network produced a mean TRE of 2.85 ± 1.72 mm and a mean DSC of 0.89.
The use of the synthetic aperture focusing technique (SAFT) for reflectivity ultrasound tomography is limited by low signal-to-noise ratio. To overcome this, some studies have shown improvements by taking advantage of ultrasound transmission phenomena; nevertheless, they do not take into account the characteristic divergence of ultrasound propagation. To contribute a solution, we propose an improvement called weighted SAFT and test it on simulated media. In this study we used the k-Wave toolbox for data generation in heterogeneous media (sound speed, density, and attenuation); numerical phantoms with different combinations, sizes, geometries, and locations of the simulated objects; and the fast marching method, based on refraction tomography, for phase-aberration correction and the time-of-flight computation used to calculate the SAFT weighting. The data set was generated using 192 simulated single-element transducers (1 MHz) uniformly distributed along the perimeter of a circular area. The SAFT reconstructions were made using the raw received signal and, separately, the envelope of the signal. SAFT reconstructions using the signal envelope show a diffuse appearance, and the edges of the objects in these regions cannot be delimited. The weighted SAFT leads to a more balanced distribution of intensity values throughout the image, partially compensating the divergence effect of the ultrasound and improving the image contrast.
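The core of weighted SAFT — delay-and-sum with per-element, per-pixel weights — can be sketched as follows. The monostatic toy setup, the constant sound speed, and the unit weights are illustrative; the divergence-compensating factors from the paper would be supplied through the `weights` argument:

```python
import numpy as np

def weighted_saft(rf, elems, grid, c, fs, weights=None):
    """Delay-and-sum SAFT: for each image point, sum each element's RF
    sample at the round-trip time of flight, scaled by optional weights
    of shape (n_elements, n_points)."""
    npix = grid.shape[0]
    img = np.zeros(npix)
    if weights is None:
        weights = np.ones((len(elems), npix))
    for e, pos in enumerate(elems):
        d = np.linalg.norm(grid - pos, axis=1)               # one-way distance
        idx = np.clip((2 * d / c * fs).astype(int), 0, rf.shape[1] - 1)
        img += weights[e] * rf[e, idx]
    return img

# Toy monostatic setup: one point scatterer producing delta-like echoes.
c, fs = 1500.0, 1e6
elems = np.array([[0.0, 0.0], [0.01, 0.0], [0.02, 0.0]])
target = np.array([0.01, 0.03])
rf = np.zeros((3, 200))
for e, pos in enumerate(elems):
    t = 2 * np.linalg.norm(target - pos) / c
    rf[e, int(t * fs)] = 1.0
grid = np.array([[0.01, 0.03], [0.01, 0.02]])                # on/off target
img = weighted_saft(rf, elems, grid, c, fs)
```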
We present a novel approach to obtain time-of-flight measurements between transducer pairs in an ultrasound computed tomography (USCT) scanner by applying the interferometry principle, which has been used successfully in seismic imaging to recover subsurface velocity structure from ambient-noise recordings. To apply this approach to a USCT aperture, random wavefields are generated by activating the emitting transducers in a random sequence. By correlating the random signals recorded by the receiving transducers, we obtain an approximation of the Green's functions between all receiver pairs, with one receiver acting as a virtual source. This eliminates specific source imprints and thus avoids the need for reference measurements and calibration. The retrieved Green's functions between any two measurement locations can then be used as new data to invert for the sound-speed map. On the basis of the cross-correlation travel times, a ray-based time-of-flight tomography is developed and solved with an iterative least-squares method. As a proof of concept, the algorithm is tested on numerical breast phantoms in a synthetic 2D study.
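The travel-time retrieval step can be illustrated with a toy cross-correlation between two receivers seeing the same random wavefield; a real aperture would need long random sequences and averaging over many realizations to approximate the Green's function, and the circular shift below is only a length-preserving stand-in for a true propagation delay:

```python
import numpy as np

def xcorr_delay(a, b, fs):
    """Travel-time difference between two recordings of the same random
    wavefield, from the peak of their cross-correlation (positive when
    b arrives later than a)."""
    corr = np.correlate(b, a, mode="full")
    lag = int(np.argmax(corr)) - (len(a) - 1)
    return lag / fs

# Toy example: one random 'source' sequence seen by two receivers,
# the second delayed by 25 samples.
fs = 1e6
rng = np.random.default_rng(2)
src = rng.standard_normal(2048)
rec1 = src
rec2 = np.roll(src, 25)
delay = xcorr_delay(rec1, rec2, fs)
```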
Accurate and automatic multi-needle detection in three-dimensional (3D) ultrasound (US) is a key step of treatment planning for US-guided prostate high-dose-rate (HDR) brachytherapy. In this paper, we propose a workflow for multi-needle detection in 3D US images with corresponding CT images used for supervision. Since the CT images do not exactly match the US images, we propose a novel sparse model, dubbed bidirectional convolutional sparse coding (BiCSC), to tackle this weakly supervised problem. BiCSC extracts latent features from US and CT and then formulates a relationship between them, whereby the features learned from US are guided by those from CT. The resultant images allow clear visualization of the needles while reducing image noise and artifacts. On the reconstructed US images, a clustering algorithm is employed to find the cluster centers corresponding to the true needle positions. Finally, the random sample consensus (RANSAC) algorithm is used to model one needle per ROI. Experiments were conducted on prostate image datasets from 10 patients. Visual and quantitative results show the efficacy of the proposed workflow. This learning-based technique could provide accurate needle detection for US-guided HDR prostate brachytherapy and further enhance the clinical workflow.
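The final RANSAC step — fitting one line per region of interest — can be sketched as a generic two-point RANSAC line fit in 3-D (not the authors' implementation; iteration count, tolerance, and the toy data are illustrative):

```python
import numpy as np

def ransac_line(pts, n_iter=200, tol=0.5, seed=0):
    """RANSAC 3-D line fit: repeatedly pick two points, count the points
    within `tol` of the line they define, and keep the best candidate."""
    rng = np.random.default_rng(seed)
    best_p, best_d, best_n = None, None, -1
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p = pts[i]
        d = pts[j] - pts[i]
        d = d / np.linalg.norm(d)
        r = pts - p
        dist = np.linalg.norm(r - np.outer(r @ d, d), axis=1)  # point-line distance
        n_in = int((dist < tol).sum())
        if n_in > best_n:
            best_p, best_d, best_n = p, d, n_in
    return best_p, best_d, best_n

# Needle-like cluster along z (constant x, y) plus a few outliers.
z = np.linspace(0.0, 50.0, 30)
needle = np.c_[np.full(30, 10.0), np.full(30, 20.0), z]
outliers = np.array([[0.0, 0.0, 5.0], [30.0, 5.0, 40.0], [15.0, 25.0, 20.0]])
pts = np.vstack([needle, outliers])
p, d, n_in = ransac_line(pts)
```

In a multi-needle setting this fit would run once per cluster found in the preceding clustering step.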
This paper presents the design, fabrication, and characterization of a novel all-optical fiber ultrasound imaging system based on the photoacoustic (PA) ultrasound generation principle and the Fabry–Perot (FP) interferometer principle for biomedical imaging applications. The system consists of a fiber-optic ultrasound generator and an FP fiber sensor receiver. A carbon-black polydimethylsiloxane (PDMS) material was utilized as the photoacoustic material for the fiber-optic ultrasound generator; the black PDMS was coated on the tip of a 1000-μm-core multimode fiber (MMF) to generate the ultrasound signal. Two layers of gold, PDMS, and a single-mode fiber (SMF) were used to build the FP fiber sensor receiver. A system verification test proved the ultrasound sensing capability, and a biomedical imaging test validated the ultrasound imaging capability. This all-optical fiber ultrasound imaging system has many advantages, such as small size, light weight, ease of use, and immunity to electromagnetic interference. This research provides valuable knowledge for further study of biomedical imaging in limited spaces, e.g., catheter-based intravascular imaging, tissue characterization, tissue identification, and related biomedical applications.
High-resolution ultrasonic imaging is increasingly used in dermatology as a complementary technique for assessing cutaneous lesions. The clinical application of ultra-high-frequency ultrasound can reduce the number of invasive procedures, such as biopsies, and assist in surgical planning. The success of ultrasonic methods depends on the ability of modern imaging systems to deliver reliable and interpretable information. We present a new handheld high-resolution ultrasonic imager designed for dermatological use. The device operates at 50-100 MHz and provides B-scan images with up to 4 mm penetration depth and 40 μm axial resolution. Adaptive signal-processing algorithms highlight anatomical and pathological features of the skin tissue. The primary skin layers, as well as the skin appendages, are clearly detectable in the acquired acoustic images. High-resolution characterization of skin morphology allows the overall condition of the skin to be assessed and can be used for diagnostic purposes. The new device will be useful for a number of dermatological applications: surgery planning and image-guided intervention, assessment of wound healing and skin grafts, vascular anomalies, inflammatory diseases, and numerous cosmetic complications.
The rapidly developing medical market for high-resolution skin imaging requires suitable phantoms that simulate the characteristics of skin pathologies in order to develop new ultrasonic imaging systems and image-processing algorithms. Such a phantom should mimic the important acoustic properties of skin to provide a realistic and responsive environment; the most significant ultrasonic parameters are sound speed, attenuation, and backscattering. The purpose of this study was to develop multilayered phantoms imitating the mechanical and acoustic properties of healthy skin, benign nevi, and melanoma lesions for use with high-frequency ultrasound (50 MHz). Phantoms were fabricated and tested using scanning acoustic microscopes (Honda 50-SI, Tessonics AM) and a hand-held skin imager. The physical and acoustic properties of the phantom materials can be controlled by varying the material composition, and the evaluated acoustic parameters were found to be close to the values of human skin. The phantoms can be used for an extended period (6 months) without their properties altering.