The synthetic aperture (SA) technique can be used to achieve real-time volumetric ultrasound imaging with 2-D row-column addressed transducers. This paper investigates the SA volumetric imaging performance of an in-house prototyped 3 MHz λ/2-pitch 62+62 element piezoelectric 2-D row-column addressed transducer array. Utilizing single-element transmit events, a volume rate of 90 Hz down to a depth of 14 cm is achieved. Data are obtained using the experimental ultrasound scanner SARUS with a 70 MHz sampling frequency and beamformed using a delay-and-sum (DAS) approach. A signal-to-noise ratio of up to 32 dB is measured on the beamformed images of a tissue-mimicking phantom with an attenuation of 0.5 dB/(cm MHz), from the surface of the probe to the penetration depth of 300λ. The measured lateral resolution, as full-width-at-half-maximum (FWHM), is between 4λ and 10λ from 18% to 65% of the penetration depth from the surface of the probe. The average contrast is 13 dB over the same range. These imaging performance results may serve as a reference guide for possible applications of such an array in different medical fields.
Many modern high-end scanners use some form of coherent synthesis of image lines by combining beams acquired with different transmissions, such as retrospective dynamic transmit focusing (Acuson/Siemens), nSIGHT (Philips), and Zone imaging (Zonare). Two major strategies are described in the literature for uniformly focusing both transmit and receive beams throughout the field of view: using virtual sources, and applying spatial matched filtration. The virtual source model is precise when the transmit is either strongly focused (f-number of approximately 1 to 2) or when images are formed using circular or spherical waves. Spatial matched filtration can also be used with weakly focused transmissions, but it requires the measurement and storage of the responses of point targets within the limits of the transmit beam.
This paper presents a semi-analytic model for the transmitted field, which can be applied to synthetic transmit imaging. The model is more precise than the virtual source concept, does not require the measurement of the transmit field as matched filtration methods do, and can be applied to both strongly and weakly focused transmissions. Furthermore, the model is applicable to tissue harmonic and contrast enhanced ultrasound imaging.
The paper presents the development of the model using the principles of diffraction, and its validation using computer simulations and measurements on a phantom. Finally, the model is demonstrated for synthetic aperture tissue harmonic in-vivo imaging.
Synthetic aperture (SA) imaging can be used to achieve real-time volumetric ultrasound imaging using 2-D array transducers. The sensitivity of SA imaging is improved by maximizing the acoustic output, but the technical and biological limitations of an ultrasound system must be considered. This paper investigates the in vivo applicability and sensitivity of volumetric SA imaging. Utilizing the transmit events to generate a set of virtual point sources, a frame rate of 25 Hz for a 90° × 90° field-of-view was achieved. Data were obtained using a 3.5 MHz 32 × 32 element 2-D phased array transducer connected to the experimental scanner SARUS. Proper scaling is applied to the excitation signal such that intensity levels comply with the U.S. Food and Drug Administration regulations for in vivo ultrasound imaging. The measured mechanical index and spatial-peak temporal-average intensity are 0.83 and 377.5 mW/cm2 for parallel beamforming (PB), and 0.48 and 329.5 mW/cm2 for SA. A human kidney was volumetrically imaged with the SA and PB techniques simultaneously. Two radiologists were consulted for evaluation of the volumetric SA by means of a questionnaire on the level of detail perceivable in the beamformed images, comparing SA against PB on the in vivo data. The feedback from the domain experts indicates that volumetric SA images internal body structures with a better contrast resolution than PB at all positions in the entire imaged volume. Furthermore, the autocovariance of a homogeneous area in the in vivo SA data had a 23.5% smaller width at half of its maximum value compared to PB.
Rapid estimation of blood velocity and visualization of complex flow patterns are important for clinical use of diagnostic ultrasound. This paper presents real-time processing for two-dimensional (2-D) vector flow imaging which utilizes an off-the-shelf graphics processing unit (GPU). In this work, Open Computing Language (OpenCL) is used to estimate 2-D vector velocity flow in vivo in the carotid artery. Data are streamed live from a BK Medical 2202 Pro Focus UltraView Scanner to a workstation running a research interface software platform. Processing data from a 50 millisecond frame of a duplex vector flow acquisition takes 2.3 milliseconds on an Advanced Micro Devices Radeon HD 7850 GPU card. The detected velocities are accurate to within the precision limit of the output format of the display routine. Because this tool was developed as a module external to the scanner's built-in processing, it enables new opportunities for prototyping novel algorithms, optimizing processing parameters, and accelerating the path from development lab to clinic.
Improvement of ultrasound images should be guided by their diagnostic value. Evaluation of clinical image
quality is generally performed subjectively, because objective criteria have not yet been fully developed and
accepted for the evaluation of clinical image quality. Based on recommendation 500 from the International
Telecommunication Union - Radiocommunication (ITU-R) for such subjective quality assessment, this work
presents equipment and a methodology for clinical image quality evaluation for guiding the development of new
and improved imaging. The system is based on a BK-Medical 2202 ProFocus scanner equipped with a UA2227
research interface, connected to a PC through an X64-CL Express camera link. Data acquisition features subject
data recording, loading/saving of exact scanner settings (for later experiment reproducibility), free access to all
system parameters for beamformation and is applicable for clinical use. The free access to all system parameters
makes it possible to capture standardized images as found in the clinic and experimental data from new processing or beamformation methods. The length of the data sequences is restricted only by the memory of the external PC. Data may be captured interleaved, switching between multiple setups, to maintain an identical transducer, scanner, region of interest, and recording time for both the experimental and standardized images. Data storage takes approximately 15.1 seconds per 3-second sequence, including complete scanner settings and patient information, which is fast enough to acquire a sufficient number of scans under realistic operating conditions, so that statistical evaluation is valid and reliable.
Delay-and-sum array beamforming is an essential part of signal processing in ultrasound imaging. Although the principles
are simple, there are many implementation details to consider for obtaining a reliable and computationally efficient
beamforming. Different methods for calculation of time-delays are used for different waveforms. Various inter-sample
interpolation schemes such as FIR-filtering, polynomial, and spline interpolation can be chosen. Apodization can be
any preferred window function of fixed size applied on the channel signals or it can be dynamic with an expanding and
contracting aperture to obtain a preferred constant F-number. An effective and versatile software toolbox for off-line
beamformation designed to address all of these issues has been developed. It is capable of exploiting parallelization of
computations on a Linux cluster and is written in C++ with a MATLAB (MathWorks Inc.) interface. It is an aid to support
simulations and experimental investigation of 3D imaging, synthetic aperture imaging, and directional flow estimation. A
number of parameters are necessary to fully define the spatial beamforming and some parameters are optional. All spatial
specifications are given in 3D space such as the physical positions of the transducer elements during transmit and receive
and the positions of the points to beamform. The points of focus are defined as a collection of lines each having an origin, a
direction, a distance between points and a length. The transducer, the points to beamform, and the apodization are defined
as individual objects and a combination of these define the actual beamforming. Once the beamforming is defined, the
time-delays and apodization values for every combination of transmit elements, receive elements and focus points can be
calculated and stored in lookup-tables (LUT). Parametric beamforming can also be applied where calculations are done by
demand, thus, reducing the storage demand dramatically. On a standard PC with a Pentium 4, 2.66 GHz processor running
Linux the toolbox can beamform 100,000 points in lines of various directions in 20 seconds using a transducer of 128
elements, dynamic apodization and 3rd order polynomial interpolation. This is a decrease in computation time of at least
a factor of 15 compared to an implementation directly in MATLAB of a similar beamformer.
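The delay-and-sum operation the toolbox performs can be illustrated with a minimal Python sketch, not the toolbox's C++ implementation. The single-transmit geometry, the linear inter-sample interpolation, and the boxcar dynamic apodization below are simplifying assumptions chosen for brevity:

```python
import numpy as np

def das_beamform(rf, elem_pos, focus_pts, tx_origin, fs, c, f_number=2.0):
    """Delay-and-sum beamforming of one transmit event (minimal sketch).

    rf        : (n_samples, n_elements) received channel data
    elem_pos  : (n_elements, 3) receive-element positions [m]
    focus_pts : (n_points, 3) points to beamform [m]
    tx_origin : (3,) transmit origin [m]
    fs, c     : sampling frequency [Hz] and speed of sound [m/s]
    """
    n_samples, n_elem = rf.shape
    chan = np.arange(n_elem)
    out = np.zeros(len(focus_pts))
    for i, p in enumerate(focus_pts):
        t_tx = np.linalg.norm(p - tx_origin) / c      # transmit time of flight
        d_rx = np.linalg.norm(elem_pos - p, axis=1)   # receive distances
        # Dynamic apodization: only elements inside the expanding aperture
        # given by the constant F-number contribute (boxcar window here;
        # any preferred window function could be applied instead).
        w = (np.abs(elem_pos[:, 0] - p[0]) <= p[2] / (2 * f_number)).astype(float)
        idx = (t_tx + d_rx / c) * fs                  # fractional sample index
        i0 = np.floor(idx).astype(int)
        frac = idx - i0
        valid = (i0 >= 0) & (i0 + 1 < n_samples)
        i0c = np.clip(i0, 0, n_samples - 2)
        # Linear inter-sample interpolation (the toolbox also offers FIR,
        # polynomial, and spline schemes).
        s = (1 - frac) * rf[i0c, chan] + frac * rf[i0c + 1, chan]
        out[i] = np.sum(np.where(valid, w * s, 0.0))
    return out
```

With impulse channel data synthesized for a single point scatterer, the beamformed output peaks at the scatterer position, which is the basic sanity check for any such implementation.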
This paper presents a recursive approach for parametric delay calculations for a beamformer. The suggested calculation procedure is capable of calculating the delays for any image line defined by an origin and an arbitrary direction. It involves only add and shift operations, making it suitable for hardware implementation. One delay-calculation unit (DCU) needs 4 parameters, and all operations can be implemented using fixed-point arithmetic. An N-channel system needs N + 1 DCUs per line: one for the distance from the transmit origin to the image point and N for the distances from the image point to each of the receivers. Each DCU recursively calculates the square of the distance between a transducer element and a point on the beamformed line and then finds the approximate square root. The distance to point i is used as an initial guess for point i + 1. Using fixed-point calculations with 36-bit precision gives an error in the delay calculations on the order of 1/64 samples at a sampling frequency of fs = 40 MHz. The circuit has been synthesized for a Virtex II Pro device, speed grade 6, in two versions, a pipelined and a non-pipelined one, producing 150 and 30 million delays per second, respectively. The non-pipelined circuit occupies about 0.5% of the FPGA resources and the pipelined one about 1%. When the square root is found with a pipelined CORDIC processor, 2% of the FPGA slices are used to deliver 150 million delays per second.
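The recursive distance calculation can be sketched as follows. This floating-point Python version mirrors the principle only (the squared distance along a line is quadratic in the point index, so it can be updated with two additions per point, and the square root is refined by one Newton step seeded with the previous distance); the paper's fixed-point DCU replaces the multiply and divide below with shifts and a CORDIC square root:

```python
import math

def recursive_line_delays(origin, direction, step, elem, n_points):
    """Distances from one element to points along an image line (sketch).

    Points are p_n = origin + n*step*direction (direction is unit length).
    d^2(n) is quadratic in n, so it is updated with a second-order
    difference scheme using additions only; the square root uses one
    Newton step with the previous distance as the initial guess.
    """
    r0 = [o - e for o, e in zip(origin, elem)]          # p_0 - elem
    d2 = sum(v * v for v in r0)                         # |p_0 - elem|^2
    proj = sum(u * v for u, v in zip(direction, r0))    # direction . (p_0 - elem)
    inc = 2.0 * step * proj + step * step               # first difference of d^2
    dinc = 2.0 * step * step                            # constant second difference
    d = math.sqrt(d2)                                   # exact start value
    dists = [d]
    for _ in range(n_points - 1):
        d2 += inc                                       # additive update of d^2
        inc += dinc
        d = 0.5 * (d + d2 / d)   # Newton step seeded by the previous distance
        dists.append(d)
    return dists
```

Because each new point lies close to the previous one, a single Newton iteration already keeps the distance error far below one sample period at typical sampling frequencies.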
Most modern ultrasound scanners use the so-called pulsed-wave
Doppler technique to estimate the blood velocities. Among the
narrowband-based methods, the autocorrelation estimator and the
Fourier-based method are the most commonly used approaches. Due to
the low level of the blood echo, the signal-to-noise ratio is low,
and some averaging in depth is applied to improve the estimate.
Further, due to velocity gradients in space and time, the spectrum
may get smeared. An alternative approach is to use a pulse
with multiple frequency carriers and perform some form of averaging in
the frequency domain. However, the limited transducer bandwidth
will limit the accuracy of the conventional Fourier-based
estimator; this method is also known to have considerable
variance. More importantly, both the mentioned methods suffer from
the maximum axial velocity bound, v_z,max = c·f_prf/(4·f_c), where c is the speed of propagation. In this paper, we propose a nonlinear least squares (NLS) estimator. Typically, NLS estimators are computationally cumbersome, in general requiring the minimization of a multidimensional and often multimodal cost function. Here, by noting that the unknown velocity results in a common known frequency-distorting function, we reformulate the NLS estimator as a one-dimensional minimization problem, a reformulation confirmed by extensive simulations. The results show that the NLS method not only works better than both the autocorrelation estimator and the periodogram method for high velocities; it also does not suffer from the maximum velocity bound.
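The aliasing behavior behind the v_z,max bound can be demonstrated with the conventional lag-one autocorrelation estimator. The carrier, pulse repetition frequency, and the noise-free single-scatterer slow-time signal below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def autocorr_velocity(iq, fprf, fc, c=1540.0):
    """Axial velocity from slow-time IQ data via the lag-one autocorrelation
    estimator: the Doppler shift is f_d = 2*v_z*fc/c, so
    v_z = c*fprf/(4*pi*fc) * angle(R(1))."""
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))
    return c * fprf / (4.0 * np.pi * fc) * np.angle(r1)

c, fc, fprf = 1540.0, 5e6, 8e3        # assumed carrier and pulse repetition freq.
vmax = c * fprf / (4.0 * fc)          # maximum unaliased axial velocity, 0.616 m/s
n = np.arange(64)

# A scatterer at half the bound is estimated correctly ...
iq = np.exp(2j * np.pi * (2 * 0.5 * vmax * fc / c) * n / fprf)
v_ok = autocorr_velocity(iq, fprf, fc, c)      # = +0.5*vmax

# ... while one at 1.5*vmax aliases, wrapping to 1.5*vmax - 2*vmax
iq = np.exp(2j * np.pi * (2 * 1.5 * vmax * fc / c) * n / fprf)
v_alias = autocorr_velocity(iq, fprf, fc, c)   # = -0.5*vmax
```

The phase of R(1) is only unambiguous within ±π, which is precisely where the v_z,max bound comes from; the NLS reformulation avoids this phase wrapping.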
This paper investigates the concept of virtual source elements. It suggests a common framework for increasing the resolution and penetration depth of several imaging modalities by applying synthetic aperture focusing (SAF). SAF is used either as a post-focusing procedure on the beamformed data or directly on the raw signals from the transducer elements. Both approaches increase the resolution. The paper shows that in one imaging situation, different virtual sources can co-exist for the same scan line: one in the azimuth plane and another in the elevation plane. This property is used in a two-stage beamforming procedure for 3D ultrasound imaging. The position of the virtual source and the created waveform are investigated with simulations and with pulse-echo measurements. There is good agreement between the estimated wavefront and the theoretically fitted one. Several examples of the use of virtual source elements are considered. Using SAF on data acquired with conventional linear array imaging improves the penetration depth for the particular imaging situation from 80 to 110 mm. The independent use of virtual source elements in the elevation plane decreases the size of the point spread function at 100 mm below the transducer from 7 mm to 2 mm.
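A minimal sketch of the virtual source delay model may clarify the idea. It assumes an on-axis focal point used as the virtual source and an aperture centered at the origin, which is a simplification relative to the paper's two-stage procedure:

```python
import numpy as np

def virtual_source_tx_delay(p, vsrc, c=1540.0):
    """Transmit time of flight to point p via a virtual source (sketch).

    The focused transmit beam is approximated as a spherical wave
    emanating from the virtual source vsrc (the focal point of an
    aperture assumed centered at the origin). The wave reaches vsrc
    |vsrc|/c seconds after emission; the sign of the extra propagation
    term flips for points between the aperture and the focus.
    """
    p, vsrc = np.asarray(p, float), np.asarray(vsrc, float)
    t_focus = np.linalg.norm(vsrc) / c
    sign = 1.0 if p[2] >= vsrc[2] else -1.0
    return t_focus + sign * np.linalg.norm(p - vsrc) / c
```

On the beam axis this reduces to the direct time of flight |p|/c both beyond and before the focus, which is the consistency property that makes SAF refocusing with virtual sources possible.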
Synthetic transmit aperture ultrasound (STAU) imaging can create images with as few as 2 emissions, making it attractive for 3D real-time imaging. There are two major problems to be solved: (1) the complexity of the hardware involved, and (2) poor image quality due to a low signal-to-noise ratio (SNR). We have solved the first problem by building a scanner capable of acquiring data using STAU in real time. The SNR is increased by using encoded signals, which make it possible to send more energy into the body while preserving the spatial and contrast resolution. The performance of temporal, spatial, and spatio-temporal encoding was investigated. Experiments on a wire phantom in water were carried out to quantify the gain from the different encodings. The gain in SNR using an FM-modulated pulse is 12 dB. The penetration depth of the images was studied using a tissue-mimicking phantom with a frequency-dependent attenuation of 0.5 dB/(cm MHz). The combination of spatial and temporal encoding has the highest penetration depth. Images to a depth of 110 mm can successfully be made with a contrast resolution comparable to that of a linear array image. The in-vivo scans show that motion artifacts do not significantly influence the performance of STAU.
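The SNR benefit of temporal encoding comes from transmitting more energy at the same peak amplitude, which matched filtering then compresses back into a short pulse. A rough numerical illustration with assumed pulse parameters (not the parameters behind the paper's measured 12 dB):

```python
import numpy as np

fs = 100e6                             # sampling frequency (assumed)
fc = 5e6                               # carrier frequency (assumed)
T_pulse = 2 / fc                       # two-cycle conventional pulse (assumed)
T_chirp = 20e-6                        # linear FM (chirp) pulse duration (assumed)
B = 4e6                                # swept bandwidth (assumed)

t_p = np.arange(int(T_pulse * fs)) / fs
pulse = np.sin(2 * np.pi * fc * t_p)
t_c = np.arange(int(T_chirp * fs)) / fs
chirp = np.sin(2 * np.pi * ((fc - B / 2) * t_c + 0.5 * (B / T_chirp) * t_c**2))

# Matched filtering collects the whole chirp energy into one compressed
# peak, so at equal peak amplitude the SNR gain over the short pulse is
# the transmitted-energy ratio, roughly T_chirp / T_pulse:
gain_db = 10 * np.log10(np.sum(chirp**2) / np.sum(pulse**2))
# ≈ 10*log10(T_chirp / T_pulse) ≈ 17 dB for these assumed parameters
```

The achievable gain in practice is lower than this energy ratio because of transducer bandwidth, apodization of the chirp for range sidelobe control, and attenuation, which is consistent with the 12 dB measured in the paper.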