KEYWORDS: Image segmentation, Ultrasonography, Signal to noise ratio, Signal attenuation, 3D image processing, Expectation maximization algorithms, Breast, 3D modeling, Tissues, Magnetic resonance imaging
This paper examines three Bayesian statistical segmentation techniques, with an innovative attenuation compensation, on synthetic data and breast ultrasound medical images: maximum a posteriori simulated annealing (MAP-SA), MAP iterated conditional modes (MAP-ICM), and maximization of posterior marginals (MPM). All three use expectation maximization (EM) to estimate the Gaussian model parameters and segment the data using a three-dimensional (3-D) Markov random field pixel neighborhood. We conclude that, given the high speckle noise and adverse attenuation of breast ultrasound, the MPM algorithm performs best, producing better-localized segmentations than the two MAP techniques. We present results first on synthetic images and then on breast ultrasound. Our new contributions for 3-D breast ultrasound yield improved results by combining a noise model, in which the Gaussian mean is proportional to the image attenuation with depth, with a new prior probability model.
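The EM-plus-MRF pipeline described above can be sketched in a much simplified form: EM estimates the per-class Gaussian parameters, and an iterated-conditional-modes pass with a Potts-style neighborhood penalty refines the labeling. This is a 2-D, two-class, synchronous-update toy, not the paper's 3-D MAP/MPM implementation, and all parameter values (`beta`, sweep counts, initialization) are illustrative assumptions:

```python
import numpy as np

def em_two_class(x, iters=50):
    """Estimate a two-class 1-D Gaussian mixture with EM (simplified
    stand-in for the parameter-estimation step)."""
    mu = np.percentile(x, [25.0, 75.0])          # break symmetry at init
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: class responsibilities for every pixel intensity
        lik = np.stack([
            pi[k] * np.exp(-(x - mu[k]) ** 2 / (2.0 * var[k]))
            / np.sqrt(2.0 * np.pi * var[k])
            for k in range(2)
        ])
        resp = lik / lik.sum(axis=0)
        # M-step: re-estimate weights, means, variances
        n = resp.sum(axis=1)
        mu = (resp * x).sum(axis=1) / n
        var = (resp * (x - mu[:, None]) ** 2).sum(axis=1) / n
        pi = n / n.sum()
    return mu, var, pi

def icm_segment(img, mu, var, beta=1.0, sweeps=5):
    """Label a 2-D image via (synchronous) ICM with a Potts-style prior.
    Per-pixel energy = Gaussian data term + beta * neighbor disagreements."""
    labels = np.argmin(
        (img[..., None] - mu) ** 2 / (2.0 * var) + 0.5 * np.log(var), axis=-1
    )
    for _ in range(sweeps):
        energies = []
        for k in range(len(mu)):
            data = (img - mu[k]) ** 2 / (2.0 * var[k]) + 0.5 * np.log(var[k])
            p = np.pad(labels, 1, mode="edge")
            # count 4-neighbors whose current label differs from candidate k
            disagree = (
                (p[:-2, 1:-1] != k).astype(float) + (p[2:, 1:-1] != k)
                + (p[1:-1, :-2] != k) + (p[1:-1, 2:] != k)
            )
            energies.append(data + beta * disagree)
        labels = np.argmin(np.stack(energies), axis=0)
    return labels
```

On a noisy two-region synthetic image, the EM estimates recover the class means and the neighborhood term suppresses isolated speckle-induced label flips, which is the intuition behind preferring spatially regularized Bayesian labeling over per-pixel thresholding.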
3D imaging systems using liquid lens technology are currently being developed for medical devices as well as consumer electronics. Liquid lenses operate on the principle of electrowetting to control the curvature of a buried surface, allowing a voltage-controlled change in focal length. Imaging systems that utilize a liquid lens allow extraction of depth information from the object field through a controlled introduction of defocus into the system. The design of such a system must be considered carefully to simultaneously deliver good image quality and meet the depth-of-field requirements of the image processing. In this work, a corrective model has been designed for use with the Varioptic Arctic 316 liquid lens. The design can be optimized for depth of field while minimizing aberrations for a 3D imaging application. The modeled performance is compared with the measured performance of the corrected system over a large range of focal lengths.
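The depth-of-field trade-off above can be illustrated with the geometric blur-circle diameter of an ideal thin lens as a function of object distance; the focal length, f-number, and distances below are illustrative values, not parameters of the Arctic 316 system:

```python
def blur_diameter(f, n_stop, s_focus, s):
    """Geometric blur-circle diameter of an ideal thin lens (all lengths
    in meters). f: focal length, n_stop: f-number (aperture = f / n_stop),
    s_focus: distance the lens is focused at, s: actual object distance.
    Valid for s_focus > f and s > 0."""
    aperture = f / n_stop
    return aperture * abs(s - s_focus) / s * f / (s_focus - f)

# An object at the focused distance has zero geometric blur; the blur
# grows as the object departs from the focal plane, and the span of
# distances where it stays below an acceptable circle of confusion is
# the depth of field the corrective design must preserve.
```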
A new method for capturing 3D video from a single imager and lens is introduced. This method avoids the calibration and alignment issues associated with binocular 3D video cameras, and it does not require special ranging transmitters or sensors. Because it is a single lens/imager system, it is also less expensive than either binocular or ranging cameras. Our system outputs a 2D image and an associated depth image using the combination of a microfluidic lens and a Depth from Defocus (DfD) algorithm. The lens is capable of changing focus fast enough to obtain two images at the normal video frame rate, and the DfD algorithm uses the in-focus and out-of-focus images to infer depth. We performed our experiments on synthetic images and on a real-aperture CMOS imager with a microfluidic lens. On synthetic images, we found an improvement in mean squared error compared with the literature on a limited test set. On camera images, our research showed that DfD combined with edge detection and segmentation provided subjective improvements in the resulting depth images.
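A minimal sketch of the two-image depth-from-defocus idea (not the authors' algorithm): given blur-circle diameters observed at two known focus settings, the thin-lens blur model can be inverted numerically for the object distance. All lens parameters and the search range are illustrative assumptions:

```python
import numpy as np

def blur_model(f, n_stop, s_focus, s):
    """Thin-lens geometric blur-circle diameter (illustrative model).
    Works elementwise when s is a NumPy array of candidate depths."""
    return (f / n_stop) * np.abs(s - s_focus) / s * f / (s_focus - f)

def depth_from_two_blurs(c1, c2, f, n_stop, s1, s2,
                         candidates=np.linspace(0.2, 5.0, 10001)):
    """Brute-force inversion: scan candidate depths and return the one
    whose predicted blurs at focus settings s1 and s2 best match the
    measured diameters c1 and c2 (least-squares over the grid)."""
    err = ((blur_model(f, n_stop, s1, candidates) - c1) ** 2
           + (blur_model(f, n_stop, s2, candidates) - c2) ** 2)
    return candidates[np.argmin(err)]
```

In a real pipeline the blur diameters would be estimated locally from the in-focus/out-of-focus image pair (e.g. around edges, which is why edge detection helps), and the two focus settings would correspond to the two states of the microfluidic lens.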
Conference Committee Involvement (2)
Digital Photography and Mobile Imaging XI
9 February 2015 | San Francisco, California, United States
Digital Photography X
3 February 2014 | San Francisco, California, United States