1. Introduction

According to the American Cancer Society, about 246,660 breast cancer cases were diagnosed in 2016, the largest number of new cases among all cancer types. The estimated number of deaths from breast cancer in 2016 was almost 40,450.1 Early detection of breast cancer saves lives and increases treatment options. X-ray mammography is widely used for breast cancer screening; however, it misses about 10% of cancers, especially in patients with dense breasts.2 Ultrasound (US) is used as an adjunct to mammography to differentiate solid from cystic lesions; however, it does not always provide the needed contrast between benign and malignant solid lesions.3 MRI is frequently used for screening high-risk patients, but its overall performance is unsatisfactory due to high false positive rates.2 Diffuse optical tomography (DOT) is a noninvasive technique that uses near-infrared (NIR) light to map tissue optical properties. Because water absorption in the NIR spectrum is low, the light can penetrate several centimeters into soft tissue, for example, breast and brain. Reflected or transmitted light measured at the tissue surface is used to reconstruct tomographic images.4,5 DOT has demonstrated great potential in cancer diagnosis and treatment monitoring by mapping hemoglobin concentration, which is related to vasculature content and tumor angiogenesis. Using multiple wavelengths, it is possible to measure oxygenated, deoxygenated, and total hemoglobin concentrations, as well as oxygen saturation and lipid and water concentrations.
These measurements can be used effectively to distinguish cancers from benign lesions and to monitor treatment response, because malignant tumors typically have higher hemoglobin content than benign lesions and hemoglobin changes differ between treatment responders and nonresponders.6–10 However, DOT suffers from intense light scattering inside tissue, which causes uncertainty in the reconstructed target location and inaccuracy in target quantification. These problems can be largely overcome by using other imaging techniques to guide DOT localization and reconstruction. US-, mammography-, and MRI-guided DOT10–12 have been investigated, and promising results have been reported. US-guided DOT has been developed by our group, and its utility in cancer diagnosis and treatment monitoring has been demonstrated in several clinical studies.10,13,14 In the US-guided DOT approach, coregistered US images are captured, and measurements of lesion size and depth are then incorporated into DOT reconstruction as a region of interest (ROI). A dual-zone mesh image reconstruction15 is used to segment the ROI and the background region with fine and coarse mesh sizes, respectively. This scheme effectively reduces the total number of voxels with unknown optical absorption in the image reconstruction. Additionally, the total absorption of each voxel is reconstructed, and the total is then divided by the voxel size to provide the absorption distribution. Because lesion absorption is generally higher than that of the background, the total absorption (the product of voxel size and absorption) of a small lesion voxel is on the same scale as that of a larger background voxel. Therefore, the inversion is better conditioned and converges in fewer iterations than conventional methods that do not use a dual-mesh approach. Thus, the US-identified ROI is critical to guiding dual-zone mesh DOT reconstruction.
Extraction of tumor size and location from US images has previously been done manually, which requires experienced users and slows down the DOT reconstruction. As in other medical imaging modalities, automatic US image segmentation is a challenging task because US image contrast is low and boundaries are often unclear due to speckle. Researchers have explored several methods to obtain reliable segmentations from medical images. These methods include operator-assisted region growing,16 rule-based segmentation in which known image primitives are used for unsupervised segmentation,17 atlas-based segmentation in which a known structure is searched for in the image,18 and neural network and c-mean clustering,19 which generate statistical models to classify pixels into different segments. In this article, we introduce a simple adaptive threshold-based method20 that is fast and easy to implement; moreover, it provides accuracy comparable to manual processing for DOT reconstruction. This method utilizes the image histogram to obtain an adaptive threshold for each input image. For some US images, the posterior shadow of a tumor extends to the chest wall and makes segmentation difficult. To avoid this problem, Hough transform21-based line detection is used to determine the chest wall location and use it as the deep boundary of the tumor. Twenty patients (10 benign and 10 malignant cases) are used to evaluate the performance of the segmentation method. Reconstructed absorption images are compared with those from manual processing, and similar results are obtained. To the best of our knowledge, this is the first report of an automated segmentation method using US images to guide DOT image reconstruction. The method can be modified and implemented in MRI- or x-ray-guided DOT image reconstruction.
2. Methods

2.1. Patient Data and Experiments

Patient data were acquired from a US-guided DOT system.13 The study was approved by the local Institutional Review Boards and was compliant with the Health Insurance Portability and Accountability Act. All patients signed informed consent. Data used in this study have been deidentified. Based on biopsy results, 10 patients had benign lesions and 10 patients had cancers. The specific type and US measurements of radius in the z (depth) and x (spatial) dimensions (cm), obtained by an experienced user, are given in Table 1.

Table 1. Type and size of 10 malignant and 10 benign tumors.
Our data acquisition system consists of a commercial US system and an NIR imager. Briefly, the optical imager delivers light of 740-, 780-, 808-, and 830-nm wavelengths to the tissue sequentially. Light is modulated at a 140-MHz carrier frequency. Each wavelength is multiplexed to nine positions on a hand-held probe, and 14 photomultiplier detectors detect reflected light via light guides. The detected signals are demodulated to a 20-kHz output. A custom-made analog-to-digital board collects all signals and stores the data on a laptop. Each data set takes 3 to 4 s to acquire, which is fast enough to acquire multiple sets of measurements from each patient at both the lesion and the contralateral normal breast for reference. Coregistered US images are captured from the video output of the US system before and after each NIR data set. The detailed system description and data acquisition procedure can be found in Refs. 13 and 22.

2.2. Extract Tumor Size and Location

To automatically detect lesion size and location for DOT reconstruction, an adaptive threshold-based segmentation method is used. In some cases, the posterior shadow of the tumor extends to the chest wall in the US images. In those cases, it is difficult to determine the tumor size because the deeper boundary of the tumor cannot be accurately determined. Under these circumstances, the location of the chest wall is determined and used as an estimate of the deeper boundary of the tumor. To determine the chest wall, the Hough transform is used together with an edge detection method.

2.2.1. Preprocessing

A typical coregistered US image acquired by an image capture card is given in Fig. 1(a). For reference, the vertical axis is marked as the z-axis and the horizontal axis as the x-axis. Measurement in the y-axis is considered the same as in the x-axis, assuming that lesions are symmetric in the x- and y-axes.
Since pixel intensity is the key information needed in the segmentation algorithm, the US grayscale image is first automatically cropped from the captured image before applying the Hough transform and the Sobel23 edge detection method. Figure 1(b) shows the cropped US image. Depth marker detection is the next step before the segmentation procedure because the markers vary with the depth range, which depends on the user selection on the front panel of the US machine. To detect the depth markers, a binary image is generated using a fixed pixel intensity of 150 out of 256 grayscale levels as the threshold. Since the depth markers are mainly white, this pixel intensity helps separate them from the background. Then all white regions consisting of 3 to 50 pixels and located outside the right border of the US image are marked as depth markers. These pixel ranges were obtained by examining available US images collected from different manufacturers. This depth marker detection procedure detects horizontal ticks along with numbers, which makes it suitable for images collected from a wide range of US machines. Figure 2 shows the captured image with automatically detected depth markers. Once the positions of the depth markers are known, the difference between two markers along the z-axis provides the number of pixels per centimeter, which is then used to convert the measured tumor size in depth into centimeters.

2.2.2. Adaptive threshold-based segmentation

To extract the required information from the US image, the first step is to segment the lesion from the rest of the image. Then the radius and center of the lesion can be measured from the segmented lesion. A single threshold point is used to separate the two zones, i.e., lesion and background. This threshold point is determined adaptively for each input image.
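As a concrete illustration, the depth-marker detection and pixel-to-centimeter conversion described in Sec. 2.2.1 could be sketched as below. This is a minimal sketch, not the authors' implementation: the intensity threshold of 150 and the 3- to 50-pixel size range come from the text, while the function names, the scipy-based blob labeling, and the assumption that the marker strip lies right of a known column index are ours.

```python
import numpy as np
from scipy import ndimage

def find_depth_markers(frame, us_right_edge, thresh=150, min_px=3, max_px=50):
    """Locate depth-marker ticks in the strip to the right of the US image.

    frame: 2-D grayscale capture (0 to 255); us_right_edge: column index of
    the US image's right border (assumed known from the cropping step).
    """
    strip = frame[:, us_right_edge:]
    labels, n = ndimage.label(strip > thresh)      # white blobs in the strip
    rows = []
    for i in range(1, n + 1):
        ys, _ = np.nonzero(labels == i)
        if min_px <= ys.size <= max_px:            # keep tick-sized blobs only
            rows.append(int(ys.mean()))            # vertical (z) position
    return sorted(rows)

def pixels_per_cm(marker_rows):
    # adjacent depth markers are 1 cm apart, so their row spacing is px/cm
    return float(np.median(np.diff(marker_rows)))
```

A depth measured in pixels on the segmented image can then be divided by `pixels_per_cm(...)` to convert it to centimeters.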
Because US images contain speckle noise, more complex segmentation techniques, such as fuzzy c-mean clustering and active contour models,24 do not provide any improvement while demanding more computational resources. Moreover, DOT does not require precisely segmented information. Thus, instead of a complex segmentation algorithm, threshold-based segmentation is used here to obtain the tumor information. Lesions in breast US images usually appear as hypoechoic masses, which separates them from the background tissue. To segment a hypoechoic mass, a threshold point is set to separate the tumor from the rest of the image. Because US images usually have very low contrast, histogram equalization is applied to the grayscale image. Histogram equalization stretches the input histogram over the available range, which is 0 to 255 in grayscale, and thus increases the contrast. Then a simple procedure is followed to detect the threshold point adaptively. Since intensity varies significantly among different images, it is best to use an adaptive threshold point for every input image. The adaptive threshold detection procedure starts by obtaining the histogram of the US image. Figure 3(a) shows the histogram of an input image. The histogram shows a peak and a hump with a notch between them, as indicated in the figure. This histogram shape is obtained from all US images after histogram equalization because of the presence of a significant number of black pixels (providing the peak) and gray pixels (the hump) in a US image. The notch marks the threshold for separating the gray background from the black tumor. To detect this point automatically, the slope of the histogram is calculated, and the pixel intensity at which the sign of the slope changes is taken as the threshold value. In the next step, this threshold value is used to generate a binary image.
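The adaptive threshold step can be sketched as follows. This is a hedged illustration rather than the authors' code: the histogram equalization and the slope-sign-change rule follow the text, while the smoothing window and the half-height skip past the dark peak are our own additions for robustness on noisy histograms.

```python
import numpy as np

def equalize(img):
    """Standard histogram equalization of an 8-bit grayscale image."""
    hist = np.bincount(np.asarray(img).ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
    lut = (cdf * 255).astype(np.uint8)
    return lut[np.asarray(img)]

def adaptive_threshold(img, smooth=5):
    """Find the notch between the dark peak and the gray hump."""
    hist = np.bincount(np.asarray(img).ravel(), minlength=256).astype(float)
    hist = np.convolve(hist, np.ones(smooth) / smooth, mode="same")
    peak = int(hist.argmax())                  # dark peak from black pixels
    start = peak + 1                           # skip the peak's descending flank
    while start < 255 and hist[start] > 0.5 * hist[peak]:
        start += 1
    slope = np.diff(hist)
    for t in range(start, 255):
        if slope[t - 1] < 0 <= slope[t]:       # slope sign change marks the notch
            return t
    return start                               # fallback for degenerate histograms

def segment(img):
    """Binary mask: True where pixels are darker than the adaptive threshold."""
    eq = equalize(img)
    return eq < adaptive_threshold(eq)
```

On a bimodal histogram the returned threshold falls in the valley between the dark (tumor-like) and gray (background) modes, so `segment` marks the hypoechoic pixels.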
After obtaining the threshold for the US image, a binary image is generated in which the tumor region is marked as black and the background as white. However, the tumor is not the only black zone in the binary image. To remove the unwanted black regions, the user inserts a seed at the approximate tumor location by clicking on the tumor in the US image, as shown in Fig. 3(b). If multiple tumors exist, multiple seeds must be inserted at the probable locations. Any region that does not contain a seed is discarded, so that only the tumor region survives. Then the MATLAB® function "regionprops" is used to automatically measure the tumor center and radius. This information is then passed to the optical reconstruction code. The flow diagram in Fig. 4 shows the steps of the entire procedure.

2.2.3. Chest wall detection using Hough transform

Detection of chest wall depth is not essential to obtaining the tumor location and size. However, in some cases, when the posterior shadow extends to the bottom of the US image, it is difficult to define the bottom of the tumor. In such cases, the chest wall location is considered the bottom of the tumor. We defined chest wall depth as the distance from the skin to the top layer of the chest wall muscle. An automated chest wall depth detection method was developed and applied to the coregistered US images. Detection of the chest wall is based on the fact that chest wall muscles appear as line structures in US images [see Fig. 5(a)].25 Therefore, line detection algorithms can be used for automatic detection. We chose the Hough transform21 as the line detection method because it is simple and robust when combined with any edge detection method. Here, Canny edge detection26 is used. The binary image generated by Canny edge detection is shown in Fig. 5(b). It is clear from Fig.
5(b) that, if the Hough transform is applied to the edge-detected image without any restriction, it will detect several unnecessary structures. For example, some linear structures appear at the top of the US image due to the subcutaneous fat and breast tissue interfaces, and other linear structures are also visible in the image. The Hough transform detects all of these. To avoid these unnecessary line structures, we modeled the chest wall as a linear structure that is mainly horizontal with a small slope and appears in the lower half of the image. After applying the Hough transform and the above-mentioned restrictions, the surviving linear structures are marked with green lines as shown in Fig. 5(c). Finally, the mean value of all points on these detected lines is considered the chest wall depth. A flow diagram of the entire procedure is given in Fig. 6. More details on the chest wall detection method and its evaluation can be found in Ref. 27.

3. Optical Reconstruction

The absorption map at each wavelength was reconstructed using the dual-mesh approach with lesion parameters obtained from coregistered US. Because the spatial resolution of diffused light is poorer than that of US, the ROI is chosen to be at least two to three times larger in the spatial dimensions than the lesion seen on US. In addition, because the depth localization of diffused light is very poor, a tighter ROI in the depth dimension is set using coregistered US. The weight matrix was computed using fitted optical properties of each patient's normal contralateral breast. The scattered field measured from the lesion area was related to the internal total absorption coefficients using the following equation: U_sc = W M_a, where M is the total number of source–detector pairs, U_sc is the M × 1 vector of scattered-field measurements, M_a is the N × 1 vector of total absorption of the N voxels, and W is the M × N weight matrix related to the sensitivity of voxels inside the medium. The number of amplitude and phase measurements is 252 (126 source–detector pairs × 2) for 9 sources and 14 detectors.
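The least-squares relation above can be illustrated with a conjugate-gradient solver applied to the normal equations W^H W M_a = W^H U_sc. This is a generic sketch under simplifying assumptions (no regularization and none of the dual-mesh weight construction of Ref. 15); the function name and the early-convergence guard are ours, not the authors' reconstruction code.

```python
import numpy as np

def cg_least_squares(W, u_sc, iters=4, tol=1e-14):
    """Minimize ||u_sc - W m|| by conjugate gradient on the normal equations.

    W: (M x N) weight matrix; u_sc: (M,) scattered-field measurements.
    Returns the total-absorption vector m after at most `iters` iterations.
    """
    m = np.zeros(W.shape[1], dtype=np.result_type(W, u_sc))
    r = W.conj().T @ (u_sc - W @ m)         # residual of the normal equations
    p = r.copy()
    rs = np.vdot(r, r).real
    for _ in range(iters):
        if rs < tol:                        # guard: already converged
            break
        Wp = W @ p
        alpha = rs / np.vdot(Wp, Wp).real   # exact line-search step
        m = m + alpha * p
        r = r - alpha * (W.conj().T @ Wp)
        rs_new = np.vdot(r, r).real
        p = r + (rs_new / rs) * p           # conjugate search direction
        rs = rs_new
    return m
```

On a small well-conditioned synthetic system a few iterations suffice; the conditioning of the real system depends on the dual-mesh construction.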
However, the number of voxels N varies with the size of the tumor. To obtain the unknown absorption, the conjugate gradient method was used to solve the inverse problem formulated as minimize ‖U_sc − W M_a‖², where ‖·‖ is the Euclidean norm. Since this is an ill-posed problem, mainly due to the correlated diffused scattering field, the dual-mesh technique utilizes the tumor location and size extracted from the coregistered US images for reconstruction.15 After applying the dual-mesh technique to minimize the number of unknowns, the reconstruction speed improves and convergence is reached within 4 iterations.

4. Results

The proposed US segmentation method is evaluated in two steps. First, US-segmented measurements are obtained and their deviation from manually segmented measurements is calculated. Second, both automated and manually segmented results are used to generate absorption maps and the corresponding hemoglobin concentration maps, which are then compared.

4.1. Validation of Ultrasound Segmentation

To evaluate the performance of the US segmentation algorithm, the tumor boundary for all 20 cases was delineated by an experienced US operator. These readings are taken as the standard in this study, and the experimental results were compared with those manual measurements. Two input images with manually marked tumor boundaries are presented in Figs. 7(a) and 8(a). In Figs. 7(b) and 8(b), segmented images using the proposed method are presented. It is clear from these figures that the tumor segmented by the proposed algorithm is comparable to the manual measurement. To obtain a quantitative evaluation of the US segmentation procedure, US images from 10 benign and 10 malignant cases were collected. The center coordinates of the tumor and the radii along both axes were measured manually, and the same information was collected from the proposed segmentation method. The deviation between the two methods was then calculated for the 20 images.
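The deviation computation between manual and automated measurements can be sketched as below. The parameter layout (center coordinates and radii per case, in cm) is a hypothetical arrangement for illustration, and the 0.25-cm acceptance check mirrors the reconstruction grid size cited in the text.

```python
import numpy as np

def measurement_deviation(manual, automated):
    """Mean and maximum absolute deviation per parameter across cases.

    manual, automated: (n_cases, n_params) arrays holding, e.g.,
    [x_center, z_center, x_radius, z_radius] in cm for each case.
    """
    d = np.abs(np.asarray(manual, float) - np.asarray(automated, float))
    return d.mean(axis=0), d.max(axis=0)

# hypothetical two-case example: deviations stay below the 0.25-cm grid size
manual = np.array([[1.00, 2.00, 0.50, 0.40],
                   [1.10, 2.10, 0.60, 0.50]])
automated = np.array([[1.05, 2.00, 0.55, 0.45],
                      [1.10, 2.20, 0.60, 0.40]])
mean_dev, max_dev = measurement_deviation(manual, automated)
assert (max_dev < 0.25).all()   # within one reconstruction voxel
```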
A comparison of the average measurements from these 20 images is given in Table 2. From the table, we find that the manual measurements are slightly smaller than those from the proposed method. However, the deviation between the measurements never exceeds 0.25 cm, which is the resolution of the optical reconstruction, so the optical reconstruction is not affected by this small deviation.

Table 2. Comparison between manually and semiautomatically extracted information from US images.
To evaluate the repeatability of the proposed US segmentation algorithm, we measured four parameters (lesion depth, x-radius, z-radius, and x-center) from three different sets of US images for each case and reconstructed the corresponding total hemoglobin maps. For each case, these images were collected from the same lesion location; however, some deviation was expected because the operator attempted to hold the probe still during each data set but may have moved slightly between data sets to obtain the best US images. For each case, the mean and standard deviation are given in Table 3. This deviation for depth is and for the other three spatial measurements is smaller than 0.25 cm (the image grid size) and thus does not have any major effect on the optical reconstruction. As shown in the table, the maximum deviation obtained from the benign cases is and for the malignant cases is .

Table 3. Evaluation of the repeatability of the proposed method.
4.2. Validation of Optical Reconstruction

The ultimate goal of the US segmentation algorithm is to assist DOT reconstruction. In this section, the performance of the optical reconstruction is evaluated using tumor information extracted by both the manual and the proposed segmentation processes. Optical data from the same 20 patients were used to generate absorption maps at four wavelengths, and the absorption information was then used to obtain hemoglobin concentrations. Both manual and semiautomatic measurements were used to generate the absorption maps. In Figs. 9 and 10, absorption maps for a benign case are compared, and a malignant case is presented in Figs. 11 and 12. It is clear from these figures that the reconstructed maps are very similar. The average maximum absorption from the 20 cases is compared in Table 4. From the table, we can see that the mean optical absorption obtained from manual measurement was for malignant and for benign cases, whereas for the proposed method it was for malignant and for benign tumors.

Table 4. Average absorption coefficient using manual and automatically segmented tumor information.
Finally, Fig. 13 shows boxplots of the oxygenated, deoxygenated, and total hemoglobin concentrations for the same 20 cases for both the manual and the proposed automated procedures. This figure shows that the results of the two techniques are very similar. For benign cases, the mean total hemoglobin concentration over all 10 cases is from manual segmentation and from the proposed automated segmentation. For malignant cases, this measurement is from manual segmentation and from the automated segmentation. Mean oxygenated hemoglobin for benign cases is for the manual method and for the proposed method; for malignant cases, it is and , respectively. Deoxygenated hemoglobin concentration is and for benign cases using the manual and proposed methods, respectively; for malignant cases, it increases to and , respectively. Thus, the performance of the proposed feature extraction technique is quite acceptable.

5. Discussion and Summary

In this work, a simple and effective US segmentation algorithm designed to assist DOT image reconstruction is presented. The algorithm extracts tumor size and location from breast US images with minimal user interaction. It provides a sufficiently accurate ROI for DOT reconstruction; at the same time, it is very easy to implement and requires few computational resources, making it well suited to real-time DOT reconstruction. Along with the threshold-based segmentation, Hough transform-based line detection is incorporated into the algorithm to detect the chest wall location, which is needed only when the tumor's acoustic attenuation shadows its deeper boundary. However, the algorithm cannot extract information from some noisy US images in which the contrast between tumor and background is very low.
This work still requires limited input from users; thus, it is not fully automated. In the future, it will move toward automatic segmentation by applying a search algorithm based on the local mean28 or a similar seed generation method. In conclusion, this work is one step closer to real-time DOT reconstruction. It eliminates the need for an experienced, trained user to provide the tumor location and size from the US images, and it provides the tumor information required for the dual-mesh reconstruction with the necessary accuracy. Another important feature of the proposed algorithm is that it utilizes only pixel intensities and is thus applicable as a segmentation approach for other imaging modalities, such as MRI-guided and x-ray-guided DOT.

Disclosures

The authors have no relevant financial interests in this article and no potential conflicts of interest to disclose.

Acknowledgments

The authors thank the National Institutes of Health (No. R01EB002136) and the Connecticut Innovation Bioscience fund for supporting this work.

References

American Cancer Society, "Cancer facts and figures 2016,"
(2016). https://www.cancer.org/research/cancer-facts-statistics/all-cancer-facts-figures/cancer-facts-figures-2016.html
B. Zheng et al., "Abstract P4-02-06: improving efficacy of applying breast MRI to detect mammography-occult breast cancer," Cancer Res. 76(4), P4-02-06 (2016). http://dx.doi.org/10.1158/1538-7445.SABCS15-P4-02-06
W. A. Berg et al., "Ultrasound as the primary screening test for breast cancer: analysis from ACRIN 6666," J. Natl. Cancer Inst. 108(4), djv367 (2016). http://dx.doi.org/10.1093/jnci/djv367
D. A. Boas et al., "Imaging the body with diffuse optical tomography," IEEE Signal Process. Mag. 18(6), 57–75 (2001). http://dx.doi.org/10.1109/79.962278
X. Wu et al., "Fast and efficient image reconstruction for high density diffuse optical imaging of the human brain," Biomed. Opt. Express 6(11), 4567–4584 (2015). http://dx.doi.org/10.1364/BOE.6.004567
T. Durduran et al., "Diffuse optics for tissue monitoring and tomography," Rep. Prog. Phys. 73(7), 076701 (2010). http://dx.doi.org/10.1088/0034-4885/73/7/076701
F. Larusson et al., "Parametric estimation of 3D tubular structures for diffuse optical tomography," Biomed. Opt. Express 4(2), 271–286 (2013). http://dx.doi.org/10.1364/BOE.4.000271
B. Chance et al., "Breast cancer detection based on incremental biochemical and physiological properties of breast cancers: a six-year, two-site study," Acad. Radiol. 12(8), 925–933 (2005). http://dx.doi.org/10.1016/j.acra.2005.04.016
G. Quarto et al., "Estimate of tissue composition in malignant and benign breast lesions by time-domain optical mammography," Biomed. Opt. Express 5(10), 3684–3698 (2014). http://dx.doi.org/10.1364/BOE.5.003684
Q. Zhu et al., "Assessment of functional differences in malignant and benign breast lesions and improvement of diagnostic accuracy by using US-guided diffuse optical tomography in conjunction with conventional US," Radiology 280, 387–397 (2016). http://dx.doi.org/10.1148/radiol.2016151097
B. E. Schaafsma et al., "Optical mammography using diffuse optical spectroscopy for monitoring tumor response to neoadjuvant chemotherapy in women with locally advanced breast cancer," Clin. Cancer Res. 21(3), 577–584 (2015). http://dx.doi.org/10.1158/1078-0432.CCR-14-0736
B. J. Tromberg et al., "Assessing the future of diffuse optical imaging technologies for breast cancer management," Med. Phys. 35(6), 2443–2451 (2008). http://dx.doi.org/10.1118/1.2919078
C. Xu et al., "Ultrasound-guided diffuse optical tomography for predicting and monitoring neoadjuvant chemotherapy of breast cancers—recent progress," Ultrason. Imaging 38, 5–18 (2015). http://dx.doi.org/10.1177/0161734615580280
H. Vavadi and Q. Zhu, "Automated data selection method to improve robustness of diffuse optical tomography for breast cancer imaging," Biomed. Opt. Express 7(10), 4007–4020 (2016). http://dx.doi.org/10.1364/BOE.7.004007
M. Huang and Q. Zhu, "A dual-mesh optical tomography reconstruction method with depth correction using a priori ultrasound information," Appl. Opt. 43(8), 1654–1662 (2004). http://dx.doi.org/10.1364/AO.43.001654
S.-Y. Wan and W. E. Higgins, "Symmetric region growing," IEEE Trans. Image Process. 12(9), 1007–1015 (2003). http://dx.doi.org/10.1109/TIP.2003.815258
Y. Xia et al., "Automatic segmentation of the caudate nucleus from human brain MR images," IEEE Trans. Med. Imaging 26(4), 509–517 (2007). http://dx.doi.org/10.1109/TMI.2006.891481
B. Fischl, M. I. Sereno, and A. M. Dale, "Cortical surface-based analysis: II: inflation, flattening, and a surface-based coordinate system," NeuroImage 9(2), 195–207 (1999). http://dx.doi.org/10.1006/nimg.1998.0396
J. C. Bezdek, L. O. Hall, and L. P. Clarke, "Review of MR image segmentation techniques using pattern recognition," Med. Phys. 20(4), 1033–1048 (1993). http://dx.doi.org/10.1118/1.597000
E. R. Davies, Machine Vision: Theory, Algorithms, Practicalities, Elsevier, Amsterdam (2004).
D. H. Ballard, "Generalizing the Hough transform to detect arbitrary shapes," Pattern Recognit. 13(2), 111–122 (1981). http://dx.doi.org/10.1016/0031-3203(81)90009-1
H. Vavadi et al., "Preliminary results of miniaturized and robust ultrasound guided diffuse optical tomography system for breast cancer detection," Proc. SPIE 10059, 100590F (2017). http://dx.doi.org/10.1117/12.2250034
R. Maini and H. Aggarwal, "Study and comparison of various image edge detection techniques," Int. J. Image Process. 3(1), 1–11 (2009).
D. Withey and Z. Koles, "A review of medical image segmentation: methods and available software," Int. J. Bioelectromagn. 10(3), 125–148 (2008).
J. H. Youk et al., "Imaging findings of chest wall lesions on breast sonography," J. Ultrasound Med. 27(1), 125–138 (2008). http://dx.doi.org/10.7863/jum.2008.27.1.125
J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8, 679–698 (1986). http://dx.doi.org/10.1109/TPAMI.1986.4767851
F. Zhou, A. Mostafa, and Q. Zhu, "Improving breast cancer diagnosis by reducing chest wall effect in diffuse optical tomography," J. Biomed. Opt. 22(3), 036004 (2017). http://dx.doi.org/10.1117/1.JBO.22.3.036004
M. Xian, Y. Zhang, and H. D. Cheng, "Fully automatic segmentation of breast ultrasound images based on breast characteristics in space and frequency domains," Pattern Recognit. 48(2), 485–497 (2015). http://dx.doi.org/10.1016/j.patcog.2014.07.026
Biography

Atahar Mostafa is a PhD student in the Biomedical Engineering Department at Washington University in St. Louis. He received his BS degree in electrical and electronics engineering from Bangladesh University of Engineering and Technology in Bangladesh and his MS degree in electrical engineering from the University of Saskatchewan in Canada. His research is focused on ultrasound-guided diffuse optical tomography.

Hamed Vavadi received his PhD in biomedical engineering from the University of Connecticut. Prior to his PhD, he completed his BS degree in electrical engineering and his MSc degree in biomedical engineering with expertise in vital signal processing. He has worked on near-infrared spectroscopy and optical imaging for cancer diagnosis with funding support from the National Institutes of Health. He was also awarded a Third Bridge Grant from Connecticut Innovations through the UConn School of Engineering for developing a handheld NIR optical diagnosis device. His research interests include optical imaging, NIR spectroscopy, cancer detection, wearable devices, and vital signal processing.

K. M. Shihab Uddin is a PhD student in the Biomedical Engineering Department at Washington University in St. Louis. He received his bachelor's degree in electrical and electronics engineering from Bangladesh University of Engineering and Technology in Bangladesh. His research is focused on ultrasound-guided diffuse optical tomography.

Quing Zhu is a professor in the Biomedical Engineering and Radiology Departments at Washington University in St. Louis. She has been named a fellow of OSA and SPIE and is an associate editor for the IEEE Photonics Society and an editorial board member of Photoacoustics and Biomedical Optics. Her research is focused on ultrasound-guided diffuse optical tomography for breast cancer diagnosis and treatment monitoring, coregistered ultrasound and photoacoustic tomography for ovarian cancer diagnosis, optical coherence tomography, and photoacoustic microscopy.