Open Access
19 December 2017

Diffuse optical tomography using semiautomated coregistered ultrasound measurements
Abstract
Diffuse optical tomography (DOT) has demonstrated huge potential in breast cancer diagnosis and treatment monitoring. DOT image reconstruction guided by ultrasound (US) improves the diffused light localization and lesion reconstruction accuracy. However, DOT reconstruction depends on tumor geometry provided by coregistered US. Experienced operators can manually measure these lesion parameters; however, training and measurement time are needed. The wide clinical use of this technique depends on its robustness and faster imaging reconstruction capability. This article introduces a semiautomated procedure that automatically extracts lesion information from US images and incorporates it into the optical reconstruction. An adaptive threshold-based image segmentation is used to obtain tumor boundaries. For some US images, posterior shadow can extend to the chest wall and make the detection of deeper lesion boundary difficult. This problem can be solved using a Hough transform. The proposed procedure was validated from data of 20 patients. Optical reconstruction results using the proposed procedure were compared with those reconstructed using extracted tumor information from an experienced user. Mean optical absorption obtained from manual measurement was 0.21±0.06  cm−1 for malignant and 0.12±0.06  cm−1 for benign cases, whereas for the proposed method it was 0.24±0.08  cm−1 and 0.12±0.05  cm−1, respectively.

1.

Introduction

According to the American Cancer Society, about 246,660 new breast cancer cases were diagnosed in 2016, the largest number of new cases among all cancers; the estimated number of deaths from breast cancer was almost 40,450 for 2016.1 Early detection of breast cancer could save lives and increase treatment options. X-ray mammography is widely used for breast cancer screening; however, it misses about 10% of cancers, especially in patients with dense breasts.2 Ultrasound (US) is used as an adjunct to mammography to differentiate solid from cystic lesions; however, it does not always provide the needed contrast between benign and malignant solid lesions.3 MRI is frequently used for screening high-risk patients, but its overall performance is not satisfactory due to high false-positive rates.2

Diffuse optical tomography (DOT) is a noninvasive technique that uses near-infrared (NIR) light to map tissue optical properties. Because water absorption in the NIR spectrum is low, the light can penetrate several centimeters inside soft tissue, for example, breast and brain. Reflected or transmitted light measured at the tissue surface is used to reconstruct tomographic images.4,5 DOT has demonstrated huge potential in cancer diagnosis and treatment monitoring by mapping hemoglobin concentration, which is related to vasculature content and tumor angiogenesis. Using multiple wavelengths, it is possible to measure oxygenated, deoxygenated, and total hemoglobin concentrations. It also provides information regarding oxygen saturation, lipid, and water concentration. These measurements could be effectively used to differentiate cancers from benign lesions and to monitor treatment response because malignant tumors typically have higher hemoglobin content than benign lesions and hemoglobin changes differ between treatment responders and nonresponders.6–10

However, DOT suffers from intense light scattering inside tissue, and scattering causes uncertainty in reconstructed target location and inaccuracy in target quantification. These problems can be largely overcome by using other imaging techniques to guide DOT localization and reconstruction. US-, mammography-, and MRI-guided DOT10–12 have been investigated, and promising results have been reported. US-guided DOT has been developed by our group, and its utility in cancer diagnosis and treatment monitoring has been demonstrated in several clinical studies.10,13,14

In the US-guided DOT approach, coregistered US images are captured, and measurements of lesion size and depth are then incorporated into the DOT reconstruction as a region of interest (ROI). A dual-zone mesh image reconstruction15 segments the volume into the ROI and the background region with fine and coarse voxel sizes, respectively. This scheme effectively reduces the total number of voxels with unknown optical absorption in the imaging reconstruction. Additionally, the total absorption of each voxel is reconstructed first, and the total is then divided by the voxel size to provide the absorption distribution. Because lesion absorption is generally higher than background absorption, the total absorption of a small voxel inside the ROI (the product of voxel size and lesion absorption) is on roughly the same scale as the total absorption of a larger background voxel. Therefore, the inversion is better conditioned and converges in fewer iterations than conventional methods that do not use a dual-mesh approach. Thus, the US-identified ROI is critical to guiding dual-zone mesh DOT reconstruction.
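
As a rough numeric illustration of this scaling argument (the voxel dimensions and absorption values below are illustrative assumptions, not the system's actual mesh parameters):

```python
# Dual-zone mesh scaling sketch: fine voxels inside the ROI, coarse
# voxels in the background. All numbers are illustrative assumptions.
fine_voxel = 0.25 * 0.25 * 0.5     # cm^3, fine-mesh voxel in the ROI
coarse_voxel = 1.0 * 1.0 * 1.0     # cm^3, coarse-mesh background voxel
mu_a_lesion, mu_a_background = 0.20, 0.03  # cm^-1, typical values

# The reconstructed unknown is the total absorption per voxel:
total_roi = mu_a_lesion * fine_voxel               # 0.00625
total_background = mu_a_background * coarse_voxel  # 0.03
# The two unknowns are within an order of magnitude of each other,
# which keeps the inversion better conditioned.
```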

Extraction of tumor size and location from US images has been done manually, which requires experienced users to make these measurements and slows down the DOT reconstruction. As with other medical imaging modalities, automatic US image segmentation is a challenging task because US image contrast is low and boundaries are often unclear due to speckle. Researchers have explored several methods to obtain reliable segmentations of medical images. These methods include operator-assisted region-growing techniques,16 rule-based segmentation in which known image primitives are used for unsupervised segmentation,17 atlas-based segmentation in which a known structure is searched for in the image,18 and neural networks and c-mean clustering,19 which generate statistical models to classify pixels into different segments. In this article, we introduce a simple adaptive threshold-based method20 that is fast and easy to implement; moreover, it provides DOT reconstruction accuracy comparable to manual processing. This method uses the image histogram to obtain an adaptive threshold for each input image. For some US images, the posterior shadow of a tumor extends to the chest wall and makes segmentation difficult. To avoid this problem, Hough transform21-based line detection is used to determine the chest wall location, which is then used as the deep boundary of the tumor.

Twenty patients (10 benign and 10 malignant cases) are used to evaluate the performance of the segmentation method. Reconstructed absorption images are compared with those from a manual processing method, and similar results are obtained. To the best of our knowledge, this is the first report of an automated segmentation method using US images to guide DOT image reconstruction. The method can be modified and implemented in MRI- or x-ray-guided DOT imaging reconstruction.

2.

Methods

2.1.

Patient Data and Experiments

Patient data were acquired from a US-guided DOT system.13 The study was approved by the local Institutional Review Boards and was compliant with the Health Insurance Portability and Accountability Act. All patients signed informed consent. Data used in this study have been deidentified. Based on biopsy results, 10 patients had benign lesions and 10 had cancers. The specific tumor types and US measurements of radius in z (depth) and x (spatial dimension), in centimeters, made by an experienced user, are given in Table 1.

Table 1

Type and size of 10 malignant and 10 benign tumors.

Malignant tumor type | Radius (z-axis, x-axis) (cm) | Benign tumor type | Radius (z-axis, x-axis) (cm)
Ductal carcinoma in situ | (0.77, 0.89) | Breast tissue with mild chronic inflammation | (1.23, 1.67)
Invasive ductal carcinoma | (2.24, 2.25) | Cyst | (1.62, 3.10)
Lobular carcinoma | (1.60, 2.40) | Fibroadenoma | (1.56, 3.11)
Infiltrating ductal carcinoma | (0.92, 1.27) | Proliferative breast lesions | (0.5, 0.57)
Invasive ductal carcinoma | (0.93, 0.57) | Cyst | (0.36, 1.00)
Invasive ductal carcinoma | (1.15, 1.57) | Cyst | (0.61, 0.83)
Invasive ductal carcinoma | (0.55, 0.57) | Fibrocystic change | (0.77, 1.41)
Invasive ductal carcinoma | (1.68, 2.00) | Intraductal hyperplasia | (0.31, 0.63)
Invasive ductal carcinoma | (0.83, 0.82) | Chronic inflammation | (2.06, 2.69)
Invasive ductal carcinoma | (0.78, 1.07) | Papillary intraductal hyperplasia | (0.46, 0.56)

Our data acquisition system consists of a commercial US system and an NIR imager. Briefly, the optical imager delivers light of 740-, 780-, 808-, and 830-nm wavelengths to the tissue sequentially. Light is modulated at a 140-MHz carrier frequency. Each wavelength is multiplexed to nine positions on a hand-held probe, and 14 photomultiplier detectors detect reflected light via light guides. The detected signals are demodulated to 20-kHz output. A custom-made analog-to-digital board collects all signals and stores the data in a laptop. Each data set takes 3 to 4 s to acquire, which is fast enough to acquire multiple sets of measurements from each patient at both the lesion and the contralateral normal breast for reference. Coregistered US images are captured from the video output of the US system before and after each NIR data set. The detailed system description and data acquisition procedure can be found in Refs. 13 and 22.

2.2.

Extract Tumor Size and Location

To automatically detect lesion size and location for DOT reconstruction, an adaptive threshold-based segmentation method is used. For some cases, posterior shadow of the tumor is extended to the chest wall in the US images. In those cases, it is difficult to determine the tumor size because the deeper boundary of the tumor cannot be accurately determined. Under these circumstances, locations of the chest wall are determined and used as estimates of deeper boundary of the tumor. To determine the chest wall, Hough transform is used together with an edge detection method.

2.2.1.

Preprocessing

A typical coregistered US image acquired by an image capture card is given in Fig. 1(a). For reference, the vertical axis is marked as z-axis and the horizontal axis is marked as x-axis. Measurement in the y-axis is considered the same as the x-axis, assuming that lesions are symmetric in the x- and y-axes. Since the pixel intensity is the key information needed in the segmentation algorithm, the US grayscale image is automatically cropped first from the captured image before using the Hough transform and Sobel23 edge detection method. Figure 1(b) shows the cropped US image.

Fig. 1

(a) A typical US image captured in coregistration mode and (b) cropped US image.

JBO_22_12_121610_f001.png

Depth marker detection is the next step before applying the segmentation procedure because the markers vary with the depth range, which depends on the user selection from the front panel of the US machine. To determine the depth markers, a binary image is generated using a fixed pixel intensity of 150 out of 256 grayscale levels as the threshold. Since the depth markers are mainly white, this pixel intensity helps to separate them from the background. Then all of the white regions consisting of 3 to 50 pixels and located outside the right border of the US image are marked as depth markers. These pixel ranges were obtained by examining the available US images collected from different manufacturers. This depth marker detection procedure detects horizontal ticks along with numbers, which makes it suitable for images collected from a wide range of US machines. Figure 2 shows the captured image with automatically detected depth markers. When the positions of those depth markers are known, the difference between two markers in the z-axis provides the number of pixels per centimeter, which is then used to convert the measured tumor size in depth into centimeters.
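
A minimal sketch of this tick-detection step, assuming a grayscale frame and a known right border of the US area (the intensity threshold and the 3-to-50-pixel range are the values stated above; the function names, the `us_right_edge` parameter, and the synthetic layout are hypothetical):

```python
import numpy as np
from scipy import ndimage

def detect_depth_markers(gray, us_right_edge, thresh=150, min_px=3, max_px=50):
    """Locate depth-marker ticks to the right of the US image area.

    gray: 2-D uint8 grayscale frame; us_right_edge: column index of the
    US image's right border. An illustrative sketch, not the authors' code.
    """
    binary = gray > thresh                  # markers are mainly white
    labels, n = ndimage.label(binary)       # connected white regions
    markers = []
    for region in ndimage.find_objects(labels):
        size = np.count_nonzero(labels[region])
        col0 = region[1].start
        if min_px <= size <= max_px and col0 > us_right_edge:
            # record the vertical (row) center of the tick
            markers.append((region[0].start + region[0].stop - 1) / 2)
    return sorted(markers)

def pixels_per_cm(marker_rows):
    """Median row spacing between adjacent 1-cm ticks."""
    return float(np.median(np.diff(marker_rows)))
```

The median spacing makes the pixel-to-centimeter conversion robust to a single spurious detection.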

Fig. 2

Depth markers detected on US image.

JBO_22_12_121610_f002.png

2.2.2.

Adaptive threshold-based segmentation

To extract the required information from the US image, the first step is to segment the lesion from the rest of the image. Then the radius and center of the lesion can be measured from the segmented lesion. A single threshold point is used to separate the two zones, i.e., lesion and background. This threshold point is determined adaptively for each input image. Because US images have speckle noise, some complex segmentation techniques, such as fuzzy c-mean clustering and active contour models,24 do not provide any improvement yet demand more computational resources due to their complex processing. Moreover, DOT does not require precisely segmented information. Thus, instead of using a complex segmentation algorithm, threshold-based segmentation is used here to obtain tumor information.

Lesions in breast US images usually appear as hypoechoic masses that separate them from the background tissue. To segment a hypoechoic mass, a threshold point is set to separate the tumor from the rest of the image. US images usually have very low contrast. Histogram equalization is applied on the grayscale image. Histogram equalization stretches the input histogram over the available range, which is from 0 to 255 in grayscale, and thus increases the contrast. Then a simple procedure is followed to detect the threshold point adaptively. Since the intensity varies significantly among different images, it is best to use adaptive threshold point for every input image.

This adaptive threshold point detection procedure starts with obtaining the histogram of the US image. Figure 3(a) shows the histogram of an input image. The histogram shows a peak and a hump with a notch between them, as indicated in the figure. This histogram shape is obtained from all US images after histogram equalization because of the presence of a significant amount of black pixels (the peak) and gray pixels (the hump) in a US image. The notch marks the threshold for separating the gray background from the black tumor. To detect this point automatically, the slope of the histogram is calculated. The pixel intensity at which the sign of the slope changes is taken as the threshold value. In the next step, this threshold value is used to generate a binary image.
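
The equalization-plus-notch rule above can be sketched as follows (the histogram-smoothing width is an added assumption to suppress speckle-induced wiggles; this is an illustration, not the authors' code):

```python
import numpy as np

def adaptive_threshold(gray, smooth=5):
    """Histogram-notch threshold, a sketch of the rule in the text.

    Equalize, smooth the histogram, then walk right from the black peak
    until the slope changes sign; the smoothing width is an assumption.
    """
    # histogram equalization via the cumulative distribution
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    lut = np.round(255.0 * (cdf - cdf.min()) / (cdf.max() - cdf.min())).astype(np.uint8)
    eq = lut[gray]                      # equalized image

    h, _ = np.histogram(eq, bins=256, range=(0, 256))
    h = np.convolve(h.astype(float), np.ones(smooth) / smooth, mode="same")
    slope = np.diff(h)
    peak = int(np.argmax(h))            # the black peak
    for i in range(peak + 1, len(slope)):
        if slope[i - 1] < 0 and slope[i] >= 0:  # notch: slope sign change
            return i, eq
    return peak, eq
```

Pixels at or below the returned intensity form the candidate tumor region in the binary image.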

Fig. 3

(a) Histogram of a US image, threshold is marked with an arrow and (b) inserted seed on the cropped image by user.

JBO_22_12_121610_f003.png

After obtaining the threshold for the US image, a binary image is generated in which the tumor region is marked as black and the background is white. However, the tumor is not the only black zone in the binary image. To remove the unwanted black regions, the user needs to insert a seed in the approximate tumor location by clicking the tumor in the US image as shown in Fig. 3(b). If multiple tumors exist, multiple seeds must be inserted at the probable locations. Any region that does not contain a seed is discarded. Finally, only the tumor region survives. Then the MATLAB® function “regionprops” is used to automatically measure the tumor center and radius. This information is then passed to the optical reconstruction code. The flow diagram in Fig. 4 shows the steps of the entire procedure.
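
A sketch of the seed-selection and measurement step, using scipy's connected-component labeling in place of the MATLAB regionprops-style measurement (the function name and the bounding-box-based radius definition are assumptions):

```python
import numpy as np
from scipy import ndimage

def tumor_from_seed(binary_tumor, seeds):
    """Keep only seeded regions and measure center/radii.

    binary_tumor: True where pixels fell below the threshold (candidate
    tumor); seeds: (row, col) user clicks. A sketch of the measurement,
    not the authors' MATLAB code.
    """
    labels, _ = ndimage.label(binary_tumor)
    keep = {labels[r, c] for r, c in seeds if labels[r, c] != 0}
    mask = np.isin(labels, sorted(keep))          # discard unseeded regions
    rows, cols = np.nonzero(mask)
    center = ((rows.min() + rows.max()) / 2, (cols.min() + cols.max()) / 2)
    radius_z = (rows.max() - rows.min() + 1) / 2  # depth (z) half-extent
    radius_x = (cols.max() - cols.min() + 1) / 2  # lateral (x) half-extent
    return mask, center, radius_z, radius_x
```

The pixel-unit center and radii would then be converted to centimeters with the depth-marker scale from Sec. 2.2.1.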

Fig. 4

Flow diagram of the tumor boundary detection procedures.

JBO_22_12_121610_f004.png

2.2.3.

Chest wall detection using Hough transform

Detection of chest wall depth is not essential to obtaining tumor location and size. However, for some cases when the posterior shadow extends to the bottom of the US image, it is difficult to define the bottom of the tumor. In such cases, the chest wall location is considered the bottom of the tumor. We defined chest wall depth as the distance from the skin to the top layer of chest wall muscle. An automated chest wall depth detection method was developed and applied to the coregistered US images. Detection of the chest wall is based on the fact that chest wall muscles appear as line structures in US images [see Fig. 5(a)].25 Therefore, line detection algorithms could be used for automatic detection. We chose Hough transform21 as a line detection method because it is simple and robust when combined with any edge detection method. Here, the Canny edge detection26 method is used as an edge detection method. The binary image generated by the Canny edge detection is shown in Fig. 5(b).

Fig. 5

(a) Breast US image with the chest wall marked with arrows, (b) edge-detected binary image from (a), and (c) detected chest wall location on the original input image. The yellow and red stars indicate the separation points between line pieces. Green lines indicate the detected linear structures after the restrictions are applied.

JBO_22_12_121610_f005.png

It is clear from Fig. 5(b) that, if the Hough transform is applied to the edge-detected image without any restriction, it will detect several unnecessary structures. For example, due to the subcutaneous fat and breast tissue interfaces, some linear structures appear at the top of the US image, and other linear structures are also visible. The Hough transform detects all of these. To avoid these unnecessary line structures, we modeled the chest wall as a linear structure that is mainly horizontal with a small slope and appears in the lower half of the image. After applying the Hough transform with the above-mentioned restrictions, the surviving linear structures are marked with green lines as shown in Fig. 5(c). Finally, the mean value of all points on these detected lines is taken as the chest wall depth. A flow diagram of the entire procedure is given in Fig. 6. More details on the chest wall detection method and its evaluation can be found in Ref. 27.
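
The restricted line search can be sketched with a small Hough accumulator that considers only near-horizontal angles and lets only edge pixels in the lower half of the image vote (the angle window, accumulator resolution, and ±1-pixel line tolerance are illustrative choices, not the paper's parameters):

```python
import numpy as np

def chest_wall_depth(edges, max_tilt_deg=10, n_theta=21):
    """Estimate the chest-wall row from a binary edge image.

    Restricted Hough sketch: only near-horizontal angles are accumulated
    and only edge pixels in the lower half of the image vote, mirroring
    the restrictions described in the text.
    """
    h, w = edges.shape
    ys, xs = np.nonzero(edges)
    lower = ys >= h // 2                       # lower half only
    ys, xs = ys[lower], xs[lower]
    thetas = np.deg2rad(np.linspace(90 - max_tilt_deg, 90 + max_tilt_deg, n_theta))
    diag = int(np.hypot(h, w))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    for ti, th in enumerate(thetas):
        votes = np.round(xs * np.cos(th) + ys * np.sin(th)).astype(int) + diag
        np.add.at(acc[:, ti], votes, 1)        # accumulate rho votes
    ri, ti = np.unravel_index(np.argmax(acc), acc.shape)
    rho, th = ri - diag, thetas[ti]
    on_line = np.abs(xs * np.cos(th) + ys * np.sin(th) - rho) < 1
    # mean row of the voting pixels on the winning line ~ chest-wall depth
    return float(ys[on_line].mean())
```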

Fig. 6

Flow diagram of the chest wall detection method.

JBO_22_12_121610_f006.png

3.

Optical Reconstruction

The absorption map of each wavelength was reconstructed using the dual-mesh approach with lesion parameters obtained from coregistered US. Because the spatial resolution of diffused light is poorer than that of US, the ROI is chosen to be at least two to three times larger in the x–y dimensions than the lesion seen on US. In addition, because the depth localization of diffused light is very poor, a tighter ROI in the depth dimension is set using coregistered US. The weight matrix was computed using fitted optical properties of each patient’s normal contralateral breast. The scattered field Usd measured from the lesion area was related to the total absorption of each voxel (voxel size × Δμa) using the following equation:

[Usd]M×1 = [W]M×N [Δμa]N×1,

where M = s × d is the total number of source–detector pairs and W is the weight matrix related to the sensitivity of voxels inside the medium. The number of amplitude and phase measurements is 252 (2 × M) for 9 sources and 14 detectors. However, the number of voxels varies from 300 to 1000 based on the size of the tumor. To obtain the unknown absorption distribution, the conjugate gradient method was used to solve the inverse problem formulated as minimize ‖Usd − WΔμa‖², where ‖·‖ is the Euclidean norm. Since this is an ill-posed problem, mainly due to the correlated diffused scattering field, the dual-mesh technique utilizes the tumor location and size information extracted from coregistered US images for reconstruction.15 After applying the dual-mesh technique to minimize the number of unknowns, reconstruction speed improves and convergence is reached in 3 to 4 iterations.
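
A minimal sketch of this inversion step: conjugate gradients applied to the normal equations of the least-squares objective (the matrix sizes, real-valued data, and iteration count below are illustrative assumptions; the real W comes from the dual-mesh forward model):

```python
import numpy as np

def reconstruct_cg(W, U, n_iter=4):
    """Solve min ||U - W x||^2 by conjugate gradients on W^T W x = W^T U.

    Real-valued sketch of the inversion; 3 to 4 iterations mirror the
    convergence reported in the text for the dual-mesh formulation.
    """
    A = W.T @ W                       # normal-equations operator
    b = W.T @ U
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    p = r.copy()                      # initial search direction
    for _ in range(n_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)    # step length
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p          # conjugate direction update
        r = r_new
    return x
```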

4.

Results

The proposed US segmentation method is evaluated in two steps. First, lesion measurements obtained with the proposed segmentation are compared with manually obtained measurements, and the deviation between them is calculated. Second, both automatically and manually segmented results are used to generate absorption maps and the corresponding hemoglobin concentration maps, which are then compared.

4.1.

Validation of Ultrasound Segmentation

To evaluate the performance of the US segmentation algorithm, the tumor boundaries for all 20 cases were delineated by an experienced US reader. These readings are taken as the standard in this study, and the experimental results were compared with these manual measurements.

Two input images with manually marked tumor boundaries are presented in Figs. 7(a) and 8(a). In Figs. 7(b) and 8(b), segmented images using the proposed method are presented. It is clear from these figures that the segmented tumor by the proposed algorithm is comparable to the manual measurement. To obtain a quantitative evaluation of the US segmentation procedure, US images from 10 benign and 10 malignant cases are collected. Then center coordinates of the tumor and radius in both axes are measured manually. The same information is also collected from the proposed segmentation method. Then deviation is calculated between the two methods for 20 images. Comparison of the average measurements from these 20 images is given in Table 2. From the table, we found that manual measurements are slightly smaller than the proposed measurements. However, deviation from different measurements never exceeds 0.25 cm, which is the resolution of the optical reconstruction, so optical reconstruction will not be affected by this small deviation.

Fig. 7

(a) US image with manual markers to measure sizes of the tumor. The measurements were 3.1 cm in spatial direction x and 1.6 cm in depth direction z using manual measurements. (b) Segmented US image using the semiautomated procedure and the measurements were 3.3 cm in spatial x direction and 1.6 cm in depth direction.

JBO_22_12_121610_f007.png

Fig. 8

(a) US image with manual markers to measure sizes of the tumor. The measurements were 0.88 cm in spatial direction x and 0.77 cm in depth direction z using manual measurements. (b) Segmented US image using the semiautomated procedure and the measurements were 0.9 cm in spatial x direction and 0.73 cm in depth direction.

JBO_22_12_121610_f008.png

Table 2

Comparison between manually and semiautomatically extracted information from US images.

 | Manual segmentation (cm) | Proposed segmentation (cm) | Deviation = |manual − proposed| (cm)
Benign cases:
z center position | 1.49 | 1.56 | 0.07
x center position | 0.14 | 0.19 | 0.02
z-radius | 0.95 | 0.97 | 0.1
x-radius | 1.55 | 1.65 | 0.05
Malignant cases:
z center position | 1.9 | 1.98 | 0.07
x center position | 0.13 | 0.15 | 0.03
z-radius | 1.15 | 0.99 | 0.06
x-radius | 1.34 | 1.4 | 0.03

To evaluate the repeatability of the proposed US segmentation algorithm, we measured four parameters (lesion depth, z-radius, x-radius, and x-center) from three different sets of US images and reconstructed the corresponding total hemoglobin maps. For each case, these images were collected from the same lesion location; however, some deviation was expected because the operator, while trying to hold the probe still during each data set, may have moved slightly between data sets to obtain the best US images. For each case, the mean and standard deviation are given in Table 3. The deviation in depth is <1.5 mm, and the deviations in the other three spatial measurements are smaller than 0.25 cm (the image grid size); thus, they do not have any major effect on the optical reconstruction. As shown in the table, the maximum standard deviation of total hemoglobin is 4.45 μM for the benign cases and 2.06 μM for the malignant cases.

Table 3

Evaluation of the repeatability of the proposed method.

Depth (cm) | z-radius (cm) | x-radius (cm) | x center (cm) | Total hemoglobin (μM)
Benign cases:
1.86±0.042 | 1.48±0.074 | 2.02±0.233 | 0.47±0.048 | 23.66±0.905
1.78±0.001 | 1.57±0.006 | 3.45±0.189 | 0.3±0.014 | 62.81±0.384
1.69±0.013 | 1.44±0.079 | 3.07±0.118 | 0.45±0.124 | 50.25±4.452
2±0.006 | 0.4±0.006 | 0.53±0.006 | 0.58±0.143 | 62.69±0.362
1.08±0.003 | 0.39±0.019 | 1.03±0.070 | 0±0.026 | 42.33±0.001
1.18±0.006 | 0.56±0.009 | 1.12±0.166 | 0.09±0.051 | 28.26±0.022
1.61±0.036 | 0.75±0.024 | 1.61±0.046 | 0.1±0.297 | 67.15±0.018
1.16±0.009 | 0.24±0.009 | 0.49±0.024 | 0.15±0.116 | 123.58±0.317
1.8±0.040 | 2.52±0.083 | 3.46±0.114 | 0.01±0.079 | 64.11±0.002
1.32±0.020 | 0.26±0.004 | 0.48±0.042 | 0±0.013 | 83.98±0.002
Malignant cases:
1.35±0.019 | 0.72±0.012 | 0.93±0.027 | 0.21±0.117 | 109.78±0.129
2.55±0.031 | 1.92±0.029 | 2.84±0.065 | 0.16±0.077 | 172.66±0.000
2.97±0.018 | 1.45±0.036 | 2.34±0.016 | 0.63±0.243 | 198.14±1.734
1.66±0.008 | 0.88±0.009 | 1.07±0.031 | 0.48±0.106 | 93.36±2.061
2.03±0.029 | 0.65±0.078 | 0.43±0.009 | 0.49±0.035 | 95.73±0.038
1.45±0.027 | 1.09±0.009 | 1.45±0.024 | 0.52±0.079 | 107.23±1.375
2.59±0.001 | 0.3±0.030 | 0.55±0.014 | 0.19±0.028 | 135.49±0.060
1.94±0.073 | 1.34±0.137 | 2.76±0.219 | 0.53±0.082 | 77.77±0.556
1.79±0.014 | 0.77±0.026 | 0.81±0.037 | 0.46±0.099 | 156.49±0.992
1.57±0.016 | 0.65±0.000 | 0.97±0.015 | 0.24±0.010 | 88.84±0.001

4.2.

Validation of Optical Reconstruction

The ultimate goal of the US segmentation algorithm is to assist DOT reconstruction. In this section, the performance of the optical reconstruction is evaluated using tumor information extracted by both the manual and the proposed segmentation processes. Optical data of the same 20 patients were used to generate absorption maps at four wavelengths, and the absorption information was then used to obtain hemoglobin concentrations. Absorption maps for a benign case are compared in Figs. 9 and 10, and a malignant case is presented in Figs. 11 and 12. It is clear from these figures that the reconstructed maps are very similar. The average maximum absorption from the 20 cases is compared in Table 4. From the table, the mean optical absorption obtained from manual measurements was 0.21±0.06 cm−1 for malignant and 0.12±0.06 cm−1 for benign cases, whereas for the proposed method it was 0.24±0.08 cm−1 for malignant and 0.12±0.055 cm−1 for benign tumors.

Fig. 9

Optical absorption maps of four wavelengths using three times of the size measured by US in x-dimension. Depth used in optical reconstruction is the same as US measurement. Each optical absorption map has seven image slides of 0.5 cm from the skin surface to the chest wall with 0.5 cm step in depth. Manually measured tumor information from Fig. 7(a) is used in these maps.

JBO_22_12_121610_f009.png

Fig. 10

Optical absorption maps using three times of the size identified by US in x-dimension. Depth used in optical reconstruction is the same as US measurement. Tumor information for these maps was extracted from Fig. 7(b).

JBO_22_12_121610_f010.png

Fig. 11

Optical absorption maps of four wavelengths using three times of US measured size in x and same size as US measurement in z. Tumor dimension and location were extracted from Fig. 8(a) to generate these maps.

JBO_22_12_121610_f011.png

Fig. 12

Optical absorption maps using three times of US measured size in x and same size as US measurement in z. To generate these maps, tumor information was extracted from Fig. 8(b).

JBO_22_12_121610_f012.png

Table 4

Average absorption coefficient using manual and automatically segmented tumor information.

 | Malignant | Benign | Ratio
Average (standard deviation) of maximum reconstructed absorption with manual tumor segmentation (cm−1):
740 nm | 0.19 (0.08) | 0.11 (0.06) | 1.72
780 nm | 0.22 (0.07) | 0.12 (0.06) | 1.83
808 nm | 0.22 (0.05) | 0.14 (0.08) | 1.57
830 nm | 0.22 (0.05) | 0.13 (0.05) | 1.69
Average (standard deviation) of maximum reconstructed absorption with proposed tumor segmentation (cm−1):
740 nm | 0.21 (0.08) | 0.14 (0.05) | 1.5
780 nm | 0.25 (0.09) | 0.12 (0.05) | 2.08
808 nm | 0.24 (0.08) | 0.12 (0.06) | 2
830 nm | 0.24 (0.08) | 0.12 (0.06) | 2

Finally, Fig. 13 shows boxplots of oxygenated, deoxygenated, and total hemoglobin concentrations for the same 20 cases for both the manual and the proposed automated procedures. Results for the two techniques are very similar. For benign cases, the mean total hemoglobin concentration over all 10 cases is 58.95±27.76 μM from manual segmentation and 58.64±27.93 μM from the proposed automated segmentation. For malignant cases, it is 115.23±39.62 μM and 114.64±49.66 μM, respectively. Oxygenated hemoglobin concentrations for benign cases are 35.73±20.67 μM for the manual method and 38.32±21.67 μM for the proposed method; for malignant cases, they are 72.30±23.07 μM and 75.27±27.92 μM, respectively. Deoxygenated hemoglobin concentration is 35.41±15.31 μM and 37.13±16.21 μM for benign cases using the manual and proposed methods, respectively, and increases to 50.26±19.63 μM and 48.04±22.88 μM for malignant cases. Thus, the performance of the proposed feature extraction technique is quite acceptable.

Fig. 13

Comparison of hemoglobin concentration for 10 benign and 10 malignant cases.

JBO_22_12_121610_f013.png

5.

Discussion and Summary

In this work, a simple and effective US segmentation algorithm designed to assist DOT image reconstruction is presented. The algorithm extracts tumor size and location from breast US images with minimal user interaction. It provides a sufficiently accurate ROI for DOT reconstruction; at the same time, it is easy to implement and requires few computational resources, making it well suited to real-time DOT reconstruction. Along with the threshold-based segmentation, Hough transform-based line detection is included to detect the chest wall location, which is needed only when the tumor's acoustic attenuation shadows its deeper boundary.

However, this algorithm cannot extract information from noisy US images in which the contrast between tumor and background is very low. It also requires limited input from users and thus is not fully automated. In the future, we will move toward automatic segmentation by applying a search algorithm based on the local mean28 or a similar seed-generation method.

In conclusion, this work is one step closer to real-time DOT reconstruction. It eliminates the need to train an experienced user to provide the tumor location and size from the US images, and it provides the tumor information required for the dual-mesh reconstruction with the necessary accuracy. Another important feature of the proposed algorithm is that it utilizes only pixel intensities and is thus applicable as a segmentation approach for other imaging modalities, such as MRI-guided and x-ray-guided DOT.

Disclosures

The authors have no relevant financial interests in this article and no potential conflicts of interest to disclose.

Acknowledgments

The authors thank the funding support of this work from the National Institutes of Health (No. R01EB002136) and the Connecticut Innovation Bioscience fund.

References

2. B. Zheng et al., "Abstract P4-02-06: improving efficacy of applying breast MRI to detect mammography-occult breast cancer," Cancer Res. 76(4), P4-02-06 (2016). http://dx.doi.org/10.1158/1538-7445.SABCS15-P4-02-06

3. W. A. Berg et al., "Ultrasound as the primary screening test for breast cancer: analysis from ACRIN 6666," J. Natl. Cancer Inst. 108(4), djv367 (2016). http://dx.doi.org/10.1093/jnci/djv367

4. D. A. Boas et al., "Imaging the body with diffuse optical tomography," IEEE Signal Process. Mag. 18(6), 57–75 (2001). http://dx.doi.org/10.1109/79.962278

5. X. Wu et al., "Fast and efficient image reconstruction for high density diffuse optical imaging of the human brain," Biomed. Opt. Express 6(11), 4567–4584 (2015). http://dx.doi.org/10.1364/BOE.6.004567

6. T. Durduran et al., "Diffuse optics for tissue monitoring and tomography," Rep. Prog. Phys. 73(7), 076701 (2010). http://dx.doi.org/10.1088/0034-4885/73/7/076701

7. F. Larusson et al., "Parametric estimation of 3D tubular structures for diffuse optical tomography," Biomed. Opt. Express 4(2), 271–286 (2013). http://dx.doi.org/10.1364/BOE.4.000271

8. B. Chance et al., "Breast cancer detection based on incremental biochemical and physiological properties of breast cancers: a six-year, two-site study," Acad. Radiol. 12(8), 925–933 (2005). http://dx.doi.org/10.1016/j.acra.2005.04.016

9. G. Quarto et al., "Estimate of tissue composition in malignant and benign breast lesions by time-domain optical mammography," Biomed. Opt. Express 5(10), 3684–3698 (2014). http://dx.doi.org/10.1364/BOE.5.003684

10. Q. Zhu et al., "Assessment of functional differences in malignant and benign breast lesions and improvement of diagnostic accuracy by using US-guided diffuse optical tomography in conjunction with conventional US," Radiology 280, 387–397 (2016). http://dx.doi.org/10.1148/radiol.2016151097

11. B. E. Schaafsma et al., "Optical mammography using diffuse optical spectroscopy for monitoring tumor response to neoadjuvant chemotherapy in women with locally advanced breast cancer," Clin. Cancer Res. 21(3), 577–584 (2015). http://dx.doi.org/10.1158/1078-0432.CCR-14-0736

12. B. J. Tromberg et al., "Assessing the future of diffuse optical imaging technologies for breast cancer management," Med. Phys. 35(6), 2443–2451 (2008). http://dx.doi.org/10.1118/1.2919078

13. C. Xu et al., "Ultrasound-guided diffuse optical tomography for predicting and monitoring neoadjuvant chemotherapy of breast cancers—recent progress," Ultrason. Imaging 38, 5–18 (2015). http://dx.doi.org/10.1177/0161734615580280

14. H. Vavadi and Q. Zhu, "Automated data selection method to improve robustness of diffuse optical tomography for breast cancer imaging," Biomed. Opt. Express 7(10), 4007–4020 (2016). http://dx.doi.org/10.1364/BOE.7.004007

15. M. Huang and Q. Zhu, "A dual-mesh optical tomography reconstruction method with depth correction using a priori ultrasound information," Appl. Opt. 43(8), 1654–1662 (2004). http://dx.doi.org/10.1364/AO.43.001654

16. S.-Y. Wan and W. E. Higgins, "Symmetric region growing," IEEE Trans. Image Process. 12(9), 1007–1015 (2003). http://dx.doi.org/10.1109/TIP.2003.815258

17. Y. Xia et al., "Automatic segmentation of the caudate nucleus from human brain MR images," IEEE Trans. Med. Imaging 26(4), 509–517 (2007). http://dx.doi.org/10.1109/TMI.2006.891481

18. B. Fischl, M. I. Sereno and A. M. Dale, "Cortical surface-based analysis: II: inflation, flattening, and a surface-based coordinate system," NeuroImage 9(2), 195–207 (1999). http://dx.doi.org/10.1006/nimg.1998.0396

19. J. C. Bezdek, L. O. Hall and L. P. Clarke, "Review of MR image segmentation techniques using pattern recognition," Med. Phys. 20(4), 1033–1048 (1993). http://dx.doi.org/10.1118/1.597000

20. E. R. Davies, Machine Vision: Theory, Algorithms, Practicalities, Elsevier, Amsterdam (2004).

21. D. H. Ballard, "Generalizing the Hough transform to detect arbitrary shapes," Pattern Recognit. 13(2), 111–122 (1981). http://dx.doi.org/10.1016/0031-3203(81)90009-1

22. H. Vavadi et al., "Preliminary results of miniaturized and robust ultrasound guided diffuse optical tomography system for breast cancer detection," Proc. SPIE 10059, 100590F (2017). http://dx.doi.org/10.1117/12.2250034

23. R. Maini and H. Aggarwal, "Study and comparison of various image edge detection techniques," Int. J. Image Process. (IJIP) 3(1), 1–11 (2009).

24. D. Withey and Z. Koles, "A review of medical image segmentation: methods and available software," Int. J. Bioelectromagnetism 10(3), 125–148 (2008).

25. J. H. Youk et al., "Imaging findings of chest wall lesions on breast sonography," J. Ultrasound Med. 27(1), 125–138 (2008). http://dx.doi.org/10.7863/jum.2008.27.1.125

26. J. Canny, "A computational approach to edge detection," IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8, 679–698 (1986). http://dx.doi.org/10.1109/TPAMI.1986.4767851

27. F. Zhou, A. Mostafa and Q. Zhu, "Improving breast cancer diagnosis by reducing chest wall effect in diffuse optical tomography," J. Biomed. Opt. 22(3), 036004 (2017). http://dx.doi.org/10.1117/1.JBO.22.3.036004

28. M. Xian, Y. Zhang and H. D. Cheng, "Fully automatic segmentation of breast ultrasound images based on breast characteristics in space and frequency domains," Pattern Recognit. 48(2), 485–497 (2015). http://dx.doi.org/10.1016/j.patcog.2014.07.026

Biography

Atahar Mostafa is a PhD student in the Biomedical Engineering Department at Washington University in St. Louis. He received his BS degree in electrical and electronics engineering from Bangladesh University of Engineering and Technology in Bangladesh and his MS degree in electrical engineering from the University of Saskatchewan in Canada. His research is focused on ultrasound-guided diffuse optical tomography.

Hamed Vavadi received his PhD in biomedical engineering from the University of Connecticut. Prior to his PhD, he completed a BS degree in electrical engineering and an MSc degree in biomedical engineering, with expertise in vital signal processing. He has worked on near-infrared spectroscopy and optical imaging for cancer diagnosis with funding support from the National Institutes of Health. He was also awarded a Third Bridge Grant from Connecticut Innovation through the UConONN School of Engineering for developing a handheld NIR optical diagnosis device. His research interests include optical imaging, NIR spectroscopy, cancer detection, wearable devices, and vital signal processing.

K. M. Shihab Uddin is a PhD student in the Biomedical Engineering Department at Washington University in St. Louis. He received his bachelor’s degree in electrical and electronics engineering from Bangladesh University of Engineering and Technology in Bangladesh. His research is focused on ultrasound-guided diffuse optical tomography.

Quing Zhu is a professor in the Biomedical Engineering and Radiology Departments at Washington University in St. Louis. She is a fellow of OSA and SPIE, an associate editor for the IEEE Photonics Society, and an editorial board member of Photoacoustic and Biomedical Optics. Her research is focused on ultrasound-guided diffuse optical tomography for breast cancer diagnosis and treatment monitoring, coregistered ultrasound and photoacoustic tomography for ovarian cancer diagnosis, optical coherence tomography, and photoacoustic microscopy.

© 2017 Society of Photo-Optical Instrumentation Engineers (SPIE) 1083-3668/2017/$25.00
Atahar Mostafa, Hamed Vavadi, K. M. Shihab Uddin, and Quing Zhu "Diffuse optical tomography using semiautomated coregistered ultrasound measurements," Journal of Biomedical Optics 22(12), 121610 (19 December 2017). https://doi.org/10.1117/1.JBO.22.12.121610
Received: 7 May 2017; Accepted: 4 December 2017; Published: 19 December 2017
Keywords: Image segmentation, Tumors, Absorption, Chest, Ultrasonography, Diffuse optical tomography, Reconstruction algorithms
