## 1. Introduction

Diabetic retinopathy (DR) is one of the most serious and most frequent eye diseases in the world. It is a complication of diabetes and the most common cause of blindness in adults. To prevent the damage caused by DR, early diagnosis is essential. The analysis of digital retinal images (Fig. 1), obtained with a fundus camera, is viewed as a feasible approach because retinal image acquisition is noninvasive and low cost.

There are two categories of analysis based on retinal images. One is morphological analysis of arteries, veins, and the optic cup.^{1} The other is based on the detection of pathological lesions, such as hemorrhages, microaneurysms (MAs), hard exudates, and cotton wool spots.^{2, 3} MAs are the first unequivocal signs of DR; they appear as small, reddish, isolated patterns of circular shape in color fundus images and are characterized by their diameters, which are always <125 *μ*m.^{4} Because they are situated on capillaries, and capillaries are not visible in color retinal images, MAs appear as isolated patterns (i.e., disconnected from the vascular tree).

In this paper, we propose a new method to optimize an algorithm for identifying MAs. Automatic detection of MAs generally comprises three steps. The first step detects candidates, i.e., all patterns possibly corresponding to MAs, using a morphological black-top-hat transform. Then, features are extracted to characterize these candidates. Finally, a support vector machine (SVM) distinguishes true MAs from the candidates. We use the receiver operating characteristic (ROC) curve to evaluate the discriminating ability of different feature vectors and different SVM kernel functions; the optimal feature vector and classifier are thus determined by ROC analysis.

## 2. Automatic Detection of MA

### 2.1. Candidate Detection Based on Morphological Operations

Mathematical morphology is a nonlinear filtering approach used in image processing.^{5} Its primary operations are dilation and erosion, denoted by ⊕ and Θ, from which other operations, such as opening and closing, are built. In this paper, we use the black-top-hat transform, which extracts dark objects and structures from gray-scale images. Among the RGB channels, the green channel exhibits the best contrast, so we work on the gray image taken from the green channel, denoted by *f*. In the gray image *f*, MAs appear as small, dark, isolated patterns of circular shape.

The black-top-hat transform closes the image *f* and then subtracts the original image, defined as

## Eq. 1

$$f_{\mathrm{b}} = (f \bullet e) - f,$$

where *e* is a structuring element and *f* • *e* denotes the closing of *f* with *e*. The morphological closing operation dilates the image *f* and then erodes the dilated image using the same structuring element, which is defined as follows:

## Eq. 2

$$f \bullet e = (f \oplus e)\, \Theta\, e.$$

Blood vessels in retinal images are usually modeled as piecewise linear structures at different orientations. A total of 12 rotated linear structuring elements are used, with an angular resolution of 15 deg. The length of a linear structuring element must exceed the size of MAs in the image: MAs appear as dark patterns within *s* pixels (corresponding to <125 *μ*m), so each element must be longer than *s* pixels, where *s* varies with the size of the original image. For each pixel, the minimum response over the 12 rotated closings is recorded as the closed result. The image *f*_{b} is then obtained by subtracting the original gray image from the closed image; it contains the MAs, which appear as local bright regions in Fig. 2(a).
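As a concrete illustration, this step can be sketched in NumPy/SciPy. The 12 orientations and the minimum-over-closings rule follow the text above; the element length (11 pixels here) and the footprint construction are our own illustrative choices, not values prescribed by the paper.

```python
import numpy as np
from scipy.ndimage import grey_closing

def line_footprint(length, angle_deg):
    """Boolean mask of a discrete line of the given length and orientation."""
    half = length // 2
    fp = np.zeros((length, length), dtype=bool)
    theta = np.deg2rad(angle_deg)
    for t in range(-half, half + 1):
        r = half + int(round(t * np.sin(theta)))
        c = half + int(round(t * np.cos(theta)))
        fp[r, c] = True
    return fp

def black_top_hat(f, length=11, n_angles=12):
    """Minimum over rotated linear closings, minus the original image.
    Small dark blobs (MA-like) become bright; linear dark structures
    (vessels) are preserved by the closing at their own orientation and
    therefore vanish in the result."""
    closings = [grey_closing(f, footprint=line_footprint(length, a))
                for a in np.arange(0.0, 180.0, 180.0 / n_angles)]
    return np.min(closings, axis=0) - f
```

On a synthetic image, an isolated dark pixel yields a strong response while a dark line at one of the 12 orientations yields none, which is exactly the vessel-suppression behavior described above.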

Only a limited number of candidates per image is reasonable (for example, several dozen). A matched filter is used to extract regions of interest (ROI) from *f*_{b}. The matched filter is a 2-D Gaussian function with σ = 1 and a size of *s* × *s* pixels. The Gaussian difference *d*_{G} is an index image that evaluates the difference between the local region in *f*_{b} and the Gaussian function, calculated as

## Eq. 3

$$d_{\mathrm{G}}(i,j) = \frac{\sum_{i',j' \in S} [f_{\mathrm{b}}(i',j') - g(i',j')]^2}{s^2},$$

where *g* is the normalized 2-D Gaussian function centered at (*i*, *j*), and (*i*′, *j*′) ranges over the pixel positions within the *s* × *s* local region of *f*_{b} centered at (*i*, *j*). Thus, *d*_{G}(*i*, *j*) is summed over all pixels (*i*′, *j*′) in this *s* × *s* region, which is denoted by *S* in Eq. 3.

The ROI are the local bright regions in *f*_{b} with low Gaussian difference values *d*_{G}(*i*, *j*); they are extracted by a global threshold on the Gaussian difference. Each ROI has a size of *s* × *s* pixels and is recorded by its center coordinates, which makes it easy to extract the corresponding ROI from different sources. The binary candidate region of each ROI is determined by a threshold that minimizes the intraclass variance of candidate and surroundings [shown in Fig. 2(b)].

### 2.2. Feature Extraction for Candidates

Because MAs are mainly characterized by their shape, size, and color, we use three types of features:

1. shape features, such as the relative size and compactness of the candidate;

2. texture features based on the gray-level co-occurrence matrix (GLCM) of the ROI from the green channel;

3. color features within the ROI from different color spaces.

Shape is one of the essential characteristics of an object and provides meaningful information. Shape features fall into two categories: those based on the boundary of the candidate and those based on its region. We use relative size and compactness to describe the shape of candidates. Each ROI is a region of *s* × *s* pixels, where *s* depends on the size of the original image so that a length of *s* pixels is not <125 *μ*m. The binary candidate of each ROI is obtained by thresholding the ROI, making the surroundings black and the candidate white. In the binary ROI, the area *A* and perimeter *P* of the candidate are calculated. The relative size, *s*_{1}, is defined as the ratio of the candidate area to the ROI area. The compactness, *s*_{2}, describes the circularity of the candidate's shape. They are defined as

## Eq. 4

$$\begin{array}{c} s_1 = A/s^2, \\ s_2 = 1 - 4\pi A/P^2, \end{array}$$

where *A* is the number of pixels in the candidate and *P* is the total number of pixels on the candidate's boundary.
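A minimal sketch of Eq. 4 on a binary ROI follows. Counting boundary pixels (the mask minus its erosion) is one way to realize the perimeter definition above; it is a discrete approximation, so *s*_{2} should be read as a relative circularity measure rather than an exact one (for small blobs it can even dip below zero).

```python
import numpy as np
from scipy.ndimage import binary_erosion

def shape_features(mask, s):
    """Shape features of Eq. 4 for an s-by-s binary ROI.
    mask: boolean array, True on the candidate.
    Returns (relative size s1, compactness s2)."""
    A = mask.sum()
    boundary = mask & ~binary_erosion(mask)  # pixels on the candidate's edge
    P = boundary.sum()
    s1 = A / s ** 2
    s2 = 1 - 4 * np.pi * A / P ** 2
    return s1, s2
```

Comparing a roughly circular candidate with an elongated one shows the intended ordering: the disk's boundary is short relative to its area, so its *s*_{2} is smaller (closer to perfect circularity) than that of the elongated shape.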

Texture is a significant property of digital images. It plays an important role in human visual perception and provides information for recognition and interpretation. The GLCM is a powerful statistical tool that has proved its usefulness in a variety of image-analysis applications. It captures second-order gray-level information, which is closely related to human perception and the discrimination of textures. It is common practice to use the 14 well-known Haralick coefficients as GLCM-based features.^{6} The coefficients are usually calculated from the average GLCM obtained by averaging the matrices for the 0-, 45-, 90-, and 135-deg directions. In this work, we first quantize each ROI to 11 gray levels and then calculate the average GLCM, $\bar{C}$. Finally, we select four GLCM-based coefficients: energy *t*_{1}, contrast *t*_{2}, local homogeneity *t*_{3}, and entropy *t*_{4}.
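The averaged GLCM and these four coefficients (defined formally in Eq. 5) can be sketched in NumPy. The quantization to 11 levels and the four offsets follow the text; the brute-force accumulation loop and the non-symmetric pair counting are simplifications of our own, not the paper's implementation.

```python
import numpy as np

def glcm_features(img, levels=11):
    """Average GLCM over the 0-, 45-, 90-, and 135-deg offsets, then the
    four Eq. 5 coefficients. Non-symmetric counting, for brevity."""
    # quantize the ROI to `levels` gray levels
    q = np.clip((img / (img.max() + 1e-9) * levels).astype(int), 0, levels - 1)
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1)]  # 0, 45, 90, 135 deg
    C = np.zeros((levels, levels))
    H, W = q.shape
    for di, dj in offsets:
        M = np.zeros((levels, levels))
        for i in range(H):
            for j in range(W):
                ii, jj = i + di, j + dj
                if 0 <= ii < H and 0 <= jj < W:
                    M[q[i, j], q[ii, jj]] += 1
        C += M / M.sum()
    C /= len(offsets)
    i, j = np.indices(C.shape)
    t1 = np.sum(C ** 2)                     # energy
    t2 = np.sum((i - j) ** 2 * C)           # contrast
    t3 = np.sum(C / (1 + (i - j) ** 2))     # local homogeneity
    nz = C[C > 0]
    t4 = -np.sum(nz * np.log(nz))           # entropy (conventional minus sign)
    return t1, t2, t3, t4
```

A uniform patch gives energy 1 and contrast 0, while an alternating checkerboard pattern gives positive contrast, matching the usual interpretation of these coefficients.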

## Eq. 5

$$\begin{array}{c} t_1 = \sum_{i,j=1}^{N} \bar{C}^2(i,j), \\ t_2 = \sum_{i,j=1}^{N} (i-j)^2 \bar{C}(i,j), \\ t_3 = \sum_{i,j=1}^{N} \frac{1}{1+(i-j)^2}\, \bar{C}(i,j), \\ t_4 = -\sum_{i,j=1}^{N} \bar{C}(i,j) \log \bar{C}(i,j). \end{array}$$

Intensity is the only information available in a gray image, whereas color images provide richer information beyond intensity. Color contrast is a useful feature, defined as the squared difference between the mean value of the candidate region and that of its surroundings.^{4} We first extract color features in RGB color space, defined as

## Eq. 6

$$\begin{array}{c} c_1 = [\mu_{\mathrm{ext}}(R) - \mu_{\mathrm{int}}(R)]^2, \\ c_2 = [\mu_{\mathrm{ext}}(G) - \mu_{\mathrm{int}}(G)]^2, \\ c_3 = [\mu_{\mathrm{ext}}(B) - \mu_{\mathrm{int}}(B)]^2. \end{array}$$

The selection of color features is complicated by the variety of color models. The RGB space of the original image is also transformed to hue, saturation, and value (HSV) space, which is more appropriate because it separates the value component from the two chromatic components. We then extract two color features from the hue and saturation components:

## Eq. 7

$$\begin{array}{c} c_4 = [\mu_{\mathrm{ext}}(H) - \mu_{\mathrm{int}}(H)]^2, \\ c_5 = [\mu_{\mathrm{ext}}(S) - \mu_{\mathrm{int}}(S)]^2. \end{array}$$

To select the optimal feature vector, we construct several feature vectors, denoted *S*, *T*, *C*1, *C*2, *A*1, and *A*2. Here, *S* = [*s*_{1}, *s*_{2}], *T* = [*t*_{1}, *t*_{2}, *t*_{3}, *t*_{4}], *C*1 = [*c*_{1}, *c*_{2}, *c*_{3}], *C*2 = [*c*_{4}, *c*_{5}], *A*1 = {*S*, *T*, *C*1}, and *A*2 = {*S*, *T*, *C*2}. We test these feature vectors with the SVM in Section 2.3.

### 2.3. Validation of MAs Based on SVM

The SVM, first introduced by Vapnik, is a learning algorithm for two-class classification and is widely used in pattern recognition. It rests on strong foundations in statistical learning theory, following the structural risk minimization principle, and is known for its good performance. The basic principle of a binary SVM is to find the hyperplane that best separates the vectors of the two classes in feature space while maximizing the distance from each class to the hyperplane.^{7} There are both linear and nonlinear SVMs. If the two classes are linearly separable, the SVM finds the optimal separating hyperplane by maximizing the margin between the classes; the margin is $2/\|w\|$, so finding the optimal hyperplane is equivalent to minimizing $\frac{1}{2}\|w\|^2$. When the two classes are not linearly separable, the SVM computes the optimal separating hyperplane by minimizing the following:

## Eq. 8

$$J(w) = \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{N} \xi_i,$$

where *C* > 0 is a user-defined parameter that controls the trade-off between maximizing the margin and minimizing the classification error, and the $\xi_i$ are slack variables introduced for nonlinearly separable classes.

Different kernel functions yield different SVM classification performance, so the selection of the kernel function is important. We compare the performance of the commonly used kernel functions based on the ROC curve. These kernels are defined as

## Eq. 9

$$\begin{array}{l} \text{Linear function: } K(x_1, x_2) = \langle x_1, x_2 \rangle, \\ \text{Polynomial (poly): } K(x_1, x_2) = (\langle x_1, x_2 \rangle + 1)^p, \\ \text{Gaussian radial basis function (rbf): } K(x_1, x_2) = e^{-|x_1 - x_2|^2 / 2\sigma^2}, \end{array}$$

where *p* is the order of the polynomial (usually 2 or 3) and σ is the width of the rbf, which controls its range of influence.
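The three kernels of Eq. 9 translate directly into NumPy; this is only a sketch of the kernel functions themselves (a practical SVM would come from an existing library such as LIBSVM rather than these helpers).

```python
import numpy as np

def linear_kernel(x1, x2):
    """K(x1, x2) = <x1, x2>."""
    return float(np.dot(x1, x2))

def poly_kernel(x1, x2, p=2):
    """K(x1, x2) = (<x1, x2> + 1)^p, with p usually 2 or 3."""
    return float((np.dot(x1, x2) + 1) ** p)

def rbf_kernel(x1, x2, sigma=1.0):
    """K(x1, x2) = exp(-|x1 - x2|^2 / (2 sigma^2))."""
    diff = np.asarray(x1) - np.asarray(x2)
    return float(np.exp(-np.sum(diff ** 2) / (2 * sigma ** 2)))
```

Note that the rbf kernel of any vector with itself is 1 and decays toward 0 with distance, which is what makes σ the effective "range" parameter.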

## 3. Optimizing Feature Vector and Classifier Based on ROC Curve

The ROC is a performance measure commonly used to compare classifiers. The ROC curve is drawn with 1 − specificity on the *x* axis and sensitivity on the *y* axis.^{8} Sensitivity describes the probability of a positive test among all positive samples, and specificity describes the probability of a negative test among all negative samples. The diagonal in an ROC plot therefore corresponds to random guessing, and curves that move toward the upper left corner indicate increasing accuracy. The area under the ROC curve (AUC^{ROC}) is an appropriate summary measure; its value ranges from 0.5 to 1. The closer the AUC^{ROC} is to 1.0, the higher the validity of the classifier; conversely, the nearer it is to 0.5, the lower the validity. An AUC^{ROC} of 1 means the classifier is perfect.
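The ROC points and the trapezoidal AUC can be computed directly from classifier scores; the following sketch sweeps the decision threshold from high to low, ignoring score ties for simplicity (each sample contributes one ROC point).

```python
import numpy as np

def roc_curve_auc(scores, labels):
    """ROC points and trapezoidal AUC from classifier scores.
    labels: 1 for a true MA, 0 for a spurious object."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(-scores)          # sweep threshold from high to low
    lab = labels[order]
    P = lab.sum()
    N = len(lab) - P
    tpr = np.concatenate(([0.0], np.cumsum(lab) / P))        # sensitivity
    fpr = np.concatenate(([0.0], np.cumsum(1 - lab) / N))    # 1 - specificity
    auc = np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2)
    return fpr, tpr, auc
```

A classifier whose scores perfectly rank positives above negatives reaches AUC = 1, a completely inverted ranking reaches 0, and intermediate rankings fall in between, matching the interpretation given above.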

## 4. Results and Discussion

### 4.1. Sample Sets

MAs can easily be confused with other dark patterns. One of the major problems in MA detection is establishing a "gold standard," namely, a set of annotated samples for learning and testing. In this paper, we used the 50 annotated retinal images from the Retinopathy Online Challenge database.^{9} The 50 images were from patients with diabetes without known diabetic retinopathy at the moment of photography; they represent a random sample of unique patients with "red lesions" from a large (>10,000 patients) diabetic retinopathy screening program. These images were taken with Topcon NW 100, NW 200, or Canon CR5-45NM nonmydriatic cameras at their native resolution and compression settings. The retinal specialist annotations were obtained from a combination of three ophthalmologists with retinal fellowship training. The first 20 images are used for training and the remaining 30 for testing in this paper.

We extract all annotated MA regions as true-positive samples and ∼30 spurious objects per training image as true-negative samples, constructing a training set of 727 samples. A test set of 1112 samples is obtained in the same manner.

### 4.2. Experimental Results

To identify the most favorable feature vector for distinguishing real MAs from spurious objects, the shape features *S*, texture features *T*, color features *C*1 and *C*2, and their combinations *A*1 and *A*2 are each used as input to the SVM. ROC curves for the different feature vectors are shown in Fig. 3.

In Fig. 3, the ROC curves of the SVM using *C*1, *C*2, or *S* alone lie on the diagonal; they overlap each other and have no diagnostic value. The ROC curves for *T*, *A*1, and *A*2 have higher AUC^{ROC}. Table 1 shows that *A*2 has the highest AUC^{ROC} and thus the best classification performance among these feature vectors.

## Table 1

Sensitivity and specificity of different feature vectors for SVM.

| *T* sensitivity | *T* specificity | *A*1 sensitivity | *A*1 specificity | *A*2 sensitivity | *A*2 specificity |
|---|---|---|---|---|---|
| 1 | 0 | 1 | 0 | 1 | 0 |
| 0.6121 | 0.7494 | 0.9720 | 0.0735 | 0.6916 | 0.7606 |
| 0.5935 | 0.7806 | 0.8879 | 0.3062 | 0.6541 | 0.7873 |
| 0.4860 | 0.8608 | 0.7991 | 0.5412 | 0.5981 | 0.8396 |
| 0.4533 | 0.8719 | 0.6963 | 0.6158 | 0.4907 | 0.8998 |
| 0.3972 | 0.9020 | 0.5981 | 0.7595 | 0.3972 | 0.9388 |
| 0.3551 | 0.9287 | 0.4860 | 0.8686 | 0.3037 | 0.9610 |
| 0.2991 | 0.9465 | 0.3925 | 0.9020 | 0.2897 | 0.9621 |
| 0.1963 | 0.9666 | 0.2944 | 0.9399 | 0.1822 | 0.9788 |
| 0.0935 | 0.9889 | 0.1963 | 0.9699 | 0.1449 | 0.9866 |
| 0 | 1 | 0 | 1 | 0 | 1 |

AUC^{ROC}: *T* = 0.7093, *A*1 = 0.7359, *A*2 = 0.7574.

An SVM's classification performance differs significantly across kernel functions. Here, the commonly used kernels (linear, Gaussian rbf, and polynomial) are compared. Figure 4 shows the ROC curves of the SVM with these kernels; the best result is obtained with the polynomial kernel with *p* = 2. Table 2 shows that the ROCs for the linear and polynomial kernels have larger areas than that for the rbf, and the polynomial kernel with *p* = 2 has the largest AUC^{ROC} (0.7574), slightly larger than that of the linear kernel. This indicates that the SVM with the polynomial kernel, *p* = 2, has the best classification performance among the kernels considered.

## Table 2

Sensitivity and specificity of different kernels of SVM.

| Linear sensitivity | Linear specificity | rbf sensitivity | rbf specificity | Poly 2 sensitivity | Poly 2 specificity | Poly 3 sensitivity | Poly 3 specificity |
|---|---|---|---|---|---|---|---|
| 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0 |
| 0.6963 | 0.6915 | 0.7103 | 0.4788 | 0.6916 | 0.7606 | 0.6355 | 0.7650 |
| 0.5935 | 0.8107 | 0.6963 | 0.5078 | 0.6541 | 0.7873 | 0.5888 | 0.8274 |
| 0.4860 | 0.8864 | 0.5981 | 0.6537 | 0.5981 | 0.8396 | 0.4953 | 0.8391 |
| 0.4439 | 0.9065 | 0.4953 | 0.7706 | 0.4907 | 0.8998 | 0.4597 | 0.9198 |
| 0.3925 | 0.9354 | 0.4533 | 0.8241 | 0.3972 | 0.9388 | 0.3972 | 0.9410 |
| 0.3598 | 0.9399 | 0.3972 | 0.8664 | 0.3037 | 0.9610 | 0.3598 | 0.9488 |
| 0.2897 | 0.9577 | 0.2991 | 0.9410 | 0.2897 | 0.9621 | 0.2944 | 0.9677 |
| 0.1963 | 0.9800 | 0.1916 | 0.9577 | 0.1822 | 0.9788 | 0.1963 | 0.9800 |
| 0.0981 | 0.9944 | 0.1262 | 0.9766 | 0.1449 | 0.9866 | 0.1729 | 0.9866 |
| 0 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |

AUC^{ROC}: Linear = 0.7427, rbf = 0.6677, Poly 2 = 0.7574, Poly 3 = 0.7364.

Different thresholds yield different classification results, so the threshold strongly affects the classifier's performance. Here, sensitivity × specificity is used as the measure to determine which threshold has the best diagnostic value.^{9} Sensitivity × specificity reaches its maximum at a threshold of −0.9, as shown in Fig. 5. With the classification threshold set at −0.9, the sensitivity is 64% and the specificity is 80%.
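This threshold search amounts to sweeping the SVM decision values and keeping the one that maximizes sensitivity × specificity; the sketch below illustrates the idea on raw scores (the function name and interface are our own).

```python
import numpy as np

def best_threshold(scores, labels):
    """Sweep candidate thresholds over the SVM decision values and pick
    the one maximizing sensitivity * specificity."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos, neg = (labels == 1), (labels == 0)
    best_t, best_v = None, -1.0
    for t in np.unique(scores):          # each observed score is a candidate
        pred = scores >= t
        sens = (pred & pos).sum() / pos.sum()
        spec = (~pred & neg).sum() / neg.sum()
        if sens * spec > best_v:
            best_t, best_v = t, sens * spec
    return best_t, best_v
```

On decision values that cleanly separate the classes, the routine returns the threshold sitting at the boundary between the two groups, where sensitivity × specificity peaks at 1.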

## 5. Conclusion

In this paper, we present a new method to optimize an algorithm for automatic detection of MAs. The MA detection algorithm comprises three steps: candidate detection, feature extraction, and classification. We use ROC curves to evaluate the discriminating ability of different feature vectors and different SVM kernel functions. The feature vector *A*2 has the highest AUC^{ROC} in Fig. 3 and is used as the input of the SVM, and the polynomial kernel with *p* = 2 shows the best discriminating performance according to the ROC curves in Fig. 4. The ROC curve is thus a useful technique for estimating the classification performance of classifiers; it can be used to select the appropriate feature vector and to optimize the classifier.

## References

"Identification of different stages of diabetic retinopathy using retinal optical images," Inf. Sci., 178(1), 106–121 (2008). https://doi.org/10.1016/j.ins.2007.07.020

"CAD scheme to detect hemorrhages and exudates in ocular fundus images," Proc. SPIE, 6514, 65142M (2007).

"Detection of microaneurysms using multi-scale correlation coefficients," Pattern Recogn., 43(6), 2237–2248 (2010). https://doi.org/10.1016/j.patcog.2009.12.017

"Automatic detection of microaneurysms in color fundus images," Med. Image Anal., 11(6), 555–566 (2007). https://doi.org/10.1016/j.media.2007.05.001

"Automatic detection of diabetic retinopathy exudates from non-dilated retinal images using mathematical morphology methods," Comput. Med. Imaging Graphics, 32(8), 720–727 (2008). https://doi.org/10.1016/j.compmedimag.2008.08.009

"Increasing the discrimination power of the co-occurrence matrix-based features," Pattern Recogn., 40(9), 2367–2372 (2007). https://doi.org/10.1016/j.patcog.2006.12.004

"Multi-class support vector machine for classification of the ultrasonic images of supraspinatus," Expert Syst. Appl., 36(4), 8124–8133 (2009). https://doi.org/10.1016/j.eswa.2008.10.030

"ROC analysis of CT hemodynamic in the diagnosis of breast cancer," Chin.-German J. Clin. Oncol., 9(3), 165–168 (2010). https://doi.org/10.1007/s10330-009-0191-7