
1. Introduction

Automatic autofocusing (AF) in digital microscopy depends strongly on the topographic variability of the sample and on its color distribution. As stated by Qu et al.,^{1} different focus criterion functions perform quite differently even for the same sample. The majority of these methods have addressed AF in the context of monochromatic frames.^{2–5} Furthermore, many works have presented comparative evaluations of the performance of these kinds of AF techniques.^{6–8} Some research has determined that the best AF metric is based on the Brenner function;^{2} other research gives priority to the variance,^{9} Vollath F4,^{10–12} or the sum-modified-Laplacian,^{13} among other methods. For the RGB space, few AF works have been reported.^{14,15} In addition, the effectiveness of the AF algorithms depends on the color space in which the numerical computation is done.^{16} To avoid this dependence, a wavelet-based technique that converts multichannel (e.g., color) data to a single channel by principal component analysis has been reported;^{17} unfortunately, it is computationally intensive. In this paper, we propose an extension of the procedures currently used to digitally compute a focus measure on the monochromatic version of an image; these techniques are now applied to color images by adjusting the AF algorithms through the modulus of the gradient of the color planes (MGC) operator.^{18–20} Hence, it is possible to improve the performance of a large number of AF algorithms, since all of them are capable of indicating a focused slice from the MGC image. Moreover, because first-derivative methods can be efficiently implemented on GPUs, the MGC algorithm can run in parallel. In wide-field microscopy, only the transverse sections that lie within the depth of field (DOF) of the objective lens can be brought into focus.
To record the three-dimensional (3D) volume, it is necessary to axially scan the sample. An extra difficulty then arises: the DOF of the optical objectives decreases as the numerical aperture (NA) increases, which abruptly blurs the portion of the object that lies outside the DOF. A common approach to digitally extend the depth of field (EDoF) is a digital image fusion scheme. Typically, image fusion schemes select the in-focus pixels along the $z$ axis to reconstruct an all-in-focus composite image. Due to the high computational effort, these methods have been implemented on parallel computer systems such as clusters and GPUs.^{21–23} In this work, a pixel-by-pixel fusion of multifocus color images based on the MGC is implemented in parallel on a GPU. According to the image quality metrics, the proposed method is competitive for merging these kinds of images, and the 3D visualization of the in-focus images verifies the fusion results. This work is organized as follows: in Sec. 2, the MGC transformation from multichannel to grayscale frames is briefly reviewed, and the AF functions and image fusion technique used in this paper are analyzed. In Sec. 3, the procedure for acquiring the different $z$-stacks of digital images is described. In this research, human and animal tissue samples have been employed as test objects to prove the proposed algorithms. The human tissue samples were prepared by Mikroskope.Net,^{24} and the animal tissue came from the Human Connective Tissues Microscope Slide Set.^{25} In Sec. 4, the AF and fusion results of the experiments conducted to evaluate the algorithms are presented. Finally, the conclusions of the work are given in Sec. 5.

2. Mathematical Methods

2.1. Multichannel Conversion to a Grayscale Image

In the RGB space, the red, green, and blue components of a vector are commonly related to the pixels of an RGB image of size $M\times N$.
They can be represented by $C(x,y)$, as in the following equation:

Eq. (1)

$$C(x,y)=R(x,y)\widehat{i}+G(x,y)\widehat{j}+B(x,y)\widehat{k},$$

where $R(x,y)$, $G(x,y)$, and $B(x,y)$ are the RGB space channels and $\widehat{i}$, $\widehat{j}$, $\widehat{k}$ are the unitary vectors, respectively. Typically, a compound gradient image ${g}^{c}(x,y)$ is determined by^{18,19}

Eq. (2)

$${g}^{c}(x,y)={g}^{R}(x,y)+{g}^{G}(x,y)+{g}^{B}(x,y),$$

where ${g}^{R}(x,y)$, ${g}^{G}(x,y)$, and ${g}^{B}(x,y)$ are the gradient images for each channel. In general, the modulus of the gradient of the color planes ${g}^{c}$ is computed using the Euclidean distance,^{20} as follows:

Eq. (3)

$${g}^{c}(x,y)=\sqrt{\sum _{i=1}^{\text{band}}\left\{{\left[\frac{\partial C(x,y,i)}{\partial x}\right]}^{2}+{\left[\frac{\partial C(x,y,i)}{\partial y}\right]}^{2}\right\}}.$$

Conventionally, the partial derivative along the $x$ axis of a two-dimensional function $C(x,y,i)$ can be numerically approximated as $\frac{\partial C(x,y,i)}{\partial x}\approx C(x+1,y,i)-C(x,y,i)$. Likewise, the partial derivative along the $y$ axis is given by $\frac{\partial C(x,y,i)}{\partial y}\approx C(x,y+1,i)-C(x,y,i)$. In color image processing, the gradient is commonly used as a procedure for color edge detection. Therefore, the modulus of the gradient of the color planes is a sharp image, which can be computed using the equation

Eq. (4)

$${g}^{c}(x,y)={\left[\sum _{i=1}^{\text{band}}{[C(x+1,y,i)-C(x,y,i)]}^{2}+\sum _{i=1}^{\text{band}}{[C(x,y+1,i)-C(x,y,i)]}^{2}\right]}^{1/2}.$$

In addition to the RGB space, color images are also processed in the hue, saturation, and intensity (HSI) color space, because it is a suitable model for color description and analysis. The HSI space is modeled as a double cone, where hue represents the dominant color, saturation the purity of the color, and intensity the brightness. As stated by Gonzalez and Woods,^{26} this model decouples the intensity component from the color-carrying information (hue and saturation) in a color image.
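As a concrete illustration, Eq. (4) can be sketched in a few lines of NumPy. The function name `mgc` and the zero padding of the last row and column are our own implementation choices, not part of the original formulation:

```python
import numpy as np

def mgc(stack_slice):
    """Modulus of the gradient of the color planes, as in Eq. (4).

    stack_slice: float array of shape (M, N, bands), e.g., an RGB image.
    Returns an (M, N) grayscale edge image.
    """
    C = stack_slice.astype(np.float64)
    # Forward differences along x (rows) and y (columns); the last
    # row/column is padded with zeros so the output keeps the input size.
    dx = np.zeros_like(C)
    dy = np.zeros_like(C)
    dx[:-1, :, :] = C[1:, :, :] - C[:-1, :, :]
    dy[:, :-1, :] = C[:, 1:, :] - C[:, :-1, :]
    # Sum the squared differences over all bands, then take the root.
    return np.sqrt((dx ** 2).sum(axis=2) + (dy ** 2).sum(axis=2))

# A flat color patch has zero gradient modulus everywhere.
flat = np.full((4, 4, 3), 7.0)
print(mgc(flat).max())  # -> 0.0
```

A vertical step edge shared by all three channels yields the expected value $\sqrt{3}$ at the boundary, since the squared unit difference accumulates over the bands.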
The intensity channel is an essential descriptor of monochromatic images, and it is classically used for multichannel conversion to a grayscale image. The difference measurement when working in the HSI color space is modified as established by Koschan and Abidi.^{18} In this work, the multichannel conversion to a grayscale image has been done by means of the MGC operator, as shown in Fig. 1. Because the MGC operator is a color edge detection technique for digital images, the MGC(RGB) and MGC(HSI) matrices show the high-spatial-frequency content of the input color images, which makes them suitable for finding focused regions.

2.2. Autofocus Methods

In the literature, there exist several comparisons of the performance of AF algorithms.^{4,6,8,9,12} Each algorithm produces a figure of merit (FM) that is analyzed by taking into account the global or local variance of the image intensity values $f(x,y)$. Customarily, the AF algorithms can be classified into five groups according to their mathematical nature: derivative-based algorithms,^{27,28} statistical algorithms,^{10} histogram-based algorithms,^{6,12} intuitive algorithms,^{9} and image-transformation-based algorithms.^{3} Throughout this paper, 15 AF algorithms that have been widely reported in the literature are tested and compared using the MGC images. This task was carried out to improve the performance of AF algorithms. Table 1 summarizes the definitions of the most typical AF metrics defined in the new approach, namely the MGC transformation. The output of an ideal AF algorithm is commonly defined as having a maximum value at the best focused image position, and this value clearly decreases as defocus increases. As noted by Tian,^{3} the fundamental requirements for an FM are unimodality and monotonicity, which ensure that the FM has only one extreme value and is monotonic on each side of its peak or valley.
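Any classical focus measure can then be evaluated on the MGC image instead of a single intensity channel. The sketch below, with hypothetical helper names, shows two common metrics (variance and Brenner) applied this way; the exact definitions used in Table 1 may differ:

```python
import numpy as np

def fm_variance(g):
    """Variance focus measure evaluated on the MGC image g."""
    return ((g - g.mean()) ** 2).mean()

def fm_brenner(g):
    """Brenner focus measure: squared differences two pixels apart."""
    return ((g[2:, :] - g[:-2, :]) ** 2).sum()

def best_slice(mgc_stack):
    """Pick the z index whose MGC image maximizes the focus measure."""
    scores = [fm_variance(g) for g in mgc_stack]
    return int(np.argmax(scores))

# A high-contrast MGC image scores higher than a featureless one,
# so the sharper slice is selected as the in-focus plane.
checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
print(best_slice([np.zeros((8, 8)), checker]))  # -> 1
```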
Furthermore, Redondo et al.^{4} defined the number $\eta$ of local maxima, the width ratio $\alpha/\beta$ of the focus curve measured at 80% and 40% of its height, and noise/illumination invariance as important features of the autofocus curve. Complementary characteristics of the AF algorithms are their accuracy and fast response. To evaluate the AF performance (AP) of each AF algorithm, the score of Eq. (5) is proposed, where $Z=z/\mathrm{\Delta}Z$ represents the number of focal planes along the $z$ axis away from the origin $z=0$, and $\mathrm{\Delta}Z$ is the distance between axial planes. For instance, if $Z=\frac{150\ \mu \mathrm{m}}{50\ \mu \mathrm{m}}=3$, then the AP metric is equal to 0.7. This happens because the mentioned measurements do not locate the focused plane until precisely three $\mathrm{\Delta}Z$ steps away from the plane $z=0$. At best, AP is equal to 1. A low AP results from two conditions: (a) the AF algorithm does not reveal the focal plane at $z=0$, or (b) the AF algorithm indicates a wrong focal plane, too far from $z=0$.

Table 1. AF algorithms rewritten in terms of the modulus of the color gradient operator ${g}^{c}(x,y)$.
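Equation (5) itself is not reproduced in this excerpt. A hypothetical reconstruction that matches the worked example (Z = 3 gives AP = 0.7, and AP = 1 when the focal plane is found at z = 0) is sketched below; the 0.1 penalty per ΔZ step is our assumption, not the paper's stated formula:

```python
def ap_score(z_detected, z_true=0.0, delta_z=1.0):
    """Autofocus-performance score (hypothetical reconstruction of Eq. 5).

    Z = |z_detected - z_true| / delta_z counts the focal-plane steps by
    which the algorithm misses; each step is assumed to cost 0.1, which
    reproduces the paper's example (Z = 3 -> AP = 0.7). At best AP = 1,
    and the score is clipped at 0 for very distant detections.
    """
    Z = abs(z_detected - z_true) / delta_z
    return max(0.0, 1.0 - 0.1 * Z)

print(ap_score(150.0, 0.0, 50.0))  # -> 0.7 (the worked example)
```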
2.3. Multifocus Image Fusion

As mentioned previously, any microscopic imaging system can only focus the part of the field of view (FOV) of the sample that is inside the DOF of the objective lens. This means that only certain axial planes of the sample are in focus. A current solution to this drawback is multifocus image fusion, which reconstructs an all-in-focus image of the complete FOV for a particular specimen. This can be done by capturing images of the sample at different focal axial planes. In this section, a color image fusion scheme based on the MGC method is proposed, as shown in Fig. 2. Let ${C}_{z}(x,y,i)$ be a set of input images, where $z=1,2,\dots,Z$ and the index $i=1,2,3$ denotes the channel/band used. For each axial plane, Eq. (2) is computed to create a compound gradient image, and then for each pixel $(x,y)$ the maximum value is selected using $\mathrm{sap}(x,y)={\mathrm{max}}_{z}\{{g}_{1}^{c}(x,y),\dots,{g}_{Z}^{c}(x,y)\}$. In other words, the $\mathrm{sap}(x,y)$ matrix denotes the slice axial position of the in-focus pixels along the $z$ axis. A post-processing stage involves a spatial consistency algorithm.^{17} This post-processing is carried out by means of a low-pass-filtered $\tilde{\mathrm{sap}}(x,y)$ matrix using a $p\times q$ median filter. This algorithm ensures that the majority of the intensity pixels in a $p\times q$ neighborhood of $\tilde{\mathrm{sap}}(x,y)$ come from the same $z$-slice or from the closest one. For example, the spatial consistency of the $\tilde{\mathrm{sap}}(x,y)$ matrix is shown in Fig. 3. Figure 3(a) contains three $p\times q$ neighborhoods where the value of the slice axial position is higher than that of its neighbors. Figure 3(b) shows these values adjusted to match the values of the $p\times q$ neighborhood of $\tilde{\mathrm{sap}}(x,y)$ to conserve the continuity of the surface of the sample.
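The selection and consistency steps above can be sketched as follows. The names `fuse_stack`, `mgc_like`, and `median_2d` are illustrative, and the naive median filter stands in for whatever p × q implementation is used in practice:

```python
import numpy as np

def mgc_like(C):
    """Per-slice modulus of the color gradient (forward differences)."""
    dx = np.diff(C, axis=0, append=C[-1:, :, :])
    dy = np.diff(C, axis=1, append=C[:, -1:, :])
    return np.sqrt((dx ** 2).sum(axis=-1) + (dy ** 2).sum(axis=-1))

def median_2d(a, p, q):
    """Naive p x q median filter with edge replication."""
    rp, rq = p // 2, q // 2
    pad = np.pad(a, ((rp, rp), (rq, rq)), mode="edge")
    out = np.empty_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = int(np.median(pad[i:i + p, j:j + q]))
    return out

def fuse_stack(stack, p=3, q=3):
    """Select, per pixel, the slice with the largest gradient modulus.

    stack: float array (Z, M, N, bands) of registered color slices.
    Returns the fused image and the filtered slice-position matrix.
    """
    g = np.stack([mgc_like(s) for s in stack])   # (Z, M, N) gradient moduli
    sap = np.argmax(g, axis=0)                   # in-focus slice per pixel
    sap_f = median_2d(sap, p, q)                 # spatial consistency step
    M, N = sap.shape
    ii, jj = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    return stack[sap_f, ii, jj, :], sap_f
```

With a two-slice stack in which one slice is featureless and the other is sharp everywhere, every pixel of the fused output comes from the sharp slice; the median filter also repairs isolated mislabeled pixels where the gradient ties at zero.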
To avoid introducing artificial information, the fused image $\mathrm{\varphi}(x,y,i)$ is composed of the (multichannel) pixels that are present in the original input data ${C}_{z}(x,y,i)$, where for each pixel $(x,y)$ the slice position fulfills the condition $z\in \tilde{\mathrm{sap}}(x,y)$. Therefore, a multifocus image fusion algorithm can be defined as follows:

Eq. (6)

$$\mathrm{\varphi}(x,y,i)={C}_{z}(x,y,i),\quad z=\tilde{\mathrm{sap}}(x,y).$$

Schematically, the proposed image fusion procedure is shown in Fig. 2. As can be seen, the fused image $\mathrm{\varphi}(x,y,i)$ is composed of the sharp regions provided by the in-focus pixels of the input color images. To accelerate the numerical computation, the fusion process is migrated to the GPU. The resulting fused images are evaluated with a non-reference image quality metric based on measuring the anisotropy of the images. The anisotropic quality index (AQI) of an image $\mathrm{\varphi}(x,y,i)$ is given by^{29}

Eq. (7)

$$\mathrm{AQI}(\mathrm{\varphi})=\sqrt{\sum _{s=1}^{S}{[{\mu}_{\mathrm{\varphi}}-R(t,{\theta}_{s})]}^{2}/S},$$

where $R(t,{\theta}_{s})$ is a directional entropy measure of the image along the orientation ${\theta}_{s}$ and ${\mu}_{\mathrm{\varphi}}$ is its mean over the $S$ orientations.

2.3.1. Simulated data

For testing purposes, a simulated stack of 20 frames is constructed from a color image of $2584\times 1936\ \text{pixels}$. Figure 2(a) shows some digitally defocused slices generated using the extended EDoF software plugin.^{17} Each blurred image was obtained by convolving the image with a Gaussian point spread function (PSF) of increasing width. The 3D visualization of the resulting in-focus image using the fusion scheme of Eq. (6) is sketched in Fig. 2(d). The spatial consistency of the corresponding $\tilde{\mathrm{sap}}(x,y)$ matrix is shown in Fig. 3. In addition, the AQI of the fused image and the normalized mean square error (NMSE)^{30} between the original image and the merged image are shown in Table 2.

Table 2. Image quality assessment of in-focus images.
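For the quantitative comparison against a known ground truth, one common NMSE definition (assumed here, since the text does not spell it out) normalizes the squared error by the energy of the reference image:

```python
import numpy as np

def nmse(reference, fused):
    """Normalized mean-square error between a reference and a fused image.

    Assumed definition: sum of squared differences normalized by the
    energy of the reference image. 0 means the images are identical.
    """
    ref = reference.astype(np.float64)
    err = ref - fused.astype(np.float64)
    return (err ** 2).sum() / (ref ** 2).sum()
```

Under this definition, a perfect reconstruction scores 0, while replacing the reference by an all-zero image scores exactly 1.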
3. Image Acquisition of Histological Samples

A motorized AxioImager M1 optical microscope system manufactured by Carl Zeiss is used to image the histological samples and to capture their color digital images. Some examples of these kinds of tissue samples are shown in Fig. 4. These microscopic objects are imaged using bright-field illumination in the optical microscope system. The microscope incorporates an AxioCam Mid Range Color camera of 5 megapixels with an image resolution of $2584\times 1936\ \text{pixels}$, a chip size of $8.7\ \mathrm{mm}\times 6.6\ \mathrm{mm}$, a pixel size of $3.4\ \mu\mathrm{m}\times 3.4\ \mu\mathrm{m}$, and a spectral range of 400 to 710 nm. Furthermore, as part of the optical microscope device, an $x$-$y$ mechanical platform and a motorized stage are integrated to control the focus movements along the $z$ axis. From Table 3, it is evident that the interplanar distance $\mathrm{\Delta}Z$ between different optical sections is determined by the NA of the objective lens.

Table 3. Specifications of the EC Plan-Neofluar objective lenses (Carl Zeiss Microscopy, retrieved from Ref. 32) employed during image acquisition.^{33,34}
4. Results and Discussions

4.1. Focusing Results

To evaluate the performance of the 15 AF techniques on the MGC images, six $z$-stacks of 21 multichannel images are recorded using two histological samples. Each stack has a particular, highly dominant color, as shown in Fig. 4. This allows us to evaluate the MGC method for different color distributions and magnifications inside the digital image.
Some research^{4} has reported that beyond a magnification of $63\times$, the performance of the various AF metrics is drastically impaired. According to the results shown in the graphs of Fig. 8, when using the MGC approach all the FM curves present monotonic behavior, even when the magnification is increased to $100\times$ (oil immersion). This experimental result supports the advantage of a color-to-MGC space transformation. Nevertheless, a problem arises when images of a sample of thickness $t\ge \mathrm{DOF}$ are acquired at a magnification of $100\times$: some portions of the image are only partly in focus. In the graphs of Fig. 8(d), two in-focus regions located at $z=0$ and $z=6\ \mu\mathrm{m}$ can be seen. According to the results given in Tables 4 and 5, all the AF measures realized in the MGC space are accurate in spite of the different magnifications, unlike some typically used channels for focus measure.

Table 4. Autofocusing performance (AP) of all metrics in different grayscale channels and MGC images. The mean and standard deviation of AP are given in bold.
Table 5. Autofocusing performance (AP) of all metrics in different grayscale channels and MGC images. The mean and standard deviation of AP are given in bold.
Another advantage of the MGC method is its computational simplicity and inherent parallelism. Figure 9 shows the computational cost of the MGC(RGB) method on a $z$-stack of digital images of $2584\times 1936\ \text{pixels}$, when run on an Intel Xeon 2.10-GHz processor with 16 GB of RAM and an NVIDIA Quadro K4000 GPU. The parallelized MGC method on the GPU is one order of magnitude faster than the same application implemented on the CPU.

4.2. Multifocus Image Fusion Results

It is well known that the digital images of thick microscopic objects provided by a wide-field optical microscope are strongly blurred in the portion of the object that lies outside the DOF of the objective lens. We can, however, seek those regions of the FOV that are conveniently located in focus. This subsection describes the results of a method to merge multifocus frames based on the MGC approach. Our experiment starts with the acquisition of a digital image $z$-stack from a histological sample. This set of $z$-images is obtained by moving the microscope stage along the optical axis. For this, the axial extension $t$ of the sample is defined, and then the axial stage with the sample is moved to cover this extension. The interplanar distance $\mathrm{\Delta}Z$ between different optical sections is less than the axial resolution of the microscope, defined as the DOF in Table 3. From this table, it is evident that $\mathrm{\Delta}Z$ is determined by the NA of the objective lens. Digital images of a beetle shell are acquired with a magnification of $10\times$ and an interplanar distance $\mathrm{\Delta}Z=3\ \mu\mathrm{m}$. The given $z$-stack is composed of 42 images of $1024\times 768\ \text{pixels}$.
Figure 10(a) shows the in-focus image obtained with the EDoF software plugin^{31} based on a complex-wavelets algorithm for EDoF;^{17} the fusion process takes an average execution time of 52.2 s. The fused image of Fig. 10(b) is based on the proposed MGC fusion method. In this technique, the resulting slice axial position matrix $\mathrm{sap}(x,y)$ is low-pass filtered using a $p\times q$ median filter with $p=q=3,15,35$. The total execution time is 32.1 s. The 3D visualization of the resulting in-focus image is sketched in Fig. 10(c). Finally, the non-reference image quality metric of Eq. (7) is computed for the in-focus image quality assessment; the results are shown in Fig. 10(d). Another example is the case of an umbilical cord imaged at a magnification of $10\times$ with an interplanar distance $\mathrm{\Delta}Z=3\ \mu\mathrm{m}$. The given $z$-stack is composed of 39 images of $2584\times 1936\ \text{pixels}$. Again, Fig. 11(a) shows the in-focus image obtained with the EDoF plugin,^{17,31} which takes an average execution time of 477.43 s. The fused image of Fig. 11(b) is based on the proposed MGC fusion method, where the resulting slice axial position matrix $\mathrm{sap}(x,y)$ is again low-pass filtered using a $p\times q$ median filter with $p=q=3,15,35$. The total execution time is 238.9 s, and the fusion evaluation is shown in Fig. 11(d). As can be seen, the proposed fusion method yields a high-quality image independent of faulty illumination during the image acquisition.

5. Conclusions

In this research, the MGC operator has been applied to digital color images. This procedure transforms the multichannel information into a grayscale image, which is used for (a) focus measurements during the AF process and (b) extending the DOF in the framework of digital microscopy applications.
The AF experimental results of this work demonstrate the effectiveness of the MGC method when it is applied to several $z$-stacks of images. From this point of view, we can conclude that the use of the proposed MGC image increases the performance of currently used passive AF algorithms and produces monotonic FM curves with a single local maximum $\eta$ and a similar width $\alpha/\beta$ of the focus curve, as shown in Figs. 5, 6, and 8. The test frames have been acquired from two histological samples at magnifications of $2.5\times$, $10\times$, $40\times$, and $100\times$ (oil immersion). The AF graphs in Fig. 8 that are obtained by the MGC method present similar behaviors even up to a magnification of $100\times$; therefore, all the AF algorithms reveal the image slice at $z=0$. In contrast, as shown by the AP results in Tables 4 and 5, the same AF algorithms in other color spaces work properly only in some cases. As can be seen in the same tables, the mean and the standard deviation of the AF performance for the MGC image are 1 and 0, respectively, for both magnifications. We can conclude that the effectiveness of the AF algorithms depends on several factors: (1) the color space selected for the numerical computation, (2) the color distribution of the sample under inspection, and (3) the sample magnification. Only in the MGC space does the AF performance tend to be invariant with respect to these factors. Another remarkable characteristic of the MGC method is that it is computationally simple and inherently parallel. The computational cost of the MGC(RGB) algorithm implemented on a GPU is reduced by an order of magnitude for images of $2584\times 1936\ \text{pixels}$, as shown in Fig. 9. On the other hand, the fusion scheme $\mathrm{\varphi}(x,y,i)$ was implemented on an image $z$-stack for EDoF.
The fused image is composed of the sharp regions provided by the in-focus pixels $\tilde{\mathrm{sap}}(x,y)$ of the input data. Our fusion method has been quantitatively and qualitatively compared with the EDoF plugin, which is widely used in digital microscopy for DOF extension. For a simulated image stack, the resulting fused image was compared with the corresponding original image using the NMSE, as shown in Fig. 2. Also, the non-reference image quality metric AQI was implemented for image quality assessment. These quantitative evaluations, given in Table 2, show that the quality of the resulting fused image $\mathrm{\varphi}(x,y,i)$ is better than that of the fused image given by the EDoF plugin. The 3D visualization of the in-focus images verifies the fusion results. Based on the experimental results of Figs. 10 and 11, the MGC method is sufficiently competitive to merge multifocus images. In general, the main advantages of the proposed fusion method based on the MGC transformation are that it is computationally simpler, faster, and more efficient than other methods typically used to fuse multifocus information. Additionally, the comparisons in Figs. 11(a) and 11(b) show that our method yields a high-quality image independent of faulty illumination during the image acquisition.

Acknowledgments

R. Hurtado thanks the Consejo Nacional de Ciencia y Tecnología (CONACyT), Award No. 578446. We also thank the PADES program for its support, Award No. 201713011053. We extend our gratitude to the reviewers and to Jennifer Speier for their useful suggestions.

References

1. Y. Qu, S. Zhu, and P. Zhang, "A self-adaptive and nonmechanical motion autofocusing system for optical microscopes," Microsc. Res. Tech. 79(11), 1112–1122 (2016). https://doi.org/10.1002/jemt.v79.11
2. S. Yazdanfar et al., "Simple and robust image-based autofocusing for digital microscopy," Opt. Express 16(12), 8670–8677 (2008). https://doi.org/10.1364/OE.16.008670
3. Y. Tian, "Autofocus using image phase congruency," Opt. Express 19(1), 261–270 (2011). https://doi.org/10.1364/OE.19.000261
4. R. Redondo et al., "Autofocus evaluation for brightfield microscopy pathology," J. Biomed. Opt. 17(3), 036008 (2012). https://doi.org/10.1117/1.JBO.17.3.036008
5. A. Lipton and E. J. Breen, "On the use of local statistical properties in focusing microscopy images," Microsc. Res. Tech. 31(4), 326–333 (1995). https://doi.org/10.1002/(ISSN)1097-0029
6. L. Firestone et al., "Comparison of autofocus methods for use in automated algorithms," Cytometry 12, 195–206 (1991). https://doi.org/10.1002/(ISSN)1097-0320
7. M. Subbarao and J. K. Tyan, "Selecting the optimal focus measure for autofocusing and depth-from-focus," IEEE Trans. Pattern Anal. Mach. Intell. 20, 864–870 (1998). https://doi.org/10.1109/34.709612
8. Y. Sun, S. Duthaler, and B. J. Nelson, "Autofocusing in computer microscopy: selecting the optimal focus algorithm," Microsc. Res. Tech. 65, 139–149 (2004). https://doi.org/10.1002/(ISSN)1097-0029
9. X. Y. Liu, W. H. Wang, and Y. Sun, "Dynamic evaluation of autofocusing for automated microscopic analysis of blood smear and pap smear," J. Microsc. 227(1), 15–23 (2007). https://doi.org/10.1111/jmi.2007.227.issue-1
10. D. Vollath, "The influence of the scene parameters and of noise on the behavior of automatic focusing algorithms," J. Microsc. 151(2), 133–146 (1988). https://doi.org/10.1111/jmi.1988.151.issue-2
11. O. A. Osibote et al., "Automated focusing in bright-field microscopy for tuberculosis detection," J. Microsc. 240(2), 155–163 (2010). https://doi.org/10.1111/jmi.2010.240.issue-2
12. A. Santos et al., "Evaluation of autofocus functions in molecular cytogenetic analysis," J. Microsc. 188(3), 264–272 (1997). https://doi.org/10.1046/j.1365-2818.1997.2630819.x
13. J. Cao et al., "Method based on bioinspired sample improves autofocusing performances," Opt. Eng. 55(10), 103103 (2016). https://doi.org/10.1117/1.OE.55.10.103103
14. K. Omasa and M. Kouda, "3D color video microscopy of intact plants," in Image Analysis: Methods and Applications, pp. 257–263, CRC Press, New York (2001).
15. H. Shi, Y. Shi, and X. Li, "Study on auto-focus methods of optical microscope," in 2nd Int. Conf. on Circuits, System and Simulation (ICCSS 2012), IPCSIT (2012).
16. M. Selek, "A new auto-focusing method based on brightness and contrast for color cameras," Adv. Electr. Comput. Eng. 16(4), 39–44 (2016). https://doi.org/10.4316/AECE.2016.04006
17. B. Forster et al., "Complex wavelets for extended depth-of-field: a new method for the fusion of multichannel microscopy images," Microsc. Res. Tech. 65, 33–42 (2004). https://doi.org/10.1002/(ISSN)1097-0029
18. A. Koschan and M. Abidi, Digital Color Image Processing, Wiley-Interscience, New Jersey (2008).
19. R. M. Rangayyan, B. Acha, and C. Serrano, Color Image Processing with Biomedical Applications, SPIE Press, Bellingham, Washington (2011).
20. T. Gevers and H. Stokman, "Classifying color edges in video into shadow-geometry, highlight, or material transitions," IEEE Trans. Multimedia 5(2), 237–243 (2003). https://doi.org/10.1109/TMM.2003.811620
21. W. A. Carrington and D. Lisin, "Cluster computing for digital microscopy," Microsc. Res. Tech. 64(2), 204–213 (2004). https://doi.org/10.1002/(ISSN)1097-0029
22. J. M. Castillo-Secilla et al., "Autofocus method for automated microscopy using embedded GPUs," Biomed. Opt. Express 8(3), 1731–1740 (2017). https://doi.org/10.1364/BOE.8.001731
23. J. C. Valdiviezo-N et al., "Autofocusing in microscopy systems using graphics processing units," Proc. SPIE 8856, 88562K (2013). https://doi.org/10.1117/12.2024967
24. "Konus prepared slides: the human body I and III sets," http://www.microscopes.eu/en/Brand/Konus/ (accessed February 2017).
25. Carolina, "Human connective tissues microscope slide set," http://www.carolina.com/ (accessed February 2017).
26. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed., Prentice Hall (2002).
27. J. M. Tenenbaum, "Accommodation in computer vision," Department of Computer Science, Stanford University (1970).
28. J. F. Brenner et al., "An automated microscope for cytologic research: a preliminary evaluation," J. Histochem. Cytochem. 24(1), 100–111 (1976). https://doi.org/10.1177/24.1.1254907
29. S. Gabarda et al., "Image denoising and quality assessment through the Rényi entropy," Proc. SPIE 7444, 744419 (2009). https://doi.org/10.1117/12.826153
30. S. Sumathi, L. A. Kumar, and P. Surekha, Computational Intelligence Paradigms for Optimization Problems Using MATLAB®/SIMULINK®, CRC Press, New York (2016).
31. A. Prudencio, J. Berent, and D. Sage, "Extended depth of field plugin," http://bigwww.epfl.ch/demo/edf (accessed June 2017).
32. Carl Zeiss Microscopy GmbH, "Objectives from Carl Zeiss," https://www.zeiss.com/microscopy/int/home.html (accessed June 2017).
33. R. Hurtado et al., "Extending the depth-of-field for microscopic imaging by means of multifocus color image fusion," Proc. SPIE 9578, 957811 (2015). https://doi.org/10.1117/12.2188927
34. K. R. Spring and M. W. Davidson, "Depth of field and depth of focus," Nikon MicroscopyU, https://www.microscopyu.com/microscopy-basics/depth-of-field-and-depth-of-focus (accessed December 2017).
Biography

Román Hurtado-Pérez received his bachelor's degree in computational systems and his master's degree from the Polytechnic University of Tulancingo (UPT) in 2004 and 2013, respectively. He is a PhD student in optomechatronics at UPT. His current research areas include multifocus image fusion, autofocusing, GPUs, and computer vision.

Carina Toxqui-Quitl is an assistant professor at the Polytechnic University of Tulancingo. She received her BS degree from the Puebla Autonomous University, Mexico, in 2004. She received her MS and PhD degrees in optics from the National Institute of Astrophysics, Optics, and Electronics in 2006 and 2010, respectively. Her current research areas include image moments, multifocus image fusion, wavelet analysis, and computer vision.

Alfonso Padilla-Vivanco received his bachelor's degree in physics from the Puebla Autonomous University, Mexico, and his MS and PhD degrees, both in optics, from the National Institute of Astrophysics, Optics, and Electronics in 1995 and 1999, respectively. In 2000, he held a postdoctoral position in the physics department at the University of Santiago de Compostela, Spain. He is a professor at the Polytechnic University of Tulancingo. His research interests include optical information processing, image analysis, and computer vision.

J. Félix Aguilar-Valdez received his BS degree in physics from the National Autonomous University of Mexico in 1980. He received his MS and PhD degrees in optics from the Center for Scientific Research and Higher Education of Ensenada, Baja California, México, in 1994. His current research areas include high-resolution microscopy, confocal microscopy, near-field diffraction, and microscopic imaging.

Gabriel Ortega-Mendoza received his PhD from the FCFM-BUAP, Puebla, México, in 2013 and has been a full professor at the Universidad Politécnica de Tulancingo, Hidalgo, México, since 2013. His research areas include multifocus image fusion, image microscopy, optical fiber lasers, plasmon resonance, and the manipulation of photothermally induced microbubbles.