Open Access
11 May 2018 Region-based multifocus image fusion for the precise acquisition of Pap smear images
Santiago Tello-Mijares, Jesús Bescós
Abstract
A multifocus image fusion method to obtain a single focused image from a sequence of microscopic high-magnification Papanicolaou source (Pap smear) images is presented. These images, each captured at a different position of the microscope lens, frequently show partially focused cells or parts of cells, which makes them impractical for the direct application of image analysis techniques. The proposed method obtains a focused image with a high preservation of the original pixel information while keeping the visibility of fusion artifacts negligible. The method starts by identifying the best-focused image of the sequence; then, it performs a mean-shift segmentation over this image; the focus level of the segmented regions is evaluated in all the images of the sequence, and the best-focused regions are merged into a single combined image; finally, this image is processed with an adaptive artifact removal process. The combination of a region-oriented approach, instead of block-based approaches, and a minimal modification of the value of focused pixels in the original images achieves a highly contrasted image with no visible artifacts, which makes this method especially convenient for the medical imaging domain. The proposed method is compared with several state-of-the-art alternatives over a representative dataset. The experimental results show that our proposal obtains the best and most stable quality indicators.

1. Introduction

Methods used in microscopic autofocus systems (MAFSs) are generally based on maximizing a focus measure obtained from a captured image; this measure is evaluated as a function of the position of the lens along the Z axis. Many works report algorithms that control the motion of the lens along the Z axis in order to efficiently find the best position: based on the evaluation of that focus measure, they obtain the single best-focused image.1

In the case of a MAFS applied to cytology observation, when working with targets of high magnification—around 40×—one of the principal issues is that in many cases, cells are in fact located on different Z levels in the slide, even corresponding to different ranges of the depth-of-field (DoF) of the lens. In these cases, the best focused image selected by a classical autofocus method will include unfocused parts, hence missing important information. A solution is to somehow combine a set of images captured with different DoFs to obtain a fully focused image.

Image fusion is the process of combining relevant visual information (i.e., important, complementary, and redundant) from multiple input images into a single resulting image. This should be achieved without introducing artifacts, so that the resulting image contains more accurate, stable, and complete information2–4 than the input images, therefore making it more suitable for human perception and for later processing operations (e.g., segmentation and feature extraction). For the autofocus application, this resulting image might be obtained by applying multifocus image fusion (MFIF) techniques: a set of N images is captured from a static scene at different focus levels, and the focused objects in this set of images are fused together to create a sharp image with all those relevant objects fully focused.1,3–5 While the reported MFIF methods are generally applied to fuse two images of a scene, the underlying techniques can be adapted to fuse a larger set of images, as we propose.

The context of our work is a project (see the Acknowledgments section for details) to support the early diagnosis of cervical intraepithelial neoplasia in the rural areas of the Coahuila State (Mexico). In these areas, for cultural reasons, women frequently refuse to travel to the capital until the symptoms of the disease become unbearable. The objective of the project is to enable the Papanicolaou test by automating the acquisition of tissue samples in rural areas and the remote transmission of selected images of these samples for diagnosis by specialists. This requires capturing hundreds of focused images per tissue sample (see our autofocus contributions in Ref. 6) and analyzing these images to identify and segment cervical nuclei (see our contributions in this area in Ref. 7). In this paper, we target the enhancement of the captured images via MFIF techniques.

MFIF techniques operate on a set of N input images of a single scene, each with a different DoF. Overall, these images are first partitioned into generally homologous regions. Regions sharing the same location in the set of images are then evaluated, and the region with the highest focus measure is selected; finally, the selected regions, usually coming from different images, are fused to compose the final focused image. There are many MFIF algorithms reported in the literature; a comprehensive review can be found in Refs. 8 and 9. MFIF techniques are broadly used in many application fields, such as microscopy,10 biology,11 and medical imaging.12

For a fair comparison, in this paper we analyze and compare against works using focus measures similar to those of our base autofocus system (i.e., transformed-domain measures6). In this direction, the work in Ref. 1 applies an 8×8 block discrete cosine transform (DCT) to two input images; then, it compares homologous blocks of both images using the variance of the coefficients and selects the block with the highest value; finally, it applies the inverse DCT to the image composed by the selected blocks. The authors also propose a variant that applies a consistency verification index for block selection, in order to enhance the resulting image quality. The work in Ref. 4 is similar to that in Ref. 1 but uses a different measure to compare blocks. Block-based methods generally present artifacts, because parts of focused cells at different levels of the DoF might belong to the same block. Kumar5 uses the discrete cosine harmonic wavelet transform (DCHWT), a multiscale technique (three levels, in this case). As these multiscale methods involve decimation, most pixels in the resulting image do not keep the original pixel values of any of the source images. Recently, some fusion techniques operating in the gradient domain have been reported, such as that in Ref. 3. This work, also using a multiscale approach, uses a focus measure based on the saliency structure of the image, and it is designed to operate on well-known test images (e.g., flower, clock, pepsi) with just two objects (one focused and the other unfocused).

Some recent works present similar fusion techniques applied to combine different sources of information into a single image, not necessarily arising from a multifocus situation. Liu et al.13 describe an MFIF method that separates source images into “cartoon content” and “texture content” via an improved iterative reweighted decomposition algorithm; fusion rules are designed to separately fuse both types of content, and finally, the fused cartoon and texture components are combined. The technique naturally approximates the morphological structure of the scene. The work in Ref. 14 presents a medical application to diagnose vascular diseases; the authors use a type of wavelet transform combined with an averaging-based fusion model to fuse osseous and vascular information together, resulting in a rapid MFIF algorithm, less complex but still very effective, with very low memory requirements. For a similar objective, Dogra et al.15 propose an effective image fusion method also working in the wavelet domain, along with a preprocessing of the source images with a selected sequence of spatial and transformed-domain techniques to create a highly informative fused image for osseous-vascular 2-D data.

In this paper, we propose an MFIF method that analyzes sequences of up to 15 microscopy input images corresponding to different levels of DoF of the same “slide-scene.” We propose (Sec. 2) an object-based approach, which dramatically reduces the visibility of fusion-generated artifacts while keeping the focused parts of the input images intact. To evaluate our results, we compare against five existing techniques (Sec. 3) by testing over 50 realistic and practical Pap smear image sequences, and over the two blurred microscopic images provided by Ref. 11. Finally, conclusions are presented in Sec. 4.

2. Proposed Multifocus Microscope-Image Fusion Method

Figure 1 illustrates the general flow of the proposed MFIF method, which is further detailed in the following subsections. The starting point is a set of images (15 in our experiments, but 2 in other works we compare to) captured with the lens at varying positions along the Z axis. The first step, which is not the topic of this paper, is the selection of the “best-focused image” of the set (see details in Ref. 6). Let {Ii, i = 1, …, N} be this set of input images (Fig. 1) and let Ibf be the best-focused image, where i = bf is its index in the set. This image is first coarsely segmented to identify its main regions, which are considered the main scene objects, each represented by a binary mask. Then, for each scene object or segmented region, its mask is applied to the set of input images, and a focus measure is obtained for that region in every image of the set. A preliminary image, which we name the “combined image,” I_c, is then generated by replacing in the best-focused image each segmented region with the corresponding best-focused region in the set. Finally, the artifacts generated at the contours of these combined regions are removed with a total variation-based filter16 to obtain the “final focused image,” I_ff.
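As a reading aid, the sketch below (Python) summarizes the pipeline of Fig. 1; all helper names are hypothetical placeholders, with select_best_focused and to_gray standing for the autofocus measure of Ref. 6 and a color-to-grayscale conversion, and the remaining steps sketched in the following subsections.

```python
# High-level sketch of the proposed fusion pipeline (Fig. 1).
# All helper functions are hypothetical placeholders: select_best_focused is the
# autofocus measure of Ref. 6, to_gray a grayscale conversion, and the remaining
# steps (mean_shift_segment, dct_energy, combine_regions, remove_artifacts) are
# sketched in Secs. 2.1-2.3.
def fuse_stack(images):
    """images: list of N registered frames (numpy arrays) of the same slide-scene."""
    bf = select_best_focused(images)                 # index of the best-focused frame
    labels = mean_shift_segment(images[bf])          # Sec. 2.1: coarse region map (I_seg)
    energies = [dct_energy(to_gray(im)) for im in images]        # Sec. 2.2: focus energy
    I_c, I_diff = combine_regions(images, labels, energies, bf)  # Sec. 2.2: merge regions
    I_ff = remove_artifacts(I_c, I_diff, images)     # Sec. 2.3: TV-based artifact removal
    return I_ff
```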

Fig. 1

Proposed multifocus image fusion method for a set of microscopic high-magnification Pap smear images.


2.1. Mean-Shift Segmentation

We use the mean-shift algorithm17 to obtain a coarse segmentation, I_seg, of the best-focused image, Ibf (Fig. 1), into Nc regions or clusters. Mean-shift is a nonparametric technique for analyzing multimodal data that has multiple applications in pattern analysis,18 including image segmentation. We start from the observation that cells have a predetermined size and colors that are always much darker than the background. We characterize each image pixel by a vector [L,a,b,x,y] or [L,x,y], depending on whether the input images are RGB or grayscale: [L,a,b] describes the pixel color, and [x,y] its coordinates. We then run the mean-shift algorithm over this five-dimensional or three-dimensional distribution with a bandwidth value h = 20, which was selected so that cell regions and background are segmented into more than one cluster; this is required for the subsequent assignment process to be effective. A proper selection of the h parameter is somewhat application dependent: it should be larger than the smallest nonfocused region. However, if this requirement is met, its effect on the results is negligible. Note that block-based approaches also prefer unfocused regions to be greater than the block size, but there is usually no flexibility in the selection of this size.
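For illustration, the following sketch shows one possible implementation of this step for an RGB input, using scikit-image and scikit-learn; the authors' own implementation may differ, and bin_seeding is an assumption made here to keep the clustering tractable.

```python
# A minimal sketch of the coarse mean-shift segmentation (Sec. 2.1), assuming an
# RGB input; for grayscale inputs the [L, x, y] variant would be built analogously.
import numpy as np
from skimage.color import rgb2lab
from sklearn.cluster import MeanShift

def mean_shift_segment(img_rgb, h=20):
    """Cluster [L, a, b, x, y] pixel features with bandwidth h; returns a label map."""
    lab = rgb2lab(img_rgb)                                 # color part of the feature vector
    rows, cols = np.mgrid[0:lab.shape[0], 0:lab.shape[1]]  # spatial part (y, x)
    feats = np.column_stack([lab.reshape(-1, 3),
                             cols.reshape(-1, 1),          # x coordinate
                             rows.reshape(-1, 1)])         # y coordinate
    ms = MeanShift(bandwidth=h, bin_seeding=True)          # bin_seeding speeds up convergence
    labels = ms.fit_predict(feats)
    return labels.reshape(lab.shape[:2])
```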

2.2. Preliminary Image Fusion Based on the DCT Focus Measure

The next step is to generate a “combined image,” I_c (follow this process in Fig. 2), which is a merging of the best-focused parts of the set of input images. First, a focus measure is locally obtained for every image of the set, following the method described in Ref. 6: in brief, for every image of the input set, Ii, an 8×8 block DCT is performed, the sum of the absolute values of its 32 lower-frequency AC coefficients is calculated, and a same-size energy image, Ei, is obtained by assigning to each pixel the calculated energy of its corresponding block. This results in a set of DCT energy images, {Ei, i = 1, …, N} [Fig. 2(a)].
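A minimal sketch of this block-DCT focus energy is shown below, assuming a grayscale float image; the exact zig-zag ordering used to pick the 32 lowest-frequency AC coefficients is an assumption (here a coarse ranking by u + v).

```python
# A minimal sketch of the 8x8 block-DCT focus energy of Ref. 6 (Sec. 2.2),
# assuming a 2-D grayscale float image; the tie-breaking of the 32 low-frequency
# AC coefficients is an assumption.
import numpy as np
from scipy.fft import dctn

def dct_energy(gray, block=8, n_ac=32):
    """Per-pixel energy: sum of |low-frequency AC DCT coefficients| of each 8x8 block."""
    h, w = gray.shape
    u, v = np.meshgrid(np.arange(block), np.arange(block), indexing="ij")
    order = np.argsort((u + v).ravel(), kind="stable")   # coarse zig-zag ranking
    keep = np.zeros(block * block, dtype=bool)
    keep[order[1:1 + n_ac]] = True                       # skip DC, keep 32 AC terms
    keep = keep.reshape(block, block)

    energy = np.zeros_like(gray, dtype=float)
    for r in range(0, h - h % block, block):
        for c in range(0, w - w % block, block):
            coeffs = dctn(gray[r:r + block, c:c + block], norm="ortho")
            energy[r:r + block, c:c + block] = np.abs(coeffs[keep]).sum()
    return energy
```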

Fig. 2

Obtaining the combined image, I_c, from the set of input images, {Ii, i = 1, …, N}.


The topology of the I_c image is the same as that of the segmented image, I_seg. Every cluster or region in I_seg is used to generate a mask, {Mr, r = 1, …, Nc} [Fig. 2(b)]. For every region, its corresponding mask is applied over every energy image, {Ei, i = 1, …, N}, and the mean energy of the masked region is calculated for each of these energy images. The index, i_max, of the energy image showing the maximum energy for that region is obtained; then, the corresponding region of the I_c image is initialized with the pixel values of the homologous region of the Ii_max image from the {Ii, i = 1, …, N} set. In parallel, in the I_diff image (see Fig. 2), we keep for every region the absolute difference between the i_max index and the index i = bf of the best-focused image, which indicates the degree of out-of-focus of that region, or the object focus level, ranging from 0 (black level: the region is best focused in the Ibf image) to N (white level: the region is best focused in the image with the worst global focus measure).
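The region-wise selection that builds I_c and I_diff can be sketched as follows, reusing the hypothetical names of the previous sketches; this is an illustrative rendering of the process, not the authors' code.

```python
# A minimal sketch of the region-wise combination of Sec. 2.2 producing I_c and I_diff.
import numpy as np

def combine_regions(images, labels, energies, bf):
    """Replace each segmented region of the best-focused frame with its best-focused version."""
    I_c = images[bf].copy()
    I_diff = np.zeros(labels.shape, dtype=np.uint8)
    for r in np.unique(labels):
        mask = labels == r
        # index of the frame with the highest mean DCT energy inside this region
        i_max = int(np.argmax([e[mask].mean() for e in energies]))
        I_c[mask] = images[i_max][mask]
        I_diff[mask] = abs(i_max - bf)       # per-region degree of out-of-focus
    return I_c, I_diff
```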

As opposed to other methods, such as Refs. 1 and 4, where the local focus comparison among the set of input images is performed block by block, we propose to compare region by region, using the segmentation of the best-focused image to define such regions. This prevents highly visible block artifacts from appearing anywhere. Instead, contour artifacts might appear at the boundaries of the identified regions, where they are much less visible. The visibility of these contour artifacts depends on the aforementioned degree of out-of-focus of each combined region. In the next subsection, we propose to use a total variation-based filter to eliminate the contour artifacts of the combined image, I_c.

2.3. Artifacts Removal

The next step is to generate the final focused image, I_ff, by attenuating the artifacts or false contours that may appear in the combined image, I_c, due to merging regions from different input images. We propose to attenuate these artifacts by applying a total variation-based diffusion method.16 This method is applied only in the artifact-prone areas, according to the information in the I_diff image, hence keeping most of the image pixels intact. Observe that the method aims to mitigate these false contours, not real object contours.

2.3.1. Generation of a mask of the artifact-prone areas

In Fig. 3, we show several examples of artifacts generated at the boundaries of the regions of three I_c images. Our proposal is to process I_c pixels only at the edges defined by I_diff, i.e., only at the boundaries between regions with different degrees of focus, in order to obtain an image without artifacts on these boundaries while keeping original pixels in most of the resulting image. For this purpose, we first obtain an edges image from I_diff (see Fig. 4). Then, as the extent of the artifacts between adjacent regions is expected to be proportional to the difference between their degrees of focus, we perform an adaptive morphological dilation over the thresholded edges image, using a structuring element with a size proportional to the intensity of every edge. The resulting mask, M_artifacts (see Fig. 4), defines where the following enhancement steps will be applied.
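One possible rendering of this mask generation is sketched below; the proportionality factor k and the square structuring element are assumptions, since the paper does not fix them.

```python
# A minimal sketch of the artifact-prone mask M_artifacts (Sec. 2.3.1): edges of
# I_diff are dilated with a structuring element whose size grows with the
# focus-level difference across the edge; k is an assumed proportionality factor.
import numpy as np
from scipy import ndimage

def build_artifact_mask(I_diff, k=2):
    grad = ndimage.morphological_gradient(I_diff.astype(int), size=3)  # edge strength
    mask = np.zeros(I_diff.shape, dtype=bool)
    for level in np.unique(grad[grad > 0]):
        radius = max(1, int(k * level))                   # dilation grows with the difference
        se = np.ones((2 * radius + 1, 2 * radius + 1), dtype=bool)  # square structuring element
        mask |= ndimage.binary_dilation(grad == level, structure=se)
    return mask
```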

Fig. 3

Examples of contour artifacts at the boundaries of the merged regions: (a) I_diff; (c) I_c; (b) and (d) detailed views of the artifacts (see white arrows) generated by the region merging.


Fig. 4

Example of the generation of the artifacts mask: (a) I_c, (b) I_diff, (c) edges in I_diff, and (d) dilated and thresholded edges in M_artifacts.


A main contribution of our method is that artifact removal is performed only in the areas that may contain artifacts, hence preserving original pixels in most of the image, which is critical for medical imaging applications. The works in Refs. 1 and 4, as they perform image fusion over DCT blocks, are prone to generate block artifacts, which are not later eliminated. In the multiscale methods, such as Refs. 3 and 5, the original pixels are not usually preserved in the fused image: the resulting image in Ref. 5 does not present artifacts due to the nature of the method, which modifies pixel intensities via averaging, resulting in a smoother image; the method proposed in Ref. 3 eliminates artifacts just in the “unknown zone,” which is a predefined area along the boundary between the two considered source images with different focus levels.

2.3.2. Artifacts removal via total-variation filtering

Let us consider that the combined image I_c, which contains contour artifacts at the locations defined by the M_artifacts mask, is a noisy image; let I_ff, the final focused image, be the desired sharp and clean image. We can then write I_c = I_ff + n, where n is the additive noise, which we assume concentrated in the pixels indicated by M_artifacts. We obtain I_ff from I_c using a total variation filter. These filters were first suggested in Ref. 16 and are based on the minimization of an energy functional subject to the constraint ‖u − u0‖ = σ², where u, u0, and σ², for this work, are, respectively, the gradient of the true image (I_ff), the gradient of the observed image (I_c), and the variance of the noise, n = (u − u0). Then, the iterative equation to obtain the desired clean image is I_ff^t = I_c^t − λ(u − u0^t), where I_c^0 = I_c, and λ is a regularization parameter, which we set to λ = 0.1 to preserve the smallest structures.

In order to obtain u0 for the first iteration, we apply a Laplacian filter to I_c (Fig. 5). To estimate u, we consider that it equals the gradient of I_c except in the edge areas defined by M_artifacts. In these areas, for every pixel, we assume that u equals the gradient, taken from {ui, i = 1, …, N}, of the source image showing the maximum local variance around that pixel. The gradients of the source images are also obtained by applying a Laplacian filter, and the local variance is computed in 3×3 windows. Once we get I_ff^t for this first iteration, i.e., I_ff^0, we set I_c^{t+1} = I_ff^t and repeat the process until it converges to I_ff. Figure 5 shows an example of the evolution of the variance of the gradient difference, u − u0^t, and of the obtained image, I_ff^t, at every iteration.
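The iterative filtering just described can be sketched as follows for single-channel images, using the mask of Sec. 2.3.1 (remove_artifacts in the earlier pipeline sketch would combine both steps); the stopping tolerance and iteration cap are assumptions.

```python
# A minimal sketch of the total-variation-style artifact attenuation of Sec. 2.3.2,
# assuming grayscale float frames; tol and max_iter are assumed stopping criteria.
import numpy as np
from scipy import ndimage

def tv_remove_artifacts(I_c, mask, sources, lam=0.1, max_iter=50, tol=1e-4):
    """Iteratively pull the gradient of I_c toward the sharpest source gradient on the mask."""
    if not np.any(mask):
        return I_c.astype(float)
    srcs = [s.astype(float) for s in sources]
    # target gradient u: Laplacian of the source with maximum local 3x3 variance per pixel
    lapls = np.stack([ndimage.laplace(s) for s in srcs])
    local_var = np.stack([ndimage.generic_filter(s, np.var, size=3) for s in srcs])
    best = np.argmax(local_var, axis=0)
    u = np.take_along_axis(lapls, best[None], axis=0)[0]

    out = I_c.astype(float)
    for _ in range(max_iter):
        u0 = ndimage.laplace(out)               # gradient of the current estimate
        diff = np.where(mask, u - u0, 0.0)      # act only on artifact-prone pixels
        out = out - lam * diff                  # I_ff^t = I_c^t - lambda * (u - u0^t)
        if np.var(diff[mask]) < tol:            # stop once the gradient mismatch settles
            break
    return out
```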

Fig. 5

Example of the iterative artifact elimination process: (a) evolution of the gradient difference u − u0^t and (b) comparison of the I_c image and of the I_ff image obtained after the process converges; white arrows indicate the main removed artifacts.


3. Experimental Results

To assess the potential of our approach, we compare the proposed method with the works reported in Ref. 1 (its two variants, denoted 1-1 and 1-2) and Refs. 3, 4, and 5, as identified in the captions of Figs. 6–8 and in Tables 1 and 2. The code to run these reported algorithms was kindly provided by the authors: the software is available together with the papers. The set of images used for the experimental evaluation, hereinafter the dataset, consists of the microscopic image pair from the MFIF work reported in Ref. 11 (see Fig. 6) and a set of 50 Pap smear image sequences (320×240 pixels, RGB), each containing 15 images with different DoFs and with focused cells spread across several of them (see examples in Figs. 7 and 8).

Fig. 6

Visual results for the first experiment. The top row shows full images and the bottom row shows the corresponding image detail. Columns include source images A (I) and B (II); the resulting images for the compared methods: (a) Haghighat, 2011(1) [Ref. 1-1], (b) Haghighat, 2011(2) [Ref. 1-2], (c) Kumar, 2013 [Ref. 5], (d) Zhou, 2014 [Ref. 3], and (e) Phamila, 2014 [Ref. 4]; and (f) the resulting image of the proposed method, I_ff.


Fig. 7

Visual results for the second experiment. The top row shows the sequence of input images for sequence 3 (I). The bottom two rows show full images and the corresponding image detail for the final fused images obtained by each compared method: (a) Haghighat, 2011(1) [Ref. 1-1], (b) Haghighat, 2011(2) [Ref. 1-2], (c) Kumar, 2013 [Ref. 5], (d) Zhou, 2014 [Ref. 3], (e) Phamila, 2014 [Ref. 4], and (f) the resulting image of the proposed method, I_ff.


Fig. 8

Visual results for the second experiment. The top row shows the sequence of input images for sequence 37 (I). The bottom two rows show full images and the corresponding image detail for the final fused images obtained by each compared method: (a) Haghighat, 2011(1) [Ref. 1-1], (b) Haghighat, 2011(2) [Ref. 1-2], (c) Kumar, 2013 [Ref. 5], (d) Zhou, 2014 [Ref. 3], (e) Phamila, 2014 [Ref. 4], and (f) the resulting image of the proposed method, I_ff.


3.1. Quality Metrics

The objective evaluation of a fused image is a difficult task because there is no universally accepted metric to evaluate an image fusion process.2 A frequent solution is the use of different metrics to test the fusion results from different viewpoints.19 Quality metrics for MFIF can be classified depending on the availability of the target image:20 metrics known as full-reference assume that a complete (distortion-free) reference image is available; however, in many practical applications the reference image is not available, so “no-reference” or “blind” quality metrics are used. As our dataset includes images captured in practical situations, no reference images are available. An alternative to these MFIF-based quality metrics is to evaluate focus metrics on the resulting fused image, as the aim in this scenario is to obtain a perfectly focused image. We describe below the metrics we have used.

No-reference metrics—The Petrovic metrics,21,22 based on gradient information, include three indicators: Q^{AB/F}, which represents in a normalized way the total information transferred from the source images (A, B) to the fused image (F); and L^{AB/F} and N^{AB/F}, which evaluate the complement to Q^{AB/F}, i.e., the loss of information, but considering only the locations where the gradient of the source images is greater (L^{AB/F}) or lower (N^{AB/F}) than that of the fused image. We have computed the Q^{AB/F} indicator as an overall quantitative measure of the fusion quality. For M×N images, Q^{AB/F} is obtained according to

Eq. (1)

$$Q^{AB/F} = \frac{\sum_{n,m}^{N,M} \left( Q^{AF}_{n,m}\, w^{A}_{n,m} + Q^{BF}_{n,m}\, w^{B}_{n,m} \right)}{\sum_{n,m}^{N,M} \left( w^{A}_{n,m} + w^{B}_{n,m} \right)},$$
where Q^{AF} and Q^{BF} estimate the edge preservation from the A and B source images, and w^{A} and w^{B} are local perceptual weighting factors, usually corresponding to the gradients of these source images. A value of Q^{AB/F} = 0 means complete loss of information and Q^{AB/F} = 1 represents ideal fusion. This indicator is defined for the case of two source images (A and B); we have adapted it to the dataset sequences including 15 source images (from I1 to I15):

Eq. (2)

$$Q = Q^{I_1 \ldots I_{15}/F} = \frac{\sum_{n,m}^{N,M} \left( Q^{I_1 F}_{n,m}\, w^{I_1}_{n,m} + \cdots + Q^{I_{15} F}_{n,m}\, w^{I_{15}}_{n,m} \right)}{\sum_{n,m}^{N,M} \left( w^{I_1}_{n,m} + \cdots + w^{I_{15}}_{n,m} \right)}.$$
Focus metrics—These include the standard deviation of the normalized pixel intensities of the fused image, F, the entropy of this image, the average gradient magnitude (which indicates sharpness), etc. We have selected the standard deviation because it has been demonstrated to provide the best overall performance in estimating the focus level for nonfluorescence microscopy applications, including Pap smear and blood smear samples.23–26
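For illustration, both indicators can be computed as sketched below, assuming the per-source edge-preservation maps and perceptual weights of the Petrovic metric have already been estimated following Refs. 21 and 22.

```python
# A minimal sketch of the two quality indicators used in Sec. 3; the computation
# of the per-source edge-preservation maps q_maps and weights w_maps (Refs. 21, 22)
# is assumed to be available and is not reproduced here.
import numpy as np

def fused_q(q_maps, w_maps):
    """Generalized Q of Eq. (2): weighted average of edge preservation over all sources."""
    num = sum(q * w for q, w in zip(q_maps, w_maps)).sum()
    den = sum(w.sum() for w in w_maps)
    return num / den

def fused_sd(F):
    """Focus metric: standard deviation of the normalized intensities of the fused image F."""
    F = F.astype(float)
    F = (F - F.min()) / (F.max() - F.min() + 1e-12)    # normalize to [0, 1]
    return F.std()
```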

3.2. Experiments and Discussion

The first experiment is conducted over the two microscopic images kindly provided by Ref. 11, each showing different focused parts of the same object [see Figs. 6(I) and 6(II)]. We have applied the aforementioned five fusion algorithms and our proposed method to these source images. Figure 6 shows the resulting images along with a detail of each, in order to visually or qualitatively assess the performance of each method. Table 1 presents the quantitative evaluation of this first experiment.

Table 1

Quantitative results for the first experiment: performance quality metrics for the final fused microscope image obtained by each method.

Method | SD | Q^{AB/F}
Haghighat, 2011(1) [Ref. 1-1], Fig. 6(a) | 0.9877 | 0.8605
Haghighat, 2011(2) [Ref. 1-2], Fig. 6(b) | 0.9783 | 0.7629
Kumar, 2013 [Ref. 5], Fig. 6(c) | 0.9534 | 0.8829
Zhou, 2014 [Ref. 3], Fig. 6(d) | 0.9957 | 0.8884
Phamila, 2014 [Ref. 4], Fig. 6(e) | 0.9843 | 0.8436
Proposed, I_ff, Fig. 6(f) | 1 | 0.8919

From a qualitative point of view, we observe that the methods of Refs. 1-1 and 4 [Figs. 6(a) and 6(e)] present highly visible block artifacts; these methods compare the DCT energy in homologous 8×8 blocks, which generates comparison errors when the images contain nonsquare elements at different depths of field, as is the case with round cervical cells. The enhancement proposed in Ref. 1-2 [Fig. 6(b)], based on a consistency verification index to decide which block is selected, removes block artifacts in this example but at the expense of a poor visual result. The multiscale approach proposed in Ref. 5 [Fig. 6(c)], which does not keep the original pixel values, presents a noticeable contrast reduction. The method reported in Ref. 3 and the proposed method [Figs. 6(d) and 6(f)] yield acceptable visual results.

From a quantitative point of view (see Table 1), it is interesting to contrast each quality measure with the perceived visual result: the SD measure yields very good values for images with highly noticeable block artifacts [as in Figs. 6(a) and 6(e)], because these artifacts increase the image variance; the Q^{AB/F} measure seems to be more in line with the qualitative findings.

Independently of these observations, Table 1 indicates that the proposed method behaves better in the light of both quality measures.

The second experiment targets the 50 sequences of Pap smear images obtained from the autofocus operation of a microscope. While reported works have focused on fusing two blurred images, many of them have applied their method in an iterative way to more than two input images, which is our practical context. Figures 7 and 8 show qualitative results for two of these sequences, and Table 2 and Fig. 9 compile the quantitative evaluation for the 50 sequences.

Table 2

Quantitative results for the second experiment: performance quality metrics (mean and deviation) for the final fused microscope images obtained by each method applied to the 50 image sequences.

Method | SD | Q
Haghighat, 2011(1) [Ref. 1-1] | 0.9632 ± 0.0187 | 0.9920 ± 0.0072
Haghighat, 2011(2) [Ref. 1-2] | 0.9832 ± 0.0134 | 0.9897 ± 0.0152
Kumar, 2013 [Ref. 5] | 0.8397 ± 0.0726 | 0.8941 ± 0.1011
Zhou, 2014 [Ref. 3] | 0.9424 ± 0.0359 | 0.6630 ± 0.0678
Phamila, 2014 [Ref. 4] | 0.9785 ± 0.0106 | 0.9940 ± 0.0081
Proposed, I_ff | 0.9987 ± 0.0045 | 0.9974 ± 0.0041

Fig. 9

Quantitative results for the second experiment. Performance quality metrics SD and Q for the final fused microscope images obtained by each method for every image sequence.


From a qualitative point of view, we clearly observe in Fig. 7 that the methods based on 8×8 DCT blocks [Figs. 7(a), 7(b), 7(e), 8(a), and 8(e)] cannot avoid generating block artifacts. We can also observe that the multiscale approaches [Figs. 7(c), 7(d), 8(c), and 8(d)], which process the original pixels so that their values are never directly transferred to the final image, suffer from a severe loss of definition when the technique is applied to a large number of source images (15 images, instead of 2, for this experiment): several of the objects of interest are averaged, resulting in a loss of information and even the loss of complete cells. This is the situation for the method proposed in Ref. 3 [Figs. 7(d) and 8(d)], which, while losing very little information overall, sometimes loses entire objects because it only compares two areas or regions in the image (focused and unfocused).

From a quantitative point of view, Table 2 indicates that the proposed method also behaves better for this part of the dataset, which includes 50 image sequences. Apart from the mean values of the quality indicators, Table 2 includes their standard deviations, which show that the results obtained by the proposed method are also the most stable. Finally, Fig. 9 further illustrates the stability of the tested methods that obtained the best global results. We observe that the proposed method systematically outperforms the other approaches in light of these quality indicators.

4. Conclusion

This paper presents an object-oriented approach to the problem of obtaining a single focused image from a set of microscopic images captured from a single slide that includes objects that are each focused in a different image of the set. The proposed MFIF method shows several specific advantages with respect to other state-of-the-art methods: first, it is driven by a region-based segmentation, which prevents the highly visible artifacts that may appear in block-based methods; second, it does not apply any kind of image transform, hence respecting the pixel values of all focused regions, which is crucial for medical imaging applications; and finally, it includes an artifact-removal technique, which operates only where required and adapts to the expected extent of the fusion-generated artifacts. Results, obtained over a representative dataset and compared to other published approaches, prove the validity of our proposal.

Appendices

Appendix A: Pap Smear Image Sequences Dataset for Multifocus Image Fusion

A.1. Extra Material for Download

The extra materials are available for download (Ref. 27) and contain the following: the entire dataset of 50 cervical cell image sequences; the proposed region-based MFIF method, for comparison with the other MFIF methods (as a MATLAB interface); and the complete fusion results.

Disclosure

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

We want to thank Dr. Maria de la Paz Hernandez for the provided Pap-smear samples. Special thanks also go to Dr. Fomuy Woo, Dr. Maura Huerta, Dr. Victor Campos, and cytotechnologist Laura Meraz for their assistance in the ground truth generation (Hospital ISSSTE, México), and to Ing. Edgar Valdez for his work in the Pap smear image acquisition.

References

1. 

M. B. A. Haghighat, A. Aghagolzadeh and H. Seyedarabi, “Multi-focus image fusion for visual sensor networks in DCT domain,” Comput. Electr. Eng., 37 (5), 789 –797 (2011). https://doi.org/10.1016/j.compeleceng.2011.04.016 Google Scholar

2. 

H. Zhao et al., “Multi-focus image fusion based on the neighbor distance,” Pattern Recognit., 46 (3), 1002 –1011 (2013). https://doi.org/10.1016/j.patcog.2012.09.012 Google Scholar

3. 

Z. Zhou, S. Li and B. Wang, “Multi-scale weighted gradient-based fusion for multi-focus image,” Inf. Fusion, 20 60 –72 (2014). https://doi.org/10.1016/j.inffus.2013.11.005 Google Scholar

4. 

Y. Phamila and R. Amutha, “Discrete cosine transform based fusion of multi-focus images for visual sensor networks,” Signal Process., 95 161 –170 (2014). https://doi.org/10.1016/j.sigpro.2013.09.001 Google Scholar

5. 

B. S. Kumar, “Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform,” Signal Image Video Process., 7 (6), 1125 –1143 (2013). https://doi.org/10.1007/s11760-012-0361-x Google Scholar

6. 

S. Tello-Mijares et al., “Efficient autofocus method for sequential automatic capturing of high-magnification microscopic images,” Chin. Opt. Lett., 11 (12), 121102 (2013). https://doi.org/10.3788/COL201311.121102 Google Scholar

7. 

S. Tello-Mijares, J. Bescós and F. Flores, “Nuclei segmentation and identification in practical pap-smear images with multiple overlapping cells,” J. Med. Imaging Health Inf., 6 (4), 992 –1000 (2016). https://doi.org/10.1166/jmihi.2016.1750 Google Scholar

8. 

L. Chen, J. Li and C. L. Chen, “Regional multifocus image fusion using sparse representation,” Opt. Express, 21 (4), 5182 –5197 (2013). https://doi.org/10.1364/OE.21.005182 Google Scholar

9. 

Z. Liu et al., “Fusing synergistic information from multi-sensor images: an overview from implementation to performance assessment,” Inf. Fusion, 42 127 –145 (2018). https://doi.org/10.1016/j.inffus.2017.10.010 Google Scholar

10. 

L. Kong et al., “Multifocus confocal Raman microspectroscopy for rapid single-particle analysis,” J. Biomed. Opt., 16 (12), 120503 (2011). https://doi.org/10.1117/1.3662456 Google Scholar

11. 

J. Tian and L. Chen, “Adaptive multi-focus image fusion using a wavelet-based statistical sharpness measure,” Signal Process., 92 (9), 2137 –2146 (2012). https://doi.org/10.1016/j.sigpro.2012.01.027 Google Scholar

12. 

Q. Guihong, Z. Dali and Y. Pingfan, “Medical image fusion by wavelet transform modulus maxima,” Opt. Express, 9 (4), 184 –190 (2001). https://doi.org/10.1364/OE.9.000184 Google Scholar

13. 

Z. Liu et al., “A novel multi-focus image fusion approach based on image decomposition,” Inf. Fusion, 35 102 –116 (2017). https://doi.org/10.1016/j.inffus.2016.09.007 Google Scholar

14. 

A. Dogra, B. Goyal and S. Agrawal, “Bone vessel image fusion via generalized reisz wavelet transform using averaging fusion rule,” J. Comput. Sci., 21 371 –378 (2017). https://doi.org/10.1016/j.jocs.2016.10.009 Google Scholar

15. 

A. Dogra et al., “Efficient fusion of osseous and vascular details in wavelet domain,” Pattern Recognit. Lett., 94 189 –193 (2017). https://doi.org/10.1016/j.patrec.2017.03.002 Google Scholar

16. 

L. I. Rudin, S. Osher and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Phys. D, 60 (1), 259 –268 (1992). https://doi.org/10.1016/0167-2789(92)90242-F Google Scholar

17. 

K. Fukunaga and L. Hostetler, “The estimation of the gradient of a density function, with applications in pattern recognition,” IEEE Trans. Inf. Theory, 21 32 –40 (1975). https://doi.org/10.1109/TIT.1975.1055330 Google Scholar

18. 

D. Comaniciu and P. Meer, “Mean-shift: a robust approach toward feature space analysis,” IEEE Trans. Pattern Anal. Mach. Intell., 24 603 –619 (2002). https://doi.org/10.1109/34.1000236 Google Scholar

19. 

X. Xia, S. Fang and Y. Xiao, “High resolution image fusion algorithm based on multi-focused region extraction,” Pattern Recognit. Lett., 45 115 –120 (2014). https://doi.org/10.1016/j.patrec.2014.03.018 Google Scholar

20. 

Z. Wang et al., “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process., 13 (4), 600 –612 (2004). https://doi.org/10.1109/TIP.2003.819861 Google Scholar

21. 

C. S. Xydeas and V. Petrović, “Objective image fusion performance measure,” Electron. Lett., 36 (4), 308 –309 (2000). https://doi.org/10.1049/el:20000267 Google Scholar

22. 

V. Petrovic and C. Xydeas, “Objective image fusion performance characterization,” in Tenth IEEE Int. Conf. on Computer Vision (ICCV ’05), 1866 –1871 (2005). https://doi.org/10.1109/ICCV.2005.175 Google Scholar

23. 

A. Santos et al., “Evaluation of autofocus functions in molecular cytogenetic analysis,” J. Microsc., 188 (3), 264 –272 (1997). https://doi.org/10.1046/j.1365-2818.1997.2630819.x Google Scholar

24. 

Y. Sun, S. Duthaler and B. J. Nelson, “Autofocusing in computer microscopy: selecting the optimal focus algorithm,” Microsc. Res. Tech., 65 (3), 139 –149 (2004). https://doi.org/10.1002/jemt.20118 Google Scholar

25. 

X. Y. Liu, W. H. Wang and Y. Sun, “Autofocusing for automated microscopic evaluation of blood smear and pap smear,” in 28th Annual Int. Conf. of the IEEE Engineering in Medicine and Biology Society (EMBS ’06), 4718 –4721 (2006). https://doi.org/10.1109/IEMBS.2006.259263 Google Scholar

26. 

X. Y. Liu, W. H. Wang and Y. Sun, “Dynamic evaluation of autofocusing for automated microscopic analysis of blood smear and pap smear,” J. Microsc., 227 (1), 15 –23 (2007). https://doi.org/10.1111/j.1365-2818.2007.01779.x Google Scholar

Biography

Santiago Tello-Mijares received his BS degree in electronic engineering in 2006 and his PhD degree in electrical engineering science in 2013 from Instituto Tecnológico de la Laguna, Torreón, México, and, in 2017, his PhD degree in telecommunications and informatics engineering from Universidad Autónoma de Madrid, Madrid, Spain. He is currently a titular professor in the Postgraduate Department of Instituto Tecnológico Superior de Lerdo, Lerdo, Mexico. His research interests are biomedical imaging, artificial intelligence, and robotics.

Jesús Bescós received his BS degree in telecommunications engineering in 1993 and his PhD degree in the same field in 2001 from Universidad Politécnica de Madrid, Spain. He has been a professor at the Universidad Autónoma de Madrid since 2003, where he codirects the Video Processing and Understanding Lab. His research interests include the analysis of video sequences, content-based video indexing, and 2-D and 3-D machine vision.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Santiago Tello-Mijares and Jesús Bescós "Region-based multifocus image fusion for the precise acquisition of Pap smear images," Journal of Biomedical Optics 23(5), 056005 (11 May 2018). https://doi.org/10.1117/1.JBO.23.5.056005
Received: 12 February 2018; Accepted: 20 April 2018; Published: 11 May 2018
KEYWORDS: Image fusion, Image segmentation, Image quality, Image processing, Fusion energy, Medical imaging, Visualization
