Automated extraction of homogeneous regions by seeded region shrinkage
Abstract

Drawing a spectrally homogeneous region of interest in a remotely sensed image is a common task for an image analyst when performing, for instance, atmospheric correction or end-member selection. Manually selecting a homogeneous sample of pixels can be tedious and error prone due to the limits of human perception and data visualization. I present a region shrinkage method that automates the extraction of a spectrally homogeneous and spatially contiguous region from a user-selected seed pixel. The proposed technique combines divisive clustering, connected component analysis, and image noise estimation to generate a series of candidate regions of decreasing size until they converge to the seed pixel through similarity space. From these candidate regions, an optimal one is identified that is spectrally homogeneous, spatially contiguous, and as large as possible. Experimental results demonstrate that the proposed method achieved detection rates of up to 95% and false alarm rates below 1% and was robust to the main user input, the seed pixel location.

1.

Introduction

Seed region growing (SRG) techniques are simple, fast, and effective image segmentation algorithms.1,2 These methods require the selection of seed pixels that satisfy some user criterion and then grow the seeds into segments by absorbing adjacent pixels until a statistical, visual, or physical criterion is met. The goal of these algorithms is frequently to segment the entire image3 or to extract a domain-specific region of interest (ROI) such as vegetation,4 cancer masses,5 or road networks.6 Another popular technique, superpixels,7 over-segments an image into patches of similar pixels that help define visually meaningful boundaries. This method requires a priori knowledge, a database of exemplars, and a classifier trained using features based on the classical gestalt theory of grouping. However, these approaches introduce additional requirements not needed when the goal is a simpler problem: extracting spectrally homogeneous ROIs, which need not correspond to an object's visual boundaries.

Drawing accurate and homogeneous ROIs is a common yet often difficult task, even for trained image analysts, when exploiting remote sensing imagery.8 For instance, drawing spectrally homogeneous ROI samples of several materials can be a first step in applying an in-scene atmospheric compensation algorithm such as the empirical line method.9 However, several well-known problems arise when visually exploiting imagery, such as (1) contrast stretch selection, (2) the spatial scale of the image statistics used to compute contrast histograms, and (3) the selection of which spectral bands to display.10 Therefore, image exploitation algorithms should automate any steps that rely on human perception wherever possible.11

There are surprisingly few tools for this common problem of extracting a spectrally homogeneous, spatially contiguous region. For example, a standard image analysis software package such as ENVI, developed by L3Harris, has a seed growing tool. However, it requires (1) drawing a seed polygon or multiple points, (2) providing a threshold that depends on knowledge of the target statistics, and (3) using only a single band. These inputs are susceptible to the human perception issues described above or require a priori knowledge.

To overcome these common limitations, this work proposes a method that requires two inputs that are easier to provide: (1) a single seed point and (2) a target-agnostic noise level threshold that is shown to be robust over a reasonably wide range of values. The method combines divisive clustering, connected component analysis, and image noise estimation to gauge the homogeneity of a series of shrinking candidate ROIs and then selects an optimal ROI from among them. Region shrinkage itself is not new and has previously been proposed to solve problems such as image segmentation.12 However, in the proposed approach, the region shrinkage is guided by the seed pixel and therefore converges to the seed through similarity space. Finally, since the goal is neither image nor object segmentation, the resulting ROI need not comprise the entire underlying physical object.

Figure 1 provides an overview of the problem. The user wants an ROI of a target (e.g., the reflectance panel) and places a seed at its center. The scene is clustered into a series of shrinking ROIs using the proposed method. Each ROI (e.g., #2) is contained by the previous ROI (e.g., #1). The spectral variance of the initial ROIs (#1 to #4) is due to both the target and the background. The next ROIs (#5 to #7) contain only target pixels. The final ROI (#8) is just the seed pixel, which has zero variance. Generally, but not always, the shrinking ROIs have decreasing variance. ROI #4 has a spike because it contains the target and a neighboring panel but is no longer dominated by the background statistics. However, the target is not always as simple as a calibration panel for which ground truth is likely available. Therefore, the goal is to build a classifier that automates finding a threshold to discriminate between the background and target regions without requiring a priori knowledge of either.

Fig. 1

A target (a) is clustered into a series of shrinking ROIs (#1 to #8). The variance of the ROIs (b) is generally decreasing. The optimal ROI (#5) is the largest region below a to-be-determined noise threshold.


This paper is organized as follows. In Sec. 2, the proposed method is described. Section 3 describes the experimental setup used to validate the method. Section 4 presents the experimental results of applying the proposed method to multiband imagery. Section 5 provides the conclusions drawn from this work and recommendations for future research.

2.

Proposed Method

This work proposes to reverse the SRG concept to extract a homogeneous ROI. The intuition behind the approach is that since SRG methods start from a seed and grow outward, they require assumptions of what constitutes closeness between pixels to decide which to absorb. However, by starting with the entire image and proceeding inward, closeness can be learned by how the pixels group together. There are three steps to the proposed method outlined below and detailed in the following sections.

  • 1. Candidate ROI generation. A series of shrinking candidate ROIs are generated using divisive clustering and connected component analysis guided by the user-selected seed.

  • 2. Noise level estimation. A patch-based approach estimates a signal-dependent noise model which is used to assess the spectral homogeneity of each candidate ROI.

  • 3. Optimal ROI selection. The candidate ROI that is spectrally homogeneous in all spectral bands, spatially contiguous, and as large as possible is labeled the optimal ROI.

2.1.

Candidate Region of Interest Generation

Unsupervised divisive clustering is used to create a series of shrinking candidate ROIs. A Gaussian mixture model13 (GMM) solved using expectation maximization14 (EM) was selected because it allows fuzzy memberships rather than hard assignments (e.g., k-means) and does not require full image segmentation or object extraction15 (e.g., semantic labeling). GMM-EM uses all bands simultaneously during clustering. The candidate ROIs are generated as follows.

A seed contained within the desired target is selected by the user. This seed remains fixed throughout the generation process. The first candidate ROI is the entire image. This ROI is then divided into two clusters using the GMM-EM algorithm. The connected component within these two clusters that contains the seed pixel becomes the next candidate ROI. A connected component is defined as a set of pixels that share the same cluster label, in which each pixel is spatially adjacent to at least one other pixel in the set. This candidate is further subdivided into two clusters, and the connected component containing the seed becomes the next candidate. This process of bilevel clustering and connected component extraction continues until the last candidate ROI is the seed pixel itself. Bilevel clustering is used because the goal of the proposed method is to identify increasingly homogeneous clusters that contain the seed, not to segment the entire scene, which would likely require more than two clusters.
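
For concreteness, a minimal sketch of this candidate-generation loop is given below. It assumes the image is a NumPy array of shape (rows, cols, bands) and the seed is a (row, col) tuple; scikit-learn's GaussianMixture stands in for the GMM-EM step and scipy.ndimage.label for the connected component analysis. The function name candidate_rois and the safety stop are illustrative, not part of the published method.

```python
import numpy as np
from scipy import ndimage
from sklearn.mixture import GaussianMixture

def candidate_rois(image, seed):
    """Generate a series of shrinking candidate ROIs as boolean masks.

    image: ndarray (rows, cols, bands); seed: (row, col) inside the target.
    The first candidate is the whole image; the last is (near) the seed pixel.
    """
    current = np.ones(image.shape[:2], dtype=bool)      # candidate #1: entire image
    candidates = [current]
    eight_conn = np.ones((3, 3), dtype=int)             # 8-connectivity neighborhood

    while current.sum() > 1:
        pixels = image[current]                         # (n_pixels, bands), all bands used
        labels = GaussianMixture(n_components=2, random_state=0).fit_predict(pixels)
        label_img = np.full(current.shape, -1)
        label_img[current] = labels
        # Keep the cluster containing the seed ...
        seed_cluster = label_img == label_img[seed]
        # ... and within it, the connected component containing the seed.
        comps, _ = ndimage.label(seed_cluster, structure=eight_conn)
        nxt = comps == comps[seed]
        if nxt.sum() >= current.sum():                  # safety stop if no shrinkage occurs
            break
        candidates.append(nxt)
        current = nxt
    return candidates
```

In practice, the loop would run on the 101×101 pixel window around the seed described in Sec. 3.1 rather than on the full image.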

This clustering process yields a series of shrinking ROIs in which each ROI is contained by the previous one. Neither the largest candidate (the entire image, likely spectrally heterogeneous) nor the smallest candidate (the seed pixel, too small to provide meaningful statistics) is expected to be of interest to the user. Therefore, one of the intermediate ROIs is expected to be more useful for further exploitation. The next section describes a noise estimation approach that computes the signal-dependent noise level in an image, which is used to identify spectrally homogeneous ROIs.

2.2.

Noise Level Estimation

Among the simplest metrics that characterize the variability of the pixels in an ROI is the variance of their intensity values. The variance of a spectrally homogeneous ROI should only be due to the noise of the image acquisition process16 and not the variability in a natural image’s scene content.17

There are different types of noise present in digital imagery, including Gaussian noise, salt-and-pepper noise, shot noise, quantization noise, and speckle noise.18 A common assumption in remote sensing is that the image-corrupting noise is additive white Gaussian. However, digital remote sensors with ever smaller pixels are becoming more sensitive to photon counting effects, which yield signal-dependent noise. Therefore, this work assumes a Poisson–Gaussian model,19 which yields a signal-dependent variance given by

Eq. (1)

\sigma^2(\mu) = a \cdot \mu + b,
where the first term is a signal-dependent component, a depends on the sensor quantum efficiency and other camera-specific settings, μ is the expected sensor output pixel intensity, and b is the signal-independent component due to thermal and electronic noise.20 These model parameters can be approximated using in-scene noise estimation techniques.

A variety of noise estimation algorithms have been proposed, including principal components analysis,21 kurtosis estimation,22 bit planes,23 and patch-based approaches.24 Recent advances in image denoising also include deep learning approaches with convolutional neural networks.25 However, the goal is to develop a tool that is fast and efficient, does not require significant expertise or training databases, and can be easily used by an image analyst; therefore, machine learning methods were not considered suitable. To account for both scene content variability and signal-dependent noise, the patch-based approach presented in Ref. 24 was selected. This approach estimates the local noise standard deviation using a Laplacian-based filter and is summarized as follows; for further details, see Ref. 24.

The following noise estimation is performed for each image band separately. The image is partitioned into M nonoverlapping patches. For the j’th patch, the sample average intensity μj is computed and the local noise standard deviation σj is computed using the following estimator:

Eq. (2)

\sigma_j = \sqrt{\frac{\pi}{2}}\,\frac{1}{6(W-2)(H-2)}\sum_i \left| I(x_i, y_i) \ast N \right|,

Eq. (3)

N = \begin{pmatrix} 1 & -2 & 1 \\ -2 & 4 & -2 \\ 1 & -2 & 1 \end{pmatrix},

where W is the patch width, H is the patch height, I is the noisy image, N is a Laplacian-based mask, and the sum runs over the pixels i in the j'th patch. Finally, a and b are estimated using least squares regression between the M values of μj and σj². Specifically, the ordinary least squares solver (lscov) provided by the Octave26 software package was used to perform the regression, where σj² are the response variables and μj are the regressor variables.

This approach is simple, fast (requiring only a single convolution and a few arithmetic operations), and it mitigates the effects of varying terrain without having to explicitly identify edges27 or weakly textured patches,28 which can be difficult in multiband imagery.
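
The estimator of Eqs. (2) and (3) and the least squares fit of Eq. (1) can be sketched as follows for a single band; NumPy's lstsq is used here in place of Octave's lscov, and the 13×13 patch size anticipates the value used in Sec. 3.1. The function name noise_model is illustrative.

```python
import numpy as np
from scipy import ndimage

# Laplacian-based mask N from Eq. (3)
N = np.array([[ 1, -2,  1],
              [-2,  4, -2],
              [ 1, -2,  1]], dtype=float)

def noise_model(band, patch=13):
    """Fit the signal-dependent noise model sigma^2(mu) = a*mu + b for one band."""
    conv = np.abs(ndimage.convolve(band.astype(float), N))
    W = H = patch
    scale = np.sqrt(np.pi / 2.0) / (6.0 * (W - 2) * (H - 2))      # constant in Eq. (2)
    mus, variances = [], []
    for r in range(0, band.shape[0] - patch + 1, patch):          # nonoverlapping patches
        for c in range(0, band.shape[1] - patch + 1, patch):
            mu_j = band[r:r + patch, c:c + patch].mean()
            # Sum |I * N| over the patch interior, where the 3x3 mask fits entirely
            sigma_j = scale * conv[r + 1:r + patch - 1, c + 1:c + patch - 1].sum()
            mus.append(mu_j)
            variances.append(sigma_j ** 2)
    # Ordinary least squares: variances ~ a * mus + b
    A = np.column_stack([mus, np.ones(len(mus))])
    (a, b), *_ = np.linalg.lstsq(A, np.asarray(variances), rcond=None)
    return a, b
```

The fitted (a, b) pair for each band is then used in Sec. 2.3 to predict the noise level σP of a candidate ROI from its mean intensity.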

2.3.

Spectral Homogeneity and Optimal ROI Selection

The spectral homogeneity of a candidate ROI is determined using the estimated noise level in Eq. (1). The intensity sample mean μR and sample standard deviation σR are computed for each candidate ROI. The predicted noise level standard deviation σP is computed with Eq. (1) using μR and the estimated parameters a and b. Then an ROI is labeled as spectrally homogeneous if,

Eq. (4)

\sigma_R \le T \cdot \sigma_P,
the variability of the ROI σR is at or below the noise level σP times a noise level threshold factor T. For multiband images, this test is performed on each band separately and the ROI is labeled homogeneous if the test is satisfied for all bands.

The value of T can be determined empirically for a given sensor and collection environment. However, experiments showed that the proposed method is robust around a T value of one. It is reasonable to label an ROI homogeneous if its variability is within one standard deviation of the noise floor. In fact, a range of thresholds near one was evaluated by computing the detection and false alarm rates using ground truth, and the method performed well (see Sec. 3).

Finally, the optimal ROI is the largest spectrally homogeneous candidate for which no smaller candidate is spectrally heterogeneous. This ROI is optimal because larger candidates introduce heterogeneity and smaller ones merely partition pixels within the noise.
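
A sketch of this test and selection step, reusing the candidate masks and per-band (a, b) parameters from the sketches above, might look like the following; the helper names are again illustrative.

```python
import numpy as np

def is_homogeneous(image, mask, params, T=1.0):
    """Eq. (4): the ROI is homogeneous if sigma_R <= T * sigma_P in every band."""
    for band, (a, b) in enumerate(params):                # params: list of per-band (a, b)
        values = image[..., band][mask]
        mu_R, sigma_R = values.mean(), values.std()
        sigma_P = np.sqrt(max(a * mu_R + b, 0.0))          # predicted noise level from Eq. (1)
        if sigma_R > T * sigma_P:
            return False
    return True

def optimal_roi(image, candidates, params, T=1.0):
    """Largest homogeneous candidate with no heterogeneous smaller candidate."""
    best = None
    # Walk from the smallest candidate (the seed) outward and stop at the first
    # heterogeneous one; the last homogeneous mask seen is the optimal ROI.
    for mask in reversed(candidates):
        if not is_homogeneous(image, mask, params, T):
            break
        best = mask
    return best
```

Because the smallest candidate is the seed pixel with zero variance, it always passes the test, so the walk returns at least that pixel.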

3.

Experimental Setup

The proposed method is a binary classifier that discriminates between two classes of targets—spectrally homogeneous versus heterogeneous ROIs. Receiver operating characteristic (ROC) metrics were calculated to evaluate the diagnostic ability of the classifier. The experiments used to evaluate the classifier are divided into two groups, those with and those without ground truth. For the experiments with ground truth, the accuracy and the sensitivity of the optimal ROIs were tested using spectrally homogeneous reflectance calibration panels. The ground truth consists of the polygon shapes that outline the calibration panels. For the experiments without ground truth, the sensitivity of the optimal ROIs was evaluated, and a visual assessment was performed. For both imagery with and without ground truth, the author selected the test seed pixels near the center of the target regions.

3.1.

Experimental Methodology

To determine a viable operating interval for the noise level threshold factor T from Eq. (4), the ground truth data sets were evaluated using a range of values from zero to ten. From this analysis, an ROC curve and its area under the curve (AUC) were computed. In addition, the interval near one noise level standard deviation was evaluated since this would be the range many users would expect to set as a noise threshold. A threshold from this range was used to evaluate the sensitivity of the classifier.

To evaluate the accuracy of any optimal ROI generated, a truth mask was created from the ground truth panels. The pixels within the truth mask are the expected output of the classifier for any seed pixel selected within the corresponding panel. The actual classifier output ROI was then compared to the expected truth to create a confusion matrix from which the detection rate and false alarm rate were computed. The seed pixels for the ground truth cases were selected near the panel centers. Note that for cases without ground truth only a relative evaluation is possible, and the optimal ROI generated from an initial user-selected seed pixel is considered "ground truth." The procedure described in the remainder of this section is used to evaluate the sensitivity to this seed location.
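
A sketch of this evaluation, including the sweep over T used for the ROC curve described above, could look like the following; rates and roc_over_thresholds are illustrative helpers, and optimal_roi comes from the sketch in Sec. 2.3.

```python
import numpy as np

def rates(output_mask, truth_mask):
    """Detection and false alarm rates of a classifier output against a truth mask."""
    tp = np.logical_and(output_mask, truth_mask).sum()
    fp = np.logical_and(output_mask, ~truth_mask).sum()
    detection = tp / truth_mask.sum()
    false_alarm = fp / (~truth_mask).sum()
    return detection, false_alarm

def roc_over_thresholds(image, candidates, params, truth_mask,
                        thresholds=np.linspace(0.0, 10.0, 41)):
    """Sweep the noise threshold factor T and collect (false alarm, detection) points."""
    points = []
    for T in thresholds:
        roi = optimal_roi(image, candidates, params, T)
        dr, far = rates(roi, truth_mask)
        points.append((far, dr))
    points.sort()                                   # order by false alarm rate
    fars, drs = zip(*points)
    # Area under the sampled curve; normalize or extend the curve if the sampled
    # false alarm range does not span [0, 1].
    auc = np.trapz(drs, fars)
    return points, auc
```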

It is unrealistic to require a user to select any specific seed to achieve a good result. Generally, an image analyst will try to select a seed near the center of the desired target and not be too concerned with its exact position to within a few pixels. So any ROI extraction algorithm must also be insensitive to the seed location to within a few pixels. To evaluate the sensitivity of the optimal ROI, similar confusion matrix metrics were computed. However, the optimal ROI using an initial seed pixel was used as the expected, or “truth,” mask. Then the seed was perturbed by a few pixels in the line and sample directions. The optimal ROI for each perturbation was compared to the unperturbed ROI. This yields a relative accuracy whose behavior as a function of the perturbation distance characterizes the sensitivity of the algorithm to the seed location. The perturbation window was 5×5  pixels centered at the seed.
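
The seed perturbation test can be sketched as follows, where the ROI from the unperturbed seed serves as the relative truth mask; candidate_rois, optimal_roi, and rates are the illustrative helpers from the earlier sketches.

```python
def seed_sensitivity(image, seed, params, T=1.0, radius=2):
    """Relative detection/false alarm rates for seeds perturbed within a 5x5 window."""
    reference = optimal_roi(image, candidate_rois(image, seed), params, T)
    results = {}
    r0, c0 = seed
    for dr in range(-radius, radius + 1):
        for dc in range(-radius, radius + 1):
            if dr == 0 and dc == 0:
                continue                               # the unperturbed seed is the reference
            perturbed = (r0 + dr, c0 + dc)
            roi = optimal_roi(image, candidate_rois(image, perturbed), params, T)
            results[(dr, dc)] = rates(roi, reference)  # reference ROI treated as "truth"
    return results
```

Averaging the returned rates by perturbation distance gives sensitivity curves of the kind shown in Figs. 2(b) and 3.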

Note that to reduce the computational load of the clustering process, the initial ROI can be a subset of the image. For this analysis, the initial ROI was a window of 101×101 pixels centered at the seed. However, the noise estimation was performed on the entire image using patch sizes of 13×13 pixels. This patch size was determined empirically to be large enough to find many homogeneous patches yet small enough to minimize the computational load. The connected component analysis used an eight-connectivity neighborhood.

3.2.

Data Sets

Testing was limited to RGB images. The ground truth data sets consisted of the RGB imagery from the SHARE 201229 collection and the Forest Radiance I hyperspectral data from the HYDICE30 sensor. The HYDICE data were reduced to RGB by selecting the corresponding bands. These images contain 12 spectrally homogeneous calibration panels.

Data sets without suitable ground truth included the Moffett Field imagery collected using the AVIRIS31 sensor and the Pavia University imagery collected using the ROSIS32 sensor. These data cubes were similarly reduced to RGB images. Although they have land cover mask ground truth, they do not have ground truth targets deemed to be spectrally homogeneous, such as reflectance calibration panels.

4.

Experimental Results

The detector ROC curve [Fig. 2(a)], parameterized by T, has an AUC of 98%. Its detection rates were 82%, 90%, and 95% with false alarm rates of 0.2%, 0.3%, and 0.31% for T values of 0.5, 0.75, and 1.0, respectively. Since the goal is to find a homogeneous ROI and not the entire object, extracting dozens or hundreds of pixels will be sufficient for most users. So even detection rates as low as, say, 50% and false alarm rates as high as 1% would be satisfactory. For example, if the desired target had 100 pixels, these rates would yield 50 true pixels and one false pixel. A few false pixels potentially contaminating the ROI statistics can easily be mitigated with a robust estimator. So the classifier does a good job of detecting homogeneous ROIs in the range of expected thresholds. A threshold from this interval, 1.0, was used for subsequent tests.

Fig. 2

ROC curve (a) shows a high AUC and detection rates with low false alarm rates near one noise level standard deviation. The accuracy (b) shows a gradual decline across the seed pixel perturbation distances.


The accuracy [Fig. 2(b)], detection rate [Fig. 3(a)], and false alarm rate [Fig. 3(b)] for each test case and their average demonstrate that the detector is insensitive to the exact location of the seed. The average detection rate showed a gradual decline from 95% to a minimum of 90% at a perturbation distance of about 3 pixels. All cases yielded similar declines in detection rate with nearly constant false alarm rates. The Moffett image showed the largest drop, to 75%. The total accuracy also showed a gradual decline across the perturbation distances. Note that for the Forest Radiance and SHARE 2012 cases, Fig. 2(b) presents absolute accuracies since ground truth is available, whereas for the Moffett and Pavia cases it presents relative accuracies since no ground truth is available. The same is true for Fig. 3.

Fig. 3

The detection rate (a) shows a gradual decline across perturbations with (b) a nearly constant false alarm rate.


Optimal ROIs were extracted for six Forest Radiance calibration panels [Fig. 4(a)] and six SHARE 2012 calibration panels [Fig. 4(b)]. The optimal ROIs visually appear to do a good job covering most of the panel pixels.

Fig. 4

Seeds (blue) and optimal ROIs (green) of the (a) six Forest Radiance and (b) six SHARE 2012 panels.


Optimal ROIs for the nonground truth cases in the Pavia (Fig. 5), Moffett (Fig. 6), and SHARE 2012 (Fig. 7) images show similar results. The optimal ROI does not always fill the underlying physical object (e.g., a building roof) but visually appears to occupy spectrally similar nearby pixels. In the Pavia image, the optimal ROI discriminates between the different shadings on each side of the tilted roof and avoids a group of anomalous blue pixels in the upper right corner. Also, when those anomalous pixels were used as a seed, the detected optimal ROI consisted of just those pixels. So the proposed method did not force the anomalies to be grouped with their neighbors.

Fig. 5

Seeds (blue) and optimal ROIs (green) of (a) two buildings and (b) two patches of vegetation in the Pavia image. The top ROI in (a) discriminates between the shade on each side of the roof due to its tilt.


Fig. 6

Seeds (blue) and optimal ROIs (green) of (a) light and dark terrain and (b) three roofs in the Moffett image.


Fig. 7

Seeds (blue) and optimal ROIs (green) of (a) two sections of a basketball court and (b) a flat building roof in the SHARE 2012 image. ROIs in (a) contain similar pixels and avoid the court paint and nearby yellow pixels.


The Moffett image shows the ROIs generated for a light and dark area of terrain. For the dark area, the classifier was able to exclude the similar but spectrally different areas of lighter pixels to its right. The SHARE 2012 image shows optimal ROIs taking the shape of the basketball court regions without including the surrounding paint lines or anomalous, neighboring yellow pixels.

5.

Conclusions

An automated method for extracting spectrally homogeneous ROIs from remotely sensed images has been developed. The method requires only the selection of a single seed pixel and a target-agnostic noise threshold by the user. The ROI is generated based on an in-scene estimation of the image noise level. Performance and sensitivity tests showed that the extracted ROIs achieved detection rates of up to 95% and false alarm rates below 1% and remained robust to the main user input, the initial seed location. The method was also able to exclude anomalous pixels and subtle terrain shading that a user might incorrectly include in an ROI due to suboptimal visualization decisions.

Four key improvements over current approaches are: (1) simplifying the user input to a single seed pixel, (2) preventing the misapplication of algorithms that segment the physical object rather than spectrally homogeneous pixels, (3) eliminating the need for the user to possess a priori knowledge of image- or target-specific thresholds, and (4) mitigating against mistakes introduced by human visualization of multiband spectral data. The proposed method avoids these problems by estimating an accurate noise model and reversing the growing process, converging to the seed through similarity space.

The approach described can be reasonably extended to imagery with additional bands in the same spectral region. However, more research is needed to evaluate its efficacy with, for example, hyperspectral images (HSI) with hundreds of bands. In particular, the ROI selection process (Sec. 2.3), which requires target homogeneity in all bands, may be too stringent since a target’s spectral behavior can change significantly between, say, the visible and the infrared range.

Future research will include evaluating (1) a more efficient noise estimation since the current approach may be slow for large images, (2) different noise models for other sensor modalities (e.g., SAR), (3) the behavior of the proposed method at various nominal and stressing noise levels, and (4) different data types (e.g., HSI).

Acknowledgments

The author would like to thank the editor and the anonymous reviewers for their suggestions that greatly improved this work.

References

1. R. Adams and L. Bischof, "Seeded region growing," IEEE Trans. Pattern Anal. Mach. Intell. 16(6), 641–647 (1994). https://doi.org/10.1109/34.295913

2. J. Fan et al., "Seeded region growing: an extensive and comparative study," Pattern Recognit. Lett. 26, 1139–1156 (2005). https://doi.org/10.1016/j.patrec.2004.10.010

3. M. Fan and T. C. M. Lee, "Variants of seeded region growing," IET Image Process. 9(6), 478–485 (2015). https://doi.org/10.1049/iet-ipr.2014.0490

4. J. Zhou, Y. Huang and B. Yu, "Mapping vegetation-covered urban surfaces using seeded region growing in visible-NIR air photos," IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 8(5), 2212–2221 (2015). https://doi.org/10.1109/JSTARS.2014.2362308

5. E. Kozegar et al., "Mass segmentation in automated 3-D breast ultrasound using adaptive region growing and supervised edge-based deformable model," IEEE Trans. Med. Imaging 37(4), 918–928 (2018). https://doi.org/10.1109/TMI.2017.2787685

6. P. Lu et al., "A new region growing-based method for road network extraction and its application on different resolution SAR images," IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 7(12), 4772–4783 (2014). https://doi.org/10.1109/JSTARS.2014.2340394

7. X. Ren and J. Malik, "Learning a classification model for segmentation," in Proc. Ninth IEEE Int. Conf. Comput. Vision, 10–17 (2003). https://doi.org/10.1109/ICCV.2003.1238308

8. Interpreting Remote Sensing Imagery: Human Factors, CRC Press, Boca Raton, Florida (2001).

9. J. R. Schott, Remote Sensing: The Image Chain Approach, 2nd ed., Oxford University Press, New York (2007).

10. G. Polder and G. W. van der Heijden, "Visualization of spectral images," Proc. SPIE 4553, 132–137 (2001). https://doi.org/10.1117/12.441578

11. E. Michaelsen, "On the automation of gestalt perception in remotely sensed data," Comput. Opt. 42(6), 1008–1014 (2018). https://doi.org/10.18287/2412-6179-2018-42-6-1008-1014

12. H. Jose Antonio Martin et al., "A divisive hierarchical k-means based algorithm for image segmentation," in IEEE Int. Conf. Intell. Syst. Knowl. Eng., 300–304 (2010). https://doi.org/10.1109/ISKE.2010.5680865

13. H. Permuter, J. Francos and I. Jermyn, "A study of Gaussian mixture models of color and texture features for image classification and segmentation," Pattern Recognit. 39(4), 695–706 (2006). https://doi.org/10.1016/j.patcog.2005.10.028

14. A. P. Dempster, N. M. Laird and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," J. R. Stat. Soc. Ser. B (Methodol.) 39(1), 1–38 (1977). https://doi.org/10.2307/2984875

15. J. S. Sevak et al., "Survey on semantic image segmentation techniques," in Int. Conf. Intell. Sustain. Syst. (ICISS), 306–313 (2017).

16. R. C. Gonzalez and R. E. Woods, Digital Image Processing, Pearson, Upper Saddle River, New Jersey (2008).

17. D. L. Ruderman, "The statistics of natural images," Network: Comput. Neural Syst. 5(4), 517–548 (1994). https://doi.org/10.1088/0954-898X_5_4_006

18. A. K. Boyat and B. K. Joshi, "A review paper: noise models in digital image processing," Signal Image Process. Int. J. 6(2), 63–75 (2015). https://doi.org/10.5121/sipij.2015.6206

19. A. Foi et al., "Practical Poissonian–Gaussian noise modeling and fitting for single-image raw-data," IEEE Trans. Image Process. 17(10), 1737–1754 (2008). https://doi.org/10.1109/TIP.2008.2001399

20. L. Dong, J. Zhou and Y. Y. Tang, "Effective and fast estimation for image sensor noise via constrained weighted least squares," IEEE Trans. Image Process. 27(6), 2715–2730 (2018). https://doi.org/10.1109/TIP.2018.2812083

21. S. Pyatykh, J. Hesser and L. Zheng, "Image noise level estimation by principal component analysis," IEEE Trans. Image Process. 22(2), 687–699 (2013). https://doi.org/10.1109/TIP.2012.2221728

22. L. Dong, J. Zhou and Y. Y. Tang, "Noise level estimation for natural images based on scale-invariant kurtosis and piecewise stationarity," IEEE Trans. Image Process. 26(2), 1017–1030 (2017). https://doi.org/10.1109/TIP.2016.2639447

23. A. Barducci et al., "Assessing noise amplitude in remotely sensed images using bit-plane and scatterplot approaches," IEEE Trans. Geosci. Remote Sens. 45(8), 2665–2675 (2007). https://doi.org/10.1109/TGRS.2007.897421

24. J. Immerkær, "Fast noise variance estimation," Comput. Vision Image Understanding 64(2), 300–302 (1996). https://doi.org/10.1006/cviu.1996.0060

25. L. Fan et al., "Brief review of image denoising techniques," Vis. Comput. Ind. Biomed. Art 2, 7 (2019). https://doi.org/10.1186/s42492-019-0016-7

26. J. W. Eaton, "GNU Octave," https://www.gnu.org/software/octave

27. B. S. Paskaleva et al., "Joint spatio-spectral based edge detection for multispectral infrared imagery," in IEEE Int. Geosci. Remote Sens. Symp., 2198–2201 (2010). https://doi.org/10.1109/IGARSS.2010.5648869

28. X. Liu, M. Tanaka and M. Okutomi, "Practical signal-dependent noise parameter estimation from a single noisy image," IEEE Trans. Image Process. 23(10), 4361–4371 (2014). https://doi.org/10.1109/TIP.2014.2347204

29. A. Giannandrea et al., "The SHARE 2012 data campaign," Proc. SPIE 8743, 87430F (2013). https://doi.org/10.1117/12.2015935

30. P. A. Mitchell, "Hyperspectral digital imagery collection experiment (HYDICE)," Proc. SPIE 2587, 70–95 (1995). https://doi.org/10.1117/12.226807

31. "AVIRIS—Airborne visible/infrared imaging spectrometer," https://aviris.jpl.nasa.gov

32. "Reflective optics system imaging spectrometer (ROSIS)," http://crs.hi.is

Biography

Daniel Pulido received his BS degree in aerospace engineering from Boston University in 1998, his MS degree in applied mathematics from Worcester Polytechnic Institute in 2003, and his PhD in computational sciences and informatics from George Mason University in 2017. He is a remote sensing scientist at Leidos Inc. His research interests include multispectral, hyperspectral, and LIDAR image processing, algorithm development, and data compression.

© The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Daniel Pulido "Automated extraction of homogeneous regions by seeded region shrinkage," Journal of Applied Remote Sensing 14(3), 036518 (18 September 2020). https://doi.org/10.1117/1.JRS.14.036518
Received: 26 May 2020; Accepted: 8 September 2020; Published: 18 September 2020