Preprocessing techniques for removing artifacts in synchrotron-based tomographic images

Nghia T. Vo, Robert C. Atwood, and Michael Drakopoulos

Proc. SPIE 11113, Developments in X-Ray Tomography XII, 111131I (10 September 2019). https://doi.org/10.1117/12.2530324
Abstract
X-ray parallel-beam micro-tomography systems at synchrotron facilities are mainly bespoke designs. Technical problems in a detector system can strongly affect the quality of reconstruction results. Radial lens distortion in the visible-light optics causes streak artifacts which get stronger towards the edge of the image. The irregular response of the detector gives rise to a variety of stripe artifacts in the sinogram: full stripes, partial stripes, fluctuating stripes, and unresponsive stripes. These give rise to ring artifacts of different kinds in the reconstructed image. The scattering of scintillation photons can cause artifacts which are similar to beam-hardening artifacts and reduce the resolution of the image. Here we present our practical approaches to tackling each such problem. These approaches are easy to implement and have low computational cost. The algorithms are freely available as open-source software.

1. INTRODUCTION

A common design of the detecting system (Fig. 1(a)) in synchrotron-based tomographic systems consists of an X-ray fluorescent scintillator, a visible-light optics system, and a digital camera [1-4]. Imperfections of each component cause artifacts in tomographic images, with the lens and the scintillator being the main contributors. Generally, the design of a lens is optimized for resolution, efficiency, and radiation hardness. As a trade-off it can produce radial distortion, as can be seen in Fig. 1(b), which shows the image of a grid pattern distorted by a lens whose magnification decreases away from the optical axis. The distortion disturbs the geometrical conditions of parallel-beam tomography in two ways [5]. First, the pixel size varies continuously with distance from the optical axis, so artifacts increase with distance from this axis. Second, a sinogram formed by combining projections recorded by a single row of the detector contains contributions from nearby rows instead of being completely independent of them. Although many distortion correction methods have been proposed [6], it is crucial for parallel-beam tomography that the correction achieves sub-pixel accuracy. Moreover, most tomographic imaging systems at synchrotron facilities are highly configurable and require frequent disassembly for maintenance or replacement of parts. In such cases, a fast, accurate, and easy-to-implement method is desirable.

Figure 1. (a) Detecting system; (b) Distorted image of a grid pattern; (c) Magnified view from the blue frame in (a).

The scintillator, which converts X-rays into visible light, is the most important component in an indirect X-ray detector and has the greatest influence on the detector's imaging properties. It is very challenging to fabricate scintillators with the high quality, high resolution, high conversion rate, and radiation tolerance required at synchrotron facilities [7]. High X-ray flux over a long period can damage the micro-structure of a scintillator and degrade its quality, as can be seen in Fig. 2. These visible defects (dark and bright blobs in Fig. 2(a,b)) and invisible defects, which can only be revealed by analysing the linear response of the detector, cause ring artifacts in reconstructed images [8], the most pervasive type of artifact in tomographic imaging.

Figure 2. Degradation of a scintillator during an experiment: (a) At the beginning; (b) After a few days of use.

In parallel-beam micro-tomography, scintillators are mainly of unstructured types to achieve high resolution [9]. To improve time resolution, a scintillator needs to be thick enough to maximize the light yield. As a result, the background caused by scattered visible photons [10] alters the linear response of the detecting system, which affects the quality of any data-processing method that relies on this linearity. Although it is a known problem, very few efforts have been made to tackle it [11]. Figure 3 shows a flat-field image where half of the field of view is blocked. As can be seen in Fig. 3(b), there is light yield inside the area supposed to be fully obscured.

Figure 3. Flat-field image with half of the field of view blocked: (a) 0.05 s exposure time; (b) 0.5 s exposure time.

All of the above detector problems give rise to different types of artifacts in reconstructed images which hamper data analysis. Solving these problems using hardware approaches may be expensive or technically impossible. In this report, we present our digital approaches to tackling each such problem. Most of the data were collected at the I12-JEEP beamline, Diamond Light Source, using the following settings: 53 keV monochromatic X-rays, 1800 projections, 3.2 μm pixel size, and 1000 mm sample-detector distance. Complete details of the detecting system can be found in Ref. [1]. Data were processed using I12 in-house Python codes [12,13]. The filtered back-projection (FBP) method [14,15] was used for reconstruction.

2. CORRECTION OF RADIAL LENS DISTORTION

2.1 Problems

As the influence of the distortion gets stronger with increasing radial distance from the optical axis, i.e. towards the borders of an image, the related artifacts get stronger there too [5]. Figure 4 shows a reconstructed image affected by this type of problem. As can be seen, the streak artifacts are stronger in Fig. 4(c) (near the image border) compared to Fig. 4(b) (at the image center). This is typical of the distortion problem and helps to distinguish this type of artifact from streak artifacts caused by other problems such as misalignment or a wrong center of rotation [12].

Figure 4. Typical artifacts caused by the distortion problem: (a) Reconstructed image; (b) Magnified view from the middle frame in (a) shows no artifacts; (c) Magnified view from the top frame in (a) shows streak artifacts.

2.2 Correction method

There are different models for correcting the radial distortion of a lens. Here we use the polynomial model [6], in which the backward model is utilised for efficiently processing tomographic data. The parameters of the correction model that need to be determined are the center of distortion (CoD) and the polynomial coefficients [5,16]. The CoD is calculated independently of the polynomial coefficients. This is useful for bespoke systems where routine mechanical alterations do not alter the lens characteristics but may change the center of the optical axis with respect to the digital camera. The achievable sub-pixel accuracy relies on the quality of a calibration target (e.g. Fig. 1(b)) which can provide straight and equidistant lines vertically and horizontally. The calibration method extracts these lines, represents them by the coefficients of parabolic fits, and uses these coefficients for calculating the distortion coefficients. This requires the number of patterns (dots or lines) to be large enough to allow fitting with sub-pixel accuracy. The basic calculation routine is as follows.

2.2.1 Pre-processing

From the image of a grid pattern, dots are segmented and grouped into horizontal lines and vertical lines as demonstrated in Fig. 5.
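The following is a minimal sketch of this step, assuming a normalized grid image in which the dots are darker than the background; the threshold and the y-proximity tolerance are illustrative choices, not the exact logic of the vounwarp package, and the greedy grouping assumes the distortion is mild enough that lines stay separated.

```python
# Hypothetical sketch: segment grid dots and group them into horizontal lines.
import numpy as np
from scipy import ndimage

def segment_dots(mat):
    """Return (y, x) centroids of the dots in a grid-pattern image."""
    binary = mat < np.mean(mat)  # assumes dots are darker than the background
    labels, num = ndimage.label(binary)
    return np.asarray(ndimage.center_of_mass(binary, labels, range(1, num + 1)))

def group_horizontal_lines(points, tol=10.0):
    """Greedily group dot centroids into horizontal lines by y-proximity."""
    points = points[np.argsort(points[:, 0])]  # sort by the y-coordinate
    lines, current = [], [points[0]]
    for p in points[1:]:
        if abs(p[0] - current[-1][0]) < tol:   # same line as the previous dot
            current.append(p)
        else:                                  # a new line starts
            lines.append(np.asarray(current))
            current = [p]
    lines.append(np.asarray(current))
    return lines
```

Vertical lines are grouped the same way after sorting by the x-coordinate.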

Figure 5. Dots are segmented and grouped into lines: (a) Horizontal lines; (b) Vertical lines.

2.2.2 Calculation of the center of distortion

Points in each group are fitted to parabolas, in which the horizontal lines are represented by

$$ y = a_i x^2 + b_i x + c_i \quad (1) $$

and the vertical lines by

$$ x = a_j y^2 + b_j y + c_j \quad (2) $$

where i and j are the indices of the horizontal and vertical lines, respectively. The coarse estimate of the CoD, (xc, yc), is explained in Fig. 6(a), where (x0, y0) is the average of the axis intercepts c of the two parabolas between which the coefficient a changes sign. The slopes of the red and green lines are the averages of the b coefficients of these parabolas. To determine the CoD accurately, candidate coordinates are varied inside the bounds of these two parabolas and a metric is calculated in the following steps: for each parabola, the point with minimum distance to the candidate CoD is located; the horizontal and vertical parabolas yield two sets of such points; each set of points is fitted to a straight line; and the sum of the intercepts of the two fitted straight lines is the metric of the candidate CoD. The best CoD is the one with the minimum metric (Fig. 6(b)).

Figure 6. Demonstration of estimating the center of distortion: (a) Coarse estimate; (b) Accurate estimate.

2.2.3 Calculation of the polynomial coefficients of the backward model

In the backward (BW) model, the relationship between an undistorted point (xu, yu) and a distorted point (x, y) is described as

$$ \frac{x}{x_u} = \frac{y}{y_u} = \frac{r}{r_u} = \sum_{n=0}^{N} k_n r_u^n \quad (3) $$

where ru and r are their distances from the CoD in the undistorted and distorted image, respectively, and kn are the coefficients. The coefficients are calculated by solving the following linear system [5]

$$ \sum_{n=0}^{N} k_n r_u^n = \frac{y}{\hat{c}_i} \quad (4) $$

formed from every dot (x, y) on each distorted horizontal line i, where $x_u = x \hat{c}_i / y$ and $r_u = \sqrt{x_u^2 + \hat{c}_i^2}$; the analogous equations with $x/\hat{c}_j$ are added for the vertical lines.

The undistorted intercepts, $\hat{c}_i$ and $\hat{c}_j$, of the undistorted lines can be determined without distortion correction. We use the assumption that the undistorted, uniform line spacing can be calculated and extrapolated from the area near the CoD, which has negligible distortion. The undistorted intercepts of the horizontal lines are calculated as

$$ \hat{c}_i = \mathrm{sgn}(c_i) \left| (i - i_0)\,\overline{\Delta c} \right| \quad (5) $$

where the sgn() function returns -1, 0, or 1 for a negative, zero, or positive input; i0 is the index of the line closest to the CoD; and $\overline{\Delta c}$ is the average difference of the ci near the CoD. The same routine is used for the vertical lines:

$$ \hat{c}_j = \mathrm{sgn}(c_j) \left| (j - j_0)\,\overline{\Delta c} \right| \quad (6) $$

In practice, polynomial coefficients up to the fourth order are accurate enough; there is no significant gain in accuracy from using higher orders.
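As an illustration of how the kn can be obtained, the sketch below fits the backward model by linear least squares. It assumes matched pairs of undistorted radii ru and distorted radii r have already been sampled (e.g. from points on the fitted lines and their corresponding undistorted lines, excluding the CoD itself where ru = 0); this is a simplification of the full procedure in Ref. [5].

```python
# Hedged sketch: fit k_n in r/ru = sum_n k_n ru^n by linear least squares,
# given matched undistorted radii "ru" and distorted radii "rd".
import numpy as np

def fit_backward_coefficients(ru, rd, order=4):
    ru = np.asarray(ru, dtype=float)
    rd = np.asarray(rd, dtype=float)
    ratio = rd / ru                                 # target of the linear model
    A = np.vander(ru, order + 1, increasing=True)   # columns: ru^0 ... ru^order
    k, *_ = np.linalg.lstsq(A, ratio, rcond=None)
    return k                                        # k[0], ..., k[order]
```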

2.3 Results

From the determined parameters, the undistorted image is reconstructed using the backward-mapping routine demonstrated in Fig. 7, via the following steps (a code sketch follows the list):

Figure 7. Demonstration of the backward mapping.
  • - Input a grid point, (xu, yu), of the undistorted image.

  • - Translate the coordinates: xu = xu - xc ; yu = yu - yc.

  • - Calculate $r_u = \sqrt{x_u^2 + y_u^2}$.

  • - Calculate the scale factor $f = \sum_{n=0}^{N} k_n r_u^n$.

  • - Calculate $x = x_u f$ and $y = y_u f$.

  • - Translate the coordinates: x = x + xc ; y = y + yc.

  • - Interpolate the image value at (x, y) using values of nearest points. Assign the result to the point (xu, yu) in the undistorted image.
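A minimal sketch of this routine, assuming the CoD (xc, yc) and the coefficient list k have been determined, is given below; bilinear interpolation via scipy is one possible choice for the final step.

```python
# Backward mapping: each pixel of the undistorted image is pulled from the
# distorted image using the polynomial scale factor and interpolation.
import numpy as np
from scipy.ndimage import map_coordinates

def unwarp_image_backward(image, xc, yc, k):
    height, width = image.shape
    yu, xu = np.indices((height, width), dtype=float)
    yu -= yc                                   # translate to CoD-centred coords
    xu -= xc
    ru = np.sqrt(xu ** 2 + yu ** 2)
    factor = sum(kn * ru ** n for n, kn in enumerate(k))  # sum_n k_n ru^n
    x = xu * factor + xc                       # distorted coordinates
    y = yu * factor + yc
    return map_coordinates(image, [y, x], order=1, mode='reflect')
```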

This routine is applied to all projections of the tomographic dataset; then the same slice as in Fig. 4 is reconstructed for comparison. As can be seen in Fig. 8, the streak artifacts are removed.

Figure 8. (a) Reconstructed image at the same slice as in Fig. 4 after distortion correction; (b) Magnified view from the middle frame in (a); (c) Magnified view from the top frame in (a) shows no more streak artifacts.

Python implementations of our approaches are available at https://github.com/DiamondLightSource/vounwarp. The tomographic dataset, the image of the grid pattern used for distortion calculation, and the reconstruction results used for this report can be downloaded from Zenodo: https://zenodo.org/record/3339629.

3. CORRECTION OF THE IRREGULAR RESPONSE OF A DETECTOR

3.1 Problems

The irregular response of a detector is often caused by scintillator defects, such as those seen in Fig. 9(a), and sometimes by the electronic behaviour of the sensor chip. If these defects cannot be removed by the flat-field correction technique, they result in stripe artifacts in the sinogram (Fig. 9(b)), which appear as ring artifacts in the reconstructed image (Fig. 9(c)) [8].

Figure 9. Causes and effects of the irregular response of a detector: (a) Defects in the scintillator; (b) Stripe artifacts in the sinogram; (c) Ring artifacts in the reconstructed image.

Some defects of the scintillator are visible, as in Fig. 9(a). However, there are invisible defects which also result in stripe artifacts but can only be revealed by analysing the linear response of the detector. The analysis was performed by acquiring projections through an X-ray absorber (a glass plate) at different effective thicknesses, obtained by rotating the plate in the range of [0°; 90°] (Fig. 10(a)). Lines were fitted to the measured data based on the Beer-Lambert law [17]. Figure 10(b,c) shows the intercepts and slopes of the fitting results for all pixels. As can be seen, there are clear differences between Fig. 10(b,c) and Fig. 9(a), which reveal underlying information about the quality of the scintillator. For example, the defect indicated by the horizontal arrow in Fig. 10(b,c) is not visible in Fig. 9(a).
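A hedged sketch of this per-pixel analysis is shown below; it assumes a stack of flat-field-corrected images recorded at known effective absorber thicknesses (for a rotated plate of base thickness t0, roughly t0/cos(angle)), and fits the Beer-Lambert relation pixel-wise.

```python
# Per-pixel linear fit of -ln(I) against absorber thickness; the slope and
# intercept maps reveal pixels that deviate from the Beer-Lambert law.
import numpy as np

def linear_response_maps(imgs, thicknesses):
    absorb = -np.log(np.clip(imgs, 1e-6, None))          # shape (N, H, W)
    n, h, w = absorb.shape
    # Vectorized least-squares fit of each pixel's profile against thickness.
    coeffs = np.polyfit(thicknesses, absorb.reshape(n, -1), deg=1)
    slope_map = coeffs[0].reshape(h, w)
    intercept_map = coeffs[1].reshape(h, w)
    return slope_map, intercept_map
```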

Figure 10. Analysing the local linear response of the detector: (a) X-ray intensities are varied by rotating an X-ray attenuator; (b) Intercepts of the fitting results; (c) Slopes of the fitting results.

The retrieved response maps are useful for characterizing a detector system. Unfortunately, they cannot be used for correcting the irregular response of the detector. The reason is that the response of each pixel is not independent, due to the scattering of scintillation photons, as can be seen in Fig. 3(b). As a result, the shape and absorption characteristics of a sample dictate the influence of scattered light on the response of a pixel. This means that the occurrence of artifacts depends on the sample and the projection angle. To demonstrate this important point, we compare the reconstructed images of three samples with different shapes and absorption characteristics (Fig. 11), where all tomographic datasets were collected under the same conditions.

Figure 11. Flat-field-corrected projections of three different types of samples: (a) Sample 1, giving a low dynamic range of transmitted intensities; (b) Sample 2, giving a medium dynamic range of transmitted intensities; (c) Sample 3, giving a high dynamic range of transmitted intensities.

Figure 12 shows reconstructed slices of the three samples at the same detector row, indicated by the red lines in Fig. 11. Sample 1 shows no ring artifacts (Fig. 12(a)). However, sample 2 gives rise to a single ring artifact (red arrows in Fig. 12(b)), and sample 3 gives rise to many ring artifacts which appear in neither sample 1 nor sample 2 (yellow arrows in Fig. 12(c)).

Figure 12. The occurrence of ring artifacts is sample-dependent, as can be seen in the reconstructed images of three samples at the same detector row: (a) Sample 1; (b) Sample 2; (c) Sample 3.

3.2 Classification of the types of stripe artifacts

The fact that the occurrence of stripe or ring artifacts depends on the sample makes it challenging to design a generic approach to remove them. Furthermore, there are many types of stripe artifacts which may require different methods of treatment. The use of pre-processing techniques such as distortion correction [5] or phase retrieval [18] blurs and enlarges these stripes, making it even more challenging to clean them. It is important to understand the physical causes of the stripe artifacts and to classify them; this helps to tackle the problem most efficiently. By comparing the intensity profile, i.e. the plot of the measured intensities against the projection angles, of a defective pixel with that of an adjacent non-defective pixel, we are able to classify stripe artifacts into four different types.

Full stripe

A typical profile of a full stripe exhibits intensities that are offset at all angles compared with those of a neighboring good pixel (Fig. 13(a,b)). It gives rise to a half-ring artifact (in 180-degree tomography) in the reconstructed image (Fig. 13(c)).

Figure 13. Demonstration of a full stripe artifact: (a) Stripe in the sinogram (arrowed); (b) Intensity profile along the stripe (red) in comparison with an adjacent good pixel (blue); (c) Ring artifact in the reconstructed image.

Partial stripe

Unlike the full stripe, the intensities of a partial stripe are offset only in certain ranges of angles, as demonstrated in Fig. 14. As a result, it gives rise to a partial ring artifact in the reconstructed image (Fig. 14(c)).

Figure 14. Demonstration of a partial stripe artifact: (a) Partial stripe in the sinogram (arrowed); (b) Intensity profile along the stripe (red) in comparison with an adjacent good pixel (blue); (c) Part-ring artifact in the reconstructed image.

Unresponsive stripe

The intensities of this type of stripe are independent of the angle: the pixel does not respond to the variation of intensity with angle in the way its neighboring good pixels do. Such stripes may come from dead pixels of the sensor chip, light-blocking dust, or a damaged scintillator (bright blobs in Fig. 9(a)), giving rise to stripes of constant brightness as clearly visible in Fig. 15(a). The missing information in these stripes strongly affects the reconstructed image: the constant intensity results in a prominent half-ring, and the high-frequency edges of the stripes cause severe streak artifacts (Fig. 15(c)).

Figure 15. Demonstration of an unresponsive stripe artifact: (a) Unresponsive stripe in the sinogram (red arrows); (b) Intensity profile along the stripe (red) in comparison with an adjacent good pixel (blue); (c) Ring artifact (red arrows) and streak artifacts (yellow arrows) in the reconstructed image.

Fluctuating stripe

The intensities fluctuate significantly between angles. This type of stripe may come from defective pixels of the sensor chip rather than the optical components. They are extremely small in number and their size is 1 or 2 pixels, as observed in our detector systems [1]. Like unresponsive stripes, they give rise to both ring artifacts and streak artifacts, as demonstrated in Fig. 16.

Figure 16. Demonstration of a fluctuating stripe artifact: (a) Fluctuating stripe in the sinogram (zoomed in and arrowed); (b) Intensity profile along the stripe (red) in comparison with an adjacent good pixel (blue); (c) Ring artifact and streak artifacts in the reconstructed image.

Other types of stripe

There are other types of stripes which are combinations of the types defined above, such as a full stripe combined with partial stripes, or extensions of them, such as large stripes or blurry stripes. Blurry stripes may come from the use of pre-processing methods such as phase retrieval or distortion correction. Large stripes may need separate treatment to reduce the side effects of cleaning methods.

3.3 Correction methods and results

A number of methods for removing ring artifacts in sinogram space (i.e. removing stripe artifacts) have been proposed. They are mainly used for full and partial stripes and can be classified into two categories: real-space methods [19-21] and Fourier-space methods [22,23]. Real-space methods rely on the simple assumption that stripe artifacts are mainly of the full type, i.e. that the intensity offsets of these stripes are constant. Different methods use different ways of calculating these offsets, but they mainly use the average of intensities along the angular direction of a sinogram to detect stripes. These assumptions limit such methods to full stripe artifacts, and they may produce extra stripe artifacts for certain sample geometries (Fig. 17(b)). An improvement to tackle partial stripes, dividing the sinogram into many chunks of rows, may introduce void-center artifacts as shown in Fig. 17(c).

Figure 17. Extra artifacts produced by well-known methods: (a) Reconstructed image without ring removal; (b) Extra ring artifacts caused by the regularization-based method [20]; (c) Void-center artifact introduced by an over-adjusted parameter of the regularization-based method; (d) Extra ring artifacts introduced by the wavelet-FFT-based method [23]; (e) Void-center artifact introduced by the wavelet-FFT-based method.

Fourier-based methods use the assumption that stripe artifacts correspond to high-frequency components in the Fourier domain and can therefore be removed by damping these components. Clearly, this approach risks damping other features in the sinogram which also correspond to high-frequency components, giving rise to extra artifacts, as can be seen in Fig. 17(d). These methods can also produce void-center artifacts (Fig. 17(e)) if the damping parameters are over-adjusted.

There is another class of methods working in reconstruction space [24]. They are mainly used for cone-beam tomography, where the reconstruction is performed without the sinogram generation used in parallel-beam tomography. These methods work well on data acquired using a 360-degree scan, which is often avoided in parallel-beam tomography due to data redundancy, and they may not work well on unresponsive and fluctuating stripes, which give rise not only to ring artifacts but also to streak artifacts.

We introduced a new class of methods which remove stripe artifacts by equalizing the responses of neighboring pixels (dubbed equalization-based methods). Depending on the way the underlying responses are retrieved, different methods were proposed [8]. For cleaning other types of stripes and reducing the side effects of equalization-based methods, we also introduced a robust, easy-to-use method of locating stripes.

3.3.1 Equalization-based methods

Figure 18 shows three different ways of retrieving the underlying response curve of each pixel. In the first approach, each column of the sinogram (Fig. 18(a)) is fitted to a polynomial, as shown in Fig. 18(d). In the second approach, the low-frequency components of each column are separated (Fig. 18(b,e)). In the last approach, the grayscale values in each column are sorted (Fig. 18(c,f)). These underlying curves reveal the difference in response of adjacent pixels which is the cause of stripe artifacts. Stripe artifacts are removed by correcting this difference, which can be done by applying a smoothing filter along the horizontal direction of the retrieved images.

Figure 18. Demonstration of different ways of retrieving the underlying response curve of each pixel: (a) and (d) Original sinogram and the result of fitting each column to a second-order polynomial; (b) and (e) Original sinogram and the result of separating the low-frequency components of each column; (c) and (f) Original sinogram and its column-sorted intensities.

From the smoothed images, the stripe-cleaned images are obtained differently for each approach. In the first approach, called the fitting-based approach, the corrected sinogram is obtained by multiplying the original sinogram by the smoothed image and dividing by the fitted sinogram. In the second approach, called the filtering-based approach, the corrected sinogram is the sum of the smoothed image and the high-frequency components of the original sinogram. In the last approach, called the sorting-based approach, simply re-sorting the smoothed image returns the cleaned sinogram. Details of these implementations and Python source code can be found in Ref. [8] and at https://github.com/nghia-vo/sarepy. The effect of applying these removal methods can clearly be seen in the reconstructed images before and after correction (Fig. 19).
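A minimal sketch of the sorting-based approach is given below, assuming a 2D sinogram with projection angles along the rows; the median-filter window size is an illustrative choice.

```python
# Sorting-based stripe removal: sort each column, smooth across columns with
# an edge-preserving filter, then return the values to their original order.
import numpy as np
from scipy.ndimage import median_filter

def remove_stripe_sorting(sinogram, size=21):
    sino = np.asarray(sinogram, dtype=float)
    idx = np.argsort(sino, axis=0)                      # sorting order per column
    sort_sino = np.take_along_axis(sino, idx, axis=0)
    smooth = median_filter(sort_sino, size=(1, size))   # smooth along rows only
    out = np.empty_like(sino)
    np.put_along_axis(out, idx, smooth, axis=0)         # re-sort to original order
    return out
```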

Figure 19. Reconstructed images before and after correction: (a) and (d) Without and with the fitting-based approach; (b) and (e) Without and with the filtering-based approach; (c) and (f) Without and with the sorting-based approach.

Each of these approaches has its own pros and cons, as summarized in Table 1. However, we can combine them in different ways to achieve superior approaches; this is presented in Section 3.3.3.

Table 1. Summary of the pros and cons of each technique.

Fitting-based approach
  Pros: There are many choices of smoothing filters. Strong filtering is possible, which helps to remove blurry or low-frequency stripe artifacts.
  Cons: Limited to sinograms with a low dynamic range of intensities, i.e. whose low-frequency components can be fitted by a low-order polynomial. Can yield extra stripe artifacts if there are sharp jumps in intensity.

Filtering-based approach
  Pros: Does not yield extra stripe artifacts.
  Cons: Limited to the use of edge-preserving smoothing filters. Can yield void-center artifacts. Can result in streak artifacts.

Sorting-based approach
  Pros: Works very well for removing partial stripes. Does not yield extra stripe artifacts or void-center artifacts.
  Cons: Limited to the use of edge-preserving smoothing filters. Can cause streak artifacts.

3.3.2 Detection of stripe artifacts

The above techniques are able to remove full and partial stripe artifacts of small or medium size without significantly affecting other areas. However, for large stripes, applying a stronger filter to the whole sinogram would degrade the final image. Furthermore, large stripes are few in number, so the correction can be applied selectively to the defective pixels only. To do that, we propose a segmentation algorithm which works on a 1D array to locate stripes. Depending on the type of stripe artifact, different pre-processing steps are used to generate the 1D array, which is then input to the segmentation algorithm. The steps of this algorithm [8], called SFTS (sorting-, fitting-, and thresholding-based segmentation) for short, are described below and demonstrated in Fig. 20.

  • Step 1: Sort the values of the 1D array in ascending order.

  • Step 2: Apply a linear fit, with respect to the array indices, to the values around the middle of the sorted array, i.e. half of the array size.

  • Step 3: Calculate the upper threshold (TU) and the lower threshold (TL) using the following formulas:

    If (F0 - I0)/(F1 - F0) > R,

    $$ T_L = F_0 - 0.5\,R\,(F_1 - F_0) \quad (7) $$

    If (I1 - F1)/(F1 - F0) > R,

    $$ T_U = F_1 + 0.5\,R\,(F_1 - F_0) \quad (8) $$

    where I0 and I1 are the minimum and maximum values of the sorted array; F0 and F1 are the fitted values at the first and last index of the array; and R is a user-controlled value.

  • Step 4: Binarize the array by replacing all values between TL and TU with 0 and others with 1.

R can be understood as a signal-to-noise ratio which controls the sensitivity of the algorithm: a smaller R makes the stripe detection more sensitive. A reasonable choice of R is around 3.0. The algorithm can also be used as a general binarization method. It is convenient for users because one does not need to know the absolute values of the array.
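A hedged sketch of the SFTS algorithm, following the steps above with the thresholds of Eqs. (7) and (8), is given below; the fraction of the array used for the fit follows the half-array choice in Step 2.

```python
# SFTS: sort, fit the middle half, threshold, and binarize a normalized
# 1D array (one entry per detector column); 1 marks stripe locations.
import numpy as np

def detect_stripe_sfts(vals, R=3.0):
    vals = np.asarray(vals, dtype=float)
    n = vals.size
    srt = np.sort(vals)                                  # step 1
    x = np.arange(n)
    ndrop = n // 4                                       # keep the middle half
    slope, icept = np.polyfit(x[ndrop:n - ndrop], srt[ndrop:n - ndrop], 1)
    f0, f1 = icept, icept + slope * (n - 1)              # fitted end values
    i0, i1 = srt[0], srt[-1]                             # extreme values
    noise = max(abs(f1 - f0), 1e-9)
    t_low = f0 - 0.5 * R * noise if (f0 - i0) / noise > R else i0  # Eq. (7)
    t_up = f1 + 0.5 * R * noise if (i1 - f1) / noise > R else i1   # Eq. (8)
    return ((vals < t_low) | (vals > t_up)).astype(np.uint8)       # step 4
```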

Figure 20. Demonstration of the detection algorithm: (a) Normalized 1D array, i.e. with the non-uniform background corrected; (b) Sorted array and the line fitted to the middle part of the sorted array.

3.3.3 Combination of techniques for tackling all types of stripe artifacts

From the equalization-based techniques and the SFTS algorithm, we can derive various algorithms for removing different types of stripe artifacts.

Removal of large stripes

Large stripes (Fig. 21(a)) may come from partially defective regions (Fig. 10(b)) or from areas adjacent to damaged parts of the scintillator which receive extra scattered light, i.e. the so-called halo effect (Fig. 9(a)). To detect them from the sinogram using the SFTS algorithm, the pre-processing steps are:

  • 1 - Sort intensities in each column of the sinogram (Fig. 21(b)).

  • 2 - Apply a strong median filter along each row to remove the stripes (Fig. 21(c)).

  • 3 - Average along the columns of the sorted sinogram, dropping some percentage of pixels at the top and bottom. This simple technique reduces the chance of wrongly detecting stripes caused by high-frequency edges of the sinogram (Fig. 21(a)). It can also be used to improve other ring removal methods.

  • 4 - Do the same for the smoothed sinogram.

  • 5 - Divide the result of step 3 by the result of step 4 to get the normalized 1D array.

  • 6 - Use the SFTS algorithm to get the locations of the stripes.

Then the large stripes are removed as follows (a combined code sketch is given after the list):

  • 1 - Normalize each row of the sinogram using the result of step 5 of the pre-processing steps (Fig. 22(b)). This step helps to correct the non-uniform background around the large stripes (Fig. 22(a)).

  • 2 - Apply the sorting-based algorithm with a strong filter to remove the large stripes.

  • 3 - Replace the intensities in the stripes of the normalized sinogram with those of the result of step 2 (Fig. 22(c)).
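A hedged sketch combining the detection and removal steps for large stripes is given below; it reuses detect_stripe_sfts from the previous section, and the drop percentage and filter size are illustrative.

```python
# Large-stripe removal: detect stripes from the ratio of the trimmed column
# averages (pre-processing steps 3-5), normalize, then substitute
# sorting-based smoothed values inside the detected stripes only.
import numpy as np
from scipy.ndimage import median_filter

def remove_large_stripe(sinogram, R=3.0, size=51, drop_ratio=0.1):
    sino = np.array(sinogram, dtype=float)
    nrow = sino.shape[0]
    ndrop = int(0.5 * drop_ratio * nrow)
    sort_sino = np.sort(sino, axis=0)                        # step 1
    smooth = median_filter(sort_sino, size=(1, size))        # step 2
    list1 = np.mean(sort_sino[ndrop:nrow - ndrop], axis=0)   # step 3
    list2 = np.mean(smooth[ndrop:nrow - ndrop], axis=0)      # step 4
    factor = list1 / np.clip(list2, 1e-9, None)              # step 5
    mask = detect_stripe_sfts(factor, R) > 0                 # step 6
    sino = sino / factor                                     # removal step 1
    idx = np.argsort(sino, axis=0)                           # removal step 2:
    srt = np.take_along_axis(sino, idx, axis=0)              # sorting-based,
    cleaned = np.empty_like(sino)                            # strong filter
    np.put_along_axis(cleaned, idx, median_filter(srt, size=(1, size)), axis=0)
    sino[:, mask] = cleaned[:, mask]                         # removal step 3
    return sino
```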

Figure 21. Demonstration of the pre-processing steps used for detecting large stripes: (a) Original sinogram with large stripes and high-frequency edges (arrowed); (b) Average along the columns of the sorted sinogram inside the blue box, avoiding the false detection of stripes caused by the edges in (a); (c) Same as (b) but on the smoothed sinogram.

Figure 22. Results of removing large stripes: (a) Reconstructed image without ring removal; (b) Reconstructed image after the normalization step; (c) Final reconstructed image.

Removal of unresponsive stripes and fluctuating stripes

Unresponsive stripes and fluctuating stripes (Fig. 23(a)) give rise to both ring artifacts and streak artifacts in the reconstructed image. Their intensity profiles show opposite characteristics: the unresponsive stripe shows very little variation (Fig. 15(b)) while the fluctuating stripe shows excessive variation (Fig. 16(b)). Exploiting these features, we can detect them all together with the SFTS algorithm using the following pre-processing steps:

  • 1 - Apply a strong mean filter along each column of the sinogram (Fig. 23(b)).

  • 2 - Take absolute values of the difference between the result of step 1 and the original sinogram (Fig. 23(c)).

  • 3 - Average along each column of the result of step 2, resulting in a 1D array.

  • 4 - Apply a strong median filter to the result of step 3.

  • 5 - Divide the result of step 3 by the result of step 4 to get the normalized 1D array.

  • 6 - Use the SFTS algorithm to get the locations of the stripes.

Then the stripes are removed as follows (see the sketch after this list):

  • 1 - Interpolate values inside the stripes from neighboring pixels.

  • 2 - Apply the large-stripe removal algorithm to the result of step 1. This step is needed because the intensities around the unresponsive stripes are modulated by the scattered light, resulting in large stripes.
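A hedged sketch of this detection-and-interpolation routine is given below; the filter sizes are illustrative, and the large-stripe pass (removal step 2) is applied afterwards, as in the combined pipeline shown in the next section.

```python
# Unresponsive/fluctuating stripe removal: locate stripes from the normalized
# column-wise variation (steps 1-6), then interpolate across them.
import numpy as np
from scipy.ndimage import uniform_filter1d, median_filter

def remove_dead_stripe(sinogram, R=3.0, size=51):
    sino = np.array(sinogram, dtype=float)
    smooth = uniform_filter1d(sino, 10, axis=0)   # step 1: mean filter along columns
    diff = np.abs(sino - smooth)                  # step 2
    prof = np.mean(diff, axis=0)                  # step 3
    back = median_filter(prof, size)              # step 4
    norm = prof / np.clip(back, 1e-9, None)       # step 5
    mask = detect_stripe_sfts(norm, R) > 0        # step 6
    xs = np.arange(sino.shape[1])
    if mask.any() and (~mask).any():
        for row in sino:                          # removal step 1: interpolate
            row[mask] = np.interp(xs[mask], xs[~mask], row[~mask])
    return sino
```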

Figure 23. Demonstration of the pre-processing steps used for detecting unresponsive and fluctuating stripes: (a) Original sinogram with both an unresponsive stripe (right arrows) and a fluctuating stripe (left arrows); (b) Smoothed sinogram after a strong mean filter along each column; (c) Absolute difference between (a) and (b).

Figure 24. Results of removing the unresponsive and fluctuating stripes: (a) Reconstructed image without ring removal; (b) Reconstructed image after the interpolation step; (c) Final reconstructed image.

Combination of algorithms for removing all types of stripes and reducing extra artifacts

As demonstrated in the previous sections, no single method can remove all types of stripe artifacts. Each method has its own advantages and disadvantages and only works well on certain types of stripes. As a result, we have to combine them to remove all types of stripes and reduce the extra artifacts they may cause. An efficient combination removes unresponsive and fluctuating stripes first, then large stripes, and finally the smaller stripes of the full and partial types. The removal of large stripes needs to be performed after removing the unresponsive and fluctuating stripes, as explained in Fig. 24. Methods for removing smaller stripes need to be employed last because they may enlarge large stripes.
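A short sketch chaining the previous sketches in this order is given below; the window sizes are illustrative.

```python
# Combined removal: unresponsive/fluctuating stripes first, large stripes
# next, small and medium full/partial stripes last.
def remove_all_stripe(sinogram, R=3.0, large_size=61, small_size=21):
    sino = remove_dead_stripe(sinogram, R, large_size)
    sino = remove_large_stripe(sino, R, large_size)
    return remove_stripe_sorting(sino, small_size)
```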

3.3.4 Applications of the sorting-based method

In our approaches, the sorting technique has proven its efficiency in tackling partial stripes, detecting stripes, and reducing extra artifacts. Indeed, it can be combined with other ring removal methods to improve their efficiency. In the filtering-based method, the sorting-based method can be applied to the low-frequency components of the sinogram columns (Fig. 18(e)). This combination helps to remove the void-center artifacts introduced by the filtering-based approach and to reduce the streak artifacts introduced by the sorting-based approach. In the fitting-based approach, the sorting technique can be used before the fitting step: the sorted intensity profiles can easily be fitted to polynomial functions, which extends the application of the fitting-based method to more complex sinograms.

Well-known methods can also be improved by using the average along the columns of the sorted sinogram with some percentage of pixels at the top and bottom dropped (Fig. 21(b)); this helps to reduce the extra stripe artifacts in the normalization-based and regularization-based methods. Applying stripe removal methods to the sorted sinogram helps to remove void-center artifacts (Fig. 17(c) and (e)).

Here we present two more applications of the sorting-based technique which have not been introduced before.

Removing stripe artifacts using a 2D smoothing filter and the sorting-based technique

In the filtering-based method introduced in Section 3.3.1, the filtering step which separates the low-frequency component from the high-frequency component is applied to the intensity profile of each sinogram column, and stripes are then removed by smoothing across the low-frequency components of the columns. Here we introduce another way (demonstrated in Fig. 25) in which a strong 2D smoothing filter is used to separate the two frequency components of the sinogram, and the stripes in the high-frequency component are removed by the sorting-based method. This approach uses the same assumption as the FFT-based methods, namely that stripe artifacts correspond to high-frequency components. The advantage is that it does not yield void-center artifacts.
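A minimal sketch of this variant, reusing remove_stripe_sorting from Section 3.3.1 and assuming a Gaussian filter as the strong 2D smoother, is given below.

```python
# 2D-filter variant: separate frequency components with a strong 2D filter,
# clean the stripes in the high-frequency part by sorting, then recombine.
import numpy as np
from scipy.ndimage import gaussian_filter

def remove_stripe_2d_filter_sorting(sinogram, sigma=10, size=21):
    sino = np.asarray(sinogram, dtype=float)
    low = gaussian_filter(sino, sigma)        # low-frequency component
    high = sino - low                         # stripes live in this part
    return low + remove_stripe_sorting(high, size)
```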

Figure 25. Demonstration of another method of removing stripe artifacts, using a 2D smoothing filter and the sorting-based technique: (a) Low-frequency component of the sinogram in Fig. 18(c); (b) High-frequency component of the sinogram in Fig. 18(c), in which stripe artifacts are enhanced (arrowed); (c) Stripe-removed version of (b) using the sorting-based technique; (d) Combination of (a) and (c).

Removing ring artifacts in cone-beam tomography

In a cone-beam geometry, the projections of a sample slice at different angles are not confined to a single row of the detector. As a result, reconstruction is performed directly, without the intermediate step of sinogram creation. Furthermore, partial rings are more common in the reconstructed image because the projections of a point in the reconstruction space do not always stay inside the defective areas. As there are no sinograms, methods of removing ring artifacts that work on the reconstructed images are reasonable choices. However, here we demonstrate how the sorting-based method, which works in the projection domain, can be used to remove ring artifacts in the reconstructed image of a cone-beam system [25].

The idea is very simple: the steps of the sorting-based method are applied to the series of 2D projections, i.e. sorting the intensities of each pixel along the angular direction; applying a 2D edge-preserving smoothing filter (e.g. the median filter) to each sorted image; and re-sorting the smoothed images to the original positions to get the corrected images. This routine equalizes the responses of neighboring pixels in the projection domain, which removes ring artifacts in the reconstruction domain. The algorithm is quite memory-consuming because the indices of a 3D array need to be kept for the re-sorting step. Results of this approach in comparison with the wavelet-FFT approach [23], applied to the polar-coordinate transformation of the reconstructed image, are shown in Fig. 26 and Fig. 27. As can be seen in these images, our approach not only removes all full and partial rings but also leaves no artifacts along the vertical center of the reconstructed volume.
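A hedged sketch of this 3D routine is given below, assuming the projections are stacked as (angle, row, column); keeping the full index array for the re-sorting step is what makes the method memory-hungry.

```python
# Cone-beam ring removal in the projection domain: per-pixel angular sorting,
# 2D median filtering of each sorted image, and re-sorting.
import numpy as np
from scipy.ndimage import median_filter

def remove_rings_cone_beam(projs, size=21):
    projs = np.asarray(projs, dtype=float)
    idx = np.argsort(projs, axis=0)                   # per-pixel angular sort
    srt = np.take_along_axis(projs, idx, axis=0)
    smooth = np.stack([median_filter(img, size) for img in srt])
    out = np.empty_like(projs)
    np.put_along_axis(out, idx, smooth, axis=0)       # restore original order
    return out
```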

Figure 26. Reconstructed image from a cone-beam system: (a) Without ring removal; (b) Using the wavelet-FFT approach, with some partial rings left (arrowed); (c) Using the sorting-based approach in 3D.

Figure 27. xz-slice of the 3D reconstructed volume: (a) Without ring removal; (b) Using the wavelet-FFT approach, where artifacts are left in the middle; (c) Using the sorting-based approach in 3D.

Python implementations of the presented methods can be found at https://github.com/nghia-vo/sarepy. These approaches are also available in other tomographic software packages such as Tomopy [26] and Savu [27].

4. CORRECTION OF THE SCATTERING OF SCINTILLATION PHOTONS

4.1 Problem

Due to the scattering of scintillation photons, the response of a pixel depends on how much light it receives from adjacent areas. The assumption of linearity is still valid if the photon scattering is uniform; however, this condition is dictated by the sample shape and absorption characteristics. By investigating the interface areas between the sample and free space in the flat-field-corrected projections, we can clearly see the effect of the scintillation scattering: it causes dark halos around the sample. As shown in Fig. 28(a) and (b), which are rescaled versions of Fig. 11(a) and (c) respectively, the impact of the photon scattering on the linearity is more profound for the strongly absorbing sample (arrowed in Fig. 28(c)). The areas of free space closer to the sample (arrows in Fig. 28) are darker because they receive less scattered light. Note that the bright edges in Fig. 28 come from phase-contrast enhancement.

Figure 28. Rescaled images to visualize the interface areas between the sample and free space: (a) Same sample as shown in Fig. 11(a); (b) Same sample as shown in Fig. 11(c); (c) Intensity profiles along the red lines in (a) and (b).

One significant effect of this phenomenon on the reconstructed image is that the occurrence of ring artifacts is sample-dependent, as shown in Fig. 12. Another effect is that the accuracy of measuring linear attenuation coefficients from the reconstructed image is significantly reduced. To demonstrate this, we measured the linear attenuation coefficients from tomographic images of two samples, one made of aluminum spheres and the other of titanium spheres, at 53 keV. The Ti spheres are larger in diameter, 3 mm compared with 1 mm for the Al spheres (Fig. 29), to enhance the dark halo effect. The linear attenuation coefficient of the Al material obtained from the reconstructed image (Fig. 30(a)) is 0.091 mm⁻¹, which is very close to the calculated value [28] of 0.0902 mm⁻¹. However, the measured value of 0.383 mm⁻¹ for the Ti material (Fig. 30(c)) is significantly different from the calculated value of 0.469 mm⁻¹.

Figure 29. Projections of the samples: (a) Al spheres; (b) Ti spheres.

Figure 30. Reconstructed images used for measuring the attenuation coefficients: (a) Al material; (b) Line profile along the red line in (a); (c) Ti material; (d) Line profile along the red line in (c).

If the sample is a strong absorber of X-rays, the effect of the scintillation-photon scattering on the reconstructed images is similar to the beam-hardening artifact, a well-known issue when using a polychromatic X-ray source, as can be seen in Fig. 31. Furthermore, the scattered light modulates the background of an image, which affects the image visibility and reduces the effective resolution of the detecting system.

Figure 31. Artifact from a strongly absorbing sample: (a) Reconstructed image; (b) Line profile along the red line in (a).

4.2 Correction method

As demonstrated in the previous section, the scattering of scintillation photons has profound but hidden effects on the quality of the detecting system. Using structured scintillators can help, but it is challenging and costly to fabricate them at micrometer resolution. Very few digital approaches to tackling the problem can be found; there is one in Ref. [11], but the authors did not describe their approach and showed only the results. Here we present our approach, still under development, which directly determines the filter window (kernel) in the Fourier domain to correct the scattering effect. This window may be referred to as the modulation transfer function (MTF) of the scintillator. Given the filter window, an image is corrected, or deconvolved, simply by division in the Fourier domain.
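A minimal sketch of this correction step is given below, assuming the determined MTF is sampled on the same grid as the image with its centre at the middle of the array; the small clipping value guards against division by near-zero frequencies.

```python
# Fourier-domain deconvolution: divide the image spectrum by the MTF window.
import numpy as np

def deconvolve_mtf(image, window, eps=1e-6):
    spectrum = np.fft.fft2(image)
    w = np.fft.ifftshift(window)               # move the MTF centre to (0, 0)
    corrected = np.fft.ifft2(spectrum / np.clip(w, eps, None))
    return np.real(corrected)
```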

Acquiring a calibration image

An ideal calibration image is one that provides information about the intensity profiles inside the dark areas, which are not exposed to X-rays, and sharp edges between the free space and the dark areas. Furthermore, any contribution from diffraction owing to the coherence of the X-rays needs to be removed or minimized. Figure 32 shows some examples of calibration images that can be used to determine the MTF. The X-ray image in Fig. 32(a) is the projection of a long tungsten rod held by a plastic holder. The rod is aligned parallel to the X-rays to block any refracted X-rays from entering the dark area. This type of calibration image allows the 1D MTFs to be determined at different directions in the scintillator plane; they can then be combined to form a 2D MTF. The calibration images in Fig. 32(b) and (c) can be used to determine the x- and y-components of the MTF separately.

Figure 32. Calibration images used for determining the MTF: (a) Projection of a long tungsten rod; (b) Projection of two thick tungsten plates; (c) Projection of a long tungsten plate.

Determining the MTF

Our approach aims to determine the MTF directly, instead of going through the point spread function (PSF) [29], because the MTF is used most efficiently for correcting the large number of projections of a tomographic dataset. However, because the effect of the scintillation-photon scattering depends on the quality of a scintillator, such as the uniformity of its thickness, its material distribution, and the scattering direction, it is challenging to design a generic MTF. For simplicity, here we apply two constraints: the MTF is space-invariant and separable.

The filter window (Eq. (9)) is defined as a series expansion, with coefficients ak and order parameters n and N, of a unit function which can be changed to best describe a given optical component; u is a variable in the range [-1; 1]. To determine the ak we use a global optimization technique, the best of multiple estimates (BME) [30], an improvement of the simulated-annealing technique in which the dependency on a cooling scheme is removed. The cost function, evaluated on the intensity profile shown in Fig. 33, combines increasing the slope of the edges, reducing the intensity values in the dark area, and reducing the ringing artifacts caused by the deconvolution in Fourier space [31]. The calculation routine is as follows (a simplified code sketch is given after the list):

  • - Generate random ak.

  • - Calculate the MTF using Eq. (9).

  • - Apply deconvolution to the intensity profile.

  • - Evaluate the cost function from the deconvolved intensity profile.

  • - Repeat the above steps for a number of random estimates of ak.

  • - Use BME, i.e. choose the ak giving the minimum cost function.

  • - Repeat all the above steps until the cost function stagnates or a set number of iterations is reached.
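A heavily simplified sketch of this search is given below. It assumes a user-supplied function build_mtf(coefficients) implementing Eq. (9) and returning a positive window the same length as the measured profile, and it uses a crude proxy cost (residual intensity in the known dark region minus the edge slope) rather than the full cost function described above.

```python
# Best-of-multiple-estimates search for the MTF coefficients; "dark" and
# "edge" are index slices selecting the obscured region and the edge region
# of the (non-flat-field-corrected) intensity profile.
import numpy as np

def bme_search(profile, dark, edge, build_mtf, ncoef=5,
               nguess=50, niter=200, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    best, best_cost = np.zeros(ncoef), np.inf
    for _ in range(niter):
        for _ in range(nguess):                    # multiple random estimates
            trial = best + step * rng.standard_normal(ncoef)
            window = np.clip(build_mtf(trial), 1e-6, None)
            dec = np.real(np.fft.ifft(np.fft.fft(profile) / window))
            cost = (np.mean(np.abs(dec[dark]))             # dark-area residual
                    - np.mean(np.abs(np.diff(dec[edge])))) # edge steepness
            if cost < best_cost:                   # keep the best estimate
                best_cost, best = cost, trial
    return best
```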

Figure 33 shows the results of determining the x-direction MTF of a detecting system using a 17.5 μm-thick scintillator made of GGG:Eu. The calibration image (Fig. 32(a)) was taken at 53 keV. Here we used n = 1/2 and N = 200, and 200 iterations were performed. Note that the calculation routine is applied to the image without flat-field correction. As can be seen in Fig. 33(c), the slope of the edges, which defines the sharpness of the image, is increased, and the exponential decrease of the intensity in the dark area is corrected.

Figure 33. Results of the determination of the MTF: (a) Convergence of the cost function; (b) Determined 1D MTF; (c) Intensity profile before deconvolution (blue) and after deconvolution (red).

The MTF in the y-direction is determined independently, using the vertical intensity profile at the middle of the calibration image in Fig. 32(a). It is then combined with the x-direction MTF to form the 2D MTF (Fig. 34(a)). The deconvolution of the calibration image using this MTF, and the difference image, are shown in Fig. 34(b) and (c).

Figure 34. Deconvolution results using the determined MTF: (a) 2D MTF combined from the x-direction and y-direction MTFs; (b) Deconvolved version of the image in Fig. 32(a); (c) Difference between (b) and Fig. 32(a).

4.3 Results

Using the determined MTF, we applied deconvolution to all projections of a tomographic dataset. As can be seen in Fig. 35, which shows projections of the sample before and after correction, the image contrast and sharpness are improved. More importantly, the dimming artifacts around the interface areas (arrowed) are removed.

Figure 35. Projections of the sample (flat-field correction applied): (a) Before MTF deconvolution; (b) After MTF deconvolution; (c) Rescaled version of (a) showing the dimming artifacts (arrows); (d) Rescaled version of (b) showing no dimming artifacts (arrows).

Reconstructed images before and after deconvolution are shown in Fig. 36, where only flat-field correction is otherwise used. As can be seen, the cupping artifacts caused by the scattering of scintillation photons are removed. The average ratio between the corrected attenuation coefficients and the original ones is close to 1.5 (Fig. 36(b,d)), which explains well the results achieved in Ref. [32].

Figure 36. Reconstruction results: (a) Before deconvolution; (b) Line profile along the red line in (a); (c) After deconvolution; (d) Line profile along the red line in (c).

5. CONCLUSION

In summary, we have presented digital approaches to tackling three major problems of a scintillator-coupled detector in an X-ray micro-tomography system: radial lens distortion, irregular response, and the scattering of scintillation photons. To correct the radial lens distortion, an image of a grid pattern is required; our approach extracts information from this grid pattern to calculate the parameters of the correction model, and the effect of applying distortion correction to tomographic data has been shown. Correcting the irregular response of the detecting system, which causes ring artifacts in reconstructed images, is very challenging because there are many types of responses and their occurrence is sample-dependent. We have provided a number of techniques and different ways of combining them to remove various types of ring artifacts; furthermore, the efficiency of one of these techniques has been demonstrated on data from a cone-beam tomography system. The scattering of scintillation photons is not well known but has many profound effects on the quality of reconstructed images, as demonstrated in Section 4.1. This problem is a major reason why some X-ray phase imaging techniques, which rely on the linearity of an indirect X-ray detector, are still far from daily use at synchrotron facilities. The results of our ongoing research on tackling this problem are promising; however, approaches to determine a more generic MTF with low computational cost are still desirable.

ACKNOWLEDGMENTS

This work was carried out with the support of the Diamond Light Source. We thank Rachel W. Obbard and Philippe Sarrazin, SETI Institute, for providing the data of the cone-beam CT system.

REFERENCES

[1] M. Drakopoulos, T. Connolley, C. Reinhard, R. Atwood, O. Magdysyuk, N. Vo, M. Hart, L. Connor, B. Humphreys, G. Howell, S. Davies, T. Hill, G. Wilkin, U. Pedersen, A. Foster, N. De Maio, M. Basham, F. Yuan, and K. Wanelik, "I12: the Joint Engineering, Environment and Processing (JEEP) beamline at Diamond Light Source," J. Synchrotron Rad. 22(3), 828-838 (2015). https://doi.org/10.1107/S1600577515003513

[2] C. Rau, U. Wagner, Z. Pesic, and A. De Fanis, "Coherent imaging at the Diamond beamline I13," Phys. Status Solidi A 208, 2522-2525 (2011). https://doi.org/10.1002/pssa.v208.11

[3] F. De Carlo and B. Tieman, "High-throughput x-ray microtomography system at the Advanced Photon Source beamline 2-BM," (2004). https://doi.org/10.1117/12.559223

[4] A. A. MacDowell, D. Y. Parkinson, A. Haboub, E. Schaible, J. R. Nasiatka, C. A. Yee, J. R. Jameson, J. B. Ajo-Franklin, C. R. Brodersen, and A. J. McElrone, "X-ray micro-tomography at the Advanced Light Source," 850618 (2012). https://doi.org/10.1117/12.930243

[5] N. T. Vo, R. C. Atwood, and M. Drakopoulos, "Radial lens distortion correction with sub-pixel accuracy for X-ray micro-tomography," Opt. Express 23(25), 32859-32868 (2015). https://doi.org/10.1364/OE.23.032859

[6] F. Remondino and C. Fraser, "Digital camera calibration methods: considerations and comparisons," International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 36(5), 266-272 (2006).

[7] T. Martin and A. Koch, "Recent developments in X-ray imaging with micrometer spatial resolution," J. Synchrotron Rad. 13, 180-194 (2006). https://doi.org/10.1107/S0909049506000550

[8] N. Vo, R. Atwood, and M. Drakopoulos, "Superior techniques for eliminating ring artifacts in X-ray microtomography," Opt. Express 26, 28396-28412 (2018). https://doi.org/10.1364/OE.26.028396

[9] M. Nikl, "Scintillation detectors for x-rays," Meas. Sci. Technol. 17, R37-R54 (2006). https://doi.org/10.1088/0957-0233/17/4/R01

[10] H. G. Chotas, J. T. Dobbins, and C. E. Ravin, "Principles of digital radiography with large-area, electronically readable detectors: A review of the basics," Radiology 210, 595-599 (1999). https://doi.org/10.1148/radiology.210.3.r99mr15595

[11] B. J. Illerhaus, Y. Onel, and J. Goebbels, "Correction techniques for 2D detectors to be used with high-energy x-ray sources for CT, part II," (2004). https://doi.org/10.1117/12.562284

[12] N. T. Vo, M. Drakopoulos, R. C. Atwood, and C. Reinhard, "Reliable method for calculating the center of rotation in parallel-beam tomography," Opt. Express 22(16), 19078-19086 (2014). https://doi.org/10.1364/OE.22.019078

[14] W. Van Aarle, W. J. Palenstijn, J. Cant, E. Janssens, F. Bleichrodt, A. Dabravolski, J. De Beenhouwer, K. J. Batenburg, and J. Sijbers, "Fast and flexible X-ray tomography using the ASTRA toolbox," Opt. Express 24(22), 25129-25147 (2016). https://doi.org/10.1364/OE.24.025129

[15] G. N. Ramachandran and A. V. Lakshminarayanan, "Three dimensional reconstructions from radiographs and electron micrographs: Application of convolution instead of Fourier transforms," Proc. Nat. Acad. Sci. 68, 2236-2240 (1971). https://doi.org/10.1073/pnas.68.9.2236

[16] N. T. Vo, "Python implementation of distortion correction methods for X-ray tomography," Zenodo (2018). https://doi.org/10.5281/zenodo.1322720

[17] D. F. Swinehart, "The Beer-Lambert Law," J. Chem. Educ. 39(7), 333 (1962). https://doi.org/10.1021/ed039p333

[18] D. Paganin, S. C. Mayo, T. E. Gureyev, P. R. Miller, and S. W. Wilkins, "Simultaneous phase and amplitude extraction from a single defocused image of a homogeneous object," J. Microsc. 206, 33-40 (2002). https://doi.org/10.1046/j.1365-2818.2002.01010.x

[19] M. Rivers, "Tutorial Introduction to X-ray Computed Microtomography Data Processing," http://www.mcs.anl.gov/research/projects/X-ray-cmt/rivers/tutorial.html

[20] S. Titarenko, P. J. Withers, and A. Yagola, "An analytical formula for ring artefact suppression in X-ray tomography," Appl. Math. Lett. 23, 1489-1495 (2010). https://doi.org/10.1016/j.aml.2010.08.022

[21] Y. Kim, J. Baek, and D. Hwang, "Ring artifact correction using detector line-ratios in computed tomography," Opt. Express 22, 13380-13392 (2014). https://doi.org/10.1364/OE.22.013380

[22] C. Raven, "Numerical removal of ring artifacts in microtomography," Rev. Sci. Instrum. 69, 2978-2980 (1998). https://doi.org/10.1063/1.1149043

[23] B. Münch, P. Trtik, F. Marone, and M. Stampanoni, "Stripe and ring artifact removal with combined wavelet-Fourier filtering," Opt. Express 17(10), 8567-8591 (2009). https://doi.org/10.1364/OE.17.008567

[24] J. Sijbers and A. Postnov, "Reduction of ring artefacts in high resolution micro-CT reconstructions," Phys. Med. Biol. 49(14), N247-N253 (2004). https://doi.org/10.1088/0031-9155/49/14/N06

[25] P. Sarrazin, R. Obbard, N. T. Vo, K. Zacny, N. Hinman, and D. Blake, "Planetary in-situ microCT analysis of rock samples," in Astrobiology Science Conference (2019).

[26] D. Gürsoy, F. De Carlo, X. Xiao, and C. Jacobsen, "Tomopy: a framework for the analysis of synchrotron tomographic data," J. Synchrotron Rad. 21(5), 1188-1193 (2014). https://doi.org/10.1107/S1600577514013939

[27] N. Wadeson and M. Basham, "Savu: a Python-based, MPI framework for simultaneous processing of multiple, N-dimensional, large tomography datasets," (2016). https://arxiv.org/abs/1610.08015

[28] M. S. Seltzer, "Calculation of Photon Mass Energy-Transfer and Mass Energy-Absorption Coefficients," Radiation Research 136, 147-170 (1993). https://doi.org/10.2307/3578607

[29] K. Rossmann, "Point spread-function, line spread-function, and modulation transfer function: Tools for the study of imaging systems," Radiology 93, 257-272 (1969). https://doi.org/10.1148/93.2.257

[30] N. T. Vo, M. B. H. Breese, and H. O. Moser, "Feasibility of Simulated Annealing Tomography," (2014). https://arxiv.org/abs/1411.4622

[31] G. Wolberg, Digital Image Warping, IEEE Computer Society, Los Alamitos, CA (1990).

[32] M. J. Pankhurst, N. T. Vo, A. R. Butcher, H. Long, H. Wang, S. Nonni, J. Harvey, G. Gudfinnsson, R. Fowler, R. Atwood, R. Walshaw, and P. D. Lee, "Quantitative measurement of olivine composition in three dimensions using helical-scan X-ray micro-tomography," American Mineralogist 103, 1800-1811 (2018). https://doi.org/10.2138/am-2018-6419