We present a computational approach for the fast approximation of nonlinear tomographic reconstruction methods by filtered backprojection (FBP) methods. Algebraic reconstruction algorithms are the methods of choice in a wide range of tomographic applications, yet they require significant computation time, which limits their applicability. We build upon recent work on the approximation of linear algebraic reconstruction methods and extend the approach to the approximation of nonlinear reconstruction methods, which are common in practice. We demonstrate that if a blueprint image is available that is sufficiently similar to the scanned object, our approach can compute reconstructions that approximate iterative nonlinear methods while running at the same speed as FBP.
A new superresolution algorithm is proposed to reconstruct a high-resolution license plate image from a set of low-resolution camera images. The reconstruction methodology is based on the discrete algebraic reconstruction technique (DART), a recently developed reconstruction method. While DART has already been applied successfully in tomographic imaging, it has not yet been transferred to the field of camera imaging. We introduce DART for camera imaging by demonstrating how prior knowledge of the colors of the license plate can be exploited directly during the reconstruction of a high-resolution image from a set of low-resolution images. Simulation experiments show that DART can reconstruct images of superior quality compared to conventional reconstruction methods.
Bias field reduction is a common problem in medical imaging. A bias field usually manifests itself as a smooth intensity variation across the image. The resulting image inhomogeneity is a severe problem for subsequent image processing and analysis techniques such as registration or segmentation. In this paper, we present a fast debiasing technique based on localized Lloyd-Max quantization. The local bias is modelled as a multiplicative field and is assumed to be slowly varying. The method is based on the assumption that the local, undegraded histogram is characterized by a limited number of gray values. The goal is then to find the discrete intensity values such that spreading those values according to the local bias field reproduces the global histogram as well as possible. We show that our method is capable of efficiently reducing even strong bias fields in 3D volumes in only a few seconds.
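The building block of this debiasing step is a Lloyd-Max quantizer applied to local histograms. As a hedged illustration (the function name, initialization, and plain 1-D setting are ours, not the paper's implementation), a basic Lloyd-Max quantizer alternates between placing decision boundaries midway between the current representative levels and moving each level to the centroid of the samples it covers:

```python
import numpy as np

def lloyd_max(samples, k, iters=50):
    """Plain 1-D Lloyd-Max quantizer: find k representative gray
    values minimizing the mean squared quantization error."""
    samples = np.asarray(samples, dtype=float)
    # Initialize the levels evenly over the data range.
    levels = np.linspace(samples.min(), samples.max(), k)
    for _ in range(iters):
        # Decision boundaries lie midway between adjacent levels.
        bounds = (levels[:-1] + levels[1:]) / 2.0
        labels = np.digitize(samples, bounds)
        # Move each level to the centroid of the samples it covers.
        for j in range(k):
            sel = samples[labels == j]
            if sel.size:
                levels[j] = sel.mean()
    return levels

# Two well-separated intensity clusters are recovered exactly:
data = np.concatenate([np.full(50, 10.0), np.full(50, 100.0)])
levels = lloyd_max(data, 2)  # → array([ 10., 100.])
```

In the paper's localized setting the quantizer would run per window, with the recovered discrete levels spread by the estimated multiplicative bias field to match the global histogram.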
Tomography is an important technique for non-invasive imaging, with applications in medicine, materials research
and industry. Tomographic reconstructions are typically gray-scale images that can contain a wide
spectrum of gray levels. Segmentation of these gray-level images is an important step to obtain quantitative
information from tomographic datasets. Thresholding schemes are often used in practice, as they are easy to
implement and use. However, if the tomogram exhibits variations in the intensity throughout the image, it is not
possible to obtain an accurate segmentation using a single, global threshold. Instead, local thresholding schemes
can be applied that use a varying threshold, depending on local characteristics of the tomogram. Selecting the
best local thresholds is not a straightforward task, as local image features (such as the local histogram) often do
not provide sufficient information for choosing a proper threshold.
In this paper, we propose a new criterion for selecting local thresholds, based on the available projection data,
from which the tomogram was initially computed. By reprojecting the segmented image, a comparison can be
made with the measured projection data. This yields a quantitative measure of the quality of the segmentation.
By minimizing the difference between the computed and measured projections, optimal local thresholds can be
determined. Simulation experiments have been performed, comparing the results of our local thresholding approach with
global thresholding. Our results demonstrate that the local thresholding approach yields segmentations that are
significantly more accurate, in particular when the tomogram contains artifacts.
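The projection-based criterion can be sketched in a few lines. In this toy version, row and column sums of a small image stand in for the measured projection data, the threshold is global rather than local, and all function names are illustrative rather than taken from the paper:

```python
import numpy as np

def projections(img):
    """Toy parallel-beam projections: column sums (0°) and row sums (90°)."""
    return np.concatenate([img.sum(axis=0), img.sum(axis=1)])

def projection_error(img, threshold, fg_value):
    """Reproject the thresholded image and compare it with the
    projections of the original tomogram."""
    seg = np.where(img >= threshold, fg_value, 0.0)
    return np.abs(projections(seg) - projections(img)).sum()

def best_threshold(img, candidates, fg_value):
    # The best threshold minimizes the reprojection error.
    return min(candidates, key=lambda t: projection_error(img, t, fg_value))

# A 4x4 tomogram with a bright square, one dimmed foreground pixel,
# and one background speckle:
img = np.zeros((4, 4))
img[1:3, 1:3] = 1.0
img[0, 0] = 0.3   # speckle that a low threshold would pick up
img[1, 1] = 0.7   # dimmed pixel that a high threshold would drop
t = best_threshold(img, [0.2, 0.5, 0.8], fg_value=1.0)  # → 0.5
```

The middle threshold wins because both mistakes, including the speckle and dropping the dimmed pixel, show up directly as discrepancies in the reprojected sums; the local scheme in the paper applies the same criterion with a spatially varying threshold.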
Scientific data files have been increasing in size during the past decades. In the medical field, for instance,
magnetic resonance imaging and computed tomography can yield image volumes of several gigabytes.
While secondary storage (hard disks) increases in capacity and its cost per megabyte falls over the years,
primary memory (RAM) can still be a bottleneck in the processing of huge amounts of data. This represents
a problem for image processing algorithms, which often need to keep in memory the original image and a copy
of it to store the results. Operating systems optimize memory usage with memory paging and enhanced I/O
operations. Although image processing algorithms usually operate on the neighbourhood of a pixel, they follow
pre-determined paths through the image and might not benefit from the memory paging strategies offered by
the operating system, which are general purpose and one-dimensional. With the principles of locality and
pre-determined traversal paths in mind, we developed an algorithm that uses multi-threaded pre-fetching of data
to build a disk cache in memory. Using the concept of a window that slides over the data, we predict the next
block of memory to be read according to the path followed by the algorithm and asynchronously pre-fetch that
block before it is actually requested. While other out-of-core techniques reorganize the original file in order to
optimize reading, we work directly on the original file. We demonstrate our approach in different applications,
each with its own traversal strategy and sliding window structure.
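A minimal sketch of the prefetching idea, assuming a simple sequential traversal (the approach described above predicts blocks along arbitrary pre-determined paths with application-specific sliding-window structures; the class and parameter names here are ours):

```python
import queue
import threading

class SlidingWindowPrefetcher:
    """Read fixed-size blocks of a file on a background thread, so the
    consumer finds the next block already cached in memory."""

    def __init__(self, path, block_size, window=4):
        self.path = path
        self.block_size = block_size
        # Bounded queue: at most `window` blocks are held in memory.
        self.blocks = queue.Queue(maxsize=window)
        threading.Thread(target=self._produce, daemon=True).start()

    def _produce(self):
        with open(self.path, "rb") as f:
            while block := f.read(self.block_size):
                self.blocks.put(block)  # blocks when the window is full
        self.blocks.put(None)           # end-of-file sentinel

    def __iter__(self):
        while (block := self.blocks.get()) is not None:
            yield block
```

Iterating over the object yields the file's blocks in traversal order while the background thread keeps the window filled ahead of the consumer; the bounded queue caps the memory footprint regardless of file size.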
Tomographic reconstructions, which are generally gray-scale images, are often segmented to extract quantitative
information, such as the shape or volume of image objects. At present, segmentation is usually performed by
thresholding. However, the process of threshold selection is somewhat arbitrary and requires human interaction.
In this paper, we present an algorithmic approach for automatically selecting the segmentation thresholds by
using the available tomographic projection data. Assuming that each material (i.e., tissue type) in the sample
has a characteristic, approximately constant gray value, thresholds are computed for which the segmented image
corresponds optimally with the projection data.
Discrete Tomography (DT) deals with the reconstruction of an image from its projections when the image is known to contain only a small number of gray values. Knowledge of this discrete set of gray values can significantly reduce the number of projections required for a high-quality reconstruction. In this paper, a feasibility study is presented of the application of discrete tomography to micro-CT data from a mouse leg, with the aim of studying the structural properties of the trabecular bone. The set of gray values is restricted to only three values: one for the air background, one for the soft tissue, and one for the trabecular bone structure. Reconstructions of the trabecular bone structure are usually obtained by computing a continuous reconstruction. To extract morphometric information from the reconstruction, the image must be segmented into the different tissue types, which is commonly done by thresholding. In the DT approach such a segmentation step is no longer necessary, as the reconstruction already contains a single gray value for each tissue type. Our results show that discrete tomography yields a much better reconstruction of the trabecular bone structure than thresholding a continuous reconstruction computed from the same number of projections.
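As a rough illustration of the DART principle (not the published algorithm, which additionally smooths the boundary estimate and frees a random fraction of the fixed pixels), the following 1-D numpy sketch alternates algebraic SIRT updates with segmentation to the admissible gray values, re-solving only for pixels near a segment boundary:

```python
import numpy as np

def sirt(A, p, x, iters=50):
    """Basic SIRT iterations for the linear system A x = p."""
    col = A.sum(axis=0); col[col == 0] = 1.0
    row = A.sum(axis=1); row[row == 0] = 1.0
    for _ in range(iters):
        x = x + A.T @ ((p - A @ x) / row) / col
    return x

def dart(A, p, grays, outer=10):
    """Minimal DART loop: segment, fix interior pixels, and re-run
    SIRT on the boundary pixels only."""
    def segment(x):
        # Snap each pixel to the nearest admissible gray value.
        return grays[np.argmin(np.abs(x[:, None] - grays[None, :]), axis=1)]

    x = sirt(A, p, np.zeros(A.shape[1]))
    for _ in range(outer):
        seg = segment(x)
        diff = seg[:-1] != seg[1:]          # neighbours with different labels
        bnd = np.zeros(seg.shape, dtype=bool)
        bnd[:-1] |= diff
        bnd[1:] |= diff
        if not bnd.any():
            break
        x = seg.copy()
        # Subtract the fixed pixels' contribution and solve for the rest.
        x[bnd] = sirt(A[:, bnd], p - A[:, ~bnd] @ seg[~bnd], x[bnd], iters=20)
    return segment(x)

# Binary 1-D phantom observed through a toy projection matrix:
true = np.array([0., 0., 1., 1., 1., 0.])
A = np.vstack([np.eye(6), np.ones((1, 6))])
rec = dart(A, A @ true, grays=np.array([0., 1.]))  # recovers `true` exactly
```

In the real tomographic setting the boundary set comes from a 2-D or 3-D pixel neighbourhood and the continuous updates use the actual scanner geometry; the sketch only shows how fixing interior pixels to their discrete values shrinks the system that the algebraic solver has to handle.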