Terahertz (THz) time-domain spectroscopy is an attractive tool for analyzing chemical composition. Traditional methods for the identification and quantitative analysis of chemical compounds by THz spectroscopy are all based on full-spectrum data. However, the intrinsic features of a THz spectrum lie only in its absorption peaks, because the rest of the spectrum is affected by disturbances such as unexpected components, scattering effects, and barrier materials. We propose a strategy that uses the Lorentzian parameters of THz absorption peaks, extracted by a multiscale linear fitting method, both for the identification of pure chemicals and for the quantitative analysis of mixtures. The multiscale linear fitting method automatically removes the background and accurately determines the Lorentzian parameters of the absorption peaks. The high recognition rate for 16 pure chemical compounds and the accurate predicted concentrations for theophylline-lactose mixtures demonstrate the practicality of our approach.
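As a hedged illustration only (not the authors' multiscale linear fitting code), the sketch below shows how a single THz absorption peak sitting on an assumed linear background can be reduced to Lorentzian parameters; the peak position, width, and background coefficients are invented values.

```python
import numpy as np

def lorentzian(f, amp, f0, gamma):
    """Lorentzian line shape: amp * gamma^2 / ((f - f0)^2 + gamma^2)."""
    return amp * gamma**2 / ((f - f0)**2 + gamma**2)

# synthetic absorption spectrum: one peak on a linear baseline (assumed values)
f = np.linspace(0.2, 3.0, 1401)                  # frequency grid in THz
spectrum = lorentzian(f, 1.0, 1.37, 0.05) + 0.10 + 0.05 * f

# fit the background from the peak-free wings, then subtract it
wings = np.abs(f - f[np.argmax(spectrum)]) > 0.3
slope, intercept = np.polyfit(f[wings], spectrum[wings], 1)
peak = spectrum - (slope * f + intercept)

# Lorentzian parameters read off the background-removed peak
f0_est = f[np.argmax(peak)]                      # centre frequency
half = peak.max() / 2.0
inside = f[peak >= half]
fwhm_est = inside[-1] - inside[0]                # FWHM of a Lorentzian = 2 * gamma
```

A real spectrum has several peaks and a non-linear background, which is where a multiscale fit earns its keep; this sketch only shows the peak-parameter idea.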
This paper discusses the development of a data acquisition and control system (DACS) for a portable terahertz time-domain spectrometer (THz-TDS). In this system, a field-programmable gate array (FPGA) serves as the main control unit (MCU), controlling and coordinating the other functional modules of the spectrometer, including the linear delay stage, the high-voltage modulation, and the A/D converter. A digital lock-in amplifier implemented within the FPGA recovers the weak THz signal. USB is used as the communication interface between the FPGA and the computer, carrying both commands and THz waveform data. The spectrometer can scan a waveform within a 30-ps time window in about 10 seconds, with a spectral resolution better than 50 GHz and a dynamic range of up to 49 dB.
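The paper's lock-in runs in FPGA fabric, but the dual-phase demodulation principle it relies on can be sketched in a few lines; the sampling rate, reference frequency, signal amplitude, and noise level below are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f_ref = 100_000.0, 1_000.0      # sample rate and modulation frequency (assumed)
t = np.arange(100_000) / fs         # one second of samples

# weak signal at the reference frequency, buried in broadband noise
amp, phase = 0.01, 0.6
samples = amp * np.sin(2 * np.pi * f_ref * t + phase) + rng.normal(0.0, 0.1, t.size)

# dual-phase demodulation: mix with quadrature references, then low-pass (average)
i = 2.0 * np.mean(samples * np.sin(2 * np.pi * f_ref * t))
q = 2.0 * np.mean(samples * np.cos(2 * np.pi * f_ref * t))
amp_est, phase_est = np.hypot(i, q), np.arctan2(q, i)
```

Averaging over many reference periods is what rejects noise away from `f_ref`; in hardware the mean is replaced by a running low-pass filter.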
Continuous-wave (CW) terahertz (THz) imaging systems have the advantages of high power, compact structure, and low cost, and have therefore been investigated for widespread applications. In the typical reflection mode of CW imaging, the obtained image is usually degraded by repeated fringes caused by interference. The undesired interference signal originates from reflections at the surfaces of samples and lenses. When a sample is tilted or its surface is uneven, the detected intensity fluctuates even when the same sample lies at different positions. Small or weakly absorbing objects are therefore hard to distinguish. Based on cartoon-texture decomposition, we propose a practical method to restore CW THz reflection images. After decomposition, the fringes and the objects are separated. To preserve edges, sharpening and fusion steps are then applied. The object in the final image is clearly visible with little loss of information.
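The separation idea can be illustrated with a deliberately crude low-pass/residual split (the paper's cartoon-texture decomposition is more sophisticated); the image size, fringe period, and filter width below are assumptions.

```python
import numpy as np

def box_blur(img, k):
    """Separable box filter of width 2*k+1 with edge padding."""
    pad = np.pad(img, k, mode="edge")
    kern = np.ones(2 * k + 1) / (2 * k + 1)
    rows = np.apply_along_axis(np.convolve, 1, pad, kern, mode="same")
    both = np.apply_along_axis(np.convolve, 0, rows, kern, mode="same")
    return both[k:-k, k:-k]

# synthetic reflection image: a smooth "cartoon" ramp plus periodic fringes
n = 64
x = np.arange(n)
cartoon_true = np.tile(x / n, (n, 1))                         # slowly varying content
fringes = np.tile(0.3 * np.sin(2 * np.pi * x / 8.0), (n, 1))  # interference pattern
image = cartoon_true + fringes

# crude decomposition: low-pass -> cartoon, residual -> texture (the fringes)
cartoon = box_blur(image, 4)
texture = image - cartoon
```

Once the oscillatory part is isolated in `texture`, it can be suppressed while the `cartoon` component, which carries the object, is kept and sharpened.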
At present there are three main x-ray imaging modalities for dental clinical diagnosis: radiography, panoramic imaging, and computed tomography (CT). We have developed a new x-ray digital intra-oral tomosynthesis (IDT) system for quasi-three-dimensional dental imaging, which can be seen as an intermediate modality between traditional radiography and CT. In addition to the normal x-ray tube and digital sensor used in intra-oral radiography, IDT has a specially designed mechanical device to perform the tomosynthesis data acquisition. During the scan, the sensor remains stationary inside the patient's mouth while the x-ray tube moves along an arc trajectory with respect to the intra-oral sensor. The projection geometry can therefore be obtained without any additional reference objects, which makes the system easy to accept in clinical practice. We also present a compressed sensing-based iterative reconstruction algorithm for this kind of intra-oral tomosynthesis. Finally, both simulations and experiments were carried out to evaluate the imaging modality and the algorithm. The results show that IDT has the potential to become a new tool for dental clinical diagnosis.
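The compressed-sensing flavour of such a reconstruction can be shown on a toy linear system (this is not the paper's algorithm or geometry): ISTA, a basic iterative soft-thresholding solver, recovers a sparse object from underdetermined measurements. All sizes and the regularization weight are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 30, 60                           # fewer measurements than unknowns
A = rng.normal(size=(m, n))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns
x_true = np.zeros(n)
x_true[[5, 17, 42]] = [1.0, -0.5, 2.0]  # sparse "object"
b = A @ x_true                          # noiseless projections

# ISTA: gradient step on the data term, soft-threshold for the L1 prior
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant of the gradient
lam = 0.01
x = np.zeros(n)
for _ in range(3000):
    g = x - step * A.T @ (A @ x - b)
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
```

In tomosynthesis the matrix `A` would be the system's projection operator and sparsity would typically be imposed in a transform domain rather than on the voxels directly.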
The aim of the present study is to investigate a type of Bayesian reconstruction that uses partial differential equation (PDE) image models as regularization. PDE image models are widely used in image restoration and segmentation. In a PDE model, the image is viewed as the solution of an evolutionary differential equation, and its evolution can be regarded as the descent of an energy functional, which allows us to use PDE models in Bayesian reconstruction. In this paper, two PDE models based on anisotropic diffusion are studied. Both are edge-preserving and denoising, like the popular median root prior (MRP). We combine PDE regularization with an ordered-subsets (OS) accelerated one-step-late (OSL) Bayesian reconstruction algorithm for emission tomography; the OS-accelerated OSL algorithm is more practical than a non-accelerated one. The proposed algorithm is called OSEM-PDE. We validated OSEM-PDE on a Zubal phantom in numerical experiments that include attenuation correction and quantum noise, and compared the results with OSEM and an OS version of MRP reconstruction (OSEM-MRP). OSEM-PDE achieves better results in both bias and variance: the reconstructed images are smoother yet have sharper edges, and are therefore better suited to post-processing such as segmentation, which we verify with a k-means segmentation algorithm. The classic OSEM does not converge, especially under noisy conditions. In our experiments, however, OSEM-PDE benefits from OS acceleration and remains stable and convergent, whereas OSEM-MRP fails to converge.
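A minimal Perona-Malik-style diffusion step, one common form of the edge-preserving anisotropic diffusion the abstract refers to (the specific models studied in the paper may differ), can be sketched as follows; the kappa, lambda, and iteration count are assumed values.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, lam=0.2):
    """Perona-Malik diffusion: smooths flat regions, preserves strong edges."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # forward differences to the four neighbours (replicated border)
        nb = np.vstack([u[:1], u[:-1]]) - u
        sb = np.vstack([u[1:], u[-1:]]) - u
        eb = np.hstack([u[:, 1:], u[:, -1:]]) - u
        wb = np.hstack([u[:, :1], u[:, :-1]]) - u
        # edge-stopping function g(|grad|) = exp(-(|grad|/kappa)^2):
        # near 1 for small differences (noise), near 0 across edges
        u += lam * sum(np.exp(-(d / kappa) ** 2) * d for d in (nb, sb, eb, wb))
    return u

# demo: noisy step edge; diffusion denoises the flats but keeps the step
rng = np.random.default_rng(2)
img = np.zeros((32, 32)); img[:, 16:] = 1.0
noisy = img + rng.normal(0.0, 0.05, img.shape)
smooth = anisotropic_diffusion(noisy, n_iter=30, kappa=0.1, lam=0.2)
```

In OSL reconstruction a step of this kind would supply the prior-gradient term inside each update rather than run as a stand-alone filter.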
Cosmic-ray muon radiography, which offers good penetrability and sensitivity to high-Z materials, is an effective way to detect shielded nuclear materials. The reconstruction algorithm is the key point of this technique. Currently there are two main classes of algorithms: the Point of Closest Approach (POCA) algorithm, which reconstructs directly from track information, and maximum-likelihood estimation, such as the Maximum Likelihood Scattering (MLS) and Maximum Likelihood Scattering and Displacement (MLSD) algorithms proposed by Los Alamos National Laboratory (LANL). MLSD performs better than MLS because it uses both scattering and displacement information, whereas MLS uses scattering information only. To obtain the maximum-likelihood estimate, we propose to use the EM method (MLS-EM and MLSD-EM). To save reconstruction time, we then use the OS technique to accelerate the MLS and MLSD algorithms, with the initial value set to the result of the POCA reconstruction; this yields the Maximum Likelihood Scattering OSEM (MLS-OSEM) and Maximum Likelihood Scattering and Displacement OSEM (MLSD-OSEM) algorithms. Numerical simulations show that MLSD-OSEM is effective and performs better than MLS-OSEM.
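The POCA idea, locating the scattering vertex at the closest approach of the measured incoming and outgoing tracks, can be sketched geometrically (an illustration only, not the LANL implementation); the track endpoints and directions below are made up.

```python
import numpy as np

def poca(p1, d1, p2, d2):
    """Point of closest approach between lines p1 + s*d1 and p2 + t*d2."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    if abs(denom) < 1e-12:              # parallel tracks: no unique vertex
        return None
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    # midpoint of the shortest segment joining the two lines
    return 0.5 * ((p1 + s * d1) + (p2 + t * d2))

# incoming muon along z; outgoing track deflected at the (assumed) vertex (0, 0, 5)
vertex = poca(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
              np.array([1.0, 0.0, 15.0]), np.array([0.1, 0.0, 1.0]))
```

Binning many such vertices, weighted by scattering angle, produces the POCA image that the likelihood-based methods can then take as their initial estimate.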
Beam-hardening is caused by the filtering of a polychromatic X-ray beam by the objects in the scan field. In the industrial field, both the X-ray source and the attenuation characteristics of the materials differ from those in the medical field, so methods that work in medicine cannot give satisfying results here. The authors have developed software named the simulative tomographic machine (STM) platform, which simulates the procedure of high-energy industrial CT (ICT) scanning and also serves as a platform for developing data-processing algorithms. Using the STM platform, this paper presents an efficient correction technique that eliminates beam-hardening artifacts in high-energy ICT. The new algorithm is based on the following facts: the attenuation coefficient of each substance is precisely known; the polychromatic spectrum of the accelerator can be computed with the Monte Carlo (MC) method; and the total photon interaction cross-section of most inspected objects can be treated as constant in the energy region between 1.5 and 9 MeV. The monochromatic projection can be computed from the polychromatic projection with an iterative algorithm, so an artifact-free image can be reconstructed from projections corresponding to high-energy photons only.
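A much-simplified, non-iterative linearization sketch conveys the correction idea (the paper uses an iterative algorithm and an MC-computed spectrum; the two-bin spectrum and coefficients below are invented):

```python
import numpy as np

# assumed two-bin spectrum: relative photon weights and attenuation per energy bin
weights = np.array([0.6, 0.4])
mu = np.array([0.5, 0.2])            # attenuation coefficients in 1/cm (illustrative)
mu_mono = mu[1]                      # reference high-energy coefficient

def poly_projection(L):
    """Polychromatic projection -ln(sum_i w_i exp(-mu_i * L)) for path length L."""
    return -np.log(weights @ np.exp(-np.outer(mu, np.atleast_1d(L))))

# tabulate path length -> polychromatic projection (monotone increasing)
L_tab = np.linspace(0.0, 50.0, 2001)
p_tab = poly_projection(L_tab)

def harden_correct(p_poly):
    """Map a measured polychromatic projection to its monochromatic equivalent."""
    return mu_mono * np.interp(p_poly, p_tab, L_tab)

p_meas = poly_projection(10.0)[0]    # simulated measurement for a 10 cm path
p_corr = harden_correct(p_meas)      # close to mu_mono * 10
```

With a single material this lookup is exact; mixed materials are what force the iterative scheme described in the abstract.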
In this paper we discuss image reconstruction algorithms for super-short-scan fan-beam and cone-beam computed tomography (CT). We propose a new fan-beam filtered back-projection algorithm that achieves exact region-of-interest (ROI) reconstruction if and only if every projection line passing through the ROI intersects the source trajectory, even when the scanning range is smaller than a half-scan. We also show that the algorithm becomes approximate when the projections are truncated, and we extend it to cone-beam reconstruction. We then simulate the algorithm and evaluate its noise properties against other algorithms. Numerical results suggest that the new algorithm is generally less susceptible to data noise and produces fewer artifacts than previous algorithms. In particular, it extends easily and successfully to cone-beam tomography when the source trajectory is a short arc on a single circle or on a helix.
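For orientation, the classic full-scan parallel-beam filtered back-projection pipeline (not the paper's super-short-scan fan-beam algorithm) can be sketched end to end; a disk phantom is used because its sinogram is known analytically, which avoids writing a forward projector. All sizes are assumed.

```python
import numpy as np

n, n_ang, R = 129, 180, 32.0
t = np.arange(n) - n // 2                           # detector coordinate
proj = 2.0 * np.sqrt(np.maximum(R**2 - t**2, 0.0))  # disk sinogram (angle-independent)

# ramp filtering in the Fourier domain (zero-padded to reduce wrap-around)
npad = 2 * n
ramp = np.abs(np.fft.fftfreq(npad))
filt = np.fft.ifft(np.fft.fft(proj, npad) * ramp).real[:n]

# backprojection over half a turn
xs = np.arange(n) - n // 2
X, Y = np.meshgrid(xs, xs)
recon = np.zeros((n, n))
for th in np.linspace(0.0, np.pi, n_ang, endpoint=False):
    s = X * np.cos(th) + Y * np.sin(th)             # detector position of each pixel
    recon += np.interp(s, t, filt)
recon *= np.pi / n_ang                              # angular integration weight
```

Short-scan and super-short-scan algorithms modify the filtering and weighting so that only the rays actually measured are required, which is the contribution the abstract describes.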