The ability to search for radiation sources is of interest to the Homeland Security community. The hope is to find any radiation source that may pose a reasonable chance of harm in a terrorist act. The best chance of success in search operations generally comes from fielding as many detection systems as possible. In doing so, the hoped-for encounter with a threat source will inevitably be buried in a far larger number of encounters with non-threatening radiation sources commonly used for medical and industrial purposes. The problem then becomes effectively filtering out the non-threatening sources and presenting the human-in-the-loop with a modest list of potential threats. Our approach is to field a collection of detection systems that employ soft-sensing algorithms, based on a variety of machine learning techniques, to discriminate between potential threat and non-threat objects.
An ALISA Vector Module (AVM) is trained on the discrete gamma-ray emission spectra of 61 commonly occurring radioisotopes generated by an analytical model. The trained AVM is then used to decompose spectra captured from actual sources in the field using low-resolution thallium-activated sodium-iodide (NaI) detectors and/or high-resolution high-purity germanium (HPGe) detectors, applying QR factorization to find the optimal least-squares solution for an overdetermined, possibly inconsistent, system of equations. For low-resolution NaI detectors, formal experiments conducted under carefully controlled laboratory conditions yield average classification (spectral decomposition) errors of less than 6% for mixtures with up to 10 components in test samples consisting of 1,000 photonic events, which require just a few seconds to acquire in typical situations. Preliminary experiments with the high-resolution HPGe detector yield dramatically smaller errors than with the NaI detector. Further improvements in the accuracy and precision of the training data, as well as fusion with other powerful classification methods, are expected to reduce the error without prohibitively increasing the computation time.
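The QR-based least-squares decomposition described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the channel count, isotope count, reference spectra, and mixing weights are all synthetic placeholders, and a real measured spectrum would of course include counting noise.

```python
import numpy as np

# Hypothetical sketch: columns of A are unit-normalized reference spectra
# (one per isotope) binned over the same energy channels as the measured
# spectrum b. All dimensions and values here are illustrative.
rng = np.random.default_rng(0)

n_channels, n_isotopes = 256, 5
A = rng.random((n_channels, n_isotopes))
A /= A.sum(axis=0)                      # normalize each reference spectrum

true_weights = np.array([0.4, 0.3, 0.15, 0.1, 0.05])
b = A @ true_weights                    # noiseless synthetic measurement

# QR factorization solves the overdetermined system A x ≈ b in the
# least-squares sense: with A = Q R, solve the triangular system
# R x = Q^T b (here via a general solver for brevity).
Q, R = np.linalg.qr(A)                  # reduced QR: Q is 256x5, R is 5x5
x = np.linalg.solve(R, Q.T @ b)

print(np.round(x, 3))                   # recovers the mixing weights
```

With real, noisy spectra the system is inconsistent and the same factorization yields the solution minimizing the residual norm, which is why QR is preferred here over forming the ill-conditioned normal equations.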