Lawrence Livermore National Laboratory is a large, multidisciplinary institution that conducts fundamental
and applied research in the physical sciences. Research programs at the Laboratory run the
gamut from theoretical investigations, to modeling and simulation, to validation through experiment.
Over the years, the Laboratory has developed a substantial research component in the areas of signal
and image processing to support these activities. This paper surveys some of the current research in
signal and image processing at the Laboratory. Of necessity, the paper does not delve deeply into any
one research area, but an extensive citation list is provided for further study of the topics presented.
The development of faster, more reliable techniques to detect radioactive contraband in a portal-type scenario
is an extremely important problem, especially in this era of constant terrorist threats. Toward this goal, the
development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the
sequential processor, each datum (detector energy deposit and pulse arrival time) is used to update the posterior
probability distribution over the space of model parameters. The key feature of the sequential approach
is that a detection is declared as soon as it is statistically justified by the data, rather than after a fixed
counting interval. In this paper, the Bayesian model-based approach, the physics
and signal-processing models, and the decision functions are discussed, along with the first results of our research.
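The stop-as-soon-as-justified logic can be illustrated with a simplified sequential probability ratio test (SPRT) on pulse interarrival times. This is only a sketch of the sequential idea: the energy-deposit physics models and full Bayesian parameter posterior of the paper are omitted, and the rates and thresholds below are illustrative assumptions.

```python
import math
import random

def sprt_detect(arrival_times, rate_bkg, rate_src, alpha=0.01, beta=0.01):
    """Sequential probability ratio test on Poisson interarrival times.

    The log-likelihood ratio is updated after each event, and a decision
    is returned as soon as a threshold is crossed -- no fixed counting
    interval. Returns ('source' | 'background' | 'undecided', n_events).
    """
    upper = math.log((1.0 - beta) / alpha)   # declare 'source'
    lower = math.log(beta / (1.0 - alpha))   # declare 'background'
    llr, prev = 0.0, 0.0
    for n, t in enumerate(arrival_times, start=1):
        dt = t - prev
        prev = t
        # exponential interarrival log-likelihoods under each rate hypothesis
        llr += (math.log(rate_src) - rate_src * dt) \
             - (math.log(rate_bkg) - rate_bkg * dt)
        if llr >= upper:
            return "source", n
        if llr <= lower:
            return "background", n
    return "undecided", len(arrival_times)

# Simulate a 10 counts/s source stream against a 1 count/s background model.
random.seed(0)
t, times = 0.0, []
for _ in range(200):
    t += random.expovariate(10.0)
    times.append(t)
decision, n_used = sprt_detect(times, rate_bkg=1.0, rate_src=10.0)
```

Because the drift of the log-likelihood ratio is strongly positive for a hot source, the decision typically arrives after only a handful of events.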
One of the major purposes of the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory is to accurately focus 192 high-energy laser beams on a millimeter-scale fusion target at the precise location and time. The automatic alignment system developed for NIF aligns the beams to achieve the required focusing. However, if a distorted image is inadvertently created by a faulty camera shutter or some other opto-mechanical malfunction, the resulting image, termed "off-normal," must be detected and rejected before further alignment processing occurs. The off-normal processor thus acts as a preprocessor to the automatic alignment image processing. In this work, we discuss the development of an off-normal preprocessor capable of rapidly detecting and rejecting off-normal images. A wide variety of off-normal images from each alignment loop is used to develop accurate rejection criteria.
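A screening preprocessor of this kind can be sketched as a set of cheap global-statistic checks. The features and thresholds below are hypothetical placeholders; the actual NIF criteria are tuned per alignment loop from libraries of known off-normal images.

```python
import numpy as np

def is_off_normal(img, sat_level=255, sat_frac_max=0.05,
                  mean_range=(5.0, 200.0), contrast_min=3.0):
    """Flag an image as 'off-normal' using simple global statistics.

    Illustrative checks only: a stuck-open shutter saturates the frame,
    a stuck-closed shutter yields a dark frame, and a uniform glow has
    no usable contrast. Real criteria would be loop-specific.
    """
    img = np.asarray(img, dtype=float)
    if np.mean(img >= sat_level) > sat_frac_max:   # saturated fraction
        return True
    if not (mean_range[0] <= img.mean() <= mean_range[1]):  # dark/blown frame
        return True
    if img.std() < contrast_min:                   # featureless frame
        return True
    return False

rng = np.random.default_rng(1)
normal = rng.normal(50.0, 10.0, size=(64, 64)).clip(0, 255)
dark = np.zeros((64, 64))     # e.g. a failed shutter produces a dark frame
```

Only images that pass such checks would be forwarded to the position-estimation algorithms downstream.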
In position detection using matched filtering, one is faced with the challenge of determining the best position in the presence of distortions such as defocus and diffraction noise. This work evaluates the performance of simulated defocused images used as templates against real defocused beams. It was found that an amplitude-modulated phase-only filter is better equipped to deal with real defocused images that suffer from diffraction-noise effects, which produce a textured spot intensity pattern. It is shown that there is a performance tradeoff dependent on the type and size of the defocused image. A novel automated system was developed that automatically selects the appropriate template type and size. Results of this automation for real defocused images are presented.
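The filter family under comparison can be sketched in the Fourier domain. The specific amplitude modulation used in this work may differ; the `ampof` variant below is one common choice, and the beam-spot scene is synthetic.

```python
import numpy as np

def make_filter(template_ft, kind="pof", eps=1e-3):
    """Build a correlation filter from a template spectrum T:
    'cmf'   classical matched filter        conj(T)
    'pof'   phase-only filter               conj(T)/|T|
    'ampof' one amplitude-modulated variant conj(T)/(|T|^2 + eps)
    (eps regularizes bins where |T| is near zero)."""
    mag = np.abs(template_ft)
    if kind == "cmf":
        return np.conj(template_ft)
    if kind == "pof":
        return np.conj(template_ft) / (mag + eps)
    if kind == "ampof":
        return np.conj(template_ft) / (mag ** 2 + eps)
    raise ValueError(kind)

def locate(scene, template, kind="pof"):
    """Return the (row, col) of the correlation peak."""
    S = np.fft.fft2(scene)
    T = np.fft.fft2(template, s=scene.shape)   # zero-pad template
    corr = np.real(np.fft.ifft2(S * make_filter(T, kind)))
    return np.unravel_index(np.argmax(corr), corr.shape)

# Synthetic test: a Gaussian 'beam spot' planted at row 20, col 30.
y, x = np.mgrid[0:15, 0:15]
blob = np.exp(-((x - 7) ** 2 + (y - 7) ** 2) / 8.0)
scene = np.zeros((64, 64))
scene[20:35, 30:45] = blob
```

All three filters recover the planted position here; their differences show up under defocus and diffraction noise, which is the tradeoff the paper quantifies.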
Alignment of laser beams based on video images is a crucial task in automating operation of the 192 beams at the National Ignition Facility (NIF). The final optics assembly (FOA) is the optical element that aligns the beam into the target chamber. This work presents an algorithm for determining the position of a corner-cube alignment image in the FOA. The improved algorithm was compared to the existing FOA algorithm on 900 noise-simulated images. While the existing FOA algorithm, based on correlation with a synthetic template, has a radial standard deviation of 1 pixel, the new algorithm, based on classical matched filtering (CMF) and a polynomial fit to the correlation peak, improves the radial standard deviation to less than 0.3 pixels. In the new algorithm, the templates are designed from real data stored during a year of actual operation.
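The subpixel refinement step can be sketched as a separable quadratic (parabolic) fit through the correlation peak and its immediate neighbors. This is a generic version of the polynomial-fit idea; the order and neighborhood used in the NIF algorithm may differ, and the test surface below is a synthetic stand-in for a correlation output.

```python
import numpy as np

def subpixel_peak(corr):
    """Refine the integer argmax of a correlation surface to subpixel
    precision via a 3-point parabolic fit along each axis."""
    r, c = np.unravel_index(np.argmax(corr), corr.shape)

    def parabolic(f_m, f_0, f_p):
        # vertex offset of the parabola through (-1,f_m),(0,f_0),(1,f_p)
        denom = f_m - 2.0 * f_0 + f_p
        return 0.0 if denom == 0.0 else 0.5 * (f_m - f_p) / denom

    dr = parabolic(corr[r - 1, c], corr[r, c], corr[r + 1, c])
    dc = parabolic(corr[r, c - 1], corr[r, c], corr[r, c + 1])
    return r + dr, c + dc

# Synthetic correlation surface peaked at the non-integer point (20.3, 31.7).
y, x = np.mgrid[0:64, 0:64]
surf = np.exp(-((y - 20.3) ** 2 + (x - 31.7) ** 2) / 10.0)
row, col = subpixel_peak(surf)
```

On smooth peaks this fit recovers the position to a few hundredths of a pixel, which is why it can push the radial standard deviation well below one pixel.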
An algorithm for determining the position of the KDP back-reflection image was developed and compared to a centroid-based algorithm. While the centroiding algorithm exhibited a radial standard deviation of 9 pixels, the newly proposed algorithm, based on classical matched filtering (CMF) and a Gaussian fit to the correlation peak, provided a radial standard deviation of less than 1 pixel. The speed of peak detection was improved from an average of 5.5 seconds for the Gaussian fit to 0.022 seconds by using a polynomial fit. Performance was enhanced even further by utilizing a composite amplitude-modulated phase-only filter, producing a radial standard deviation of 0.27 pixels. The proposed technique was evaluated on more than 900 images with varying degrees of noise and image amplitude, as well as on real National Ignition Facility (NIF) images.
The alignment of high-energy laser beams for potential fusion experiments demands high precision and accuracy from the underlying positioning algorithms. This paper discusses the feasibility of employing on-line optimal position estimators, in the form of model-based processors, to achieve the desired results. Here we discuss the modeling, development, implementation, and processing of model-based processors applied to both simulated and actual beam-line data.
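A minimal model-based position estimator can be sketched as a scalar Kalman filter: the beam centroid is modeled as a slowly drifting Gauss-Markov state observed through noisy centroid measurements. The noise variances below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def kalman_position(z, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar model-based processor (Kalman filter) for the model
        x[k+1] = x[k] + w[k],  w ~ N(0, q)   (slow beam drift)
        z[k]   = x[k] + v[k],  v ~ N(0, r)   (noisy centroid measurement)
    Returns the filtered position estimates."""
    x, p = x0, p0
    estimates = []
    for zk in z:
        p = p + q                 # time update (predict)
        k = p / (p + r)           # Kalman gain
        x = x + k * (zk - x)      # measurement update via the innovation
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Simulated beam-line data: true centroid at 2.0 pixels, noisy camera reads.
rng = np.random.default_rng(3)
true_pos = 2.0
z = true_pos + rng.normal(0.0, 0.5, size=200)
est = kalman_position(z)
```

The filtered estimate settles near the true centroid with far less scatter than the raw measurements, which is the motivation for the on-line optimal estimators discussed above.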
In contrast to standard reflection ultrasound (US), transmission US holds the promise of more thorough tissue characterization by generating quantitative acoustic parameters. We compare results from a conventional US scanner with data acquired using an experimental circular scanner operating at frequencies of 0.3-1.5 MHz. Data were obtained on phantoms and on a normal, formalin-fixed, excised breast. Both reflection- and transmission-based algorithms were used to generate images of reflectivity, sound speed, and attenuation. Images of the phantoms demonstrate the ability to detect sub-mm features and to quantify acoustic properties such as sound speed and attenuation. The human breast specimen showed full-field coverage, improved penetration, and better tissue definition. Comparison with conventional US indicates the potential for better margin definition and acoustic characterization of masses, particularly in the complex scattering environments of human breast tissue. The use of morphology, in the context of reflectivity, sound speed, and attenuation, for characterizing tissue is discussed.
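The quantitative parameters come from simple transmission relations: sound speed from time-of-flight along the path, and attenuation from the log amplitude ratio against a reference (e.g. water) shot. The sketch below ignores refraction, diffraction, and transducer corrections, and the numbers are illustrative, not from the scanner data.

```python
import math

def sound_speed(path_len_m, tof_s):
    """Sound speed along a straight transmission path: c = d / t."""
    return path_len_m / tof_s

def attenuation_db_per_cm(a_ref, a_meas, path_len_cm):
    """Attenuation from the log amplitude ratio versus a reference shot:
    alpha = 20 log10(A_ref / A_meas) / d."""
    return 20.0 * math.log10(a_ref / a_meas) / path_len_cm

# Illustrative numbers: a 20 cm path crossed at soft-tissue speed,
# and a 20 dB amplitude drop over the same path.
c = sound_speed(0.20, 0.20 / 1540.0)
att = attenuation_db_per_cm(1.0, 0.1, 20.0)
```

Mapping these two quantities pixel-by-pixel over the reconstruction grid is what yields the sound-speed and attenuation images described above.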
New ultrasound data, obtained with a circular experimental scanner, are compared with data obtained with standard X-ray CT. Ultrasound data obtained by scanning fixed breast tissue were used to generate images of sound speed and reflectivity. The ultrasound images exhibit approximately 1 mm resolution and about 20 dB of dynamic range. All data were obtained in a circular geometry. X-ray CT scans were used to generate X-ray images corresponding to the same 'slices' obtained with the ultrasound scanner. The good match of sensitivity, resolution and angular coverage between the ultrasound and X-ray data makes possible a direct comparison of the three types of images. We present the results of such a comparison for an excised breast fixed in formalin. The results are presented visually using various types of data fusion. A general correspondence between the sound speed, reflectivity and X-ray morphologies is found. The degree to which data fusion can help characterize tissue is assessed by examining the quantitative correlations between the ultrasound and X-ray images.
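The quantitative cross-modality comparison mentioned above can be sketched as a Pearson correlation between co-registered slices. The images below are synthetic stand-ins sharing a common "morphology" component; the actual ultrasound/X-ray data are of course more complex.

```python
import numpy as np

def image_correlation(a, b):
    """Pearson correlation between two co-registered images, as a simple
    quantitative measure of shared morphology."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

# Synthetic co-registered slices: shared structure plus independent noise.
rng = np.random.default_rng(4)
base = rng.normal(size=(32, 32))                  # shared morphology
us = base + 0.3 * rng.normal(size=(32, 32))       # 'sound speed' slice
xray = 2.0 * base + 0.3 * rng.normal(size=(32, 32))
r = image_correlation(us, xray)
```

A high correlation between modalities, as found here for the shared structure, is what supports fusing the images for tissue characterization.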
The scattering mechanism of diffraction tomography is described by the integral form of the Helmholtz equation. The goal of diffraction tomography is to invert this equation in order to reconstruct the object function from the measured scattered fields. During forward propagation, the spatial spectrum of the object under investigation is 'smeared,' by a convolution in the spectral domain, across the propagating and evanescent regions of the received field. Hence, care must be taken in performing the reconstruction, as the object's spectral information has been moved into regions where it may be treated as noise rather than useful information, reducing the quality and resolution of the reconstruction. We show how the object's spectrum can be partitioned into resolvable and non-resolvable parts based on the cutoff between the propagating and evanescent fields. Operating under the Born approximation, we develop a beamforming-on-transmit approach to direct the energy into either the propagating or the evanescent part of the spectrum. In this manner, we may individually interrogate the propagating and evanescent regions of the object spectrum.
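The partition itself follows from the dispersion relation: spatial frequencies with |k_x| <= k = 2*pi/wavelength correspond to propagating plane waves, while higher frequencies decay exponentially and are evanescent. A minimal one-dimensional sketch of the spectral masks (sampling parameters are illustrative):

```python
import numpy as np

def spectral_partition(nx, dx, wavelength):
    """Boolean masks splitting a 1-D angular spectrum into propagating
    and evanescent parts at the cutoff |k_x| = k = 2*pi/wavelength.
    Components with |k_x| <= k reach the receiver; the rest decay
    exponentially with distance and carry the sub-wavelength detail."""
    k = 2.0 * np.pi / wavelength
    kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
    propagating = np.abs(kx) <= k
    return propagating, ~propagating

# 256 samples at 0.25 mm spacing, 1 mm wavelength: the grid resolves
# spatial frequencies well beyond the propagation cutoff.
prop, evan = spectral_partition(nx=256, dx=0.25e-3, wavelength=1.0e-3)
```

Applying these masks to the object spectrum separates the resolvable (propagating) information from the part that, without the beamforming-on-transmit scheme, would be lost or mistaken for noise.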
This short course provides the participants with the basic concepts of model-based signal processing using an applied approach. The course is designed to take the participant from basic probability and random processes to stochastic model development through the heart of physics-based stochastic modeling---the Gauss-Markov state-space model. Estimation basics will be discussed, including maximum likelihood and maximum a posteriori estimators. The state-space model-based processor (MBP), or equivalently the Kalman filter, will be investigated theoretically in order to develop an intuition for constructing successful MBP designs using the "minimum error variance approach". Practical aspects of the MBP will be developed to provide a reasonable approach for design and analysis, and the overall MBP design methodology will be discussed. Extensions of the MBP follow for a variety of cases, including prediction, colored noise, identification, and linearized and nonlinear filtering using the extended Kalman filter. Applications and case studies will be discussed throughout the lectures, including the tracking problem along with a suite of MBP application problems. Practical aspects of MBP design using SSPACK_PC, a third-party MATLAB toolbox, will be discussed for "tuning" and processing along with some actual data.
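The core of the course, the Gauss-Markov state-space model and its Kalman-filter MBP, can be sketched on the classic tracking problem it mentions. The parameter values are illustrative; the innovation check at the end mirrors the "tuning" step: a well-tuned MBP produces a zero-mean, roughly white innovation sequence.

```python
import numpy as np

# Gauss-Markov state-space model for constant-velocity tracking:
#   x[k] = A x[k-1] + w[k],   y[k] = C x[k] + v[k]
# with state x = [position, velocity].
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)          # process-noise covariance (illustrative)
R = np.array([[1.0]])         # measurement-noise covariance (illustrative)

# Simulate the model.
rng = np.random.default_rng(5)
x_true = np.array([0.0, 0.5])
ys = []
for _ in range(300):
    x_true = A @ x_true + rng.multivariate_normal([0.0, 0.0], Q)
    ys.append(C @ x_true + rng.normal(0.0, 1.0, size=1))

# Model-based processor (Kalman filter), collecting the innovations.
x = np.zeros(2)
P = 10.0 * np.eye(2)
innov = []
for y in ys:
    x, P = A @ x, A @ P @ A.T + Q            # predict
    e = y - C @ x                             # innovation
    S = C @ P @ C.T + R                       # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ e
    P = (np.eye(2) - K @ C) @ P
    innov.append(e.item())
innov = np.array(innov[20:])                  # drop the startup transient
```

If the model or noise covariances were mismatched, the innovation mean or variance would drift away from their theoretical values, which is exactly what the tuning exercises in the course diagnose.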
In summary, this course provides the participants not only with the essential theory underlying model-based signal processing techniques, but also with applied design and analysis.