Automatic Target Recognition (ATR) in Synthetic Aperture Radar (SAR) imagery for wide-area search is a difficult problem for both classic techniques and state-of-the-art approaches. Deep Learning (DL) techniques have been shown to be effective at detection and classification; however, they require significant amounts of training data. Sliding-window detectors with Convolutional Neural Network (CNN) backbones for classification typically suffer from localization error and poor compute efficiency, and must be tuned to the size of the target. Our approach to the wide-area search problem is HySARNet, an architecture that combines classic ATR techniques with a ResNet-18 backbone. The detector is dual-stage, consisting of an optimized Constant False Alarm Rate (CFAR) screener followed by a Bayesian Neural Network (BNN) detector, which provides a significant speed advantage over standard sliding-window approaches. It also reduces false alarms while maintaining a high detection rate, allowing the classifier to run on fewer detections and further improving processing speed. This paper tests the BNN and CNN components of HySARNet through experiments that determine their robustness to variations in graze angle, resolution, and additive noise. We also experiment with synthetic targets for training the CNN: synthetic data has the potential to enable training on hard-to-find targets for which little or no measured data exists. SAR simulation software and 3D CAD models are used to generate the synthetic targets. Experiments utilize the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset, the widely used standard dataset for SAR ATR publications.
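To make the screener stage concrete, the following is a minimal sketch of a classic cell-averaging CFAR detector on a 1-D power signal. It is not the paper's optimized CFAR (whose details, guard/training sizes, and target Pfa are not given here); all parameter values below are illustrative assumptions.

```python
import numpy as np

def ca_cfar(signal, guard=2, train=8, pfa=1e-3):
    """Cell-averaging CFAR over a 1-D power signal.

    For each cell under test (CUT), the local noise level is estimated
    from `train` training cells on each side, skipping `guard` guard
    cells around the CUT. The threshold multiplier follows the standard
    CA-CFAR relation alpha = N * (Pfa**(-1/N) - 1) for N training cells.
    """
    n = len(signal)
    n_train = 2 * train
    alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)
    detections = np.zeros(n, dtype=bool)
    for cut in range(guard + train, n - guard - train):
        left = signal[cut - guard - train : cut - guard]
        right = signal[cut + guard + 1 : cut + guard + train + 1]
        noise = (left.sum() + right.sum()) / n_train
        detections[cut] = signal[cut] > alpha * noise
    return detections

# Toy example: exponentially distributed power noise with one bright scatterer
rng = np.random.default_rng(0)
x = rng.exponential(1.0, 200)
x[100] = 50.0  # strong target return
hits = ca_cfar(x)
```

In the paper's pipeline, detections surviving this screener would then be passed to the BNN detector, so only a small fraction of the scene reaches the classifier.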
For functional neuroimaging, existing small-animal diffuse optical tomography (DOT) systems either do not provide adequate temporal sampling rates, have sparse spatial sampling, or have limited three-dimensional fields of view. To achieve adequate frame rates (1-10 Hz), we have constructed a system using sCMOS-detection-based DOT with asymmetric measurements: many (>10,000) detectors and fewer (<100) structured illumination patterns (generated by digital micromirror devices, DMDs). The system employs multiple views, with multiple cameras and illuminators, to provide a three-dimensional field of view. To coregister the measurements with the mouse head anatomy, we developed a surface profiling method in which point illumination patterns are scanned over the mouse head and combined with calibration data to create three-dimensional point clouds and meshes representing the head. We applied this method to a 3D-printed figurine, and the resulting mesh had surface vertices whose positions deviated 0.4 ± 0.2 mm (mean ± SD) from the original "ground truth" mesh that had been employed to 3D-print the figurine. To evaluate the imaging system's resolution, field of view, and sensitivity versus depth, we placed simulated activations at different depths within a tissue model of a real mouse head imaged with our surface profiling method. Results indicate that this imaging system is sensitive to absorption changes at depths of >3 mm. In addition, a partial (one-camera, one-illuminator) version of the system successfully imaged neural activations evoked by forepaw stimulation of a live mouse.
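The figurine validation above reports a mean ± SD vertex deviation between the profiled mesh and the ground-truth mesh. A minimal sketch of that comparison, assuming a simple nearest-neighbor distance from each measured vertex to the reference cloud (the paper's exact correspondence and registration procedure is not specified here):

```python
import numpy as np

def mesh_deviation(measured, reference):
    """Nearest-neighbor surface deviation between two point clouds.

    For each measured vertex, find the distance to the closest
    reference ("ground truth") vertex, then summarize as mean and SD,
    analogous to the 0.4 +/- 0.2 mm figurine validation.
    """
    # Pairwise distances (brute force; fine for small vertex counts)
    d = np.linalg.norm(measured[:, None, :] - reference[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return nearest.mean(), nearest.std()

# Toy check: a jittered copy of a synthetic reference cloud
rng = np.random.default_rng(1)
ref = rng.uniform(0, 10, size=(200, 3))       # "ground truth" vertices (mm)
meas = ref + rng.normal(0, 0.1, size=ref.shape)  # profiled vertices with noise
mu, sd = mesh_deviation(meas, ref)
```

For real meshes with many vertices, a spatial index (e.g., a k-d tree) would replace the brute-force distance matrix.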
Conventional two-photon microscopy (TPM) is capable of imaging neural dynamics with subcellular resolution, but it is limited to a field-of-view (FOV) diameter of <1 mm. Although there has been recent progress in extending the FOV in TPM, a principled design approach for developing large-FOV TPM (LF-TPM) with off-the-shelf components has yet to be established. Therefore, we present a design strategy based on analyzing the optical invariant of commercially available objectives, relay lenses, mirror scanners, and emission collection systems in isolation. Components are then selected to maximize the space-bandwidth product of the integrated microscope. In comparison with other LF-TPM systems, our strategy simplifies the sequence of design decisions and is applicable to extending the FOV in any microscope with an optical relay. The microscope we constructed with this design approach achieves <1.7-μm lateral and <28-μm axial resolution over a 7-mm diameter FOV, a 100-fold increase in FOV compared with conventional TPM. As a demonstration of the potential of LF-TPM for understanding the microarchitecture of the mouse brain across interhemispheric regions, we performed in vivo imaging of both the cerebral vasculature and microglia cell bodies over the mouse cortex.
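The component-wise bookkeeping behind this design strategy can be sketched in a few lines: compute each component's optical (Lagrange) invariant in isolation, identify the smallest one as the system bottleneck, and express the resulting performance as a resolvable-spot count. All component names and numbers below are invented placeholders, and the spot-count formula is one common convention, not necessarily the paper's.

```python
import math

def optical_invariant(field_height_mm, half_angle_deg, n=1.0):
    """Lagrange/optical invariant I = n * y * sin(theta) for a component,
    where y is the field half-height and theta the marginal ray angle."""
    return n * field_height_mm * math.sin(math.radians(half_angle_deg))

def resolvable_spots(fov_diameter_mm, resolution_um):
    """Space-bandwidth product expressed as a resolvable-spot count:
    pi * (FOV radius / resolution)**2 (one common convention)."""
    return math.pi * (fov_diameter_mm * 1e3 / (2 * resolution_um)) ** 2

# Illustrative (assumed) per-component invariants in mm; the smallest
# invariant limits the integrated microscope.
components = {
    "objective": optical_invariant(3.5, 25.0),
    "scan_relay": optical_invariant(4.0, 20.0),
    "mirror_scanner": optical_invariant(2.5, 15.0),
}
limiting = min(components, key=components.get)

# Spot count for the reported 7-mm FOV at <1.7-um lateral resolution
spots = resolvable_spots(7.0, 1.7)
```

With these placeholder numbers the scanner is the bottleneck, which mirrors the paper's point: components must be matched so that no single element throttles the integrated system's space-bandwidth product.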
Optical intrinsic signal (OIS) imaging has been a powerful tool for capturing functional brain hemodynamics in rodents. Recent wide field-of-view implementations of OIS have provided efficient maps of functional connectivity from spontaneous brain activity in mice. However, OIS requires scalp retraction and is limited to superficial cortical tissues. Diffuse optical tomography (DOT) techniques provide noninvasive imaging, but previous DOT systems for rodent neuroimaging have been limited either by sparse spatial sampling or by slow speed. Here, we develop a DOT system with asymmetric source–detector sampling that combines the high-density (0.4 mm) spatial sampling of a scientific complementary metal-oxide-semiconductor (sCMOS) camera with rapid (2 Hz) imaging using a few (<50) structured illumination (SI) patterns. Analysis techniques are developed to take advantage of the system's flexibility and optimize trade-offs among spatial sampling, imaging speed, and signal-to-noise ratio. An effective source–detector separation for the SI patterns was developed and compared with light intensity for a quantitative assessment of data quality. The light fall-off versus effective distance was also used for in situ empirical optimization of our light model. We demonstrated the feasibility of this technique by noninvasively mapping the functional response in the somatosensory cortex of the mouse following electrical stimulation of the forepaw.
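The "light fall-off versus effective distance" calibration can be illustrated with a minimal sketch. Assuming a diffusion-approximation model in which measured intensity decays as exp(-mu_eff * r) / r^2 with effective source-detector separation r, the effective attenuation coefficient follows from a linear fit in log space; the paper's actual light model and fitting procedure are not specified here.

```python
import numpy as np

def fit_mu_eff(r_mm, intensity):
    """Estimate the effective attenuation coefficient mu_eff (1/mm)
    from light fall-off versus effective source-detector distance.

    Under the diffusion approximation, I(r) ~ exp(-mu_eff * r) / r**2,
    so log(r**2 * I) is linear in r with slope -mu_eff.
    """
    y = np.log(r_mm**2 * intensity)
    slope, _ = np.polyfit(r_mm, y, 1)
    return -slope

# Synthetic fall-off data with mu_eff = 1.2 /mm plus 1% noise
rng = np.random.default_rng(2)
r = np.linspace(2.0, 10.0, 40)
I = np.exp(-1.2 * r) / r**2 * (1 + rng.normal(0, 0.01, r.size))
mu = fit_mu_eff(r, I)
```

Comparing the fitted fall-off against measured intensities at known effective separations is one way such a model can be empirically tuned in situ, as the abstract describes.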