This paper discusses a system for unassisted monitoring of (a) the adult ECG and (b), in the case of pregnant subjects, both the mother's ECG and the fetal ECG (fECG), the latter extracted from the overshadowing maternal ECG (mECG). We propose to monitor these vital signals from a bathtub at home, with no leads placed on the subject's body, and therefore completely unassisted. The leads are passive and permanently mounted on the inner surface of the bathtub. Digital signal processing (DSP) is used both to extract the orthogonal vector-cardiogram from the bathtub leads and to efficiently separate the fECG from the combined mother-and-fetal signal (mfECG), taking into account a composite, highly complex medium. Also presented is a novel architecture for on-chip processing based on application-specific CMOS VLSI cells developed in our laboratory.
This paper deals with a complex problem in scientific sensing and imaging. To overcome some inherent problems of the conventional ECG (electrocardiogram), we investigate in depth an ‘unassisted’ approach that enables ECG measurement without the placement of sensing leads on the body. Specifically, it uses a bathtub at home, filled with tap water and fitted with passive sensing leads on its inner surface, while the subject lies in it. In this investigation we use the widely accepted assumption that the electrical activity of the heart may be largely represented by a <i>3-D time-varying Current Dipole (3D-CD)</i>. To determine the sensing matrix that transforms the 3D-CD into the potential distribution on the bathtub’s internal surface, the 3D-CD signals are applied to a bathtub-containing-ellipsoid model in the COMSOL tool. The sensing matrix thereby estimated is then used to reconstruct the 3D-CD signals from the bathtub-lead signals. Normalized root-mean-squared errors (NRMSEs) on the order of 0.02 to 0.05 are observed. The approach is also successfully extended to the case of two ellipsoids, one inside the other, representing a pregnant subject. Critically important from a practical standpoint, the paper examines sensitivity with respect to the locations of the two 3D-CDs in the bathtub and reports encouraging results. Images of the potential distribution in the composite volume in the bathtub are presented as well.
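The reconstruction-error metric quoted above can be checked with a few lines of code. The sketch below is illustrative only; it assumes NRMSE means the RMSE normalized by the peak-to-peak range of the reference signal (one common convention), and the waveform and noise level are synthetic stand-ins for a dipole component.

```python
import numpy as np

def nrmse(reconstructed, reference):
    """RMSE normalized by the peak-to-peak range of the reference signal."""
    rmse = np.sqrt(np.mean((reconstructed - reference) ** 2))
    return rmse / (reference.max() - reference.min())

# Synthetic dipole component: a reference waveform and a slightly noisy reconstruction.
t = np.linspace(0, 1, 500)
ref = np.sin(2 * np.pi * 5 * t)
rec = ref + 0.05 * np.random.default_rng(0).standard_normal(t.size)
print(nrmse(rec, ref))   # lands in the few-percent range for this noise level
```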
In ‘radiation remote sensing’, multiple unknown high-energy sources are generally involved. The detectors, upon sensing the corresponding mixed signals, must separate their contributions blindly for further analysis. A practical way to perform this separation is the Independent Component Analysis (ICA) algorithm. The challenge, however, is that theoretically there is no correlation among events, even those arising from the same source, which would render ICA ineffective. We overcome this hurdle by using a thin barrier and by providing wide detector pulses. Radiation events that interact with the barrier take longer to reach the detector because of their increased path length; they also lose some energy, which makes them increasingly prone to capture in the barrier once they have scattered. These observations are confirmed through Monte Carlo simulations of gamma-ray sources. A normalized cross-covariance of up to 0.22 was found, and it is controllable through appropriate selection of the detector shaping-pulse width. Experiments on a physical setup confirm these findings. Finally, the ICA approach is demonstrated to demix, or separate, the individual contributions of the sources to the observed detector signals.
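The quantity being controlled here, the zero-lag normalized cross-covariance between two detector channels, can be sketched in a toy model. This is not the paper's Monte Carlo setup: the event rate, the 3-sample scattering delay, the 50% survival fraction, and the 5-sample shaping pulse are all illustrative assumptions, chosen only to show how wide pulses make delayed (scattered) events overlap their direct counterparts.

```python
import numpy as np

def normalized_crosscovariance(x, y):
    """Zero-lag cross-covariance of x and y, normalized by their standard
    deviations (i.e. the Pearson correlation of the two pulse trains)."""
    x = x - x.mean()
    y = y - y.mean()
    return float(np.dot(x, y) / (np.sqrt(np.dot(x, x)) * np.sqrt(np.dot(y, y))))

rng = np.random.default_rng(1)
n = 10_000
direct = (rng.random(n) < 0.01).astype(float)          # direct-path events
delayed = np.roll(direct, 3) * (rng.random(n) < 0.5)   # scattered: delayed, half captured
# Wide shaping pulses make each delayed event overlap its direct counterpart.
pulse = np.ones(5)
a = np.convolve(direct, pulse, mode="same")
b = np.convolve(delayed, pulse, mode="same")
print(round(normalized_crosscovariance(a, b), 2))
```

Narrowing `pulse` shrinks the overlap between direct and delayed events and drives the cross-covariance back toward zero, which is the control knob the abstract refers to.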
This paper presents a microsystem for remote sensing of high-energy radiation under extremely low flux-density conditions. With wide deployment in mind, potential applications range from nuclear non-proliferation to hospital radiation safety. The daunting challenge is the low level of photon flux densities emerging from a scintillation crystal (SC) onto a ~1 mm-square detector, which are a factor of roughly 10,000 lower than those acceptable to recently reported photonic chips (including ‘single-photon detection’ chips), due to a combination of low lux, small detector size, and short-duration SC output pulses on the order of 1 μs. We attempt to overcome these challenges through the design of an innovative ‘System on a Chip’ microchip with <i>high detector sensitivity</i> and effective coupling from the SC to the photodetector. The microchip houses a tiny n+ diff p-epi photodiode (PD) as well as the associated analog amplification and other related circuitry, all fabricated in 0.5 μm, 3-metal 2-poly CMOS technology. The amplification, together with pulse shaping of the photocurrent-induced voltage signal, is achieved through a tandem of two capacitively coupled, double-cascode amplifiers. Included in the paper are theoretical estimates and experimental results.
Biomedical sensors combining microfluidic and electronic capabilities require defect avoidance in both the electronic processing circuits and the microfluidic areas. Microfluidic sensors involve sealed channels through which sample fluids containing biomedical materials flow. Inserting microchannels between capacitive plates enables the detection of biomaterials through changes in capacitance. However, faults occur when foreign particles or fluid bubbles get lodged in the paths, blocking a channel and thereby affecting the measured capacitance. To achieve fault tolerance we investigate a Cathedral Chamber design, with pillars supporting the roof at regular intervals. Because many paths exist, a single blockage cannot stop fluid flow through the system. We discuss the potential causes and effects of such blockages. Monte Carlo simulations show that the Cathedral Chamber design significantly increases the lifetime of the system: on average, six times more particles are required before full blockage occurs compared to an array of parallel channels. Fluid-flow modeling shows that parallel channels exhibit a rapid rise of pressure with the number of blockages, whereas the Cathedral Chamber shows a much slower rise that plateaus until the chamber is fully blocked. The impact of defects on the capacitive measurement is also discussed. Finally, an interesting application, one that uses patches of single-chain variable fragments (scFvs), the active part of antibodies, is also discussed.
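The Monte Carlo comparison can be illustrated with a toy model (not the paper's geometry or parameters): a parallel array fails only when every channel is blocked, while a pillar-supported chamber, modeled here as a grid of open cells, fails only when no inlet-to-outlet path remains. The grid size, channel count, and trial count below are arbitrary illustrative choices.

```python
import random
from collections import deque

def particles_to_block_parallel(n_channels, rng):
    """Parallel channels: flow stops only when every channel is blocked
    (a coupon-collector process over channel indices)."""
    blocked, count = set(), 0
    while len(blocked) < n_channels:
        blocked.add(rng.randrange(n_channels))
        count += 1
    return count

def particles_to_block_chamber(rows, cols, rng):
    """Open chamber as a grid of cells; each particle blocks one random open
    cell. Flow stops when no open path remains from the inlet column (c=0)
    to the outlet column (c=cols-1)."""
    open_cells = [[True] * cols for _ in range(rows)]
    count = 0
    while True:
        # Breadth-first search over open cells for left-to-right connectivity.
        seen = [[False] * cols for _ in range(rows)]
        q = deque((r, 0) for r in range(rows) if open_cells[r][0])
        for r, c in q:
            seen[r][c] = True
        reachable = False
        while q:
            r, c = q.popleft()
            if c == cols - 1:
                reachable = True
                break
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and open_cells[nr][nc] and not seen[nr][nc]:
                    seen[nr][nc] = True
                    q.append((nr, nc))
        if not reachable:
            return count
        # Lodge the next particle in a random still-open cell.
        r, c = rng.randrange(rows), rng.randrange(cols)
        while not open_cells[r][c]:
            r, c = rng.randrange(rows), rng.randrange(cols)
        open_cells[r][c] = False
        count += 1

rng = random.Random(0)
trials = 50
par = sum(particles_to_block_parallel(8, rng) for _ in range(trials)) / trials
cham = sum(particles_to_block_chamber(8, 8, rng) for _ in range(trials)) / trials
print(f"parallel: {par:.1f}  chamber: {cham:.1f}")
```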
Proc. SPIE 6231, Unattended Ground, Sea, and Air Sensor Technologies and Applications VIII
KEYWORDS: Defense and security, Digital signal processing, Independent component analysis, Sensors, Image processing, Error analysis, Signal processing, Very large scale integration, Reconstruction algorithms, Computer architecture
Several advanced algorithms for defense and security objectives, including detection, localization, and identification, require high-speed computation of nonlinear functions. Increasingly, such computations must be performed in double-precision accuracy in real time. In this paper, we develop a significance-based interpolative approach to such evaluations for double-precision arguments. It is shown that our approach requires only one major multiplication, which leads to a <i>unified and fast, two-cycle, VLSI architecture</i> for mantissa computations. In contrast, traditional iterative computations require several cycles to converge and typically vary considerably from one function to another. Moreover, when the evaluation pertains to a compound or concatenated function, the overall time required becomes the sum of the times required by the individual operations; with our approach, the time remains two cycles even for such compound or concatenated functions. Very importantly, the paper develops a key formula for predicting and bounding the worst-case arithmetic error. This new result enables the designer to quickly select the architectural parameters without expensive and intolerably long simulations, while guaranteeing the desired accuracy. The specific application focus is the mapping of the Independent Component Analysis (ICA) technique onto a coarse-grain parallel-processing architecture.
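The one-multiplication idea behind such interpolative evaluation can be sketched with a generic first-order table-lookup scheme. This is not the paper's significance-based architecture: the function (log(1+x)), the 10-bit table index, and the error bound below are illustrative assumptions, showing only how splitting the argument into a table-index part and a residual yields one lookup plus one multiply, with an analytically predictable worst-case error.

```python
import math

# Split the argument into a high part (table index) and a low part (residual),
# so evaluation costs one lookup and one multiply: f(x) ~= f(x_hi) + f'(x_hi)*x_lo.
K = 10                                  # table-index bits
TABLE = [(math.log(1 + i / 2**K), 1.0 / (1 + i / 2**K)) for i in range(2**K)]

def log1p_interp(x):
    """Approximate log(1+x) for x in [0, 1) with a single multiplication."""
    assert 0.0 <= x < 1.0
    idx = int(x * 2**K)                 # high bits select the table entry
    x_lo = x - idx / 2**K               # low bits form the residual
    f0, d0 = TABLE[idx]                 # tabulated value and derivative
    return f0 + d0 * x_lo               # the one multiplication

# The worst-case first-order error is bounded analytically by
# max|f''| * step^2 / 2 = 1 * (2**-10)**2 / 2 ~= 4.8e-7 on this interval,
# so no exhaustive simulation is needed to guarantee accuracy.
err = max(abs(log1p_interp(i / 4096) - math.log1p(i / 4096)) for i in range(4096))
print(err)
```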
In a previous paper we described a novel concept for ultra-small, ultra-compact, unattended multi-phenomenological sensor systems for rapid deployment, with integrated classification-and-decision-information extraction capability from the sensed environment. Specifically, we had proposed placing such integrated capability on a 3-D Heterogeneous System on a Chip (HSoC). This paper amplifies two key aspects of that future sensor technology: the creation of inter-layer vias by a high-aspect-ratio MPS (Macro Porous Silicon) process, and the adaptation of the TESH (Tori-connected mESHes) network to bind the diverse leaf nodes on multiple layers of the 3-D structure. Also of interest is the inter-relationship between these two aspects. In particular, we discuss how via failures, both catastrophic and high-resistance failures, can be overcome through the alternative paths available in the TESH network and the corresponding routing strategies. A probabilistic model for via failures is proposed, and the testing of the vias between the sensor layer and the adjacent processing layer is discussed.
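The benefit of alternative paths can be quantified with a toy reliability model. This is not the paper's probabilistic via-failure model: it assumes independent via failures with a fixed probability and a node that stays reachable if any one of several disjoint routes survives; the failure rate, via counts, and path counts below are illustrative.

```python
def node_reachability(p_via_fail, vias_per_path, num_disjoint_paths):
    """Probability that a leaf node remains reachable when each route must
    traverse `vias_per_path` vias, each failing independently with
    probability `p_via_fail`, and `num_disjoint_paths` disjoint routes exist."""
    p_path_ok = (1.0 - p_via_fail) ** vias_per_path
    p_all_paths_fail = (1.0 - p_path_ok) ** num_disjoint_paths
    return 1.0 - p_all_paths_fail

# With 1% via failure and 2 vias per route: a single route vs four disjoint routes.
print(node_reachability(0.01, 2, 1))   # single path
print(node_reachability(0.01, 2, 4))   # redundant routing
```

Even modest redundancy drives the unreachability probability from the percent range down to roughly its fourth power, which is the qualitative effect the TESH alternative-path routing exploits.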
This paper describes a new concept for ultra-small, ultra-compact, unattended multi-phenomenological sensor systems for rapid deployment, with integrated classification-and-decision-information extraction capability from a sensed environment. We discuss a unique approach, namely a 3-D Heterogeneous System on a Chip (HSoC), to achieve at least a 10X reduction in weight, volume, and power, and a 10X or greater increase in capability and reliability, relative to alternative planar approaches. These gains accrue from (a) the avoidance of long on-chip interconnects and chip-to-chip bonding wires, and (b) the cohabitation of sensors, preprocessing analog circuitry, digital logic and signal processing, and RF devices in the same compact volume. A specific scenario is discussed in detail in which a set of four types of sensors, namely an array of acoustic and seismic sensors, an active-pixel sensor array, and an uncooled IR imaging array, are placed on a common sensor plane. The other planes include an analog plane consisting of transducers and A/D converters. The digital processing planes provide the necessary processing and intelligence capability, and the remaining planes provide wireless communications and networking capability. When appropriate, this processing and decision-making is accomplished collaboratively among the distributed sensor nodes through a wireless network.
This paper presents a wavelet-based image coding method achieving high levels of compression. A multi-resolution subband decomposition system is constructed using quadrature mirror filters. Symmetric extension and windowing of the multi-scaled subbands are incorporated to minimize boundary effects. Next, the Embedded Zerotree Wavelet (EZW) coding algorithm is used for data compression. Eliminating the isolated-zero symbol for certain subbands leads to an improved EZW algorithm. Further compression is obtained with an adaptive arithmetic coder. For the aerospace image Refuel, we achieve a PSNR of 26.91 dB at 0.018 bits/pixel, 35.59 dB at 0.149 bits/pixel, and 43.05 dB at 0.892 bits/pixel.
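The PSNR figures quoted above follow the standard definition, 10·log10(peak²/MSE), sketched below. The 64×64 random image and Gaussian noise level are synthetic placeholders, not the Refuel test image.

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.normal(0, 4, size=img.shape), 0, 255)
print(round(psnr(img, noisy), 1))
```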
The paper presents an approach to the design of half-band discrete-time wavelets. This is accomplished through a class of quadrature mirror filters that exhibit a near-perfect reconstruction property. In particular, we present a technique for the design of such filters in which the designer has the flexibility to trade off in-band, out-of-band, and transition-band behavior. The basic formulation is carried out in the frequency domain, which is shown to translate the design problem into an eigenvalue-eigenvector problem. To find the optimal filter for a specific set of specifications, an optimization algorithm is also presented. Using this algorithm, designs ranging from 4 to 80 taps have been carried out successfully. A fairly complete table of the resulting filters, which can be used by signal and image processing engineers, is included in the paper.
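The eigenvalue-eigenvector formulation can be illustrated with a minimal "eigenfilter" sketch in the same spirit, though it is not the paper's near-perfect-reconstruction QMF design: here the stopband energy hᵀQh of a unit-norm FIR filter h is minimized, and the optimum is the eigenvector of the band-energy matrix Q with the smallest eigenvalue. Tap count and band edge are illustrative.

```python
import numpy as np

def eigenfilter_lowpass(num_taps, stopband_edge):
    """Minimize stopband energy E = h^T Q h over unit-norm taps h, where
    Q[m,n] = (1/pi) * integral_{ws}^{pi} cos((m-n) w) dw.
    The minimizer is the eigenvector of Q with the smallest eigenvalue."""
    d = np.subtract.outer(np.arange(num_taps), np.arange(num_taps)).astype(float)
    with np.errstate(invalid="ignore", divide="ignore"):
        q = (np.sin(np.pi * d) - np.sin(stopband_edge * d)) / (np.pi * d)
    # Diagonal entries (d == 0) come from the integral of cos(0) = 1.
    q[np.eye(num_taps, dtype=bool)] = (np.pi - stopband_edge) / np.pi
    vals, vecs = np.linalg.eigh(q)      # eigenvalues in ascending order
    h = vecs[:, 0]                      # eigenvector of the smallest eigenvalue
    return h / h.sum()                  # normalize DC gain to 1 (fixes sign too)

h = eigenfilter_lowpass(16, 0.6 * np.pi)
w = np.linspace(0, np.pi, 512)
H = np.abs(np.exp(-1j * np.outer(w, np.arange(16))) @ h)
```

Different passband/stopband trade-offs are obtained by weighting band-energy matrices for the respective bands before the eigendecomposition, which is where design flexibility of the kind described above enters.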
A neural-network-based algorithm is proposed for the restoration of nuclear medicine images as required for antibody therapy. The method was designed to address the particular problem of restoring planar and tomographic bremsstrahlung data acquired with a gamma camera. Restoration was achieved by minimizing the energy function of a Hopfield network under a maximum-entropy constraint. The performance of the proposed algorithm was tested on simulated data and on planar gamma-camera images of pure β-emitting radionuclides used in radioimmunotherapy. The results were compared with those of previously reported restoration techniques based on neural networks or traditional filters. Qualitative and quantitative analysis of the data suggested that the neural network with the maximum-entropy constraint has good overall restoration performance; it is stable and robust even when the signal-to-noise ratio is poor and scattering effects are significant. This behavior is particularly important when imaging therapeutic doses of pure β emitters, such as yttrium-90, in order to provide accurate <i>in vivo</i> estimates of the radiation dose to the target and/or the critical organs.
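The energy-minimization step can be sketched in continuous form: gradient descent on a Hopfield-style energy combining a data-fidelity term with a maximum-entropy penalty. This is an illustrative 1-D deconvolution, not the gamma-camera pipeline; the Gaussian PSF, penalty weight, step size, and two-spike "activity" are all assumptions.

```python
import numpy as np

def restore_maxent(blurred, psf_matrix, lam=0.01, lr=0.05, iters=500):
    """Gradient descent on E(x) = ||A x - b||^2 + lam * sum(x ln x),
    a continuous Hopfield-style energy with a maximum-entropy penalty
    (maximizing entropy = minimizing sum(x ln x)). x is kept strictly
    positive so the entropy term stays defined."""
    x = np.full(psf_matrix.shape[1], blurred.mean())
    for _ in range(iters):
        grad = 2 * psf_matrix.T @ (psf_matrix @ x - blurred) + lam * (np.log(x) + 1)
        x = np.clip(x - lr * grad, 1e-8, None)
    return x

# Toy 1-D example: a two-spike activity distribution blurred by a Gaussian PSF.
n = 64
true = np.zeros(n); true[20] = 1.0; true[40] = 0.6
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
A /= A.sum(axis=0)                       # column-normalized blur matrix
rng = np.random.default_rng(0)
b = A @ true + 0.01 * rng.standard_normal(n)
est = restore_maxent(b, A)
```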