This PDF file contains the front matter associated with SPIE Proceedings Volume 9496, including the Title Page, Copyright information, Table of Contents, Introduction, Authors, and Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
Heart rate variability (HRV) can be an important indicator of several conditions that affect the autonomic nervous system, including traumatic brain injury, post-traumatic stress disorder and peripheral neuropathy. Recent work has shown that some HRV features can potentially be used for distinguishing a subject's normal mental state from a stressed one. In these past works, HRV analysis is performed on cardiac activity data acquired by conventional electrocardiography electrodes, which may introduce additional stress and complexity into the acquired data. In this paper we use remotely acquired time-series data, extracted from the human facial skin reflectivity signal during rest and mental stress conditions, to compute HRV-driven features. We further apply a set of classification algorithms to distinguish between these two states. To extract the heart beat signal from the facial skin reflectivity, we apply Principal Component Analysis (PCA) for denoising and Independent Component Analysis (ICA) for source selection. To locate the signal peaks and extract the RR-interval time series, we apply a threshold-based detection technique and additional peak-conditioning algorithms. To classify RR-intervals, we explored classification algorithms that are commonly used for medical applications, such as logistic regression and linear discriminant analysis (LDA). The performance of each classifier is measured in terms of sensitivity/specificity, and results from the classifiers are compared to find the optimal classifier for stress detection. This work, performed under an IRB-approved protocol, provides initial proof that a remotely acquired heart rate signal can be used for stress detection. This result shows promise for further development of a remote-sensing stress detection technique for both medical and deception-detection applications.
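The final stages of such a pipeline (threshold-based peak detection, RR-interval extraction, and HRV feature computation) can be sketched in a few lines. The abstract does not list the specific HRV features used, so SDNN and RMSSD below are standard time-domain examples, not necessarily the authors' choices:

```python
import math

def detect_peaks(signal, threshold, min_gap=10):
    """Simple threshold-based peak detection: a sample is a peak if it
    exceeds the threshold, is a local maximum, and respects a refractory gap."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold
                and signal[i] >= signal[i - 1]
                and signal[i] >= signal[i + 1]
                and (not peaks or i - peaks[-1] >= min_gap)):
            peaks.append(i)
    return peaks

def rr_intervals(peak_indices, fs):
    """Convert peak sample indices to RR intervals in seconds."""
    return [(b - a) / fs for a, b in zip(peak_indices, peak_indices[1:])]

def hrv_features(rr):
    """Two standard time-domain HRV features: SDNN and RMSSD."""
    mean_rr = sum(rr) / len(rr)
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr) / len(rr))
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd}
```

The resulting feature vector per recording window would then feed the logistic regression or LDA classifier.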
The human cardiovascular system, controlled by the autonomic nervous system (ANS), is one of the first sites where one can see the "fight-or-flight" response to external stressors. In this paper, we investigate the possibility of detecting mental stress using a novel quantity that can be measured in a contactless manner. Pulse transit time (PTT) refers to the time required for the blood wave (BW) to cover the distance from the heart to a defined remote location in the body. Loosely related to blood pressure, PTT is a measure of blood velocity and is also implicated in the "fight-or-flight" response. We define the differential PTT (dPTT) as the difference in PTT between two remote areas of the body, such as the forehead and the palm. Expanding our previous work on remote BW detection from visible-spectrum videos, we built a system that remotely measures dPTT. Human subject data were collected under an IRB-approved protocol from 15 subjects in both normal and stress states, and are used to initially establish the potential of remote dPTT detection as a stress indicator.
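Once beat arrival times have been extracted from the video signal at each site, the dPTT computation itself is simple. A minimal sketch, assuming the two peak-time lists have already been beat-aligned (the paper's alignment procedure is not described in the abstract):

```python
def mean_dptt(forehead_peaks, palm_peaks):
    """Differential pulse transit time: for each beat, the arrival-time
    difference (seconds) of the blood wave between two body sites;
    returns the mean over beats.  Assumes beat-aligned peak lists."""
    if len(forehead_peaks) != len(palm_peaks):
        raise ValueError("peak lists must be beat-aligned")
    deltas = [p - f for f, p in zip(forehead_peaks, palm_peaks)]
    return sum(deltas) / len(deltas)
```

A stress-induced change in blood velocity would show up as a shift in this mean across the rest and stress recording segments.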
Non-mass enhancing lesions represent a challenge for radiological reading. They are ill-defined in both morphology (geometric shape) and kinetics (temporal enhancement) and pose a problem for lesion detection and classification. To enhance the discriminative properties of an automated radiological workflow, the correct preprocessing steps need to be taken. In a typical computer-aided diagnosis (CAD) system, motion compensation plays an important role. To this end, we employ a new high-accuracy optical-flow-based motion compensation algorithm with robustification variants. An automated computer-aided diagnosis system evaluates the atypical behavior of these lesions, and additionally considers the impact of non-rigid motion compensation on a correct diagnosis.
Current methods for delivery of an anti-restenosis drug to an arterial vessel wall, post-percutaneous transluminal angioplasty and stent placement, are limited in terms of drug choice, dosing level, and ability to assure drug coverage between the struts of a drug-eluting stent. Intravascular ultrasound (IVUS) provides real-time, radiation-free imaging and assessment of atherosclerotic disease in terms of anatomical, functional and molecular information. In this presentation, the design of a dual imaging/therapy IVUS catheter is described and results documenting gene and drug delivery are reported. Microbubbles and drug/gene (shell-associated or co-injected) are dispensed from the catheter tip. Using this approach, it becomes possible to address the need for complete vessel wall coverage and achieve delivery in regions poorly addressed using conventional stent-based approaches. A range of in vitro, ex vivo and in vivo results are presented. Our most recent results include a demonstration in a pig model of coronary balloon angioplasty that produced a 33% reduction in neointima formation relative to a control receiving drug plus microbubbles but no ultrasound.
Chromosome 19 is known to be linked to neurodegeneration and many cancers. Glioma-derived cancer stem
cells (GSCs) are tumor-initiating cells and may be refractory to radiation and chemotherapy and thus have
important implications for tumor biology and therapeutics. The analysis and interpretation of large proteomic data sets require the development of new data mining and visualization approaches; traditional techniques are insufficient to interpret and visualize the resulting experimental data. The emphasis of this paper lies in the presentation of novel approaches for visualization, clustering, and projection representation that unveil hidden data structures relevant for the accurate interpretation of biological experiments. These qualitative and quantitative methods are applied to the proteomic analysis of data sets derived from GSCs. The achieved clustering and visualization results provide more detailed insight into the expression patterns of chromosome 19 proteins.
An important problem in modern therapeutics at the metabolomic, transcriptomic or phosphoproteomic level remains to identify therapeutic targets in a plenitude of high-throughput data from experiments relevant to a variety of diseases. This paper presents the application of novel graph algorithms and modern control solutions, applied to the graph networks resulting from specific experiments, to discover disease-related pathways and drug targets in glioma cancer stem cells (GSCs). The theoretical framework provides the minimal number of "driver nodes" necessary to achieve full control over the obtained graph network, in order to drive the network's dynamics from an initial state (disease) to a desired state (non-disease). The achieved results will provide biochemists with techniques to identify additional metabolic regions and biological pathways for complex diseases, and to design and test novel therapeutic solutions.
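In structural controllability theory, the minimal driver-node set is commonly computed from a maximum matching of the network: every unmatched node needs its own control input. A sketch using Kuhn's augmenting-path matching; the abstract does not state which matching algorithm the authors used, so this is illustrative:

```python
def max_matching(adj, n):
    """Maximum bipartite matching via augmenting paths (Kuhn's algorithm).
    adj maps each node to the list of nodes it points to; in the
    controllability bipartite graph, an edge u->v can 'match' v."""
    match_to = {}                      # right-side node -> matched left node

    def try_augment(u, visited):
        for v in adj.get(u, []):
            if v in visited:
                continue
            visited.add(v)
            if v not in match_to or try_augment(match_to[v], visited):
                match_to[v] = u
                return True
        return False

    size = 0
    for u in range(n):
        if try_augment(u, set()):
            size += 1
    return size

def min_driver_nodes(adj, n):
    """Minimum number of driver nodes for structural controllability:
    N - |maximum matching| (with at least one driver always required)."""
    return max(n - max_matching(adj, n), 1)
```

For example, a directed chain needs a single driver at its head, while a star fanning out from one hub needs a driver for every unmatched leaf.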
A new information measure, drawing on the 1-D Cluster Variation Method (CVM), describes local pattern distributions (nearest-neighbor and next-nearest-neighbor) in a binary 1-D vector in terms of a single interaction enthalpy parameter h, for the specific case where the fractions of elements in each of two states are the same (x1 = x2 = 0.5). An example application of this method would be EEG interpretation in Brain-Computer Interfaces (BCIs), especially in the frontier of invariant biometrics based on distinctive and invariant individual responses to stimuli containing an image of a person with whom there is a strong affiliative response (e.g., to a person's grandmother). The measure is obtained by mapping EEG observed configuration variables (z1, z2, z3 for next-nearest-neighbor triplets) to h, using the analytic function giving h in terms of these variables at equilibrium. This mapping results in a small phase-space region of resulting h values, which characterizes local pattern distributions in the source data. The 1-D vector with equal fractions of units in each of the two states can be obtained using the method for transforming natural images into a binarized equi-probability ensemble (Saremi & Sejnowski, 2014; Stephens et al., 2013). An intrinsically 2-D data configuration can be mapped to 1-D using the 1-D Peano-Hilbert space-filling curve, which has demonstrated a 20 dB lower baseline than other approaches (cf. Hsu & Szu, SPIE ICA, 2014). This CVM-based method has multiple potential applications; a near-term one is optimizing classification of the EEG signals from a COTS 1-D BCI baseball hat. This can result in a convenient 3-D lab-tethered EEG, configured as a 1-D CVM equiprobable binary vector, potentially useful for smartphone wireless display. Longer-range applications include interpreting neural assembly activations via high-density implanted soft, cellular-scale electrodes.
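The 2-D-to-1-D mapping via the Peano-Hilbert space-filling curve mentioned above can be sketched with the standard xy-to-index routine (a power-of-two grid is assumed; this is the generic construction, not code from the cited work):

```python
def xy_to_hilbert(n, x, y):
    """Map grid point (x, y) on an n-by-n grid (n a power of two) to its
    1-D index along the Hilbert space-filling curve, which preserves
    spatial locality far better than row-major scanning."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        # rotate/reflect the quadrant so the sub-curve is oriented canonically
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s //= 2
    return d
```

Applying this index to every pixel of a binarized 2-D configuration yields the 1-D binary vector on which the CVM enthalpy parameter h is then estimated.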
In this paper, we develop a server-client quantization scheme to reduce the bit resolution of deep learning architectures, i.e., Convolutional Neural Networks, for image recognition tasks. Low bit resolution is an important factor in bringing deep neural networks to hardware implementation, as it directly determines cost and power consumption. We aim to reduce the bit resolution of the network without sacrificing its performance. To this end, we design a new quantization algorithm, called supervised iterative quantization, to reduce the bit resolution of the learned network weights. In the training stage, supervised iterative quantization is conducted on the server via two steps: applying k-means-based adaptive quantization to the learned network weights, and retraining the network based on the quantized weights. These two steps are alternated until a convergence criterion is met. In the testing stage, the network configuration and low-bit weights are loaded onto the client hardware device to recognize incoming input in real time, where optimized but expensive quantization becomes infeasible. Considering this, we adopt uniform quantization for the inputs and internal network responses (called feature maps) to keep on-chip expenses low. The Convolutional Neural Network with reduced weight and input/response precision is demonstrated on two types of images: hand-written digit images and real-life images of office scenarios. Both results show that the new network achieves the performance of the full-bit-resolution network, even though the bit resolution of both weights and inputs is significantly reduced, e.g., from 64 bits to 4-5 bits.
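The server-side step, k-means-based adaptive quantization of learned weights, can be illustrated on a plain list of weight values. This sketch shows only the clustering/codebook step, not the retraining loop that the paper alternates with it:

```python
def kmeans_quantize(weights, k, iters=50):
    """Cluster the weight values with k-means (Lloyd's algorithm) and
    replace each weight by its cluster centroid, so only log2(k) bits per
    weight plus a k-entry codebook need be stored."""
    lo, hi = min(weights), max(weights)
    # initialize centroids uniformly over the weight range
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for w in weights:
            j = min(range(k), key=lambda i: abs(w - centroids[i]))
            buckets[j].append(w)
        centroids = [sum(b) / len(b) if b else centroids[i]
                     for i, b in enumerate(buckets)]
    codes = [min(range(k), key=lambda i: abs(w - centroids[i]))
             for w in weights]
    quantized = [centroids[c] for c in codes]
    return quantized, centroids, codes
```

With k = 16 or 32 this corresponds to the 4-5 bit weight resolution quoted in the abstract; the client stores only `codes` and the small `centroids` codebook.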
Accurate and timely knowledge is critical in intelligent transportation systems (ITS), as it leads to improved traffic flow management. Knowledge of the past can be useful for the future, as traffic normally follows a predictable pattern with respect to time of day and day of week. In this paper, we systematically evaluated the prediction accuracy and speed of several supervised machine learning algorithms for congestion identification, based on six weeks of real-world traffic data, from August 1, 2012 to September 12, 2012, in the Maryland (MD)/Washington, DC and Virginia (VA) area. Our full dataset consists of six months of traffic data, from July 1, 2012 to December 31, 2012, of which these six weeks were used as a representative sample for the purposes of this study on our reference roadway, I-270. Our experimental data show that, with respect to classification, a classification tree (Ctree) provided the best prediction accuracy, with an accuracy rate of 100% and a prediction speed of 0.34 seconds. It is pertinent to note that variations exist in prediction accuracy and prediction speed; hence, a tradeoff is often necessary depending on the priorities of the application in question. It is also imperative to note that algorithm design and calibration are important factors in determining effectiveness.
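As a minimal illustration of how a classification tree splits traffic data, the routine below finds the best single threshold on a speed feature (a decision stump, i.e., the root split of a tree). The feature and labels are hypothetical, not drawn from the paper's I-270 dataset:

```python
def best_stump(speeds, labels):
    """Find the speed threshold that best separates congested (label 1)
    from free-flow (label 0) samples -- the kind of split made at the
    root of a classification tree, scored here by misclassification count."""
    pairs = sorted(zip(speeds, labels))
    best = (None, len(labels) + 1)            # (threshold, error count)
    # candidate thresholds: midpoints between consecutive sorted speeds
    candidates = [(a + b) / 2 for (a, _), (b, _) in zip(pairs, pairs[1:])]
    for t in candidates:
        # predict "congested" when the observed speed is below the threshold
        errors = sum((s < t) != bool(y) for s, y in zip(speeds, labels))
        best = min(best, (t, errors), key=lambda x: x[1])
    return best
```

A full classification tree recursively applies this kind of split to each resulting partition, over all available features.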
How can the human brain uncover patterns, associations and features in real-time, real-world data? There must be a general strategy for transforming raw signals into useful features, but a representation of this generalization in the context of our information-extraction tool set is lacking. In contrast to Big Data (BD), Large Data Analysis (LDA) has become a reachable multi-disciplinary goal in recent years, due in part to high performance computers and algorithm development, as well as the availability of large data sets. However, the experience of the Machine Learning (ML) and information communities has not been generalized into an intuitive framework that is useful to researchers across disciplines. The data exploration phase of data mining is a prime example of this unspoken, ad-hoc nature of ML: the computer scientist works with a Subject Matter Expert (SME) to understand the data, and then builds tools (e.g., classifiers) which can benefit the SME and the rest of the researchers in that field. We ask: why is there no tool to represent information in a way that is meaningful to the researcher asking the question? Meaning is subjective and contextual across disciplines, so to ensure robustness we draw examples from several disciplines and propose a generalized LDA framework for independent data understanding of heterogeneous sources, which contributes to Knowledge Discovery in Databases (KDD). We then explore the concept of adaptive information resolution through a 6W unsupervised-learning feedback system. In this paper, we describe the general process of man-machine interaction in terms of asymmetric directed graph theory (digging for embedded knowledge), and model the inverse machine-man feedback (digging for tacit knowledge) as an ANN unsupervised learning methodology. Finally, we propose a collective learning framework which utilizes a 6W semantic topology to organize heterogeneous knowledge and diffuse information to entities within a society in a personalized way.
The earth mover's distance (EMD) measures the difference between two feature vectors and is related to the Wasserstein metric defined for probability distribution functions on a given metric space. The EMD of two vectors is based on the cost of moving the content of individual elements of an anchor vector to match the distribution of a target vector; it is the solution to a transportation problem. We present results of using the EMD in large data analysis problems, such as those involving health data and image data.
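For 1-D histograms of equal total mass, the transportation problem has a closed-form solution: the EMD equals the accumulated absolute difference of the running sums (CDFs). A minimal sketch of that special case:

```python
def emd_1d(p, q):
    """Earth mover's distance between two 1-D histograms of equal mass
    on the same bins, with unit ground distance between adjacent bins:
    the cost equals the accumulated |CDF(p) - CDF(q)|."""
    assert len(p) == len(q)
    total, carry = 0.0, 0.0
    for pi, qi in zip(p, q):
        carry += pi - qi          # mass that must still be moved rightward
        total += abs(carry)
    return total
```

Moving one unit of mass across two bins therefore costs 2, matching the transportation-problem intuition; for general multi-dimensional features the EMD requires a linear-programming solver instead.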
This paper is concerned with the basic issue of the robustness of compressive sensing solutions in the presence of uncertainties. In particular, we are interested in robust compressive sensing solutions under unknown modeling and measurement inaccuracies. The problems are formulated as minimax optimizations. Exact solutions are derived via the Alternating Direction Method of Multipliers (ADMM). Numerical examples show that the minimax formulations indeed improve the robustness of compressive sensing solutions in the presence of model and measurement uncertainties.
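As an illustration of the ADMM machinery referenced above, the sketch below solves the simplest sparse-recovery instance (an identity measurement matrix), where the x-update is closed form and the z-update is l1 soft-thresholding. This is a generic ADMM example under those simplifying assumptions, not the paper's minimax formulation:

```python
def soft_threshold(v, t):
    """Proximal operator of the l1 norm: shrink each entry toward zero by t.
    This is the z-update in ADMM-based sparse recovery."""
    return [max(abs(x) - t, 0.0) * (1 if x > 0 else -1 if x < 0 else 0)
            for x in v]

def lasso_admm_identity(b, lam, rho=1.0, iters=100):
    """ADMM for min_x 0.5*||x - b||^2 + lam*||x||_1 (identity measurement
    matrix, to keep the sketch dependency-free).  The x-update is closed
    form, the z-update is soft-thresholding, u is the scaled dual."""
    n = len(b)
    x, z, u = [0.0] * n, [0.0] * n, [0.0] * n
    for _ in range(iters):
        x = [(b[i] + rho * (z[i] - u[i])) / (1.0 + rho) for i in range(n)]
        z = soft_threshold([x[i] + u[i] for i in range(n)], lam / rho)
        u = [u[i] + x[i] - z[i] for i in range(n)]
    return z
```

For this particular objective the exact minimizer is `soft_threshold(b, lam)`, which gives a convenient check that the ADMM iterations converge to it.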
Smart Sensor Systems, Miniaturization, and Applications
It is common knowledge that we see the world because our eyes can perceive an optical image. A digital camera seems a good example of simulating the eye's imaging system. However, signal sensing and imaging on the human retina is very complicated. There are at least five layers of neurons along the signal pathway: photoreceptors (cones and rods), bipolar, horizontal, amacrine and ganglion cells. To sense an optical image, it would seem that photoreceptors (as sensors) plus ganglion cells (converting to electrical signals for transmission) are good enough. Image sensing does not require a nonuniform distribution of photoreceptors like the fovea. There are some challenging questions, for example: why don't we perceive the "blind spots" (where nerve fibers exit the eyes)? A similar situation occurs in glaucoma patients, who do not notice their vision loss until 50% or more of the nerves have died. Our hypothesis is that the human retina initially senses the optical (i.e., Fourier) spectrum rather than the optical image. Due to the symmetry property of the Fourier spectrum, the signal loss from a blind spot or from dead nerves (in glaucoma patients) can be recovered. The eye's logarithmic response to input light intensity closely resembles a display of Fourier magnitude. The optics and structures of human eyes satisfy the needs of optical Fourier spectrum sampling. It remains unclear where and how an inverse Fourier transform is performed in the human visual system to obtain an optical image. Phase retrieval techniques in the compressive sensing domain enable image reconstruction even without phase inputs. A spectrum-based imaging system can potentially tolerate up to 50% bad sensors (pixels), adapt to a large dynamic range (with logarithmic response), etc.
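The recovery-from-symmetry argument can be demonstrated numerically: the DFT of a real signal satisfies X[n-k] = conj(X[k]), so a lost half of the spectrum (analogous to a blind spot) can be rebuilt from the surviving half. A self-contained sketch, not taken from the paper:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real sequence."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

def restore_from_half(spectrum_half, n):
    """Rebuild the full spectrum of a real signal from bins 0..n//2 using
    conjugate symmetry X[n-k] = conj(X[k]), then invert the DFT."""
    full = list(spectrum_half) + [0j] * (n - len(spectrum_half))
    for k in range(1, n - len(spectrum_half) + 1):
        full[n - k] = spectrum_half[k].conjugate()
    # inverse DFT; the imaginary parts vanish (up to rounding) for real input
    return [sum(full[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n)]
```

Half the spectral samples are thus redundant for real-valued input, which is the sense in which a spectrum-sensing retina could tolerate losing up to 50% of its "pixels."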
Boltzmann's headstone formula S = kB log W turns out to be the Rosetta stone translating the "hieroglyphics" of microwave sensing into an optical display. The LHS is the molecular entropy S, measuring the degree of uniformity of scattering off the sensing cross sections. The RHS is the inverse relationship predicting the Planck radiation spectral distribution, parameterized by the Kelvin temperature T. Use is made of the energy conservation law: the heat-capacity change of the reservoir (RV), TΔS = -ΔE, equals the internal-energy change of the black-box (bb) subsystem. Moreover, irreversible thermodynamics (ΔS > 0 for collisional mixing toward the greater uniformity of heat death), asserted by Boltzmann, yields the so-called Maxwell-Boltzmann canonical probability. Given the zero-boundary-condition black box, Planck solved for discrete standing-wave eigenstates (equation). Together with the canonical partition function (equation), an ensemble average over all possible internal energies yielded the celebrated Planck radiation spectrum (equation), where the density of states is (equation). In summary, given the multispectral sensing data (equation), we applied the Lagrange Constraint Neural Network (LCNN) to solve Blind Source Separation (BSS) for a set of equivalent bb target temperatures. From the measured values, slopes and shapes we can fit a set of Kelvin temperatures T for the bb targets. As a result, we can apply analytic continuation of each entropy source along the temperature-unique Planck spectral curves toward an RGB color-temperature display for any sensing probe frequency.
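The forward model underlying such temperature fitting is Planck's law; a minimal implementation in SI units (a standard formula, offered here as the curve one would fit, not the authors' code):

```python
import math

H = 6.62607015e-34      # Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s
KB = 1.380649e-23       # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Planck spectral radiance B(lambda, T) in W * sr^-1 * m^-3:
    B = (2 h c^2 / lambda^5) / (exp(h c / (lambda k_B T)) - 1)."""
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    return a / (math.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0)
```

Each candidate blackbody temperature traces a unique curve of this family, so fitting measured band values, slopes and shapes to these curves recovers the per-source Kelvin temperatures, as the abstract describes; the peak location follows Wien's displacement law.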
How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants that create a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at the firmware level. The design is consistent with the physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in the appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for the Human Visual System (HVS). We sense, within the spectra, the largest entropy value of obscurants such as fire, smoke, etc. Then we apply a smart firmware implementation of Blind Source Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down-shifting Planck spectra at each pixel and time.
Security is crucial for any system nowadays, especially communications. One of the most successful protocols for communication over IP networks is the Session Initiation Protocol (SIP), an open standard used by many kinds of applications, both open-source and proprietary. High penetration and its text-based design have made SIP the number one target in IP telephony infrastructure, so the security of the SIP server is essential. To keep up with attackers and to detect potential malicious attacks, a security administrator needs to monitor and evaluate SIP traffic in the network. But monitoring and subsequent evaluation can easily overwhelm the security administrator, typically in networks with many SIP servers and users, and with logically or geographically separated segments. The proposed solution lies in automatic attack detection. The article covers detection of VoIP attacks through a distributed network of nodes; an aggregation server then analyzes the gathered data with an artificial neural network, namely a multilayer perceptron trained on a set of collected attacks. Attack data can also be preprocessed and verified with a self-organizing map. The source data are gathered by the distributed network of detection nodes; each node contains a honeypot application and a traffic monitoring mechanism. Aggregating the data from all nodes creates the input for the neural network. Automatic classification on a centralized server with a low false-positive rate reduces the cost of attack detection. The detection system uses a modular design for easy deployment in the final infrastructure. The centralized server collects and processes the detected traffic, and also maintains all detection nodes.
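The multilayer perceptron used for classification builds on the classic single-layer perceptron update rule, sketched below on hypothetical two-feature traffic samples (the features and data are illustrative, not taken from the detection nodes described above):

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Train a single-layer perceptron (the building block of an MLP)
    on binary-labeled feature vectors, e.g. 0 = benign, 1 = attack."""
    dim = len(samples[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            if err:                      # update only on misclassification
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

A real MLP stacks such units with nonlinear activations and trains them by backpropagation, allowing non-linearly-separable attack patterns to be learned.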
Thermal radiation from objects varies within spectral bands according to Planck’s law. By modeling measurements of such radiation as a linear sum of contributions from multiple sources, a thermal image may be separated into multiple images of independent objects that represent the original, composite scene. We pose the scene decomposition as an inverse source separation problem, where multiple spectral images are used to improve temperature resolution of the estimated scene. Based on this concept, a unique algorithm is being developed that will enable thermal imagers to “see through certain obscurants” with image enhancement. Numerical simulations along with real images from multiple bands (MWIR and LWIR) suggest the feasibility of selective source removal and radiative spectral extrapolation, which can lead to thermal image enhancement and improved sensor performance. Practical issues related to the use of multiple spectral images (such as image registration and choice of sensing bands) are also discussed.
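The linear-sum measurement model can be inverted per pixel once the sources' band signatures are known; for two spectral bands and two sources this reduces to a 2x2 solve. A sketch with illustrative signature values (not from the paper):

```python
def unmix_two_sources(a, m):
    """Solve the 2x2 linear model m = a x for the per-source contributions
    x, given each source's known signature per band in matrix a
    (rows = bands, columns = sources), via Cramer's rule."""
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    if abs(det) < 1e-12:
        raise ValueError("band signatures are degenerate; cannot unmix")
    x0 = (m[0] * a[1][1] - a[0][1] * m[1]) / det
    x1 = (a[0][0] * m[1] - m[0] * a[1][0]) / det
    return x0, x1
```

Selective source removal then amounts to subtracting one source's reconstructed contribution from each band, e.g. `m_clean[band] = m[band] - a[band][obscurant] * x_obscurant`; with more bands than sources the same idea becomes a least-squares fit, which is where additional spectral images improve temperature resolution.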
In this paper, we review an auxiliary-function approach to independent component analysis (ICA) and independent vector analysis (IVA). The derived algorithm consists of two alternating updates: 1) a weighted covariance matrix update and 2) a demixing matrix update, neither of which involves tuning parameters such as the step size in gradient descent methods. The monotonic decrease of the objective function is guaranteed by the principle of the auxiliary function method. The experimental evaluation shows that the derived update rules yield faster convergence and better results than natural gradient updates. An efficient implementation on a mobile phone is also presented.
Spectral interference from overlapping spectral lines is a well-documented problem in optical emission spectrometry. It is encountered even when spectrometers with medium to high resolution are used (e.g., with a focal length of 0.75 m to 1 m). The adverse effects of spectral interference are more pronounced when portable spectrometers with low resolution are used (e.g., with focal lengths of about 12.5 cm). Portable spectrometers are suited for "taking part of the lab to the sample" types of applications. We used Artificial Neural Networks (ANNs) and Partial Least Squares (PLS) to address spectral interference correction, and our efforts using these methods are described.
In this paper, we discuss a framework for bridging the gap between security and medical Large Data Analysis (LDA) with functional biomarkers. Unsupervised learning of individual e-IQ and IQ, relying on memory eliciting (e.g., scent, grandmother images) and IQ baseline profiles, could further enhance the ability to uniquely identify and properly diagnose individuals. Sub-threshold changes in a common/probable biomedical biomarker (disorder) mean that an individual remains healthy, while a martingale would require further investigation and more measurements to determine credibility. Empirical measurements of human actions can discover anomalies hidden in data, which point to biomarkers revealed through stimulus response. We review the approach for forming a single-user baseline having 1-d devices, and a scale-invariant representation for N users, each (i) having N*d(i) total devices. Such a fractal representation of human-centric data provides self-similar levels of information and relationships, useful for diagnosis and for identifying causality, anywhere from a mental disorder to a DNA match. Biomarkers from biomedical devices offer a robust way to collect data. Biometrics can be envisioned as enhanced and personalized biomedical devices (e.g., a typing "fist") used for security. As long as the devices have a shared context of origin, useful information can be found by coupling the sensors. In the case of the electroencephalogram (EEG), known patterns have emerged in low-frequency Delta, Theta, Alpha, Beta-Gamma (DTAB-G) waves: when an individual views a familiar picture, the visual-cortex response appears on the EEG as a sharp peak. Using brainwaves as a functional biomarker for security can lead the industry to create more secure sessions by allowing not only passwords but also visual stimuli and/or keystrokes coupled with EEG, to capture and stay informed about real-time user e-IQ/IQ data changes. This holistic Computer Science (CS) Knowledge Discovery in Databases / Data Mining (KDD/DM) approach seeks to merge the fields having a shared data origin: biomarkers revealed through stimulus response.
A grand challenge in biology is to understand the cellular and molecular basis of tissue and organ level function in mammals. The ultimate goals of such efforts are to explain how organs arise in development from the coordinated actions of their constituent cells and to determine how molecularly regulated changes in cell behavior alter the structure and function of organs during disease processes. Two major barriers stand in the way of achieving these goals: the relative inaccessibility of cellular processes in mammals and the daunting complexity of the signaling environment inside an intact organ in vivo. To overcome these barriers, we have developed a suite of tissue isolation, three dimensional (3D) culture, genetic manipulation, nanobiomaterials, imaging, and molecular analysis techniques to enable the real-time study of cell biology within intact tissues in physiologically relevant 3D environments. This manuscript introduces the rationale for 3D culture, reviews challenges to optical imaging in these cultures, and identifies current limitations in the analysis of complex experimental designs that could be overcome with improved imaging, imaging analysis, and automated classification of the results of experimental interventions.
Leadership Award and Smart Brain Computer Interface
This paper gives highlights of the history of the neural network field, stressing the fundamental ideas which have been in play. Early neural network research was motivated mainly by the goals of artificial intelligence (AI) and of functional neuroscience (biological intelligence, BI), but the field almost died due to frustrations articulated in the famous book Perceptrons by Minsky and Papert. When I found a way to overcome the difficulties by 1974, the community mindset was very resistant to change; it was not until 1987/1988 that the field was reborn in a spectacular way, leading to the organized communities now in place. Even then, it took many more years to establish cross-disciplinary research in the types of mathematical neural networks needed to really understand the kind of intelligence we see in the brain, and to address the most demanding engineering applications. Only through a new (albeit short-lived) initiative funding cross-disciplinary teams of systems engineers and neuroscientists were we able to fund the critical empirical demonstrations which put our old basic principle of "deep learning" firmly on the map in computer science. Progress has rightly been inhibited at times by legitimate concerns about the "Terminator threat" and other possible abuses of technology. This year, at SPIE, in the quantum computing track, we outline the next stage ahead of us in breaking out of the box, again and again, and rising to fundamental challenges and opportunities still ahead of us.
Electroencephalography (EEG) measures voltage fluctuations resulting from ionic current flows within the neurons of the brain. In practice, EEG refers to recording the brain's spontaneous electrical activity over a short period of time, typically several tens of minutes, from multiple electrodes placed on the scalp. To improve resolution and reduce the distortion caused by the hair and scalp, large-array magnetoencephalography (MEG) systems have been introduced. The major challenge is to systematically compare the accuracy of epileptic source localization with high electrode density against that obtained with sparser electrode setups. In this report, we demonstrate a two-dimensional (2D) image Fast Fourier Transform (FFT) analysis, along with the use of a Peano (space-filling) curve, to further reduce the hardware requirements of high-density EEG and to improve the accuracy and performance of high-density EEG analysis. The brain-computer interfaces (BCIs) in this work are enhanced by a field-programmable gate array (FPGA) board with optimized 2D image FFT analysis.
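The 2D-FFT step can be sketched in software as follows. This is a minimal illustration, not the authors' FPGA implementation: the 9x9 electrode grid and the synthetic data are assumptions, and a simple boustrophedon (serpentine) scan stands in for the more involved Peano curve.

```python
import numpy as np

rng = np.random.default_rng(0)

# One time sample across a hypothetical 9x9 high-density electrode grid,
# treated as a small image frame (synthetic data for illustration).
frame = rng.normal(size=(9, 9))

# 2D FFT of the frame; the shifted magnitude spectrum summarizes spatial
# structure across the scalp at this instant.
spectrum = np.fft.fft2(frame)
magnitude = np.abs(np.fft.fftshift(spectrum))

# A Peano-style space-filling scan would linearize the grid while preserving
# locality; as a simple stand-in, a boustrophedon scan of the spectrum:
scan = np.concatenate([row if i % 2 == 0 else row[::-1]
                       for i, row in enumerate(magnitude)])
```

The linearized `scan` vector is the kind of 1D stream a downstream hardware pipeline could consume one coefficient at a time.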
The history of brain-computer interfaces (BCIs) starts with Hans Berger's discovery of the electrical activity
of the human brain and the development of electroencephalography (EEG). In recent years, BCI research has
focused on invasive, partially invasive, and non-invasive BCIs. Furthermore, EEG can also be applied to
telepathic communication, which could provide the basis for brain-based communication using imagined
speech. It is possible to use EEG signals to discriminate the vowels and consonants embedded in spoken and
in imagined words, with potential military applications. In this report, we begin with an example of using
high-density EEG and analyze the results using BCIs. The BCIs in this work are enhanced by a
field-programmable gate array (FPGA) board with optimized two-dimensional (2D) image Fast Fourier
Transform (FFT) analysis.
The growth control singularity model suggests that acupuncture points (acupoints) originate from organizers in
embryogenesis. Organizers are singular points in growth control. Acupuncture can perturb a system
with effects similar to simulated annealing. In a clinical trial, the goal of a treatment is to relieve a certain disorder,
which corresponds to reaching a certain local optimum in simulated annealing. The self-organizing capacity of the
system is limited and related to the person's general health and age. Perturbation at acupoints can lead to a stronger
local excitation (analogous to a higher annealing temperature) than perturbation at non-singular points
(placebo control points). This difference diminishes as the number of perturbed points increases, because the
limited self-organizing activity is spread more widely.
This model explains the following facts from systematic reviews of acupuncture trials:
1. Properly chosen single-acupoint treatment for a certain disorder can lead to highly repeatable efficacy above
placebo.
2. When multiple acupoints are used, the results can be highly repeatable if the patients are relatively healthy and
young, but are usually mixed if the patients are old, frail, and have multiple disorders at the same time, as the
number of local optima (comorbidities) increases.
3. As the number of acupoints used increases, the efficacy difference between sham and real acupuncture often
diminishes.
The model predicts that the efficacy of acupuncture is negatively correlated with disease chronicity, disease severity,
and the patient's age. This is the first biological-physical model of acupuncture that can predict and guide clinical
acupuncture research.
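The annealing analogy above can be made concrete with a minimal simulated-annealing sketch (our illustration, not part of the original model): the initial "temperature" plays the role of perturbation strength, with a hot start standing in for stimulation at a true acupoint and a near-zero start for a placebo control point, on a toy multimodal "disorder" landscape.

```python
import math
import random

random.seed(42)

def energy(x):
    # Toy multimodal landscape: local optima separated by barriers.
    return 0.1 * x * x + math.sin(3 * x)

def anneal(t0, steps=5000, cooling=0.999, sigma=0.2):
    """Standard Metropolis annealing from x=4.0 with geometric cooling."""
    x, t = 4.0, t0
    for _ in range(steps):
        cand = x + random.gauss(0, sigma)
        d = energy(cand) - energy(x)
        # Always accept downhill moves; accept uphill moves with
        # probability exp(-d/t), which shrinks as t cools.
        if d < 0 or random.random() < math.exp(-d / t):
            x = cand
        t = max(t * cooling, 1e-6)
    return x, energy(x)

hot = anneal(t0=2.0)    # strong perturbation ("acupoint")
cold = anneal(t0=0.01)  # weak perturbation ("placebo point")
```

With a high starting temperature the walker can cross barriers between local optima before the system cools; with a near-zero temperature it tends to settle into whichever basin is nearest, mirroring the model's stronger-excitation argument.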
The impact of changes in blood pressure and pulse on human speech is examined in this article. Increased physical activity manifests itself in pulse and in systolic and diastolic pressure. There are many methods of measuring and indicating these parameters, but the measurements must be carried out using devices that are not part of everyday life. In most cases, blood pressure and pulse are measured only after health problems or other adverse feelings appear. Research teams are now trying to design and implement modern methods that fit into ordinary human activities. The main objective of this proposal is to reduce the delay between the onset of adverse blood pressure and the warning signs and feelings mentioned above. Speaking is a common and frequent human activity, and it is known that the function of the vocal tract can be affected by changes in heart activity; speech can therefore be a useful signal for detecting physiological changes. A method for detecting human physiological changes by speech processing and artificial neural network classification is described in this article. In this experiment, the pulse and blood pressure changes were induced by physical exercise. The set of measured subjects consisted of ten healthy volunteers of both sexes; none was a professional athlete. The experiment was divided into phases before, during, and after physical training. Pulse and systolic and diastolic pressure were measured, and voice activity was recorded, after each phase. The results of this experiment describe a method for detecting increased cardiac activity from human speech using an artificial neural network.
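The classification step can be sketched with a single-neuron logistic "network" trained by gradient descent. This is a minimal stand-in for the authors' neural network: the two features and their class-conditional distributions are invented for illustration, whereas the real work used features extracted from recorded speech.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in "speech features" per utterance (hypothetical pitch-like
# and jitter-like values): class 0 = at rest, class 1 = after exercise.
X0 = rng.normal([120.0, 0.5], 5.0, size=(50, 2))   # rest
X1 = rng.normal([150.0, 1.5], 5.0, size=(50, 2))   # after exercise
X = np.vstack([X0, X1])
y = np.r_[np.zeros(50), np.ones(50)]

# Standardize features, then train by batch gradient descent on the
# cross-entropy loss of a sigmoid unit.
X = (X - X.mean(0)) / X.std(0)
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))          # sigmoid activation
    w -= 0.5 * (X.T @ (p - y)) / len(y)             # weight gradient step
    b -= 0.5 * (p - y).mean()                       # bias gradient step

acc = ((p > 0.5) == y).mean()                       # training accuracy
```

A multilayer network, as used in the article, follows the same train-then-classify pattern with more parameters.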
Principal component analysis (PCA) is a popular technique for reducing the dimension of a large data set so that more informed conclusions can be drawn about the relationships among the values in the data set. Blind source separation (BSS) is one of the many applications of PCA, where it is used to separate linearly mixed signals into their source signals. This project attempts to implement a BSS system in hardware. Because of the unique characteristics of hardware implementation, the Generalized Hebbian Algorithm (GHA), a learning network model, is used. The FPGA used to compile and test the system is the Altera Cyclone III EP3C120F780I7.
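A software sketch of the GHA update (Sanger's rule) may help: each sample updates the weight matrix as W ← W + η(y xᵀ − LT(y yᵀ)W), where y = Wx and LT keeps the lower triangle. This numpy version is our illustration of the algorithm, not the hardware design, and the synthetic data are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 3-channel data with one dominant direction v (unit norm).
v = np.array([0.8, 0.6, 0.0])
X = rng.normal(size=(2000, 1)) * v + 0.05 * rng.normal(size=(2000, 3))

def gha(X, n_components=2, lr=0.01, epochs=5):
    """Sanger's rule: W <- W + lr * (y x^T - tril(y y^T) W), y = W x."""
    W = rng.normal(scale=0.1, size=(n_components, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = W @ x
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

W = gha(X)
w1 = W[0] / np.linalg.norm(W[0])   # should align with v (up to sign)
```

The sample-by-sample update with only multiply-accumulate operations is what makes GHA attractive for an FPGA pipeline, compared with batch eigendecomposition.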
The conditions that arise in the cocktail party problem prevail across many fields, creating a need for blind source separation (BSS). These fields include array processing, communications, medical signal processing, speech processing, wireless communication, audio, acoustics, and biomedical engineering. The cocktail party problem and BSS led to the development of Independent Component Analysis (ICA) algorithms, which prove useful for applications needing real-time signal processing. The goal of this research was to perform an extensive study of the ability and efficiency of ICA algorithms in performing blind source separation on mixed signals in software, and of their implementation in hardware with a Field Programmable Gate Array (FPGA). The Algebraic ICA (A-ICA), FastICA, and Equivariant Adaptive Separation via Independence (EASI) algorithms were examined and compared. The best algorithm was the one requiring the least complexity and fewest resources while effectively separating mixed sources: the EASI algorithm. EASI was then implemented on an FPGA to analyze its performance in real time.
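Of the algorithms compared, FastICA is the easiest to sketch in software. The following numpy version, with synthetic sine and square-wave sources and an assumed mixing matrix (our illustration, not the paper's implementation), recovers the sources up to order and sign: whiten the mixtures, then find one unmixing direction at a time by the tanh fixed-point iteration with deflation.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(5000)

# Two independent synthetic sources and an assumed 2x2 mixing matrix.
S = np.vstack([np.sin(2 * np.pi * t / 80),
               np.sign(np.sin(2 * np.pi * t / 53))])
A = np.array([[1.0, 0.6], [0.5, 1.0]])
X = A @ S                                   # observed mixtures

def fast_ica(X, n_components, max_iter=200, tol=1e-8):
    # Center and whiten so the mixtures are uncorrelated with unit variance.
    Xc = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(Xc))
    Wh = E @ np.diag(d ** -0.5) @ E.T
    Z = Wh @ Xc
    W = np.zeros((n_components, Z.shape[0]))
    for i in range(n_components):
        w = rng.normal(size=Z.shape[0])
        w /= np.linalg.norm(w)
        for _ in range(max_iter):
            wx = w @ Z
            g, gp = np.tanh(wx), 1 - np.tanh(wx) ** 2
            w_new = (Z * g).mean(axis=1) - gp.mean() * w   # fixed-point step
            w_new -= W[:i].T @ (W[:i] @ w_new)             # deflate found rows
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1) < tol
            w = w_new
            if converged:
                break
        W[i] = w
    return W @ Wh                            # unmixing matrix in input space

Y = fast_ica(X, 2) @ X                       # recovered sources
```

EASI replaces the batch fixed point with a per-sample serial update, which is what made it the better fit for the FPGA; the separation objective is the same.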