It is well known from transformation optics that a light pathway can be designed with artificial materials. When a coordinate-transform technique is applied to optically resonating dielectric structures, interesting phenomena can be observed as well. Generally, a long-lived whispering gallery mode (WGM) has no preferential direction of radiation because of its rotationally symmetric structure. However, if the space inside the resonator is transformed so that a discontinuity of coordinates exists, it becomes possible to reconcile directional emission with WGMs. Here, we transform only the inner space of a deformed optical cavity, e.g., the Limaçon cavity, into a virtual perfect disk via a conformal mapping and show that these two seemingly incompatible behaviors can be observed simultaneously. The refractive-index profiles that realize the transformed space can be obtained from the conformal space transformation. The resonant modes calculated with a transformed boundary element method show that the WGMs can be restored in the deformed cavity. The Husimi function calculated for this transformed cavity shows a weighted band-like profile, which implies that the optical rays inside the cavity maintain their angle of reflection, as in the original cavity. However, the far-field pattern shows anisotropic emission of radiation because it is determined by tunneling through the rotationally asymmetric boundary. For example, the conformal WGMs in Limaçon and center-shifted triangular cavities exhibit bidirectional and unidirectional emission patterns in the far field, respectively. These conformal WGM cavities, with both an ultra-high quality factor and directional light emission, may be used in the realization of efficient directional light sources.
Since the Brain Order Disorder (BOD) group reported on a high-density electroencephalogram (EEG) system that captures neuronal information and wirelessly interfaces with a smartphone [1,2], a larger BOD group has been assembled, including the Obama BRAIN program, the CUA Brain Computer Interface Lab, and the UCSD Swartz Center for Computational Neuroscience. We can implement the pair-electrode correlation functions so as to operate in a real-time daily environment, a computation of complexity O(N³) for N = 10²⁻³, known as functional EEG (f-EEG). Daily monitoring requires two areas of focus. Area #1: quantify the neuronal information flow under arbitrary daily stimulus-response sources. Approach to #1: (i) We have asserted that the sources contained in the EEG signals may be discovered by an unsupervised-learning neural network performing blind source separation (BSS) into independent entropy components, based on irreversible Boltzmann cellular thermodynamics (ΔS < 0), where entropy is a degree of uniformity. What is entropy? Loosely speaking, sand on a beach is more uniform, at a higher entropy value, than the rocks composing a mountain; the internal binding energy tells paleontologists of the existence of information. To a politician, a landslide voting result carries only the winning information but more entropy, while a non-uniform voting distribution record carries more information. For the brain, operating effortlessly at constant temperature, we can solve for the minimum of the Helmholtz free energy (F = E − TS) by computing the BSS and then the pairwise-entropy source correlation function. (ii) Although entropy itself is not information per se, the concurrence of the entropy sources constitutes the information flow of a functional EEG, as sketched in this second BOD report. Area #2: apply EEG biofeedback to improve collective decision making (TBD). Approach to #2: We introduce a novel performance-quality metric in terms of the throughput rate of faster (Δt) and more accurate (ΔA) decision making, which applies to individual as well as team brain dynamics. Following Nobel laureate Daniel Kahneman's book "Thinking, Fast and Slow," through brainwave biofeedback we first identify an individual's "anchored cognitive-bias sources," in order to remove the biases by means of individually tailored pre-processing. The training effectiveness can then be maximized by the collective product Δt · ΔA. For Area #1, we compute a spatiotemporally windowed ensemble average of the EEG using adaptive time-window sampling; the sampling rate depends on the type of neuronal response we seek. The averaged traditional EEG measurements are further improved by BSS decomposition into a finer stimulus-response source mixing matrix [A] with finer and faster spatial grids and rapid temporal updates. The functional EEG is then the second-order covariance matrix defined as the electrode-pair fluctuation correlation function C(s̃, s̃′) of independent thermodynamic source components. (1) We define a 1-D space-filling curve as a spiral curve without origin. This pattern is historically known as the Peano-Hilbert curve with arc length a. By taking the most significant bits of the Cartesian product, a ≡ O(x·y·z), the arc length maps 3-D neighborhood proximity into a 1-D neighborhood arc-length representation. (2) The 1-D Fourier coefficient spectrum has no spurious high-frequency content of the kind that typically arises at lexicographic (zig-zag scanning) discontinuities [Hsu & Szu, "Peano-Hilbert curve," SPIE 2014]. A simple Fourier spectrum histogram fits nicely with the Compressive Sensing CRDT mathematics. (3) A stationary power spectral density is a reasonable approximation of EEG responses in striate layers with resonance feedback loops capable of producing a 100,000-neuron collective impulse response function (IRF). The striate brain-layer architecture represents an ensemble ⟨IRF⟩, e.g., at V1-V4 of Brodmann areas 17-19 of the cortex; i.e., the stationary Wiener-Khinchine-Einstein theorem applies. Goal #1, functional EEG: after taking the 1-D space-filling curve, we compute the ensemble-averaged 1-D power spectral density (PSD) and then use the inverse FFT to generate the f-EEG. Goal #2, individual wellness baseline (IWB): we need novel change detection, so we derive the ubiquitous fat-tail distributions of the healthy-brain PSD in outdoor environments (Signal = 310 K body temperature; Noise = 27°C = 300 K ambient; SNR = 310/300; 300 K ≈ (1/40) eV). A departure from the IWB might imply stress, fever, a sports injury, an unexpected fall, or numerous midnight excursions that may signal the onset of dementia in a Home Alone Senior (HAS), detected by telemedicine caregiver networks. Aging global villagers need mental-healthcare devices that are affordable, harmless, administrable (AHA), and user-friendly, situated in a clothing article such as a baseball hat and able to interface with pervasive smartphones in the daily environment.
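The 3-D-to-1-D Peano-Hilbert mapping invoked above can be illustrated in two dimensions. The following sketch uses the standard bit-interleaving Hilbert-curve algorithm (a generic implementation, not the authors' code) to convert grid coordinates into an arc length, so that spatial neighbors remain neighbors along the 1-D curve:

```python
def hilbert_arc_length(n, x, y):
    """Map (x, y) on an n-by-n grid (n a power of two) to its
    Peano-Hilbert arc length; nearby cells get nearby lengths."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                       # rotate the quadrant to keep the curve continuous
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

# walking the curve in arc-length order visits grid neighbors only
curve = sorted((hilbert_arc_length(8, x, y), x, y)
               for x in range(8) for y in range(8))
```

Consecutive arc lengths always differ by one grid step (|Δx| + |Δy| = 1), which is the locality property that suppresses the spurious high-frequency content of zig-zag scans.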
We propose to use EEG signals for user authentication in applications requiring high security. EEG signals were measured while the subjects viewed several images in sequence. Since subjects' EEG signals differ for known and unknown images, these EEG sequences may be used to identify each subject. Correlation analysis and classification results show the feasibility of user authentication from EEG signals.
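As a toy illustration of the correlation-analysis idea (entirely synthetic; the template vectors, noise model, and threshold are assumptions, not the authors' data or protocol), one can authenticate a trial by correlating its feature vector against stored per-subject templates:

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat = 64
# hypothetical enrolled EEG feature templates, one per subject
templates = {s: rng.normal(size=n_feat) for s in ("alice", "bob")}

def record_trial(subject, noise=0.3):
    # stand-in for a measured trial: the subject's template plus sensor noise
    return templates[subject] + noise * rng.normal(size=n_feat)

def authenticate(trial, claimed, threshold=0.7):
    # accept if the Pearson correlation with the claimed template is high enough
    r = np.corrcoef(trial, templates[claimed])[0, 1]
    return bool(r >= threshold)
```

With these parameters a genuine trial correlates near 1 with its own template, while an impostor trial correlates near 0.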
Spatiotemporal sparseness is generated naturally by the human visual system, based on artificial-neural-network modeling of associative memory. Sparseness means nothing more and nothing less than the information concentration that compressive sensing achieves. To concentrate information, one uses spatial correlation, a spatial FFT or DWT, or, best of all, an adaptive wavelet transform (cf. NUS, Shen Shawei). However, for higher-dimensional spatiotemporal information concentration, mathematics cannot be as flexible as a living human sensory system, evidently for survival reasons. The rest of the story is given in the paper.
Standard unsupervised feature-extraction methods such as PCA and ICA provide representative features and latent variables that minimize the data-reconstruction error. These generative features may be common to all data and may not be optimal for classification tasks. The discriminant ICA (dICA) and discriminant NMF (dNMF) algorithms have recently been proposed, which jointly maximize the Fisher linear discriminant and the negentropy of the extracted features. Motivated by independence among features and a modified Fisher linear discriminant, the new algorithm extracts features with both generative and discriminant power. The features are then further fine-tuned by supervised learning. Experimental results show excellent recognition performance with these features.
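The Fisher-criterion ingredient of such discriminant feature scoring can be sketched per feature as the ratio of between-class to within-class variance (a generic illustration on synthetic data; this is not the dICA/dNMF algorithm itself):

```python
import numpy as np

def fisher_scores(X, y):
    """Per-feature Fisher criterion: between-class over within-class variance."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    return between / within

rng = np.random.default_rng(1)
y = np.repeat([0, 1], 100)
X = rng.normal(size=(200, 3))
X[:, 0] += 3 * y                    # only feature 0 separates the two classes
scores = fisher_scores(X, y)        # feature 0 scores highest
```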
Artificial Cognitive Systems (ACS) will be developed for human-like functions such as vision, audition, inference, and behavior. In particular, computational models and artificial HW/SW systems will be devised for Proactive Learning (PL) and Self-Identity (SI). The PL model provides bilateral interactions between a robot and an unknown environment (people, other robots, cyberspace). Situation awareness in an unknown environment requires receiving audiovisual signals and accumulating knowledge. If the knowledge is not sufficient, the PL should improve it by itself through the internet and other means. Human-oriented decision making also requires the robot to have self-identity and emotion. Finally, the developed models and system will be mounted on a robot for a human-robot coexisting society. The developed ACS will be tested against a new Turing Test for situation awareness. The test problems will consist of several video clips, and the performance of the ACSs will be compared against that of humans with several levels of cognitive ability.
In noisy environments, human speech perception utilizes visual lip-reading as well as audio phonetic classification. This audio-visual integration may be done by combining the two sensory features at an early stage; alternatively, top-down attention may integrate the two modalities. For sensory feature fusion we introduce mapping functions between the audio and visual manifolds. In particular, we present an algorithm that provides a one-to-many mapping function for the video-to-audio mapping. The top-down attention is also presented to integrate both the sensory features and the classification results of both modalities, which is able to explain the McGurk effect. Each classifier is separately implemented by a hidden Markov model (HMM), but the two classifiers are combined at the top level and interact via the top-down attention.
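Decision-level fusion of the two modalities can be sketched with a weighted product rule over class posteriors (the posteriors, class count, and attention weight below are invented for illustration; the paper's actual scheme couples two HMMs via top-down attention):

```python
import numpy as np

# hypothetical per-modality posteriors over 3 phoneme classes (made-up numbers)
p_audio = np.array([0.6, 0.3, 0.1])
p_visual = np.array([0.2, 0.7, 0.1])

alpha = 0.5                                   # attention weight toward the audio stream
fused = p_audio ** alpha * p_visual ** (1 - alpha)
fused /= fused.sum()                          # renormalize to a probability distribution
```

With equal attention, the strongly lip-read class 1 outweighs the audio-preferred class 0, which is the kind of cross-modal override seen in the McGurk effect.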
Classification performance on recognition tasks can be improved by selecting highly discriminative features from a low-dimensional linear representation of the data. High-dimensional multivariate data can be represented in lower dimensions by unsupervised feature-extraction techniques, which attempt to remove the redundancy in the data and/or resolve multivariate prediction problems. These extracted low-dimensional features of the raw data may not ensure good class discrimination; therefore, supervised feature-selection methods motivated by information-theoretic approaches can improve recognition performance with a smaller number of features. The proposed hybrid feature-selection methods efficiently select features with higher class discrimination than feature-class mutual information (MI), the Fisher criterion, or unsupervised selection using variance, thus resulting in much improved recognition performance. The feature-class MI criterion and the hybrid feature-selection methods are computationally scalable and are optimal selectors for statistically independent features.
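A minimal sketch of the feature-class MI criterion, using a histogram estimator on synthetic data (the bin count, distributions, and sample sizes are illustrative assumptions, not the proposed hybrid method):

```python
import numpy as np

def mutual_information(feature, labels, bins=8):
    """Histogram estimate of I(feature; label) in nats."""
    joint, _, _ = np.histogram2d(feature, labels,
                                 bins=(bins, len(np.unique(labels))))
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(2)
y = np.repeat([0, 1], 500)
informative = y + 0.3 * rng.normal(size=1000)   # tracks the class label
noise = rng.normal(size=1000)                    # independent of the class

mi_signal = mutual_information(informative, y)   # close to ln 2
mi_noise = mutual_information(noise, y)          # close to 0
```

Ranking features by this score and keeping the top few is the simplest MI-based selector; the hybrid methods in the abstract refine this ranking with class-discrimination criteria.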
Biomolecular detection using Localized Surface Plasmon Resonances (LSPR) has been extensively investigated
because these techniques enable label-free detection. The high-density metal nanopatterns with tunable LSPR
characteristics have been used for refractive-index sensing because the LSPR property is highly sensitive to refractive-index changes of the surroundings. Meanwhile, colloidal lithography is a robust method for fabricating regularly ordered
nanostructures in a controlled and reproducible way using spontaneous assembly of colloidal particles. In this study,
nanopatterns on UV-curable polymer were prepared via colloidal lithography. Then, metallic nanograil arrays with high
density were fabricated by sputtering noble metals such as gold and subsequent removal of residual polymers and
colloidal particles. From Finite-Difference Time-Domain Method (FDTD) simulations and reflectance spectra, we found
that multiple dipolar plasmon modes were induced by gold nanograil arrays and each mode was closely related to
structural parameters. LSPR characteristics of gold nanograil arrays could be tuned by varying the fabrication conditions
to obtain optimal structures for LSPR sensing. Sensing behavior of gold nanograil arrays was tested by applying various
solvents with different refractive indices and measuring the variations of LSPR dips. Finally, gold nanograil arrays as
LSPR sensors were integrated in optofluidic devices and used to achieve real-time label-free monitoring of biomolecules.
Fractional correlation is an extension of conventional correlation. It employs the fractional Fourier transform (FRFT), which includes the conventional Fourier transform as the special case where the order of the FRFT equals one. Because the FRFT lacks the shift-invariance property, it is not applicable to the conventional joint transform correlator but rather to the nonconventional joint transform correlator (NJTC) proposed by F. T. S. Yu et al., in which separate lenses transform the input signals and their spectral distributions overlap on the square-law detector. This provides an optical implementation of the fractional correlation. The conventional Fourier transform generally yields a high peak at the center of the spectral plane, whereas the FRFT gives a spectral distribution with no high peak, which is desirable because the square-law detector has a finite dynamic range over which it is linear. Moreover, we prove that the fractional correlation produces a narrower output distribution and has the same correlation value at the center of the output plane as the conventional correlation. The conventional correlation has the shift-invariance property, but the fractional correlation does not.
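Since the FRFT of order one reduces to the ordinary Fourier transform, the conventional-correlation special case can be sketched numerically via the correlation theorem (the fractional case would replace the FFTs with FRFTs of fractional order; the rectangular test signal is an invented example):

```python
import numpy as np

def correlate_fft(f, g):
    """Circular cross-correlation via the correlation theorem:
    corr = IFFT( FFT(f) * conj(FFT(g)) )."""
    return np.fft.ifft(np.fft.fft(f) * np.conj(np.fft.fft(g)))

f = np.zeros(64)
f[10:20] = 1.0                       # a rectangular reference signal
g = np.roll(f, 5)                    # the same signal shifted by 5 samples

# shift-invariance of the order-1 case: the correlation peak lands at the shift
peak = int(np.argmax(np.abs(correlate_fft(g, f))))
```

For a fractional order p ≠ 1 this peak location would depend on the absolute input position, which is exactly the loss of shift-invariance the abstract describes.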
A new optical architecture is developed, based on fractional Fourier transforms, that compromises between shift-invariant (frequency) and position-dependent filtering. The analogy of this architecture to wavelet transforms and adaptive neural networks is also presented. The ambiguity and Wigner distribution functions are obtainable from special cases of the filter. The filter design corresponds to the training of the neural networks, and an adaptive learning algorithm is developed based on gradient-descent error minimization and error back propagation. The extension to multilayer architecture is straightforward.
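The gradient-descent error-minimization loop mentioned above can be illustrated with a scalar LMS adaptive filter (a generic sketch; the tap count, step size, and teacher filter are made-up stand-ins for the optical filter parameters):

```python
import numpy as np

rng = np.random.default_rng(4)
n_taps, mu = 4, 0.05
h_true = np.array([0.5, -0.3, 0.2, 0.1])    # unknown filter the network must learn
w = np.zeros(n_taps)                         # adaptive weights, initialized to zero
x = rng.normal(size=5000)                    # white training input

for n in range(n_taps, len(x)):
    u = x[n - n_taps:n][::-1]                # current tap vector
    d = h_true @ u                           # desired (teacher) output
    e = d - w @ u                            # output error
    w += mu * e * u                          # gradient-descent (LMS) weight update
```

With a noiseless teacher the weights converge to the true filter, the same stationary point a back-propagation trained single layer would reach.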
Fully programmable higher order optical interconnections are described using holographic lenslet arrays and spatial light modulators. Adaptive neural net models can be implemented with this interconnection scheme. To demonstrate its feasibility, basic experiments conducted for adaptive neural network models are reported.
The training-by-adaptive-gain (TAG) neural network model, which was developed for optical implementation of large-scale artificial neural networks, is further extended for better performance, and its feasibility is demonstrated by a small-scale electro-optic implementation. For fully interconnected single-layer neural networks with N input and M output neurons, the modified TAG model contains two different types of interconnections, i.e., MN fixed global interconnections and βN + M adaptive local interconnections. For the original TAG model the number of adaptive local interconnections β was set to 1, and the interconnections were understood as adaptive gains. For 2-dimensional input and output patterns, the fixed global interconnections may be achieved by page-oriented holograms, and the adaptive local interconnections by spatial light modulators. The original and modified TAG models require far fewer adaptive elements than the popular perceptron model with fully adaptive global interconnections, and offer the possibility of implementing large-scale artificial neural networks with some sacrifice in performance. The training algorithm is based on gradient descent and error back-propagation, and is easily extensible to multi-layer architectures. Computer simulation and electro-optic implementation demonstrate much better performance of the modified TAG model compared to the original TAG model.
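The saving in adaptive elements follows directly from the interconnection counts quoted above (MN for a fully adaptive perceptron versus βN + M for TAG); a one-line comparison, with N = M = 1000 as an illustrative size:

```python
def adaptive_count(N, M, beta=1, model="tag"):
    """Adaptive interconnections: beta*N + M for TAG (the MN global
    weights are fixed holograms), versus M*N for a fully adaptive perceptron."""
    return beta * N + M if model == "tag" else M * N

tag = adaptive_count(1000, 1000)                              # 2000 adaptive elements
perceptron = adaptive_count(1000, 1000, model="perceptron")   # 1,000,000 adaptive elements
```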
SC715: Independent Component Analysis and Beyond: Blind Signal Processing and its Applications
Blind Signal Processing (BSP) is an emerging area of research and technology with solid theoretical foundations and many potential applications. The problems of separating or extracting source signals from sensor arrays, without knowledge of the transmission-channel characteristics and the real sources, can be addressed by a number of blind source separation (BSS) or related generalized component analysis (GCA) methods: Independent Component Analysis (ICA) (and its extensions), Sparse Component Analysis (SCA), Sparse Principal Component Analysis (SPCA), Non-negative Matrix Factorization (NMF), Time-Frequency Component Analyzer (TFCA), and Multichannel Blind Deconvolution (MBD). BSP is not limited to ICA or BSS. With BSP we aim to discover and validate principles or laws that govern relationships between inputs (hidden components) and outputs (observations) when information about the propagating Multi-Input Multi-Output (MIMO) system and its inputs is limited or hindered. BSP incorporates many problems, such as blind identification of the channels of unknown systems, or the decomposition of signals into basic latent (hidden) components that do not necessarily represent true sources but rather some of their features or sub-components.
This four-hour course presents the fundamentals of blind signal processing, especially blind source separation and extraction, and in the remaining time discusses their applications in several important signal-processing areas, including estimation of sources, enhancement, denoising, artifact removal, filtering, detection, and classification of multi-sensory signals and data, especially in biomedical applications and the Brain-Computer Interface (BCI).
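The core ICA step of such a BSS pipeline can be sketched numerically with a whitening stage followed by FastICA deflation using a tanh nonlinearity (the two sources and the mixing matrix below are invented for illustration; real EEG separation involves many more channels and careful preprocessing):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 8, 4000)
S = np.c_[np.sign(np.sin(3 * t)), np.sin(5 * t)].T   # two independent sources
A = np.array([[1.0, 0.6], [0.5, 1.0]])               # unknown mixing matrix
X = A @ S                                             # observed sensor mixtures

# whiten the observations
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# FastICA, deflation scheme with g = tanh
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    for _ in range(200):
        wx = w @ Z
        w_new = (Z * np.tanh(wx)).mean(axis=1) - (1 - np.tanh(wx) ** 2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)   # decorrelate from components found earlier
        w = w_new / np.linalg.norm(w_new)
    W[i] = w

Y = W @ Z    # recovered sources, up to permutation and sign
```

Each row of Y matches one of the original sources almost perfectly, which is the kind of separation the course applies to artifact removal in EEG and BCI data.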