Assorted technologies such as EEG, MEG, fMRI, BEM, MRI, TMS, and BCI are being integrated to understand how
human visual cortical areas interact during controlled laboratory and natural viewing conditions. Our focus is on the
problem of separating signals from the spatially close early visual areas. The solution involves taking advantage of
known functional anatomy to guide stimulus selection and employing principles of spatial and temporal response
properties that simplify analysis. The method also unifies MEG and EEG recordings and provides a means for improving
existing boundary element head models. Moving beyond carefully controlled stimuli to natural viewing with scanning
eye movements, assessing brain states with a BCI becomes a particularly challenging task. Frequent eye movements contribute artifacts
to the recordings. A linear regression method is introduced that effectively characterizes these frequent
artifacts and could be used to remove them. In free viewing, saccadic landings initiate visual processing epochs and
could be used to trigger strictly time-based analysis methods. However, temporal instabilities indicate that frequency-based
analysis would be an important adjunct. A class of Cauchy filter functions is introduced that has narrow time- and
frequency-domain properties well matched to the EEG/MEG spectrum for avoiding channel leakage.
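A minimal sketch of how such a regression-based artifact characterization could work (the reference-channel name, the single-regressor model, and the toy coefficients below are illustrative assumptions, not the method's actual formulation):

```python
# Sketch: regression-based removal of eye-movement artifacts from EEG/MEG.
# Assumes a reference ocular channel (e.g., EOG) recorded alongside the data;
# the single-regressor least-squares model is illustrative only.

def remove_ocular_artifact(data, eog):
    """Least-squares fit of the ocular channel into one data channel,
    then subtraction of the fitted contribution."""
    n = len(data)
    mean_d = sum(data) / n
    mean_e = sum(eog) / n
    # Ordinary least-squares slope: cov(eog, data) / var(eog)
    cov = sum((e - mean_e) * (d - mean_d) for e, d in zip(eog, data))
    var = sum((e - mean_e) ** 2 for e in eog)
    b = cov / var
    # Remove the estimated ocular contribution from the channel
    return [d - b * e for d, e in zip(data, eog)]

# Toy example: a channel contaminated by 0.5 x EOG plus a small signal
eog = [0.0, 1.0, 2.0, 3.0, 4.0]
signal = [0.1, -0.1, 0.1, -0.1, 0.1]
contaminated = [s + 0.5 * e for s, e in zip(signal, eog)]
cleaned = remove_ocular_artifact(contaminated, eog)
```

With a purely additive contamination, the fitted slope recovers the propagation coefficient and the cleaned trace recovers the underlying signal.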
The pupil dilation reflex is mediated by inhibition of the parasympathetic Edinger-Westphal oculomotor complex and by
sympathetic activation. It has long been documented that emotional and sensory events elicit a pupillary reflex dilation. Is
the pupil response a reliable marker of a visual detection event? In two experiments where viewers were asked to report
the presence of a visual target during rapid serial visual presentation (RSVP), pupil dilation was significantly associated
with target detection. The amplitude of the dilation depended on the frequency of targets and the time of the detection.
Larger dilations were associated with trials having fewer targets and with targets viewed earlier during the trial. We also
found that dilation was strongly influenced by the visual task.
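One simple way such a dilation amplitude could be quantified is event-locked, baseline-corrected averaging of the pupil trace around each detection report. The sketch below is an illustration under assumed parameters (window lengths and sample values are made up), not the experiments' actual analysis pipeline:

```python
# Sketch: event-locked, baseline-corrected pupil dilation measurement.
# Window lengths and the toy trace are illustrative assumptions.

def dilation_amplitude(trace, event, baseline=2, window=3):
    """Mean pupil size in a post-event window minus the pre-event baseline mean."""
    pre = trace[event - baseline:event]
    post = trace[event:event + window]
    return sum(post) / len(post) - sum(pre) / len(pre)

# Toy pupil trace (arbitrary units): flat baseline, then a dilation at sample 5
trace = [3.0, 3.0, 3.0, 3.0, 3.0, 3.2, 3.4, 3.4, 3.3, 3.1]
amp = dilation_amplitude(trace, event=5)
```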
The scanpath theory was defined in 1971 by David Noton and Lawrence Stark in two articles that appeared in Science<sup>1</sup> and Scientific American<sup>2</sup>, and since then it has been considered one of the most influential theories of vision and eye movements. The scanpath theory explains the vision process in a top-down fashion by proposing that an internal cognitive representation controls not only visual perception but also the related mechanism of active looking eye movements. Evidence supporting the scanpath theory comes from experiments showing the repetitive and idiosyncratic nature of eye movements during experiments with ambiguous figures, visual imagery and dynamic scenes. Similarity metrics were defined in our analysis procedures to quantitatively compare and measure the sequences of eye fixations in different experimental conditions. More recent scanpath experiments performed using different motor read-out systems have served to better understand the structure of the visual image representation in the brain and the presence of several levels of binding. Special emphasis must be given to the role of bottom-up conspicuity elaboration in the control of the scanpath sequence and to the interconnection of conspicuity with such higher-level cognitive representations.
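One common way to realize such a similarity metric over fixation sequences is string editing, with each fixation coded by the label of the region-of-interest it lands in. The sketch below illustrates the idea; the exact metric used in the original analyses may differ in normalization and labeling:

```python
# Sketch: a string-editing similarity between two scanpaths. Each fixation
# is coded by the letter of the region-of-interest it lands in.

def edit_distance(a, b):
    """Classic Levenshtein distance between two region-label strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def scanpath_similarity(a, b):
    """1.0 for identical sequences, approaching 0.0 for very different ones."""
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

# Two scanpaths over regions A-D that differ in a single fixation
s = scanpath_similarity("ABCDAB", "ABCCAB")
```

A repetitive, idiosyncratic scanpath would show high similarity across repeated viewings by the same subject and lower similarity across subjects.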
Eye movements, EMs, are an important component of vision: only specific regions of the visual input are fixated and processed by the brain at high resolution. The rest of the image is viewed at a lower, coarser resolution by the retina, yet the image is still perceived and recognized uniformly and clearly. We embodied this sampling characteristic of human vision within a computational model, A*, based on a collection of image processing algorithms that are able to predict regions of visual interest. Several web-related applications are presented and discussed in this paper.
We have studied four approaches to segmentation of images: three automatic ones using image processing algorithms and a fourth approach, human manual segmentation. We were motivated by an important NASA Mars rover mission task -- replacing laborious manual path planning with automatic navigation of the rover on the Mars terrain. The goal of the automatic segmentations was to identify an obstacle map on the Mars terrain to enable automatic path planning for the rover. Automatic segmentation was first explored with two different methods: one based on pixel luminance, and the other based on pixel altitude generated through stereo image processing. The third automatic segmentation was achieved by combining these two types of image segmentation. Human manual segmentation of Martian terrain images was used for evaluating the effectiveness of the combined automatic segmentation, as well as for determining how different humans segment the same images. Comparisons between two different segmentations, manual or automatic, were measured using a similarity metric, S<SUB>AB</SUB>. Based on this metric, the combined automatic segmentation agreed fairly well with the manual segmentation. This demonstrates a positive step toward automatically creating the accurate obstacle maps necessary for automatic path planning and rover navigation.
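The abstract does not define S<SUB>AB</SUB>; purely as an illustration, a symmetric overlap ratio between two binary obstacle maps (intersection over union of obstacle pixels) can play the same role of scoring agreement between a manual and an automatic segmentation:

```python
# Sketch: comparing two binary obstacle maps with an overlap-based similarity.
# This intersection-over-union form is an assumed stand-in, not the paper's
# actual definition of the metric.

def overlap_similarity(seg_a, seg_b):
    """Intersection-over-union of obstacle pixels in two flattened binary maps."""
    inter = sum(a and b for a, b in zip(seg_a, seg_b))
    union = sum(a or b for a, b in zip(seg_a, seg_b))
    return inter / union if union else 1.0

# Toy 4x4 maps flattened to lists: 1 = obstacle, 0 = traversable terrain
manual    = [0, 1, 1, 0,  0, 1, 1, 0,  0, 0, 0, 0,  1, 1, 0, 0]
automatic = [0, 1, 1, 0,  0, 1, 0, 0,  0, 0, 0, 0,  1, 1, 1, 0]
s_ab = overlap_similarity(manual, automatic)
```

The two toy maps disagree on two of seven obstacle pixels, giving a similarity of 5/7.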
In parallel with our studies on human eye movements, we have investigated image processing algorithms that predict where human eyes fixate. These loci of fixations, traditionally named Regions-of-Interest, ROIs, are strategically important both for computer applications and for cognitive studies of human visual processing. A very important aspect of our methodology, beyond the specific image processing algorithms used, is how to select, from a large initial set of candidates, usually local maxima in the processed image, a final set of a few ROIs. In this paper we analyze this latter aspect, proposing and comparing different clustering procedures and studying how different procedures may affect the fidelity of comparisons with human-selected ROIs.
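As one concrete example of such a selection procedure, candidate local maxima can be reduced to a few ROIs by greedy, strength-ranked clustering with a distance threshold. The scheme, radius, and ROI count below are hypothetical, not necessarily among the procedures compared in the paper:

```python
# Sketch: reducing many candidate points (local maxima) to a few ROIs by
# greedy distance-threshold clustering. Parameters are illustrative.

def cluster_rois(points, radius=50.0, max_rois=5):
    """points: list of (x, y, strength). Keep the strongest candidate,
    discard all others within `radius` of it, and repeat."""
    remaining = sorted(points, key=lambda p: -p[2])
    rois = []
    while remaining and len(rois) < max_rois:
        x, y, _ = remaining.pop(0)
        rois.append((x, y))
        remaining = [p for p in remaining
                     if (p[0] - x) ** 2 + (p[1] - y) ** 2 > radius ** 2]
    return rois

candidates = [(10, 10, 0.9), (15, 12, 0.5),    # two maxima in one cluster
              (200, 40, 0.8), (205, 45, 0.7),  # another cluster
              (90, 300, 0.6)]
rois = cluster_rois(candidates)
```

The five candidates collapse to three ROIs, one per spatial cluster; varying the radius or the ranking rule is exactly the kind of choice whose effect on agreement with human-selected ROIs can then be measured.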
The scanpath theory proposed that an internal spatial-cognitive model controls perception and the active looking eye movements, EMs, of the scanpath sequence. Evidence for this came from new quantitative methods, experiments with ambiguous figures and visual imagery, and from MRI studies, all on cooperating human subjects. Besides recording EMs, we introduce other experimental techniques wherein the subject must depend upon memory bindings, as in visual imagery, but may call upon motor behaviors other than EMs to read out the remembered patterns. How is the internal model distributed and operationally assembled? The concept of binding speaks to the assigning of values for the model and its execution in various parts of the brain. Current neurological information helps to localize different aspects of the spatial-cognitive model in the brain. We suppose that there are several levels of 'binding' -- semantic or symbolic binding, structural binding for the spatial locations of the regions-of-interest, and sequential binding for the dynamic execution program that yields the sequence of EMs. Our aim is to dissect out the respective contributions of these different forms of binding.
We have developed a focused-procedure based upon a collection of image processing algorithms that serve to identify regions-of-interest (ROIs) over a digital image. The loci of these ROIs are quantitatively compared with ROIs identified by human eye fixations, or glimpses, while subjects were looking at the same digital images. The focused-procedure is applied to adjust and adapt the compression ratio over a digital image: high resolution and little compression for ROIs; low resolution and strong compression for the major expanse of the entire image. In this way, an overall high compression ratio can be achieved while at the same time preserving important visual information within particularly relevant regions of the image. We have bundled the focused-procedure with JPEG, so that the result of the compression is formatted into a file compatible with standard JPEG decoding. Thus, once the image has been compressed, it can be read without difficulty.
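The spatially varying compression can be sketched as a per-block quality map: blocks near an ROI get a high JPEG-style quality setting, the rest of the image a low one. Block size, quality levels, and the circular ROI neighborhood below are illustrative assumptions; the coupling to actual JPEG quantization tables is not shown:

```python
# Sketch: per-block quality map for ROI-adaptive compression.
# All parameters are illustrative, not the focused-procedure's actual values.

def quality_map(width, height, rois, block=8, hi=90, lo=20, radius=24):
    """Assign a high quality to 8x8 blocks near an ROI center (x, y)
    and a low quality elsewhere."""
    cols, rows = width // block, height // block
    qmap = []
    for r in range(rows):
        row = []
        for c in range(cols):
            # Center of this block in pixel coordinates
            cx, cy = c * block + block / 2, r * block + block / 2
            near = any((cx - x) ** 2 + (cy - y) ** 2 <= radius ** 2
                       for x, y in rois)
            row.append(hi if near else lo)
        qmap.append(row)
    return qmap

# One ROI near the top-left of a 64x64 image
qmap = quality_map(64, 64, rois=[(16, 16)])
```

A JPEG-style encoder could then scale its quantization per block from this map, keeping the bitstream decodable by a standard decoder, which is the design goal described above.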