Associating schizophrenia with disrupted functional connectivity is a central idea in schizophrenia research. However, identifying neuroimaging-based features that can serve as reliable “statistical biomarkers” of the disease remains a challenging open problem. We argue that generalization accuracy and stability of candidate features (“biomarkers”) must be used as additional criteria on top of standard significance tests in order to discover more robust biomarkers. Generalization accuracy refers to the utility of biomarkers for making predictions about individuals, for example discriminating between patients and controls, in novel datasets. Feature stability refers to the reproducibility of the candidate features across different datasets. Here, we extracted functional connectivity network features from fMRI data at both a high resolution (voxel level) and a spatially down-sampled lower resolution (“supervoxel” level). At the supervoxel level, we used whole-brain network links, while at the voxel level, due to the intractably large number of features, we sampled a subset of them. We compared the statistical significance, stability, and discriminative utility of both feature types in a multi-site fMRI dataset composed of schizophrenia patients and healthy controls. For both feature types, a considerable fraction of features showed significant differences between the two groups. Also, both feature types were similarly stable across multiple data subsets. However, the whole-brain supervoxel functional connectivity features showed a higher cross-validation classification accuracy of 78.7% vs. 72.4% for the voxel-level features. Cross-site variability and heterogeneity in the patient samples in the multi-site FBIRN dataset made the task more challenging compared to single-site studies. The use of the above methodology, in combination with a fully data-driven approach using whole-brain information, has the potential to shed light on “biomarker discovery” in schizophrenia.
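The supervoxel link features described above amount to the upper triangle of a correlation matrix. A minimal numpy sketch of that feature extraction (the time-series shapes and synthetic data are illustrative assumptions, not the study's pipeline):

```python
import numpy as np

def connectivity_features(ts):
    """Upper-triangular correlation links of a (time x regions) series."""
    C = np.corrcoef(ts.T)              # regions x regions correlation matrix
    iu = np.triu_indices_from(C, k=1)  # skip the diagonal self-correlations
    return C[iu]                       # one feature per region pair

rng = np.random.default_rng(0)
ts = rng.standard_normal((120, 10))    # 120 time points, 10 "supervoxels"
feats = connectivity_features(ts)
print(feats.shape)                     # 10*9/2 = 45 link features
```

Each scan thus yields one fixed-length feature vector that can be fed to a classifier and evaluated by cross-validation across subjects or sites.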
The objective of this study is to investigate the effects of methylphenidate on brain activity in individuals with cocaine use disorder (CUD) using functional MRI (fMRI). Methylphenidate hydrochloride (MPH) is an indirect dopamine agonist commonly used for treating attention-deficit/hyperactivity disorder; it was also shown to have some positive effects on CUD subjects, such as improved stop-signal reaction times associated with better control/inhibition [1], as well as normalized task-related brain activity [2] and resting-state functional connectivity in specific areas [3]. While prior fMRI studies of MPH in CUD have focused on mass-univariate statistical hypothesis testing, this paper evaluates multivariate, whole-brain effects of MPH as captured by the generalization (prediction) accuracy of different classification techniques applied to features extracted from resting-state functional networks (e.g., node degrees). Our multivariate predictive results, based on resting-state data from [3], suggest that MPH tends to normalize network properties such as voxel degrees in CUD subjects, thus providing additional evidence for potential benefits of MPH in treating cocaine addiction.
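The node-degree features mentioned above can be sketched in a few lines, assuming a simple correlation threshold to binarize the network (the threshold value and synthetic data are illustrative; the study's actual network construction may differ):

```python
import numpy as np

def voxel_degrees(ts, thresh=0.3):
    """Degree of each node after binarizing the correlation network."""
    C = np.corrcoef(ts.T)                 # node-by-node correlations
    A = (np.abs(C) > thresh).astype(int)  # keep only strong links
    np.fill_diagonal(A, 0)                # no self-loops
    return A.sum(axis=1)                  # degree = number of surviving links

rng = np.random.default_rng(1)
ts = rng.standard_normal((200, 8))        # 200 time points, 8 nodes
deg = voxel_degrees(ts)
print(deg.shape)                          # one degree value per node
```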
This paper focuses on discovering statistical biomarkers (features) that are predictive of schizophrenia, with
particular attention to topological properties of fMRI functional networks. We consider several network properties,
such as node (voxel) strength, clustering coefficients, and local efficiency, as well as a subset of pairwise correlations.
While all types of features demonstrate highly significant statistical differences in several brain areas,
and close to 80% classification accuracy, the most remarkable result of 93% accuracy is achieved by using
a small subset of only a dozen of the most informative (lowest p-value) correlation features. Our results suggest
that voxel-level correlations and functional network features derived from them are highly informative about
schizophrenia and can be used as statistical biomarkers for the disease.
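Two of the topological features named above, node strength and the clustering coefficient, can be illustrated with a short numpy sketch over a tiny hypothetical graph (not the paper's implementation):

```python
import numpy as np

def node_strength(W):
    """Sum of absolute link weights incident on each node."""
    W = np.abs(W).copy()
    np.fill_diagonal(W, 0)       # ignore self-connections
    return W.sum(axis=1)         # for a binary graph this is just the degree

def clustering_coefficient(A):
    """Fraction of closed triangles around each node (binary graph)."""
    deg = A.sum(axis=1)
    tri = np.diag(A @ A @ A) / 2.0       # triangles through each node
    denom = deg * (deg - 1) / 2.0        # possible neighbor pairs
    return np.divide(tri, denom, out=np.zeros_like(tri), where=denom > 0)

# tiny example: a triangle (nodes 0-1-2) plus a pendant node 3 attached to 2
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])
print(clustering_coefficient(A))  # nodes 0,1 fully clustered; node 3 has none
```

The same quantities, computed per voxel over a whole-brain network, give one topological feature vector per scan.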
One of the key topics in fMRI analysis is the discovery of task-related brain areas. We focus on predictive accuracy
as a better relevance measure than traditional univariate voxel activations that miss important multivariate
voxel interactions. We use sparse regression (more specifically, the Elastic Net [1]) to learn predictive models
simultaneously with selection of predictive voxel subsets, and to explore transition from task-relevant to task-irrelevant
areas. Exploring the space of sparse solutions reveals a much wider spread of task-relevant information
in the brain than is typically suggested by univariate correlations. This happens for several of the tasks we considered,
and is most noticeable in the case of complex tasks such as pain rating; however, for certain simpler tasks, a clear
separation between a small subset of relevant voxels and the rest of the brain is observed even with a multivariate
approach to measuring relevance.
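The Elastic Net idea of learning a predictive model while simultaneously selecting a voxel subset can be sketched with a small proximal-gradient solver (the synthetic data, penalty values, and solver are illustrative assumptions, not the study's setup):

```python
import numpy as np

def elastic_net(X, y, lam1=0.1, lam2=0.1, steps=2000, lr=0.01):
    """Minimize 0.5/n * ||y - X w||^2 + lam1*||w||_1 + 0.5*lam2*||w||^2
    by proximal gradient descent (soft-thresholding handles the l1 term)."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n + lam2 * w   # gradient of the smooth part
        v = w - lr * grad
        w = np.sign(v) * np.maximum(np.abs(v) - lr * lam1, 0.0)  # prox step
    return w

# synthetic "task": only 3 of 20 simulated voxels carry predictive signal
rng = np.random.default_rng(2)
X = rng.standard_normal((100, 20))
true_w = np.zeros(20)
true_w[:3] = [2.0, -1.5, 1.0]
y = X @ true_w + 0.1 * rng.standard_normal(100)
w = elastic_net(X, y)
print(np.flatnonzero(np.abs(w) > 1e-6))  # the selected (predictive) voxel subset
```

Varying `lam1` traces out the transition from small, highly relevant voxel subsets to progressively larger ones, which is the "space of sparse solutions" explored in the paper.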
The recent use of functional networks to analyze fMRI images has been very promising. In this method,
the spatio-temporal fMRI data are converted to a graph-based representation, where the nodes are voxels and edges
indicate the relationship between the nodes, such as the strength of correlation or causality. Graph-theoretic
measures can then be used to compare different fMRI scans.
However, there is a significant computational bottleneck, as the computation of functional networks with
directed links takes several hours on conventional machines with single CPUs. The study in this paper shows
that a GPU can be advantageously used to accelerate the computation, such that the network computation takes
a few minutes. Though GPUs have been used for the purposes of displaying fMRI images, their use in computing
functional networks is novel.
We describe specific techniques such as load balancing, and the use of a large number of threads to achieve the
desired speedup. Our experience in utilizing the GPU for functional network computations should prove useful
to the scientific community investigating fMRI, as GPUs are a low-cost platform for addressing the computational demands of such analyses.
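The core workload being accelerated, all pairwise voxel correlations, reduces to one dense matrix product, which is exactly the operation GPUs excel at. A CPU-side numpy sketch of that formulation (illustrative only; the paper's GPU implementation also handles directed links):

```python
import numpy as np

def correlation_matrix(ts):
    """All pairwise correlations as a single dense matrix product.
    The Z @ Z.T formulation maps directly onto one GPU GEMM call."""
    Z = ts - ts.mean(axis=1, keepdims=True)        # voxels x time, de-meaned
    Z /= np.linalg.norm(Z, axis=1, keepdims=True)  # unit-norm rows
    return Z @ Z.T                                 # V x V correlation matrix

rng = np.random.default_rng(3)
ts = rng.standard_normal((50, 100))  # 50 voxels, 100 time points
C = correlation_matrix(ts)
print(np.allclose(C, np.corrcoef(ts)))  # matches the direct computation
```

At full-brain scale (tens of thousands of voxels) this O(V²T) product is the bottleneck that motivates offloading to the GPU.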
Functional neuroimaging research is moving from the study of "activations" to the study of "interactions" among
brain regions. Granger causality analysis provides a powerful technique to model spatio-temporal interactions
among brain regions. We apply this technique to full-brain fMRI data without aggregating any voxel data into
regions of interest (ROIs). We circumvent the problem of dimensionality using sparse regression from machine
learning. On a simple finger-tapping experiment we found that (1) a small number of voxels in the brain have
very high prediction power, explaining the future time course of other voxels in the brain; (2) these voxels occur
in small sized clusters (of size 1-4 voxels) distributed throughout the brain; (3) albeit small, these clusters overlap
with most of the clusters identified with the non-temporal General Linear Model (GLM); and (4) the method
identifies clusters which, while not determined by the task and not detectable by GLM, still influence brain activity.
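The combination of lagged (Granger-style) regression with sparsity can be sketched as follows; the toy data, single-lag model, and l1 solver are illustrative assumptions, not the study's setup:

```python
import numpy as np

def granger_lasso(ts, target, lag=1, lam=0.05, steps=2000, lr=0.01):
    """Predict one voxel's future from lagged values of all voxels,
    with an l1 penalty so only a few predictive voxels survive."""
    X = ts[:-lag, :]        # past values of every voxel
    y = ts[lag:, target]    # future of the target voxel
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(steps):  # ISTA: gradient step + soft threshold
        v = w - lr * (X.T @ (X @ w - y) / n)
        w = np.sign(v) * np.maximum(np.abs(v) - lr * lam, 0.0)
    return w

# toy system: voxel 0 drives voxel 1 with a one-step delay
rng = np.random.default_rng(4)
T = 300
ts = rng.standard_normal((T, 5))
for t in range(1, T):
    ts[t, 1] += 0.9 * ts[t - 1, 0]
w = granger_lasso(ts, target=1)
print(int(np.argmax(np.abs(w))))  # voxel 0 carries the predictive power
```

The sparsity penalty is what makes this feasible per voxel at full-brain scale, where the number of candidate predictors far exceeds the number of time points.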
In this paper we investigate the spatial correlational structure of orientation and color information in natural
images. We compare these with the spatial correlation structure of optical recordings of macaque monkey
primary visual cortex, in response to oriented and color stimuli. We show that the correlation of orientation falls
off rapidly over increasing distance. By using a color metric based on the a-b coordinates in the CIE-Lab color
space, we show that color information, on the other hand, is more highly correlated over larger distances. We
also show that orientation and color information are statistically independent in natural images. We perform
a similar spatial correlation analysis of the cortical responses to orientation and color. We observe a similar
behavior to that of natural images, in that the correlation of orientation-specific responses falls off more rapidly
than the correlation of color-specific responses. Our findings suggest that: (a) orientation and color information
should be processed in separate channels, and (b) the organization of cortical color responses at a lower spatial
frequency compared to orientation is a reflection of the statistical structure of the visual world.
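The falloff-with-distance analysis can be illustrated with a small autocorrelation sketch, with synthetic maps standing in for the orientation and color responses (not the paper's data or color metric):

```python
import numpy as np

def autocorr_profile(img, max_d=10):
    """Spatial autocorrelation of a 2-D map at horizontal offsets 0..max_d."""
    z = (img - img.mean()) / img.std()
    out = [1.0]  # zero offset: perfect correlation
    for d in range(1, max_d + 1):
        out.append(float(np.mean(z[:, :-d] * z[:, d:])))
    return np.array(out)

# a smooth (low spatial frequency) map stays correlated over distance,
# while a rapidly varying map decorrelates quickly
rng = np.random.default_rng(5)
x = np.linspace(0, 2 * np.pi, 64)
smooth = np.tile(np.sin(x), (64, 1)) + 0.1 * rng.standard_normal((64, 64))
rough = rng.standard_normal((64, 64))
print(autocorr_profile(smooth, 5)[5] > autocorr_profile(rough, 5)[5])  # True
```

In the paper's terms, the color map plays the role of the slowly varying field and the orientation map the role of the rapidly decorrelating one.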
One of the important features of the human visual system is that it is able to recognize objects in a scale- and translation-invariant manner. However, achieving this desirable behavior through biologically realistic networks is a challenge. The synchronization of neuronal firing patterns has been suggested as a possible solution to the binding problem (where a biological mechanism is sought to explain how features that represent an object can be scattered across a network, and yet be unified). This observation has led to neurons being modeled as oscillatory dynamical units. It is possible for a network of these dynamical units to exhibit synchronized oscillations under the right conditions. These network models have been applied to solve signal deconvolution or blind source separation problems. However, the use of the same networks to achieve properties that the visual system exhibits, such as scale and translational invariance, has not been fully explored. Some approaches investigated in the literature (Wallis, 1996) involve the use of non-oscillatory elements that are arranged in a hierarchy of layers. The objects presented are allowed to move, and the network utilizes a trace learning rule, where a time-averaged output value is used to perform Hebbian learning with respect to the input value. This is a modification of the standard Hebbian learning rule, which typically uses instantaneous values of the input and output. In this paper we present a network of oscillatory amplitude-phase units connected in two layers. The types of connections include feedforward, feedback and lateral. The network consists of amplitude-phase units that can
exhibit synchronized oscillations. We have previously shown that such a network can segment the components of each input object that most contribute to its classification. Learning is unsupervised and based on a Hebbian update, and the architecture is very simple. We extend the ability of this network to address the problem of translational invariance. We show that by adopting a specific treatment of the phase values of the output layer, the network exhibits translation-invariant object representation. The scheme used in training is as follows. The network is presented with an input, which then moves. During the motion, the amplitudes and phases of the upper-layer units are not reset, but continue from their values before the introduction of the object in its new position. Only the input layer is changed
instantaneously to reflect the moving object. The network behavior is such that it categorizes the translated objects with the same label as the stationary object, thus establishing an invariant categorization with respect to translation. This is a promising result as it uses the same framework of oscillatory units that achieves synchrony, and introduces motion to achieve translational invariance.
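The trace learning rule referred to above, where a time-averaged output gates a Hebbian update, can be sketched in a few lines (the decay constant, learning rate, and one-hot inputs are illustrative assumptions):

```python
import numpy as np

def trace_hebbian_step(w, x, y_trace, y, eta=0.05, delta=0.8):
    """One step of the trace rule: a running average of the output,
    not its instantaneous value, gates the Hebbian weight update."""
    y_trace = delta * y_trace + (1 - delta) * y  # temporal trace of the output
    w = w + eta * y_trace * x                    # Hebbian update with the trace
    return w, y_trace

# a feature seen at successive positions gets bound to the same output unit
w = np.zeros(8)
y_trace = 0.0
for p in [2, 3, 4]:                   # object sweeping across the input
    x = np.zeros(8)
    x[p] = 1.0                        # one-hot input at the current position
    y = float(w @ x) + 1.0            # assume the unit stays active
    w, y_trace = trace_hebbian_step(w, x, y_trace, y)
print(np.count_nonzero(w))            # 3: weights grew at every visited position
```

Because the trace decays slowly, the output that responded to the object at one position still drives learning at the next, binding the shifted inputs to the same unit, which is the mechanism the oscillatory-phase scheme in this paper replaces.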
We present a modelling framework for cortical processing aimed at understanding how, maintaining biological plausibility, neural network models can: (a) approximate general inference algorithms like belief propagation, combining bottom-up and top-down information, (b) solve Rosenblatt's classical superposition problem, which we link to the binding problem, and (c) do so based on an unsupervised learning approach. The framework leads to two related models: the first model shows that the use of top-down feedback significantly improves the
network's ability to perform inference on corrupted inputs; the second model, which includes oscillatory behavior in the processing units, shows that the superposition problem can be efficiently solved based on the units' phases.
In this paper we address the problem of understanding the cortical
processing of color information. Unravelling the cortical
representation of color is a difficult task, as the neural pathways for color processing have not been fully mapped, and there are few computational modelling efforts devoted to color. Hence, we first present a conjecture for an ideal target color map based on principles of color opponency, and constraints such as retinotopy and the two dimensional nature of the map. We develop a computational model for the cortical processing of color information that seeks to produce this target color map in a self-organized manner. The input model consists of a luminance channel and opponent color channels, comprising red-green and blue-yellow signals. We use an optional stage consisting of applying an antagonistic center-surround filter to these channels. The input is projected to a restricted portion of the cortical network in a topographic way. The units in the cortical map receive the color opponent input, and compete amongst each other to represent the input. This competition is carried out through the determination of a local winner. By simulating a self-organizing map for color according to this scheme, we are largely able to achieve the desired target color map. According to recent neurophysiological findings, there is evidence for the representation of color mixtures in the cortex, which is consistent with our model. Furthermore, an
orderly traversal of stimulus hues in the CIE chromaticity map
corresponds to an orderly spatial traversal in the primate cortical
area V2. Our experimental results are also consistent with this observation.
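The winner-take-all, self-organizing update at the heart of the model can be sketched as follows (a 1-D map, Gaussian neighborhood, and fixed learning rate are simplifying assumptions; the paper's cortical map is two-dimensional with a local winner determination):

```python
import numpy as np

def som_step(weights, x, eta=0.2, sigma=1.0):
    """One self-organizing map update: find the winner, then pull
    it and its map neighbors toward the input vector."""
    win = int(np.argmin(np.linalg.norm(weights - x, axis=1)))  # winning unit
    idx = np.arange(len(weights))
    h = np.exp(-((idx - win) ** 2) / (2 * sigma ** 2))  # 1-D map neighborhood
    weights = weights + eta * h[:, None] * (x - weights)
    return weights, win

# toy 1-D map over (L, R-G, B-Y) opponent-channel inputs
rng = np.random.default_rng(7)
weights = rng.uniform(-1, 1, size=(10, 3))
red_green = np.array([0.5, 1.0, 0.0])  # a strongly red-green input
for _ in range(100):
    weights, win = som_step(weights, red_green)
print(np.allclose(weights[win], red_green, atol=1e-3))  # winner converged
```

With a distribution of hue inputs rather than a single one, the neighborhood term is what produces the orderly spatial arrangement of hues that the model compares against the V2 findings.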