Neural fields model the large-scale dynamics of spatially structured cortical networks in terms of continuum integro-differential equations, whose associated integral kernels represent the spatial distribution of neuronal synaptic connections. The advantage of a continuum rather than a discrete representation of spatially structured networks is that various techniques from the analysis of PDEs can be adapted to study the nonlinear dynamics of cortical patterns, oscillations and waves. In this talk we consider a neural field model of binocular rivalry waves in primary visual cortex (V1), which are thought to be the neural correlate of the wave-like propagation of perceptual dominance during binocular rivalry. Binocular rivalry is the phenomenon where perception switches back and forth between different images presented to the two eyes. The resulting fluctuations in perceptual dominance and suppression provide a basis for non-invasive studies of the human visual system and the identification of possible neural mechanisms underlying conscious visual awareness. We derive an analytical expression for the speed of a binocular rivalry wave as a function of various neurophysiological parameters, and show how properties of the wave are consistent with the wave-like propagation of perceptual dominance observed in recent psychophysical experiments. In addition to providing an analytical framework for studying binocular rivalry waves, we show how neural field methods provide insights into the mechanisms underlying the generation of the waves. In particular, we highlight the important role of slow adaptation in providing a "symmetry breaking mechanism" that allows waves to propagate. We end by discussing recent extensions of the work that incorporate the effects of noise, and the detailed functional architecture of V1.
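To make the formalism concrete, the following is a minimal sketch (my own construction, not the talk's rivalry model) of a one-dimensional neural field with slow linear adaptation, using an exponential kernel and a Heaviside firing rate; all parameter values are illustrative. The interface where activity crosses threshold travels at a roughly constant speed, a discrete analogue of the traveling fronts analysed in the talk.

```python
import numpy as np

# Minimal 1-D neural field: u_t = -u + w * H(u - kappa) - beta*q, q_t = eps*(u - q),
# with an exponential synaptic kernel w; illustrative parameters only.
L, N, dt, T = 40.0, 400, 0.01, 20.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
W = 0.5 * np.exp(-np.abs(x[:, None] - x[None, :]))  # synaptic weight matrix
kappa, beta, eps = 0.25, 1.0, 0.05   # threshold, adaptation strength, adaptation rate
u = np.where(x < 0, 1.0, 0.0)        # left half of the domain active initially
q = np.zeros(N)                      # slow adaptation (the symmetry-breaking variable)
front = []
for _ in range(int(T / dt)):
    f = (u > kappa).astype(float)                  # Heaviside firing rate
    u += dt * (-u + dx * (W @ f) - beta * q)
    q += dt * eps * (u - q)
    front.append(x[np.argmin(np.abs(u - kappa))])  # track the moving interface
print(front[0], front[-1])                         # the front has advanced
```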
A number of related neurocomputational models have proposed that brain computations rely on the evolving dynamics of recurrent neural networks. In contrast to conventional attractor models, in this framework (which includes state-dependent networks, liquid-state machines, and echo-state networks) computations arise from the voyage through state space rather than the arrival at a given location. To date, however, these models have been limited by two factors. First, the regimes that are potentially the most powerful from a computational perspective are generally chaotic. Second, while synapses in cortical networks are plastic, incorporating robust forms of plasticity into simulated recurrent networks that exhibit self-perpetuating dynamics has proved very challenging. We address both problems by demonstrating how random recurrent networks (RRNs) that initially exhibit self-perpetuating and chaotic dynamics can be tuned through a supervised learning rule to generate locally stable neural patterns of activity. The outcome is a novel neural network regime that exhibits both transiently stable and chaotic trajectories. We further show that the recurrent learning rule dramatically increases the ability of RRNs to generate complex spatiotemporal motor patterns, and accounts for recent experimental data showing a decrease in neural variability in response to stimulus onset.
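As a runnable illustration of this class of models (not the abstract's rule, which modifies the recurrent weights themselves), here is a random rate network in the chaotic regime with a recursive-least-squares (FORCE-style) linear readout; network size, gain, and target signal are my illustrative choices.

```python
import numpy as np

# Chaotic random rate network (gain g > 1) with an RLS-trained readout.
rng = np.random.default_rng(0)
N, g, dt, tau = 500, 1.5, 0.1, 1.0
J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # random recurrent weights
w = np.zeros(N)                                    # linear readout weights
P = np.eye(N)                                      # running inverse correlation
def target(t):                                     # desired output signal
    return np.sin(0.2 * t) + 0.5 * np.sin(0.4 * t)
x = 0.5 * rng.standard_normal(N)
for step in range(20000):
    r = np.tanh(x)
    x += dt / tau * (-x + J @ r)                   # self-perpetuating dynamics
    if step % 2 == 0:                              # recursive least squares
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        err = w @ r - target(step * dt)
        w -= err * k
        P -= np.outer(k, Pr)
print(abs(w @ np.tanh(x) - target(20000 * dt)))    # readout error at training end
```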
Inhibition is a component of nearly every neural system and an increasingly prevalent component of theoretical network models. However, its role in sensory processing is often difficult to measure or infer directly. Using a nonlinear modeling framework that can infer the presence and stimulus tuning of inhibition from extracellular and intracellular recordings, I will describe different inferred forms of inhibition (subtractive and multiplicative) and suggest multiple roles in sensory processing. I will refer primarily to studies in the retina, where inhibition likely contributes to contrast adaptation, to the generation of precise spike timing, and to the diversity of computation among retinal ganglion cell types. I will also describe its role in shaping sensory processing in other areas, including auditory areas and visual cortex. Understanding the role of inhibition in neural processing can both inform a richer view of how single-neuron processing contributes to network behavior and provide tools to validate network models against neural data.
Recent experimental and computational evidence suggests that the brain may operate at a critical state characterized by complex dynamics, significant higher-order correlations, and optimal computational properties. We investigate the emergence of critical activity in a homogeneous, feedforward network of McCulloch-Pitts neurons. By applying an eigenstructure analysis of a mean-field Markov chain model, we explain the emergence of persistence, complexity, and higher-order correlations characteristic of criticality. We extend our analysis from an initial treatment of purely excitatory networks to more complex models that include inhibition and noise: excitatory-inhibitory networks display enhanced robustness of the critical state, and noisy networks exhibit stochastic resonance as the addition of some noise mitigates the effect of harsh thresholding.
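A sketch of this kind of mean-field Markov chain analysis, under my own illustrative construction: the chain tracks the number of active McCulloch-Pitts units per layer, with the connection probability tuned so each active unit activates one downstream unit on average (branching ratio one, the critical point).

```python
import numpy as np
from scipy.stats import binom

# Mean-field Markov chain for a homogeneous feedforward McCulloch-Pitts network:
# state k = number of active units in a layer; p tuned so the branching ratio is 1.
N, theta = 100, 1
p = 1.0 / N
P = np.zeros((N + 1, N + 1))        # P[k, m]: prob that m units fire given k active
for k in range(N + 1):
    q = 1.0 - binom.cdf(theta - 1, k, p)    # prob a unit receives >= theta inputs
    P[k, :] = binom.pmf(np.arange(N + 1), N, q)
evals = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
# The leading eigenvalue 1 reflects the absorbing quiescent state; a second
# eigenvalue close to 1 signals the long transients (persistence) characteristic
# of critical dynamics.
print(evals[:3])
```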
The dynamics of neural networks have traditionally been analyzed either for small systems or in the infinite-size mean-field limit. While both approaches have made great strides in understanding these systems, large but finite-sized networks have received far less analytical attention. Here, I will show how the dynamical behavior of finite-sized systems can be inferred by expanding in the inverse system size around the mean-field solution. The approach can also be used to solve the inverse problem of inferring the effective dynamics of a single neuron embedded in a large network when only incomplete information is available. The formalism I will outline can be generalized to any high-dimensional dynamical system.
Spatially structured networks, such as bump attractor networks, have enjoyed considerable success in modeling a wide range of phenomena in cortical and hippocampal networks. A key question that arises in the case of hippocampal models, however, is how such a spatial organization of the synaptic connectivity matrix can arise in the absence of any a priori topographic structure of the network. Here we demonstrate a simple mechanism by which robust sequences of neuronal activation, such as those observed during hippocampal sharp waves, can lead to the formation of spatially structured networks that exhibit robust bump attractor dynamics.
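A toy version of the proposed mechanism, entirely my own construction from the abstract's description: repeated replay of a fixed activation sequence plus a temporally asymmetric Hebbian rule imprints a banded, distance-dependent weight matrix on an initially unstructured network.

```python
import numpy as np

# Repeated sharp-wave-like replays of an arbitrary sequence imprint structure.
rng = np.random.default_rng(0)
N, eta, width = 100, 0.02, 3
order = rng.permutation(N)           # sequence order: no a priori topography
W = np.zeros((N, N))
for _ in range(50):                  # repeated replays of the same sequence
    for t in range(N):
        for d in range(1, width + 1):        # potentiate recently-active -> active
            if t - d >= 0:
                W[order[t], order[t - d]] += eta / d
# Reindexing neurons by their position in the sequence reveals a banded weight
# matrix: an effective 1-D topology on which a bump of activity can form.
W_seq = W[np.ix_(order, order)]
band = np.abs(np.subtract.outer(np.arange(N), np.arange(N)))[W_seq > 0]
print(band.max())                    # all potentiated entries lie within the band
```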
Neural networks can integrate sensory information and generate continuously varying outputs, even though individual neurons communicate only with spikes---all-or-none events. Here we show how this can be done efficiently if spikes communicate "prediction errors" between neurons. We focus on the implementation of linear dynamical systems and derive a spiking network model from a single optimization principle. Our model naturally accounts for two puzzling aspects of cortex. First, it provides a rationale for the tight balance and correlations between excitation and inhibition. Second, it predicts asynchronous and irregular firing as a consequence of predictive population coding, even in the limit of vanishing noise. We show that our spiking networks have error-correcting properties that make them far more accurate and robust than comparable rate models. Our approach suggests that spike times do matter for how the brain computes, and that the reliability of cortical representations may have been strongly underestimated.
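In the spirit of this derivation (my simplified construction; the decoder weights and thresholds follow the standard predictive-coding recipe, other values are illustrative), a handful of neurons can track a one-dimensional leaky integrator by spiking only when the represented error exceeds a threshold:

```python
import numpy as np

# Spikes as prediction errors: track x' = -lam*x + c(t) with a spiking readout.
rng = np.random.default_rng(1)
N, dt, lam = 20, 1e-4, 10.0                 # neurons, time step, leak rate
G = rng.choice([-0.1, 0.1], size=N)         # decoding weights Gamma_i
T = G**2 / 2.0                              # thresholds ||Gamma_i||^2 / 2
x, x_hat = 0.0, 0.0
spike_count = np.zeros(N)
for step in range(200000):
    c = 5.0 * np.sin(2 * np.pi * 0.5 * step * dt)   # command input
    x += dt * (-lam * x + c)                # target linear dynamics
    x_hat += dt * (-lam * x_hat)            # readout decays between spikes
    V = G * (x - x_hat)                     # membrane potential = projected error
    i = np.argmax(V - T)
    if V[i] > T[i]:                         # spike only when the error is large
        x_hat += G[i]                       # each spike corrects the estimate
        spike_count[i] += 1
print(abs(x - x_hat), spike_count.sum())    # tight tracking with sparse spiking
```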
Anatomical studies show that excitatory connections in cortex are not uniformly distributed across a network but instead exhibit clustering into groups of highly connected neurons. The implications of clustering for cortical activity are unclear. We study the effect of clustered excitatory connections on the dynamics of neuronal networks that exhibit high spike time variability due to a balance between excitation and inhibition. Even modest clustering substantially changes the behavior of these networks, introducing slow dynamics in which clusters of neurons transiently increase or decrease their firing rate. Consequently, neurons exhibit both short timescale spiking variability and long timescale firing rate fluctuations. We show that stimuli bias networks toward particular activity states, suppressing the mechanisms underlying the slow timescale dynamics and thereby reducing firing rate variability in evoked compared to spontaneous states, as observed experimentally in many cortical systems. Our model thus relates cortical architecture to the reported variability of spontaneous and evoked spiking activity.
We study the effects of noise on stationary pulse solutions (bumps) in spatially extended neural fields. The dynamics of a neural field are described by an integrodifferential equation whose integral term characterizes synaptic interactions between neurons at different spatial locations in the network. Translationally symmetric neural fields support a continuum of stationary bump solutions, which may be centered at any spatial location. Random fluctuations are introduced by modeling the system as a spatially extended Langevin equation whose noise term we take to be additive. For nonzero noise, bumps are shown to wander about the domain in a purely diffusive way, and the associated diffusion coefficient can be approximated using a small-noise expansion. Upon breaking the (continuous) translation symmetry of the system with spatially heterogeneous inputs or synapses, bumps in the stochastic neural field can become temporarily pinned to a finite number of locations in the network, so that the effective diffusion of the bump is reduced in comparison to the homogeneous case. As the modulation frequency of this heterogeneity increases, the effective diffusion of bumps in the network approaches that of the network with spatially homogeneous weights. We end with simulations of spiking models that show the same dynamics. (Joint work with Zachary Kilpatrick, UH.)
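A small Euler-Maruyama sketch of the setup, with an illustrative ring geometry, cosine kernel, and sigmoidal firing rate (my choices, not the paper's):

```python
import numpy as np

# Stochastic ring neural field: a bump wanders diffusively under additive noise.
rng = np.random.default_rng(2)
N, dt, eps, steps = 128, 0.01, 0.005, 50000
x = 2 * np.pi * np.arange(N) / N
dx = 2 * np.pi / N
W = np.cos(x[:, None] - x[None, :]) * dx      # local excitation, distal inhibition
f = lambda u: 1.0 / (1.0 + np.exp(-20 * (u - 0.2)))   # sigmoidal firing rate
u = np.cos(x)                                 # initial bump centered at x = 0
centers = np.empty(steps)
for t in range(steps):
    u += dt * (-u + W @ f(u)) + np.sqrt(eps * dt) * rng.standard_normal(N)
    centers[t] = np.angle(np.exp(1j * x) @ f(u))      # circular center of mass
# The variance of the unwrapped bump center grows ~ 2*D*t; D is the effective
# diffusion coefficient that the small-noise expansion approximates.
drift = np.unwrap(centers)
print(np.var(drift[steps // 2:]))
```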
A wide array of psychology experiments has revealed remarkable regularities in the developmental time course of infant semantic cognition, as well as its progressive disintegration in adult dementia. For example, infants tend to acquire the ability to make broad categorical distinctions between concepts before they can make finer-scale distinctions, and this process is reversed in dementia, where finer-scale categorical distinctions are lost before broad ones. We develop a phenomenological, mathematical theory of this process through an analysis of the learning dynamics of multilayer networks exposed to hierarchically structured data. We find new exact solutions to the nonlinear dynamics of error-corrective learning in deep, three-layer networks. These solutions reveal that networks learn each mode of the input-output covariation structure on a time scale inversely proportional to its statistical strength. We further analyze the covariance structure of hierarchical generative models, and show that data generated from such models yield a hierarchy of input-output modes and hence a hierarchy of time scales over which those modes are learned. Combined, these results provide a unified, phenomenological account of the time course of acquisition and disintegration of semantic knowledge.
Joint work with Andrew Saxe and Jay McClelland.
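A compact sketch of the learning dynamics described above (my construction: four items, one-hot inputs, and orthogonal output modes so the singular values can be read off directly):

```python
import numpy as np

# Three-layer linear network trained by gradient descent on hierarchical data.
# Rows of Y are orthogonal: the first two (broad splits) have singular value 2,
# the last two (fine splits) have singular value sqrt(0.5) ~ 0.71.
rng = np.random.default_rng(3)
Y = np.array([[1.0, 1.0, 1.0, 1.0],
              [1.0, 1.0, -1.0, -1.0],
              [0.5, -0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, -0.5]])
U, S, Vt = np.linalg.svd(Y)
print("singular values:", np.round(S, 2))
W1 = 0.01 * rng.standard_normal((4, 4))
W2 = 0.01 * rng.standard_normal((4, 4))
lr = 0.02
for epoch in range(601):
    E = Y - W2 @ W1                    # error-corrective term (inputs one-hot)
    W2 += lr * E @ W1.T                # gradient descent on squared error
    W1 += lr * W2.T @ E
    if epoch % 150 == 0:
        learned = np.diag(U.T @ W2 @ W1 @ Vt.T)   # strength learned per mode
        print(epoch, np.round(learned, 2))
# Strong (broad) modes are acquired first; weak (fine) modes follow on a time
# scale ~ 1/s, mirroring broad-before-fine acquisition in development.
```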
We consider a coupled, heterogeneous population of relaxation oscillators used to model rhythmic oscillations in the pre-Bötzinger complex. By choosing specific values of the heterogeneous parameter, sampled from that parameter's probability distribution, we show how the effects of heterogeneity can be studied in a computationally efficient manner. When more than one parameter is heterogeneous, full or sparse tensor-product grids are used to select appropriate parameter values. The method allows us to effectively reduce the dimensionality of the model, and it provides a means for systematically investigating the effects of heterogeneity in coupled systems, linking ideas from uncertainty quantification to those for the study of network dynamics.
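For a single Gaussian-distributed heterogeneous parameter, the selection idea reduces to replacing random sampling with Gauss-Hermite quadrature nodes, as in this sketch (my construction; values illustrative):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# A few deterministic quadrature nodes stand in for a large random ensemble.
mu, sigma, n_nodes = 0.5, 0.1, 5
z, w = hermegauss(n_nodes)          # probabilists' Hermite nodes and weights
params = mu + sigma * z             # parameter values to simulate
weights = w / w.sum()               # relative weight of each oscillator
# E[f(p)] over the heterogeneous population is approximated by
# sum(weights * f(params)), so n_nodes oscillators replace thousands of samples.
print(params, weights)
```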
Persistent neural activity in the absence of a stimulus has been identified as a neural correlate of working memory, but how such activity is maintained by neocortical circuits remains unknown. Here we show that the inhibitory and excitatory microcircuitry of neocortical memory-storing regions is sufficient to implement a corrective feedback mechanism that enables persistent activity to be maintained stably for prolonged durations. When recurrent excitatory and inhibitory inputs to memory neurons are balanced in strength, but offset in time, drifts in activity trigger a corrective signal that counteracts memory decay. Circuits containing this mechanism temporally integrate their inputs, generate the irregular neural firing observed during persistent activity, and are robust against common perturbations that severely disrupt previous models of short-term memory storage. This work reveals a mechanism for the accumulation and storage of memories in neocortical circuits based upon principles of corrective negative feedback widely used in engineering applications.
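A minimal rate-model caricature of this mechanism (my construction guided by the abstract): excitation and inhibition of equal strength but different synaptic timescales produce negative-derivative feedback, and hence slow, integrator-like decay.

```python
import numpy as np

# r: population rate; sE, sI: synaptic activations. Excitation and inhibition
# have equal weight W but offset timescales (slow, NMDA-like excitation vs fast
# inhibition), so the net recurrent input ~ -W*(tau_E - tau_I)*dr/dt opposes
# drift. All values illustrative.
dt, tau_r, tau_E, tau_I, W = 0.1, 10.0, 100.0, 10.0, 40.0   # ms
r, sE, sI = 0.0, 0.0, 0.0
trace = []
for step in range(60000):                       # 6 seconds
    inp = 1.0 if step * dt < 100.0 else 0.0     # brief input pulse
    r += dt / tau_r * (-r + W * sE - W * sI + inp)
    sE += dt / tau_E * (-sE + r)                # slow excitatory synapse
    sI += dt / tau_I * (-sI + r)                # fast inhibitory synapse
    trace.append(r)
# The rate set up by the pulse decays with an effective time constant of
# roughly tau_r + W*(tau_E - tau_I) ~ 3.6 s, far exceeding any synaptic time.
print(trace[999], trace[-1])
```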
Neurons in sensory cortex integrate multiple influences to parse objects and support perception. For weak stimuli, responses to multiple driving stimuli can add supralinearly and modulatory contextual influences can facilitate responses. Stronger stimuli yield sublinear response summation ("normalization"), which also shapes attentional influences, as well as contextual suppression. Understanding the circuit operations underlying these diverse phenomena is critical to understanding cortical function and disease. I will present a simple, general theory showing that a wealth of integrative properties -- including the above, certain spatially periodic behaviors, and stimulus-evoked noise suppression -- arise robustly from dynamics induced by three properties of cortical circuitry: (1) short-range inhibitory and longer-range excitatory connections; (2) strong feedback inhibition; (3) supralinear neuronal input/output functions. The supralinear input/output function quite generally creates a transition from supralinear response summation for weak stimuli to sublinear summation for stronger stimuli, as the subnetwork of excitatory neurons becomes increasingly unstable with increasing stimulus strength but is dynamically stabilized by feedback inhibition. In new recordings in visual cortex we have confirmed key model predictions.
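A two-population sketch of this theory (my construction with illustrative parameters): a power-law transfer function with feedback inhibition produces the transition from supralinear to sublinear summation.

```python
import numpy as np

# Supralinear stabilized network, two populations (E, I): rate = k*[input]_+^n.
k, n = 0.04, 2.0
W = np.array([[1.25, -0.65],             # E<-E, E<-I
              [1.20, -0.50]])            # I<-E, I<-I
tau = np.array([1.0, 0.5])               # inhibition faster than excitation

def steady_rates(h, dt=0.01, steps=20000):
    r = np.zeros(2)
    for _ in range(steps):
        drive = W @ r + h
        r += dt / tau * (-r + k * np.maximum(drive, 0.0)**n)
    return r

for c in [1.0, 2.0, 5.0, 10.0, 20.0, 40.0]:
    r_one = steady_rates(np.array([c, c]))         # one stimulus
    r_two = steady_rates(np.array([2 * c, 2 * c])) # two superposed stimuli
    ratio = r_two[0] / (2 * r_one[0])              # >1 supralinear, <1 sublinear
    print(f"contrast {c:5.1f}: summation ratio {ratio:.2f}")
```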
We introduce a random network model in which one can prescribe the frequency of second-order edge motifs. We derive effective equations for the activity of spiking neuron models coupled via such networks. A key consequence of the motif-induced edge correlations is that one cannot derive closed equations for the average activity of the nodes (the average firing rates of the neurons); instead, one must develop the equations in terms of the average activity of the edges (the synaptic drives). As a result, the network topology increases the dimension of the effective dynamics and allows for a larger repertoire of behavior. We demonstrate this behavior through simulations of spiking neuronal networks.
Neuronal oscillators, especially the central pattern generator circuits that control rhythmic behaviors such as breathing, need to function reliably throughout life despite ongoing turnover of their molecular components and other perturbations. How is this stability achieved? I will discuss recent results that show how parameter non-uniqueness, membrane conductance co-regulation, and activity- and modulator-dependent homeostatic regulation through negative feedback loops act together to ensure reliable neuronal network function. My presentation will highlight how fruitful interactions between electrophysiology experiments and numerical modeling can advance our understanding of complex system dynamics at the intersection of physics and biology.
Spike train correlations reflect the structure of the underlying network. Correlations are caused, for instance, by direct synaptic interactions and by shared input. In recent work, we considered the contributions of more indirect, multi-synaptic pathways by accounting for the connectivity motifs that arise in recurrent networks of arbitrary topology. Mathematical analysis using Hawkes processes allowed us to relate the rates and correlations of spike trains to the fine-scale anatomical structure of the network. Numerical simulations demonstrate that, via linear response theory, the dynamic point process model also provides an excellent approximation to networks of LIF neurons. Specifically, we considered power series expansions of firing rates and pairwise correlations, respectively, in terms of the kernel matrix encoding synaptic connectivity; the terms of these expansions correspond directly to the relevant structural motifs of the network. Depending on the degree of recurrence, one can predict the influence of multi-synaptic pathways on activity dynamics, and thus identify the network motifs that make significant contributions to spike train correlations. In recent work we demonstrated that the inverse problem of inferring (directed) connectivity from (undirected) correlations can be approximately solved, provided that the networks are sufficiently sparsely coupled.
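The zero-frequency version of this calculation fits in a few lines (my construction; connectivity, weights, and baseline rates are illustrative):

```python
import numpy as np

# Hawkes-process linear response: expanding B = (I - K)^(-1) = I + K + K^2 + ...
# organizes rates and correlations by structural motif (direct connections,
# chains, shared inputs, and longer multi-synaptic pathways).
rng = np.random.default_rng(4)
N, p, w = 50, 0.2, 0.02
K = w * (rng.random((N, N)) < p)               # integrated interaction kernels
np.fill_diagonal(K, 0.0)
mu = np.full(N, 5.0)                           # baseline rates
B = np.linalg.inv(np.eye(N) - K)
r = B @ mu                                     # stationary rates
C = B @ np.diag(r) @ B.T                       # zero-frequency covariances
Kp = [np.linalg.matrix_power(K, m) for m in range(3)]
C_trunc = sum(Kp[a] @ np.diag(r) @ Kp[b].T for a in range(3) for b in range(3))
print(np.abs(C - C_trunc).max())               # small: low-order motifs dominate
```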
In deterministic dynamics, a stable limit cycle is a closed, isolated periodic orbit that attracts nearby trajectories. Points in its basin of attraction may be disambiguated by their asymptotic phase. In stochastic systems with approximately periodic trajectories, asymptotic phase is no longer well defined, because all initial densities typically converge to the same stationary measure. We explore circumstances under which one may nevertheless define an analog of the "asymptotic phase". In particular, we consider jump Markov process models incorporating ion channel noise, and study a stochastic version of the classical Morris-Lecar system in this framework. We show that the stochastic asymptotic phase can be defined even for some systems in which no underlying deterministic limit cycle exists, such as an attracting heteroclinic cycle.
Brain computational challenges vary between behavioral states. Engaged animals react according to incoming sensory information, while in relaxed and sleeping states consolidation of the learned information is believed to take place. Different states are characterized by different forms of cortical activity. We study a possible neuronal mechanism for generating these diverse dynamics and suggest their possible functional significance. Previous studies demonstrated that brief synchronized increases in neural firing (population spikes) can be generated in homogeneous recurrent neural networks with short-term synaptic depression. Here we consider more realistic networks with a clustered architecture. We show that the level of synchronization in neural activity can be controlled smoothly by network parameters: the network shifts from asynchronous activity to a regime in which clusters synchronize separately, and the synchronization between clusters then increases gradually toward a fully synchronized state. We examine the effects of different synchrony levels on the transmission of information by the network, and find that the regime of intermediate synchronization is preferential for the flow of information between sparsely connected areas. Based on these results, we suggest that the regime of intermediate synchronization corresponds to the engaged behavioral state of the animal, while global synchronization is exhibited during relaxed and sleeping states.
Neurons in primary visual cortex (V1) display substantial orientation selectivity even in species where V1 lacks an orientation map, such as mice and rats. The mechanism underlying orientation selectivity in V1 with such a salt-and-pepper organization is unknown; in particular, it is unclear whether connectivity that depends on feature similarity is required, or whether random connectivity suffices. Here we argue for the latter. We studied the response to a drifting grating of a network model of layer 2/3 with random recurrent connectivity and feedforward input from layer 4 neurons with random preferred orientations. We show that even though the total feedforward and total recurrent excitatory and inhibitory inputs all have very weak orientation selectivity, strong selectivity emerges in the neuronal spike responses if the network operates in the balanced excitation/inhibition regime. This is because in this regime the (large) untuned components of the excitatory and inhibitory contributions approximately cancel. As a result, the untuned part of the input to a neuron, as well as its modulation with orientation and time, all have sizes comparable to the neuronal threshold. However, the tunings of the F0 and F1 components are uncorrelated, and the high-frequency fluctuations are untuned; this is reflected in the subthreshold voltage response. Remarkably, due to the nonlinear voltage-to-firing-rate transfer function, the preferred orientations of the F0 and F1 components of the spike response are highly correlated.
I will report on recent work proposing that the network dynamics of the mammalian visual cortex are neither homogeneous nor synchronous but highly structured and strongly shaped by temporally localized barrages of excitatory and inhibitory firing we call "multiple-firing events" (MFEs). Our proposal is based on a careful study of a network of spiking neurons built to reflect the coarse physiology of a small patch of layer 2/3 of V1. When appropriately benchmarked, this network is capable of reproducing the qualitative features of a range of phenomena observed in the real visual cortex, including orientation tuning, spontaneous background patterns, surround suppression and gamma-band oscillations. Detailed investigation of the relevant regimes reveals causal relationships among dynamical events driven by a strong competition between the excitatory and inhibitory populations. Testable predictions are proposed; challenges for mathematical neuroscience will also be discussed. This is joint work with Aaditya Rangan.
Binding theory and sensory integration address similar problems in different ways. Binding involves building a coherent percept out of many neurons responding to the same stimulus, while sensory integration involves the study of the influences one sense has on another. Binding theory is currently based on oscillatory neural synchronization, whereas sensory integration is largely thought of in terms of neurons that respond to more than one sense. It would be desirable to bridge these two domains so that insights from each area could be employed by the other. A possible bridge is provided by "modulated unisensory neurons", which are driven by only one sensory system but can be strongly modulated by another. Here, we model these cells as members of a pair of excitatorily coupled, synchronized neurons. In particular, we model the mutual enhancement of ordinary vision and heat vision in rattlesnake optic tectum neurons, and the enhancement of visual responses by auditory stimulation in cat cortical neurons. The model assumes strong coupling with a fidelity constraint: coupling cannot be so strong as to reduce the amplitude of either neuron's spikes by more than a criterion amount. This constraint leads the modeled system to follow the Principle of Inverse Enhancement, a key principle of sensory integration.
While there are clear definitions of what it means for a deterministic dynamical system to be periodic, quasiperiodic, or chaotic, it is unclear how to define such notions for a noisy system. We study Markov chains on the circle, a natural stochastic analogue of deterministic dynamical systems. The main tool is spectral analysis of the transition operator of the Markov chain. We analyze path-wise dynamical properties of the Markov chain, such as stochastic periodicity (or phase locking) and stochastic quasiperiodicity, and show how these properties are read off of the geometry of the spectrum of the transition operator.
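As a concrete sketch (my construction), consider a Markov chain on a discretized circle implementing a noisy rotation: eigenvalues of the transition operator lying close to the unit circle, at arguments that are multiples of the rotation angle, are the spectral signature of stochastic periodicity.

```python
import numpy as np

# Noisy rotation on a discretized circle: rotate by alpha, add wrapped noise.
n, alpha, sigma = 200, 2 * np.pi / 7, 0.05
theta = 2 * np.pi * np.arange(n) / n
d = (theta[None, :] - theta[:, None] - alpha + np.pi) % (2 * np.pi) - np.pi
P = np.exp(-d**2 / (2 * sigma**2))            # transition kernel, peaked at +alpha
P /= P.sum(axis=1, keepdims=True)             # row-stochastic transition operator
evals = np.linalg.eigvals(P)
lead = evals[np.argsort(-np.abs(evals))][:5]
# For this chain the spectrum is approximately exp(i*k*alpha - k^2*sigma^2/2):
# moduli near 1 decay with the noise level, arguments step by alpha.
print(np.round(np.abs(lead), 4), np.round(np.angle(lead), 3))
```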
Recent studies of a reduced firing rate model (of two populations) for binocular rivalry show the existence of several dynamical regimes, including a complex pattern of mixed-mode oscillations (MMOs). We extend these results to a neuronal network with Mexican-hat connectivity and slow negative feedback in the form of adaptation, and show that intricate spatio-temporal patterns occur in the parameter regime associated with the MMOs seen in the reduced model.
While isolated cells of the suprachiasmatic nucleus (SCN) show oscillations of varying periods and amplitudes, the intact SCN shows remarkable synchrony. Many studies have proposed roles for diffusible intercellular signaling molecules such as VIP, GRP, and GABA in synchronizing these non-uniform oscillators to obtain a coherent circadian rhythm. In this preliminary computational study, we aimed to explore the conditions under which a diffusible signal can synchronize the SCN, or create other collective rhythms seen experimentally, such as traveling waves of Per/Cry transcription across the SCN. We developed a mathematical model of the SCN by extending a new, simplified model of the primary Per/Cry transcription/translation negative feedback loop in a single cell to include a diffusible signaling molecule. Approximately 19,000 individual cells are simulated, with parameters chosen from distributions so that the mean period of oscillation is 24 hours. Our results show that while synchronization can be achieved, it is more difficult than has previously been reported. The architecture of the SCN and its interaction with the extra-SCN space play essential roles in determining whether any coherent rhythm is attained. Furthermore, the way in which the signaling molecule affects transcription has a strong effect on the type of rhythm attained, including whether synchronization and/or traveling waves are seen.
The question of reliability arises for any dynamical system driven by an input signal: if the same signal is presented many times with different initial conditions, will the system entrain to the signal in a repeatable way? Reliability is of particular interest in large, randomly coupled networks of excitatory and inhibitory units. Such networks are ubiquitous in neuroscience, but are known to autonomously produce strongly chaotic dynamics – an obvious threat to reliability. Here, we show that such chaos also occurs in the presence of weak and strong stimuli. However, even in the chaotic regime, intermittent periods of highly reliable spiking often coexist with unreliable activity. We argue that the sustained coexistence of chaos and reliable spike events is due to the interaction of global state space expansion and dynamics local to individual cells, and interpret our findings within the framework of random dynamical systems theory.
Throughout the central nervous system, the timescale over which pairs of neural spike trains are correlated is shaped by stimulus structure and behavioral context. Such shaping is thought to underlie important changes in the neural code, but the neural circuitry responsible is largely unknown. In this study, we investigate a stimulus-induced shaping of pairwise spike train correlations in the electrosensory system of weakly electric fish. Simultaneous single unit recordings of principal electrosensory cells show that an increase in the spatial extent of stimuli increases correlations at short (~10 ms) timescales while simultaneously reducing correlations at long (~100 ms) timescales. A spiking network model of the first two stages of electrosensory processing replicates this correlation shaping, under the assumptions that spatially broad stimuli both saturate feedforward afferent input and recruit an open-loop inhibitory feedback pathway. Our model predictions are experimentally verified using both the natural heterogeneity of the electrosensory system and pharmacological blockade of descending feedback projections. For weak stimuli, linear response analysis of the spiking network shows that the reduction of long timescale correlation for spatially broad stimuli is similar to correlation cancellation mechanisms previously suggested to be operative in mammalian cortex. The mechanism for correlation shaping supports population-level filtering of irrelevant distractor stimuli, thereby enhancing the population response to relevant prey and conspecific communication inputs.
This work is based on recent experimental results using optogenetic tools to stimulate both pyramidal cells (PYR) and parvalbumin-immunoreactive interneurons (INT) in the hippocampus of freely behaving rodents. In vitro, PYR exhibit theta-range subthreshold (membrane potential) resonance; whether this translates to spiking resonance in behaving animals is unknown. In vivo, individual directly stimulated PYR exhibited narrow-band spiking centered on a wide range of frequencies rather than spiking predominantly at theta. In contrast, in vivo INT photostimulation indirectly induced theta-band-limited spiking in pyramidal cells, accompanied by post-inhibitory rebound spiking. We present a minimal, biophysical (conductance-based) model of a CA1 hippocampal network that qualitatively reproduces these experimental results. The model includes three cell types: PYR, INT and OLM (oriens-lacunosum moleculare) cells. The presence of subthreshold resonance in isolated PYR cells is not enough to generate robust theta-band spiking resonance in PYR cells embedded in this network. Theta-band spiking resonance was especially robust when the model included a timing mechanism, implemented either by network-mediated, timed inhibition provided by the OLM cells or by synaptic depression of the INT synapses.
The balanced state, first proposed in cortical network models of integrator-type neurons, is a robust collective state that keeps the mean excitatory and inhibitory input to each cell equal on average, leaving the fluctuations to drive spiking. Despite the resulting irregularity, an effectively instantaneous response in the population rate and stationary spiking statistics suggest an attractor state whose capacity for sensory processing is currently being explored. In particular, a characteristic boundary marking the extent of its robustness to finite-sized perturbations has been found that scales with the network parameters [2,3]. Such a boundary should determine the capacity of the 'coding with trajectories' function of the network, and tuning it may be useful for downstream learning. A candidate network for this function is the olfactory bulb. Since the principal cells there display a variety of resonator-like properties, in this work we extend the study of the balanced state to cover both modes of single-neuron dynamics, using a 2D linear threshold neuron in a spiking network model roughly analogous to the olfactory bulb of zebrafish.
Large scale studies of spiking neural networks are a key part of modern approaches to understanding the dynamics of biological neural tissue. One approach in computational neuroscience has been to consider the detailed electrophysiological properties of neurons and build vast computational compartmental models. An alternative has been to develop minimal models of spiking neurons with a reduction in the dimensionality of both parameter and variable space that facilitates more effective simulation studies. In this latter case the single neuron model of choice is often a variant of the classic integrate-and-fire model, which is described by a nonsmooth dynamical system. In this paper we review some of the more popular spiking models of this class and describe the types of spiking pattern that they can generate (ranging from tonic to burst firing). We show that a number of techniques originally developed for the study of impact oscillators are directly relevant to their analysis, particularly those for treating grazing bifurcations. Importantly we highlight one particular single neuron model, capable of generating realistic spike trains, that is both computationally cheap and analytically tractable. This is a planar nonlinear integrate-and-fire model with a piecewise linear vector field and a state-dependent reset upon spiking. We call this the PWL-IF model and analyse it at both the single neuron and network level. The techniques and terminology of nonsmooth dynamical systems are used to flesh out the bifurcation structure of the single neuron model, as well as to develop the notion of Lyapunov exponents. We also show how to construct the phase response curve for this system, emphasising that techniques in mathematical neuroscience may also translate back to the field of nonsmooth dynamical systems. The stability of periodic spiking orbits is assessed using a linear stability analysis of spiking times. At the network level we consider linear coupling between voltage variables, as would occur in neurobiological networks with gap-junction coupling, and show how to analyse the properties (existence and stability) of both the asynchronous and synchronous states. In the former case we use a phase-density technique that is valid for any large system of globally coupled limit cycle oscillators, whilst in the latter we develop a novel technique that can handle the nonsmooth reset of the model upon spiking. Finally, we discuss other aspects of neuroscience modelling that may benefit from further translation of ideas from the growing body of knowledge on nonsmooth dynamics.
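A minimal sketch of a planar piecewise-linear integrate-and-fire neuron with a state-dependent reset, in the spirit of the PWL-IF model (breakpoints, reset rule, and drive are my illustrative choices, not the paper's):

```python
import numpy as np

# Planar PWL integrate-and-fire: PWL voltage nullcline, linear recovery, and a
# nonsmooth spike-and-reset rule in which the recovery variable also jumps.
dt, T_end = 0.01, 100.0
a, I = 0.1, 1.5                        # recovery rate, constant drive
v_th, v_reset, w_jump = 1.0, -0.5, 0.2

def f(v):                              # piecewise-linear caricature of a cubic
    return np.where(v < 0.0, -v - 1.0,
           np.where(v < 0.5, 4.0 * v - 1.0, -v + 1.5))

v, w, spikes = -0.5, 0.0, []
for step in range(int(T_end / dt)):
    v += dt * (f(v) - w + I)
    w += dt * a * (v - w)
    if v >= v_th:                      # nonsmooth event: spike detected
        spikes.append(step * dt)
        v = v_reset                    # state-dependent reset:
        w += w_jump                    # the recovery variable jumps as well
print(len(spikes), np.diff(spikes)[-3:] if len(spikes) > 3 else spikes)
```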
Motifs are patterns of subgraphs of complex networks. We studied the impact of such patterns of connectivity on the level of correlated, or synchronized, spiking activity among pairs of cells in a recurrent network model of integrate and fire neurons. For a range of network architectures, we find that the pairwise correlation coefficients, averaged across the network, can be closely approximated using only three statistics of network connectivity. These are the overall network connection probability and the frequencies of two second-order motifs: diverging motifs, in which one cell provides input to two others, and chain motifs, in which two cells are connected via a third intermediary cell. Specifically, the prevalence of diverging and chain motifs tends to increase correlation. Our method is based on linear response theory, which enables us to express spiking statistics using linear algebra, and a resumming technique, which extrapolates from second order motifs to predict the overall effect of coupling on network correlation. Our motif-based results seek to isolate the effect of network architecture perturbatively from a known network state.
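The three connectivity statistics in question can be measured directly from an adjacency matrix, as in this sketch (my construction; the excess motif frequencies are second-order moments relative to an Erdos-Renyi baseline):

```python
import numpy as np

# Connection probability plus excess diverging and chain motif frequencies.
rng = np.random.default_rng(5)
N = 400
A = (rng.random((N, N)) < 0.1).astype(float)    # directed adjacency matrix
np.fill_diagonal(A, 0.0)
p = A.sum() / (N * (N - 1))                     # connection probability
q_div = (A @ A.T).sum() / N**3 - p**2           # diverging: one cell -> two cells
q_chn = (A @ A).sum() / N**3 - p**2             # chain: cell -> cell -> cell
print(p, q_div, q_chn)                          # excesses ~ 0 for Erdos-Renyi
# In the resummed linear-response theory, positive q_div and q_chn each push
# the network-averaged correlation coefficient upward.
```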