PUBLICATIONS
Abstract
The stress response necessitates an immediate boost in vital physiological functions from their homeostatic operation to an elevated emergency response. However, the neural mechanisms underlying this state-dependent change remain largely unknown. Using a combination of in vivo and ex vivo electrophysiology with computational modeling, we report that corticotropin-releasing hormone (CRH) neurons in the paraventricular nucleus of the hypothalamus (PVN), the effector neurons of the hormonal stress response, rapidly transition between distinct activity states through recurrent inhibition. Specifically, in vivo optrode recording shows that under non-stress conditions, CRHPVN neurons often fire with rhythmic brief bursts (RB), which, somewhat counterintuitively, constrains firing rate due to long (~2 s) interburst intervals. Stressful stimuli rapidly switch RB to continuous single spiking (SS), permitting a large increase in firing rate. A spiking network model shows that recurrent inhibition can control this activity-state switch, and more broadly the gain of spiking responses to excitatory inputs. In biological CRHPVN neurons ex vivo, the injection of whole-cell currents derived from our computational model recreates the in vivo-like switch between RB and SS, providing direct evidence that physiologically relevant network inputs enable state-dependent computation in single neurons. Together, we present a novel mechanism for state-dependent activity dynamics in CRHPVN neurons.
Abstract
Sleep is generally considered to be a state of large-scale synchrony across thalamus and neocortex; however, recent work has challenged this idea by reporting isolated sleep rhythms such as slow oscillations and spindles. What is the spatial scale of sleep rhythms? To answer this question, we adapted deep learning algorithms initially developed for detecting earthquakes and gravitational waves in high-noise settings to the analysis of neural recordings in sleep. We then studied sleep spindles in non-human primate electrocorticography (ECoG), human electroencephalogram (EEG), and clinical intracranial electroencephalogram (iEEG) recordings. Within each recording type, we find that widespread spindles occur much more frequently than previously reported. We then analyzed the spatiotemporal patterns of these large-scale, multi-area spindles and, in the EEG recordings, how spindle patterns change following a visual memory task. Our results reveal a potential role for widespread, multi-area spindles in the consolidation of memories in networks widely distributed across primate cortex.
Abstract
Populations of cortical neurons generate rhythmic fluctuations in their ongoing spontaneous activity. These fluctuations can be seen in the local field potential (LFP), which reflects summed return currents from synaptic activity in the local population near a recording electrode. The LFP is spectrally broad, and many researchers view this breadth as containing many narrowband oscillatory components that may have distinct functional roles. This view is supported by the observation that the phase of narrowband oscillations is often correlated with cortical excitability and can relate to the timing of spiking activity and the fidelity of sensory evoked responses. Accordingly, researchers commonly tune in to these channels by narrowband filtering the LFP. Alternatively, neural activity may be fundamentally broadband and composed of transient, nonstationary rhythms that are difficult to approximate as oscillations. In this view, the instantaneous state of the broad ensemble relates directly to the excitability of the local population with no particular allegiance to any frequency band. To distinguish between these alternatives, we asked whether the spiking activity of neocortical neurons in marmosets of either sex is better aligned with the phase of the LFP within narrow frequency bands or with a broadband measure. We find that the phase of broadband LFP fluctuations provides a better predictor of spike timing than the phase after filtering in narrow bands. These results challenge the view of the neocortex as a system composed of narrowband oscillators and support a view in which neural activity fluctuations are intrinsically broadband.
Abstract
We study the spectrum of the join of several circulant matrices. We apply our results to compute explicitly the spectrum of certain graphs obtained by joining several circulant graphs.
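As a quick illustration of the underlying fact (a sketch of ours, not the paper's join construction): the spectrum of a single circulant matrix is the discrete Fourier transform of its first row, which is what makes such spectra explicitly computable. A minimal numerical check:

```python
import numpy as np

def circulant(c):
    """Build the n x n circulant matrix whose first row is c."""
    n = len(c)
    return np.array([[c[(j - i) % n] for j in range(n)] for i in range(n)])

c = [0, 1, 0, 0, 1]          # adjacency of the 5-cycle, a circulant graph
A = circulant(c)
lam_dft = np.fft.fft(c)      # circulant eigenvalues: DFT of the first row
lam_direct = np.linalg.eigvals(A)
print(np.allclose(sorted(lam_dft.real), sorted(lam_direct.real)))
```

For the symmetric first row above the spectrum is real, 2cos(2*pi*j/5) for j = 0..4, with largest eigenvalue 2 (the graph's degree).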
Abstract
One of the simplest mathematical models in the study of nonlinear systems is the Kuramoto model, which describes synchronization in systems from swarms of insects to superconductors. We have recently found a connection between the original, real-valued nonlinear Kuramoto model and a corresponding complex-valued system that permits describing the system in terms of a linear operator and iterative update rule. We now use this description to investigate three major synchronization phenomena in Kuramoto networks (phase synchronization, chimera states, and traveling waves), not only in terms of steady state solutions but also in terms of transient dynamics and individual simulations. These results provide new mathematical insight into how sophisticated behaviors arise from connection patterns in nonlinear networked systems.
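For readers unfamiliar with the model, a minimal simulation of the classical real-valued Kuramoto system (parameters and the identical-frequency simplification are illustrative choices of ours, not the paper's): with attractive coupling, the order parameter r approaches 1 as the network phase-synchronizes.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, dt, steps = 50, 2.0, 0.01, 2000
omega = np.zeros(N)                      # identical natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)     # random initial phases

for _ in range(steps):
    # dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += dt * (omega + coupling)

r = abs(np.exp(1j * theta).mean())       # Kuramoto order parameter
print(round(r, 3))
```

With identical oscillators and all-to-all attractive coupling, almost every initial condition converges to full phase synchrony, so r ends near 1.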
Abstract
Studies of sensory-evoked neuronal responses often focus on mean spike rates, with fluctuations treated as internally-generated noise. However, fluctuations of spontaneous activity, often organized as traveling waves, shape stimulus-evoked responses and perceptual sensitivity. The mechanisms underlying these waves are unknown. Further, it is unclear whether waves are consistent with the low rate and weakly correlated "asynchronous-irregular" dynamics observed in cortical recordings. Here, we describe a large-scale computational model with topographically-organized connectivity and conduction delays relevant to biological scales. We find that spontaneous traveling waves are a general property of these networks. The traveling waves that occur in the model are sparse, with only a small fraction of neurons participating in any individual wave. Consequently, they do not induce measurable spike correlations and remain consistent with locally asynchronous irregular states. Further, by modulating local network state, they can shape responses to incoming inputs as observed in vivo.
Abstract
We study the Kuramoto model with attractive sine coupling. We introduce a complex-valued matrix formulation whose argument coincides with the original Kuramoto dynamics. We derive an exact solution for the complex-valued model, which permits analytical insight into individual realizations of the Kuramoto model. The existence of a complex-valued form of the Kuramoto model provides a key demonstration that, in some cases, reformulations of nonlinear dynamics in higher-order number fields may provide tractable analytical approaches.
Abstract
Many neurodegenerative and neuropsychiatric diseases and other brain disorders are accompanied by impairments in high-level cognitive functions including memory, attention, motivation, and decision-making. Despite several decades of extensive research, neuroscience is little closer to discovering new treatments. Key impediments include the absence of validated and robust cognitive assessment tools for facilitating translation from animal models to humans. In this review, we describe a state-of-the-art platform poised to overcome these impediments and improve the success of translational research, the Mouse Translational Research Accelerator Platform (MouseTRAP), which is centered on the touchscreen cognitive testing system for rodents. It integrates touchscreen-based tests of high-level cognitive assessment with state-of-the-art neurotechnology to record and manipulate molecular and circuit level activity in vivo in animal models during human-relevant cognitive performance. The platform is also integrated with two Open Science platforms designed to facilitate knowledge and data-sharing practices within the rodent touchscreen community, touchscreencognition.org and mousebytes.ca. Touchscreencognition.org includes the Wall, showcasing touchscreen news and publications, the Forum, for community discussion, and Training, which includes courses, videos, SOPs, and symposia. To get started, interested researchers simply create user accounts. We describe the origins of the touchscreen testing system, the novel lines of research it has facilitated, and its increasingly widespread use in translational research, which is attributable in part to knowledge-sharing efforts over the past decade. We then identify the unique features of MouseTRAP that stand to potentially revolutionize translational research, and describe new initiatives to partner with similar platforms such as McGill's M3 platform (m3platform.org).
Abstract
Perceptual sensitivity varies from moment to moment. One potential source of this variability is spontaneous fluctuations in cortical activity that can travel as a wave. Spontaneous travelling waves have been reported during anaesthesia, but it is not known whether spontaneous travelling waves have a role during waking perception. Here, using newly developed analytic techniques to characterize the moment-to-moment dynamics of noisy multielectrode data, we identify spontaneous waves of activity in the extrastriate visual cortex of awake, behaving marmosets (Callithrix jacchus). In monkeys trained to detect faint visual targets, the timing and position of spontaneous travelling waves before target onset predict the magnitude of target-evoked activity and the likelihood of target detection; by contrast, spatially disorganized fluctuations of neural activity are much less predictive. These results reveal an important role for spontaneous travelling waves in sensory processing through modulation of neural and perceptual sensitivity.
Abstract
Multichannel recording technologies have revealed travelling waves of neural activity in multiple sensory, motor and cognitive systems. These waves can be spontaneously generated by recurrent circuits or evoked by external stimuli. They travel along brain networks at multiple scales, transiently modulating spiking and excitability as they pass. Here, we review recent experimental findings that have found evidence for travelling waves at single-area (mesoscopic) and whole-brain (macroscopic) scales. We place these findings in the context of the current theoretical understanding of wave generation and propagation in recurrent networks. During the large low-frequency rhythms of sleep or the relatively desynchronized state of the awake cortex, travelling waves may serve a variety of functions, from long-term memory consolidation to processing of dynamic visual stimuli. We explore new avenues for experimental and computational understanding of the role of spatiotemporal activity patterns in the cortex.
Abstract
Sleep spindles are brief oscillatory events during non-rapid eye movement (NREM) sleep. Spindle density and synchronization properties differ between MEG and EEG recordings in humans and also vary with learning performance, suggesting spindle involvement in memory consolidation. Here, using computational models, we identified network mechanisms that may explain differences in spindle properties across cortical structures. First, we report that differences in spindle occurrence between MEG and EEG data may arise from the contrasting properties of the core and matrix thalamocortical systems. The matrix system, projecting superficially, has wider thalamocortical fanout than the core system, which projects to middle layers; because of this wider fanout, the matrix system requires the recruitment of a larger population of neurons to initiate a spindle. This property was sufficient to explain the lower spindle density and higher spatial synchrony of spindles in the superficial cortical layers, as observed in the EEG signal. In contrast, spindles in the core system occurred more frequently but less synchronously, as observed in the MEG recordings. Furthermore, consistent with human recordings, spindles in the model occurred independently in the core system, whereas matrix-system spindles commonly co-occurred with core spindles. We also found that the intracortical excitatory connections from layer III/IV to layer V promote spindle propagation from the core to the matrix system, leading to widespread spindle activity. Our study predicts that plasticity of intra- and inter-cortical connectivity can potentially be a mechanism for the increased spindle density observed during learning.
Abstract
Voltage-sensitive dye imaging (VSDI) is a key neurophysiological recording tool because it reaches brain scales that remain inaccessible to other techniques. The development of this technique from in vitro preparations to the behaving nonhuman primate has only been made possible thanks to the long-lasting, visionary work of Amiram Grinvald. This work has opened new scientific perspectives, to the great benefit of the neuroscience community. However, this unprecedented technique remains largely under-utilized, and many possibilities remain for VSDI to reveal new functional operations. One reason why this tool has not been used extensively is the inherent complexity of the signal. First, the signal mainly reflects the subthreshold neuronal population response and is not linked to spiking activity in a straightforward manner. Second, VSDI gives access to intracortical recurrent dynamics that are intrinsically complex and therefore nontrivial to process. Computational approaches are thus necessary to promote our understanding and optimal use of this powerful technique. Here, we review such approaches, from computational models to dissect the mechanisms and origin of the recorded signal, to advanced signal processing methods to unravel new neuronal interactions at the mesoscopic scale. Only a stronger development of interdisciplinary approaches can bridge micro- to macroscales.
Abstract
In estimating the frequency spectrum of real-world time series data, we must violate the assumption of infinite-length, orthogonal components in the Fourier basis. While it is widely known that care must be taken with discretely sampled data to avoid aliasing of high frequencies, less attention is given to the influence of low frequencies whose period exceeds the sampling time window. Here, we derive an analytic expression for the side-lobe attenuation of signal components in the frequency-domain representation. This expression allows us to detail the influence of individual frequency components throughout the spectrum. The first consequence is that the presence of low-frequency components introduces a $1/f^{\alpha}$ component across the power spectrum, with a scaling exponent of $\alpha \approx 2$. This scaling artifact can arise from diffuse low-frequency components, which can make it difficult to detect a priori. Further, treatment of the signal with standard digital signal processing techniques cannot easily remove this scaling component. While several theoretical models have been introduced to explain the ubiquitous $1/f^{\alpha}$ scaling component in neuroscientific data, we conjecture here that some experimental observations could be the result of such data analysis procedures.
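This leakage effect is easy to reproduce numerically. A sketch (our construction, not the paper's analysis): the periodogram of a sinusoid completing only half a cycle within the window falls off with a log-log slope near -2, mimicking a 1/f^2 power-law background.

```python
import numpy as np

N = 1024
n = np.arange(N)
x = np.cos(np.pi * n / N)           # half a cycle: period is twice the window
P = np.abs(np.fft.rfft(x)) ** 2     # periodogram of the truncated sinusoid

k = np.arange(4, 65)                # fit away from the spectral peak
slope = np.polyfit(np.log(k), np.log(P[k]), 1)[0]
print(round(slope, 2))              # near -2: the 1/f^2 leakage artifact
```

Because the sinusoid's frequency falls between DFT bins, its rectangular-window side lobes appear at every bin, decaying roughly as 1/f in amplitude and hence 1/f^2 in power.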
Abstract
The correlation method from brain imaging has been used to estimate functional connectivity in the human brain. However, brain regions might show very high correlation even when the two regions are not directly connected due to the strong interaction of the two regions with common input from a third region. One previously proposed solution to this problem is to use a sparse regularized inverse covariance matrix or precision matrix (SRPM) assuming that the connectivity structure is sparse. This method yields partial correlations to measure strong direct interactions between pairs of regions while simultaneously removing the influence of the rest of the regions, thus identifying regions that are conditionally independent. To test our methods, we first demonstrated conditions under which the SRPM method could indeed find the true physical connection between a pair of nodes for a spring-mass example and an RC circuit example. The recovery of the connectivity structure using the SRPM method can be explained by energy models using the Boltzmann distribution. We then demonstrated the application of the SRPM method for estimating brain connectivity during stage 2 sleep spindles from human electrocorticography (ECoG) recordings using an electrode array. The ECoG recordings that we analyzed were from a 32-year-old male patient with long-standing pharmaco-resistant left temporal lobe complex partial epilepsy. Sleep spindles were automatically detected using delay differential analysis and then analyzed with SRPM and the Louvain method for community detection. We found spatially localized brain networks within and between neighboring cortical areas during spindles, in contrast to the case when sleep spindles were not present.
Abstract
During sleep, the thalamus generates a characteristic pattern of transient, 11-15 Hz sleep spindle oscillations, which synchronize the cortex through large-scale thalamocortical loops. Spindles have been increasingly demonstrated to be critical for sleep-dependent consolidation of memory, but the specific neural mechanism for this process remains unclear. We show here that cortical spindles are spatiotemporally organized into circular wave-like patterns, organizing neuronal activity over tens of milliseconds, within the timescale for storing memories in large-scale networks across the cortex via spike-timing-dependent plasticity. These circular patterns repeat over hours of sleep with millisecond temporal precision, allowing reinforcement of the activity patterns through hundreds of reverberations. These results provide a novel mechanistic account for how global sleep oscillations and synaptic plasticity could strengthen networks distributed across the cortex to store coherent and integrated memories.
Abstract
Beta (β)- and gamma (γ)-oscillations are present in different cortical areas and are thought to be inhibition-driven, but it is not known whether these properties also apply to γ-oscillations in humans. Here, we analyze such oscillations in high-density microelectrode array recordings in humans and monkeys during the wake-sleep cycle. In these recordings, units were classified as excitatory or inhibitory cells. We find that γ-oscillations in humans and β-oscillations in monkeys are characterized by a strong involvement of inhibitory neurons, both in terms of their firing rate and their phasic firing with the oscillation cycle. The β- and γ-waves systematically propagate across the array, with similar velocities, during both wake and sleep. However, only in slow-wave sleep (SWS) are β- and γ-oscillations associated with highly coherent and functional interactions across several millimeters of the neocortex. This interaction is particularly pronounced between inhibitory cells. These results suggest that inhibitory cells are dominantly involved in the genesis of β- and γ-oscillations, as well as in the organization of their large-scale coherence, in the awake and sleeping brain. The finding that oscillation coherence is highest during SWS suggests that fast oscillations implement a highly coherent reactivation of wake patterns that may support memory consolidation during SWS.
Abstract
The central coefficients of powers of certain polynomials with arbitrary degree in $x$ form an important family of integer sequences. Although various recursive equations addressing these coefficients do exist, no explicit analytic representation has yet been proposed. In this article, we present an explicit form of the integer sequences of central multinomial coefficients of polynomials of even degree in terms of finite sums over Dirichlet kernels, hence linking these sequences to discrete $n$th-degree Fourier series expansions. The approach utilizes the diagonalization of circulant boolean matrices, and is generalizable to all multinomial coefficients of certain polynomials with even degree, thus forming the base for a new family of combinatorial identities.
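As a concrete companion to this result (a numerical check of ours, not the paper's identity): central coefficients of $(1 + x + \dots + x^d)^m$ can be extracted by evaluating the polynomial at roots of unity and inverting the discrete Fourier transform, the same circulant/DFT viewpoint the abstract builds on. For $d = 2$ this yields the central trinomial coefficients.

```python
import numpy as np

def central_coeff(d, m):
    """Central coefficient of (1 + x + ... + x^d)^m via roots of unity."""
    deg = d * m                        # degree of the expanded polynomial
    N = deg + 1                        # enough evaluation points to invert
    w = np.exp(2j * np.pi * np.arange(N) / N)   # N-th roots of unity
    vals = np.polyval(np.ones(d + 1), w) ** m   # evaluate the power there
    c = (vals * w ** (-(deg // 2))).sum() / N   # inverse DFT at the center
    return int(round(c.real))

print([central_coeff(2, m) for m in range(1, 6)])   # [1, 3, 7, 19, 51]
```

The geometric sums over roots of unity appearing in `vals` are the discrete analogue of the Dirichlet-kernel sums in the abstract.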
Abstract
Advancing the size and complexity of neural network models leads to an ever-increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically limited hardware resources, limited parameter configurability and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.
Abstract
Since its introduction, the "small-world" effect has played a central role in network science, particularly in the analysis of the complex networks of the nervous system. From the cellular level to that of interconnected cortical regions, many analyses have revealed small-world properties in the networks of the brain. In this work, we revisit the quantification of small-worldness in neural graphs. We find that neural graphs fall into the "borderline" regime of small-worldness, residing close to that of a random graph, especially when the degree sequence of the network is taken into account. We then apply recently introduced analytical expressions for clustering and distance measures to study this borderline small-worldness regime. We derive theoretical bounds for the minimal and maximal small-worldness index for a given graph, and by semi-analytical means, study the small-worldness index itself. With this approach, we find that graphs with small-worldness equivalent to that observed in experimental data are dominated by their random component. These results provide the first thorough analysis suggesting that neural graphs may reside far away from the maximally small-world regime.
Abstract
In the past two decades, significant advances have been made in understanding the structural and functional properties of biological networks, via graph-theoretic analysis. In general, most graph-theoretic studies are conducted in the presence of serious uncertainties, such as major undersampling of the experimental data. In the specific case of neural systems, however, a few moderately robust experimental reconstructions have been reported, and these have long served as fundamental prototypes for studying connectivity patterns in the nervous system. In this paper, we provide a comparative analysis of these "historical" graphs, both in their directed (original) and symmetrized (a common preprocessing step) forms, and provide a set of measures that can be consistently applied across graphs (directed or undirected, with or without self-loops). We focus on simple structural characterizations of network connectivity and find that in many measures, the networks studied are captured by simple random graph models. In a few key measures, however, we observe a marked departure from the random graph prediction. Our results suggest that the mechanism of graph formation in the networks studied is not well captured by existing abstract graph models in their first- and second-order connectivity.
Abstract
Propagating waves occur in many excitable media and were recently found in neural systems from retina to neocortex. While propagating waves are clearly present under anaesthesia, whether they also appear during awake and conscious states remains unclear. One possibility is that these waves are systematically missed in trial-averaged data, due to variability. Here we present a method for detecting propagating waves in noisy multichannel recordings. Applying this method to single-trial voltage-sensitive dye imaging data, we show that the stimulus-evoked population response in primary visual cortex of the awake monkey propagates as a travelling wave, with consistent dynamics across trials. A network model suggests that this reliability is the hallmark of the horizontal fibre network of superficial cortical layers. Propagating waves with similar properties occur independently in secondary visual cortex, but maintain precise phase relations with the waves in primary visual cortex. These results show that, in response to a visual stimulus, propagating waves are systematically evoked in several visual areas, generating a consistent spatiotemporal frame for further neuronal interactions.
Abstract
One of the simplest polynomial recursions exhibiting chaotic behavior is the logistic map $$x_{n+1} = a x_n ( 1 - x_n )$$ with $x_n, a \in \mathbb{Q}: x_n \in [0,1] \ \forall n \in \mathbb{N}$ and $a \in (0,4]$, the discrete-time version of the differential growth model introduced by Verhulst almost two centuries ago (Verhulst, 1838). Despite the importance of this discrete map for the field of nonlinear science, explicit solutions are known only for the special cases $a = 2$ and $a = 4$. In this article, we propose a representation of the Verhulst logistic map in terms of a finite power series in the map's growth parameter $a$ and initial value $x_0$ whose coefficients are given by the solution of a system of linear equations. Although the proposed representation cannot be viewed as a closed-form solution of the logistic map, it may help to reveal the sensitivity of the map to its initial value and, thus, could provide insights into the mathematical description of chaotic dynamics.
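For context, the $a = 4$ case mentioned above has the well-known closed form $x_n = \sin^2\!\left(2^n \arcsin\sqrt{x_0}\right)$, which is easy to verify numerically (a standard illustration, not code from the paper):

```python
import math

def logistic_orbit(a, x0, n):
    """Iterate x_{k+1} = a * x_k * (1 - x_k) for n steps."""
    x = x0
    for _ in range(n):
        x = a * x * (1 - x)
    return x

# closed form for a = 4: x_n = sin^2(2^n * asin(sqrt(x0)))
a, x0, n = 4.0, 0.3, 10
closed = math.sin(2 ** n * math.asin(math.sqrt(x0))) ** 2
print(abs(logistic_orbit(a, x0, n) - closed) < 1e-6)   # True: orbits agree
```

The angle-doubling in the closed form also makes the sensitivity to $x_0$ explicit: an initial error in the angle grows by a factor of $2^n$.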
A top downloaded open-access article in Elsevier Mathematics
Abstract
We introduce an analytic model for directed Watts-Strogatz small-world graphs and deduce an algebraic expression of its defining adjacency matrix. The latter is then used to calculate the small-world digraph's asymmetry index and clustering coefficient in an analytically exact fashion, valid non-asymptotically for all graph sizes. The proposed approach is general and can be applied to all algebraically well-defined graph-theoretical measures, thus allowing for an analytical investigation of finite-size small-world graphs.
Abstract
Propagating waves of activity have been recorded in many species, in various brain states, brain areas, and under various stimulation conditions. Here, we review the experimental literature on propagating activity in thalamus and neocortex across various levels of anesthesia and stimulation conditions. We also review computational models of propagating waves in networks of thalamic cells, cortical cells and of the thalamocortical system. Some discrepancies between experiments can be explained by the "network state", which differs vastly between anesthetized and awake conditions. We introduce a network model displaying different states and investigate their effect on the spatial structure of self-sustained and externally driven activity. This approach is a step towards understanding how the intrinsically-generated ongoing activity of the network affects its ability to process and propagate extrinsic input.
Abstract
In this article, we present a methodological framework that meets novel requirements emerging from upcoming types of accelerated and highly configurable neuromorphic hardware systems. We describe in detail a device with 45 million programmable and dynamic synapses that is currently under development, and we sketch the conceptual challenges that arise from taking this platform into operation. More specifically, we aim at the establishment of this neuromorphic system as a flexible and neuroscientifically valuable modeling tool that can be used by non-hardware experts. We consider various functional aspects to be crucial for this purpose, and we introduce a consistent workflow with detailed descriptions of all involved modules that implement the suggested steps: The integration of the hardware interface into the simulator-independent model description language PyNN; a fully automated translation between the PyNN domain and appropriate hardware configurations; an executable specification of the future neuromorphic system that can be seamlessly integrated into this biology-to-hardware mapping process as a test bench for all software layers and possible hardware design modifications; an evaluation scheme that deploys models from a dedicated benchmark library, compares the results generated by virtual or prototype hardware devices with reference software simulations and analyzes the differences. The integration of these components into one hardware-software workflow provides an ecosystem for ongoing preparative studies that support the hardware design process and represents the basis for the maturity of the model-to-hardware mapping software. The functionality and flexibility of the latter is proven with a variety of experimental results.
Abstract
In the hippocampus and the neocortex, the coupling between local field potential (LFP) oscillations and the spiking of single neurons can be highly precise, across neuronal populations and cell types. Spike phase (i.e., the spike time with respect to a reference oscillation) is known to carry reliable information, both with phase-locking behavior and with more complex phase relationships, such as phase precession. How this precision is achieved by neuronal populations, whose membrane properties and total input may be quite heterogeneous, is nevertheless unknown. In this note, we investigate a simple mechanism for learning precise LFP-to-spike coupling in feed-forward networks -- the reliable, periodic modulation of presynaptic firing rates during oscillations, coupled with spike-timing dependent plasticity. When oscillations are within the biological range (2-150 Hz), firing rates of the input change on a timescale highly relevant to spike-timing dependent plasticity (STDP). Through analytic and computational methods, we find points of stable phase-locking for a neuron with plastic input synapses. These points correspond to precise phase-locking behavior in the feed-forward network. The location of these points depends on the oscillation frequency of the inputs, the STDP time constants, and the balance of potentiation and de-potentiation in the STDP rule. For a given input oscillation, the balance of potentiation and de-potentiation in the STDP rule is the critical parameter that determines the phase at which an output neuron will learn to spike. These findings are robust to changes in intrinsic post-synaptic properties. Finally, we discuss implications of this mechanism for stable learning of spike-timing in the hippocampus.
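For readers unfamiliar with the rule, a minimal sketch of the standard pairwise exponential STDP window (parameter values here are illustrative, not those used in the paper): potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise, with the potentiation/depression balance set by the amplitudes.

```python
import math

def stdp(dt, a_plus=0.01, a_minus=0.012, tau_plus=0.02, tau_minus=0.02):
    """Pairwise STDP weight change; dt = t_post - t_pre in seconds."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)    # pre before post: potentiate
    return -a_minus * math.exp(dt / tau_minus)      # post before pre: depress

print(stdp(0.01) > 0, stdp(-0.01) < 0)
```

With `a_minus > a_plus`, the rule is net-depressing for symmetric spike pairings, the kind of balance the abstract identifies as the critical parameter for the learned phase.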