Abstract:
Visual processing in the brain seems to provide fast but coarse information before finer details become available. Such dynamics also occur in single neurons at several levels of the visual system. In the dorsal lateral geniculate nucleus (LGN), neurons have a receptive field (RF) with antagonistic center-surround organization, and temporal changes in center-surround organization are generally assumed to be due to a time lag of the surround activity relative to the center activity. Spatial resolution may be measured as the inverse of center size, and in LGN neurons the RF-center width changes during static stimulation with durations in the range of normal fixation periods (250–500 ms) between saccadic eye movements. The RF-center is initially large, but rapidly shrinks during the first ~100 ms to a rather sustained size. We studied such dynamics in anesthetized cats during presentation (250 ms) of static spots centered on the RF, with a main focus on the transition from the first, transient and highly dynamic component to the second, more sustained component. The results suggest that the two components depend on different neuronal mechanisms that operate in parallel and with partial temporal overlap, rather than on a continuously changing center-surround balance. Results from mathematical modeling further supported this conclusion. We found that existing models for the spatiotemporal RF of LGN neurons failed to account for our experimental results. The modeling demonstrated that a new model, in which the response is given by the sum of an early transient component and a partially overlapping sustained component, adequately accounts for our experimental data.
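The two-component response idea described here can be sketched in a few lines of Python; note that the functional forms (an alpha-function transient plus a sigmoidal-onset sustained component) and all parameter values are illustrative assumptions, not the fitted model or values from the study:

```python
import numpy as np

def rf_response(t_ms, a_trans=1.0, tau_trans=30.0, a_sust=0.4,
                t_onset=40.0, tau_rise=20.0):
    """Firing-rate response to a static spot, modeled as the sum of an early
    transient (alpha-function) and a partially overlapping sustained
    (sigmoidal-onset) component. All parameters are illustrative."""
    transient = a_trans * (t_ms / tau_trans) * np.exp(1.0 - t_ms / tau_trans)
    sustained = a_sust / (1.0 + np.exp(-(t_ms - t_onset) / tau_rise))
    return transient + sustained

t = np.arange(0.0, 250.0, 1.0)   # 250 ms static stimulus, as in the experiments
r = rf_response(t)               # early peak, then a lower sustained plateau
```

The transient peaks within the first tens of milliseconds and decays, while the partially overlapping sustained component settles to a lower plateau for the rest of the 250 ms stimulus.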

Abstract:
Firing-rate models provide a practical tool for studying the dynamics of trial- or population-averaged neuronal signals. A wealth of theoretical and experimental studies has been dedicated to the derivation or extraction of such models by investigating the firing-rate response characteristics of ensembles of neurons. The majority of these studies assumes that neurons receive input spikes at a high rate through weak synapses (diffusion approximation). For many biological neural systems, however, this assumption cannot be justified. So far, it is unclear how time-varying presynaptic firing rates are transmitted by a population of neurons if the diffusion assumption is dropped. Here, we numerically investigate the stationary and non-stationary firing-rate response properties of leaky integrate-and-fire neurons receiving input spikes through excitatory synapses with alpha-function shaped postsynaptic currents for strong synaptic weights. Input spike trains are modeled by inhomogeneous Poisson point processes with sinusoidal rate. Average rates, modulation amplitudes, and phases of the period-averaged spike responses are measured for a broad range of stimulus, synapse, and neuron parameters. Across wide parameter regions, the resulting transfer functions can be approximated by a linear first-order low-pass filter. Below a critical synaptic weight, the cutoff frequencies are approximately constant and determined by the synaptic time constants. Only for synapses with unrealistically strong weights are the cutoff frequencies significantly increased. To account for stimuli with larger modulation depths, we combine the measured linear transfer function with the nonlinear response characteristics obtained for stationary inputs. The resulting linear–nonlinear model accurately predicts the population response for a variety of non-sinusoidal stimuli.
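A linear first-order low-pass transfer function of the kind reported above is simple to write down; the unit gain and 20 Hz cutoff below are hypothetical placeholders, not the measured values:

```python
import numpy as np

def lowpass_transfer(f, gain=1.0, f_cutoff=20.0):
    """First-order low-pass filter H(f) = gain / (1 + i*f/f_cutoff),
    approximating the population rate-transfer function (illustrative
    gain and cutoff, not fitted values)."""
    H = gain / (1.0 + 1j * f / f_cutoff)
    return np.abs(H), np.angle(H)

amp_slow, phase_slow = lowpass_transfer(1.0)   # slow modulation passes through
amp_cut, phase_cut = lowpass_transfer(20.0)    # at the cutoff frequency
```

At the cutoff frequency the modulation amplitude drops to 1/sqrt(2) of the low-frequency gain and the response lags the stimulus by 45 degrees, the defining signature of a first-order filter.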

Abstract:
Definition. The phase-of-firing code is a neural coding scheme whereby neurons encode information using the time at which they fire spikes within a cycle of the ongoing oscillatory pattern of network activity. This coding scheme may allow neurons to use their temporal pattern of spikes to encode information that is not encoded in their firing rate.
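When the frequency of the ongoing oscillation is known, assigning each spike a phase within the cycle is a one-line computation; the 10 Hz rhythm and spike times below are invented purely for illustration:

```python
import numpy as np

def spike_phases(spike_times, osc_freq):
    """Phase in [0, 2*pi) of each spike within the cycle of a network
    oscillation of known frequency (times in s, frequency in Hz)."""
    return (2.0 * np.pi * osc_freq * np.asarray(spike_times)) % (2.0 * np.pi)

# spikes locked to opposite phases of a 10 Hz (100 ms cycle) oscillation
trough_phases = spike_phases([0.05, 0.15, 0.25], osc_freq=10.0)  # mid-cycle
peak_phases = spike_phases([0.1, 0.2], osc_freq=10.0)            # cycle start
```

Two neurons with identical firing rates would yield distinct phase distributions here, which is exactly the extra information channel the phase-of-firing code provides.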

Abstract:
Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel are perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population-averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive.
Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II).
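The one-dimensional compound-activity argument can be illustrated with a minimal discrete-time sketch (not the paper's spiking-network simulations); the feedback weight, noise model, and time constants are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def population_rate(w_feedback, n_steps=20000, dt=0.1, tau=1.0):
    """Discrete-time sketch of one-dimensional compound-activity dynamics
    driven by white noise; w_feedback < 0 closes an inhibitory (negative)
    feedback loop around the population rate."""
    a = np.zeros(n_steps)
    noise = rng.standard_normal(n_steps)
    for i in range(1, n_steps):
        a[i] = a[i - 1] + dt / tau * (-a[i - 1] + w_feedback * a[i - 1] + noise[i])
    return a

var_open_loop = population_rate(0.0).var()      # no feedback (feedforward)
var_closed_loop = population_rate(-10.0).var()  # strong inhibitory feedback
```

Closing the negative feedback loop reduces the variance of the compound activity, mirroring the suppression of population-rate fluctuations described in the abstract.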

Abstract:
GABAergic interneurons (INs) in the dorsal lateral geniculate nucleus (dLGN) shape the information flow from retina to cortex, presumably by controlling the number of visually evoked spikes in geniculate thalamocortical (TC) neurons, and refining their receptive fields. The INs exhibit a rich variety of firing patterns: Depolarizing current injections to the soma may induce tonic firing, periodic bursting or an initial burst followed by tonic spiking, sometimes with prominent spike-time adaptation. When released from hyperpolarization, some INs elicit rebound bursts, while others return more passively to the resting potential. A full mechanistic understanding that explains the function of the dLGN on the basis of neuronal morphology, physiology and circuitry is currently lacking. One way to approach such an understanding is by developing a detailed mathematical model of the involved cells and their interactions. Limitations of previous models of dLGN INs prevent an accurate representation of the conceptual framework needed to understand the computational properties of this region. We here present a detailed compartmental model of INs using, for the first time, a morphological reconstruction and a set of active dendritic conductances constrained by experimental somatic recordings from INs under several different current-clamp conditions. The model makes a number of experimentally testable predictions about the role of specific mechanisms for the firing properties observed in these neurons. In addition to accounting for the significant features of all experimental traces, it quantitatively reproduces the experimental recordings of the action-potential firing frequency as a function of injected current.
We show how and why relative differences in conductance values, rather than differences in ion channel composition, could account for the distinct differences between the responses observed in two different neurons, suggesting that INs may be individually tuned to optimize network operation under different input conditions.
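The idea that conductance magnitudes alone can reshape a neuron's response can be caricatured with the analytic f-I curve of a leaky integrate-and-fire cell, a far simpler stand-in for the compartmental model of the study; all parameter values are illustrative:

```python
import numpy as np

def lif_rate(i_inj, g_leak, c_m=1.0, v_thresh=20.0, v_reset=0.0, t_ref=2.0):
    """Analytic f-I curve (spikes/ms) of a leaky integrate-and-fire cell;
    only the leak conductance differs between the two 'neurons' below."""
    tau = c_m / g_leak                       # membrane time constant (ms)
    rates = np.zeros_like(i_inj, dtype=float)
    above = i_inj > g_leak * v_thresh        # suprathreshold currents only
    v_inf = i_inj[above] / g_leak
    isi = t_ref + tau * np.log((v_inf - v_reset) / (v_inf - v_thresh))
    rates[above] = 1.0 / isi
    return rates

i = np.linspace(0.0, 5.0, 51)                # injected current (nominal units)
rates_low_g = lif_rate(i, g_leak=0.05)       # rheobase at i = 1.0
rates_high_g = lif_rate(i, g_leak=0.10)      # same 'channels', doubled conductance
```

Doubling a single conductance value shifts the rheobase and flattens the curve, showing how relative conductance differences, with an unchanged set of mechanisms, can produce qualitatively distinct responses.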

Abstract:
A new method is presented for extraction of population firing-rate models for both thalamocortical and intracortical signal transfer based on stimulus-evoked data from simultaneous thalamic single-electrode and cortical recordings using linear (laminar) multielectrodes in the rat barrel system. Time-dependent population firing rates for granular (layer 4), supragranular (layer 2/3), and infragranular (layer 5) populations in a barrel column and the thalamic population in the homologous barreloid are extracted from the high-frequency portion (multi-unit activity; MUA) of the recorded extracellular signals. These extracted firing rates are in turn used to identify population firing-rate models formulated as integral equations with exponentially decaying coupling kernels, allowing for straightforward transformation to the more common firing-rate formulation in terms of differential equations. Optimal model structures and model parameters are identified by minimizing the deviation between model firing rates and the experimentally extracted population firing rates. For the thalamocortical transfer, the experimental data favor a model with fast feedforward excitation from thalamus to the layer-4 laminar population combined with a slower inhibitory process due to feedforward and/or recurrent connections and mixed linear-parabolic activation functions. The extracted firing rates of the various cortical laminar populations are found to exhibit strong temporal correlations for the present experimental paradigm, and simple feedforward population firing-rate models combined with linear or mixed linear-parabolic activation functions are found to provide excellent fits to the data. The identified thalamocortical and intracortical network models are thus found to be qualitatively very different.
While the thalamocortical circuit is optimally stimulated by rapid changes in the thalamic firing rate, the intracortical circuits are low-pass and respond most strongly to slowly varying inputs from the cortical layer-4 population.
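A rate model whose integral form has an exponentially decaying coupling kernel is equivalent to a first-order differential equation, which is straightforward to integrate; the rectified-linear activation function, time constants, and step stimulus below are assumptions for illustration, not the fitted models of the study:

```python
import numpy as np

def simulate_rate(stim, dt=0.001, tau=0.01, w=1.0):
    """tau * dr/dt = -r + w * phi(stim): the differential form of a rate
    model whose integral form has an exponentially decaying kernel."""
    phi = np.maximum(stim, 0.0)            # linear (rectified) activation
    r = np.zeros_like(stim)
    for i in range(1, len(stim)):
        r[i] = r[i - 1] + dt / tau * (-r[i - 1] + w * phi[i - 1])
    return r

n = 100                  # 100 ms at 1 ms resolution
stim = np.zeros(n)
stim[20:] = 1.0          # step in the presynaptic rate after 20 ms
r = simulate_rate(stim)  # relaxes exponentially toward the new steady state
```

The exponential relaxation toward w*phi(stim) is exactly the convolution with the decaying kernel, which is why the two formulations can be converted into each other.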

Abstract:
The cable equation is a proper framework for modeling electrical neural signaling that takes place at a timescale at which the ionic concentrations vary little. However, in neural tissue there are also key dynamic processes that occur at longer timescales. For example, extended periods of intense neural signaling may cause the local extracellular K+ concentration to increase by several millimolar. The clearance of this excess K+ depends partly on diffusion in the extracellular space, partly on local uptake by astrocytes, and partly on intracellular transport (spatial buffering) within astrocytes. These processes, which take place on the timescale of seconds, demand a mathematical description able to account for the spatiotemporal variations in ion concentrations as well as the subsequent effects of these variations on the membrane potential. Here, we present a general electrodiffusive formalism for modeling ion concentration dynamics in a one-dimensional geometry, including both the intra- and extracellular domains. Based on the Nernst-Planck equations, this formalism ensures that the membrane potential and the ion concentrations are mutually consistent, ensures global particle/charge conservation, and accounts for diffusion and for concentration-dependent variations in resistivity. We apply the formalism to a model of astrocytes exchanging ions with the extracellular space. The simulations show that K+ removal from high-concentration regions is driven by a local depolarization of the astrocyte membrane, which concertedly (i) increases the local astrocytic uptake of K+, (ii) suppresses extracellular transport of K+, (iii) increases axial transport of K+ within astrocytes, and (iv) facilitates astrocytic release of K+ in regions where the extracellular concentration is low. Together, these mechanisms seem to provide a robust regulatory scheme for shielding the extracellular space from excess K+.
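The diffusion and drift terms of the one-dimensional Nernst-Planck flux can be sketched directly; the geometry, the K+ bump, and the zero-field condition below are illustrative choices, not taken from the astrocyte model:

```python
import numpy as np

def nernst_planck_flux(c, v, dx, D=1.96e-9, z=1, T=310.0):
    """1-D electrodiffusive flux of an ion species: Fickian diffusion down
    the concentration gradient plus drift of the charge in the electric
    field (gradients via central differences in the interior)."""
    R, F = 8.314, 96485.0                 # gas constant, Faraday constant
    dcdx = np.gradient(c, dx)
    dvdx = np.gradient(v, dx)
    return -D * (dcdx + z * F / (R * T) * c * dvdx)

x = np.linspace(0.0, 1e-4, 101)                       # 100 um of tissue
c_k = 3.0 + 5.0 * np.exp(-((x - 5e-5) / 1e-5) ** 2)   # local K+ bump (mM)
v = np.zeros_like(x)                                  # no field: pure diffusion
flux = nernst_planck_flux(c_k, v, x[1] - x[0])
```

With no electric field, the flux points away from the high-concentration region on both sides, i.e., excess K+ diffuses outward; a nonzero potential gradient would add the drift term that the full electrodiffusive formalism keeps consistent with the membrane potential.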

Abstract:
Despite its century-old use, the interpretation of local field potentials (LFPs), the low-frequency part of electrical signals recorded in the brain, is still debated. In cortex the LFP appears to mainly stem from transmembrane neuronal currents following synaptic input, and obvious questions regarding the ‘locality’ of the LFP are: What is the size of the signal-generating region, i.e., the spatial reach, around a recording contact? How far does the LFP signal extend outside a synaptically activated neuronal population? And how do the answers depend on the temporal frequency of the LFP signal? Experimental inquiries have given conflicting results, and we here pursue a modeling approach based on a well-established biophysical forward-modeling scheme incorporating detailed reconstructed neuronal morphologies in precise calculations of LFPs from populations of thousands of neurons. The two key factors determining the frequency dependence of the LFP are the spatial decay of the single-neuron LFP contribution and the conversion of synaptic input correlations into correlations between single-neuron LFP contributions. Both factors are seen to give low-pass filtering of the LFP signal power. For uncorrelated input only the first factor is relevant, and here a modest reduction (<50%) in the spatial reach is observed for higher frequencies (>100 Hz) compared to the near-DC value. Much larger frequency-dependent effects are seen when populations of pyramidal neurons receive correlated and spatially asymmetric inputs: the low-frequency LFP power can here be an order of magnitude or more larger than at 60 Hz. Moreover, the low-frequency LFP components have larger spatial reach and extend further outside the active population than high-frequency components. Further, the spatial LFP profiles for such populations typically span the full vertical extent of the dendrites of neurons in the population.
Our numerical findings are backed up by an intuitive simplified model for the generation of population LFP.
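The biophysical forward-modeling scheme referred to here reduces, in its simplest point-source form, to summing I/(4*pi*sigma*r) contributions in a homogeneous volume conductor; the sink/source pair below is a toy example, not the reconstructed-morphology populations used in the study:

```python
import numpy as np

def lfp_from_point_sources(src_pos, src_currents, electrode_pos, sigma=0.3):
    """Extracellular potential phi = sum_n I_n / (4*pi*sigma*r_n) in an
    infinite homogeneous volume conductor with conductivity sigma (S/m)."""
    r = np.linalg.norm(np.asarray(src_pos) - np.asarray(electrode_pos), axis=1)
    return float(np.sum(np.asarray(src_currents) / (4.0 * np.pi * sigma * r)))

# a sink/source pair 100 um apart (currents sum to zero, as for a real cell)
dipole = [[0.0, 0.0, 0.0], [0.0, 0.0, 1e-4]]
currents = [-1e-9, 1e-9]                                # amperes
phi_near = lfp_from_point_sources(dipole, currents, [1e-4, 0.0, 0.0])
phi_far = lfp_from_point_sources(dipole, currents, [1e-3, 0.0, 0.0])
```

Because the sink and source nearly cancel at a distance, the dipolar potential decays much faster than either monopole alone, which is the elementary reason the LFP has a finite spatial reach at all.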

Abstract:
Multielectrode array recordings of extracellular electrical field potentials along the depth axis of the cerebral cortex are gaining popularity as an approach for investigating the activity of cortical neuronal circuits. The low-frequency band of extracellular potential, i.e., the local field potential (LFP), is assumed to reflect synaptic activity and can be used to extract the laminar current source density (CSD) profile. However, physiological interpretation of the CSD profile is uncertain because it does not disambiguate synaptic inputs from passive return currents and does not identify population-specific contributions to the signal. These limitations prevent interpretation of the CSD in terms of synaptic functional connectivity in the columnar microcircuit. Here we present a novel anatomically informed model for decomposing the LFP signal into population-specific contributions and for estimating the corresponding activated synaptic projections. This involves a linear forward model, which predicts the population-specific laminar LFP in response to synaptic inputs applied at different positions along each population, and a linear inverse model, which reconstructs laminar profiles of synaptic inputs from laminar LFP data based on the forward model. Assuming spatially smooth synaptic inputs within individual populations, the model decomposes the columnar LFP into population-specific contributions and estimates the corresponding laminar profiles of synaptic input as a function of time. It should be noted that constant synaptic currents at all positions along a neuronal population cannot be reconstructed, as this does not result in a change in extracellular potential. However, constraining the solution using a priori knowledge of the spatial distribution of synaptic connectivity provides the further advantage of estimating the strength of active synaptic projections from the columnar LFP profile, thus fully specifying the synaptic inputs.
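The standard CSD estimate mentioned above is the sign-inverted second spatial derivative of the laminar LFP profile, scaled by tissue conductivity; the Gaussian LFP trough below is synthetic test data, not a recording:

```python
import numpy as np

def csd_standard(lfp, dz, sigma=0.3):
    """Standard CSD estimate CSD = -sigma * d2(phi)/dz2, computed with a
    second-difference stencil (edge contacts left undefined)."""
    csd = np.full_like(lfp, np.nan)
    csd[1:-1] = -sigma * (lfp[2:] - 2.0 * lfp[1:-1] + lfp[:-2]) / dz ** 2
    return csd

z = np.arange(16) * 1e-4                          # 16 contacts, 100 um apart
lfp = -1e-4 * np.exp(-((z - 8e-4) / 2e-4) ** 2)   # synthetic trough at contact 8
csd = csd_standard(lfp, dz=1e-4)                  # sink at 8, flanking sources
```

The estimated profile shows a current sink at the LFP trough flanked by return sources, but, as the abstract stresses, it cannot by itself tell synaptic input currents apart from passive return currents.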

Abstract:
Power laws, that is, power spectral densities (PSDs) exhibiting 1/f^α behavior for large frequencies f, have been observed both in microscopic (neural membrane potentials and currents) and macroscopic (electroencephalography; EEG) recordings. While complex network behavior has been suggested to be at the root of this phenomenon, we here demonstrate a possible origin of such power laws in the biophysical properties of single neurons described by the standard cable equation. Taking advantage of the analytical tractability of the so-called ball-and-stick neuron model, we derive general expressions for the PSD transfer functions for a set of measures of neuronal activity: the soma membrane current, the current-dipole moment (corresponding to the single-neuron EEG contribution), and the soma membrane potential. These PSD transfer functions relate the PSDs of the respective measurements to the PSDs of the noisy input currents. With homogeneously distributed input currents across the neuronal membrane we find that all PSD transfer functions express asymptotic high-frequency 1/f^α power laws, with power-law exponents identified analytically for the soma membrane current, the current-dipole moment, and the soma membrane potential. Comparison with available data suggests that the apparent power laws observed in the high-frequency end of the PSD spectra may stem from uncorrelated current sources which are homogeneously distributed across the neural membranes and themselves exhibit pink (1/f) noise distributions. While the PSD noise spectra at low frequencies may be dominated by synaptic noise, our findings suggest that the high-frequency power laws may originate in noise from intrinsic ion channels. The significance of this finding goes beyond neuroscience as it demonstrates how power laws with a wide range of values for the power-law exponent α may arise from a simple, linear partial differential equation.
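Identifying a power-law exponent α from a PSD amounts to a log-log linear fit; the first-order filter driven by pink input noise below is a toy stand-in for the ball-and-stick transfer functions, chosen only to show how a transfer function reshapes an input power law:

```python
import numpy as np

def power_law_exponent(freqs, psd):
    """Estimate alpha in PSD ~ 1/f**alpha from a log-log linear fit."""
    slope, _ = np.polyfit(np.log10(freqs), np.log10(psd), 1)
    return -slope

# PSD of a first-order low-pass filter (|H|^2 ~ 1/f^2 for f >> f_c) driven
# by pink (1/f) input noise: the output PSD approaches 1/f^3 asymptotically.
f = np.logspace(2, 4, 200)            # frequencies far above the cutoff
f_c = 1.0
psd = 1.0 / (1.0 + (f / f_c) ** 2) / f
alpha = power_law_exponent(f, psd)    # close to 3 in this frequency range
```

The fitted exponent is the sum of the input-noise exponent and the filter's asymptotic roll-off, illustrating how a simple linear system can inherit and modify a power law from its noise source.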