Abstract:
In order to reduce the magnitude of the force applied to the skull during treatment of acute cervical spine dislocation, we developed a method of skeletal traction based on reducing the friction forces under the patient’s head. Traction force was applied to the skulls of five patients with cervical fracture-dislocations. A difference in friction was created between the interfaces under the patient’s head and shoulder girdle. The traction weight required for reduction of the vertebral dislocation was significantly lower than the minimal traction weight expected with commonly used techniques (p = 0.013). The presented method permits effective and safe reduction of a dislocated cervical vertebra with a relatively low traction force.

Abstract:
Background: The phenomena that emerge from the interaction of the stochastic opening and closing of ion channels (channel noise) with the non-linear neural dynamics are essential to our understanding of the operation of the nervous system. The effects that channel noise can have on neural dynamics are generally studied using numerical simulations of stochastic models. Algorithms based on discrete Markov Chains (MC) seem to be the most reliable, but even optimized algorithms come with a non-negligible computational cost. Diffusion Approximation (DA) methods use Stochastic Differential Equations (SDE) to approximate the behavior of a number of MCs, considerably speeding up simulation times. However, model comparisons have suggested that DA methods do not yield the same results as MC modeling in terms of channel noise statistics and effects on excitability. Recently, it was shown that this difference arose because the MCs were modeled with coupled gating particles, while the DA was modeled with uncoupled gating particles. Implementations of DA with coupled particles, in the context of a specific kinetic scheme, yielded results similar to MC. However, it remained unclear how to generalize these implementations to different kinetic schemes, or whether they were faster than MC algorithms. Additionally, a steady-state approximation was used for the stochastic terms, which, as we show here, can introduce significant inaccuracies. Main Contributions: We derived the SDE explicitly for any given ion channel kinetic scheme. The resulting generic equations were surprisingly simple and interpretable, allowing an easy, transparent and efficient DA implementation that avoids unnecessary approximations. The algorithm was tested in a voltage clamp simulation and in two different current clamp simulations, yielding the same results as MC modeling.
Moreover, this DA method was considerably more efficient than MC methods, except when short time steps or low channel numbers were used.
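To make the idea of a Diffusion Approximation concrete, the following is a minimal sketch for the simplest possible case: a population of two-state (open/closed) channels under voltage clamp, simulated by Euler–Maruyama integration of the standard Langevin equation, in which the noise variance scales inversely with the channel count. The rates, channel number and time step are illustrative assumptions, not values from the paper, and this toy case does not reproduce the paper's general derivation for arbitrary kinetic schemes.

```python
import numpy as np

rng = np.random.default_rng(0)

alpha, beta = 2.0, 1.0   # opening / closing rates at a fixed clamped voltage (assumed)
N = 10_000               # number of channels (assumed)
dt = 0.01                # time step
steps = 50_000

x = 0.0                  # fraction of open channels
trace = np.empty(steps)
for t in range(steps):
    drift = alpha * (1.0 - x) - beta * x
    # channel-noise term: variance = (total transition flux) / N
    diffusion = np.sqrt(max(alpha * (1.0 - x) + beta * x, 0.0) / N)
    x += drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
    x = min(max(x, 0.0), 1.0)   # keep the fraction a valid probability
    trace[t] = x

x_inf = alpha / (alpha + beta)   # deterministic steady state (2/3 here)
print(trace[steps // 2 :].mean(), x_inf)
```

For large N the trajectory fluctuates closely around the deterministic steady state; shrinking N makes the fluctuations grow, which is the regime where (as the abstract notes) discrete MC simulation becomes preferable.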

Abstract:
Recent experiments have demonstrated that the timescale of adaptation of single neurons and ion channel populations to stimuli slows down as the length of stimulation increases; in fact, no upper bound on adaptation timescales seems to exist in such systems. Furthermore, patch clamp experiments on single ion channels have hinted at the existence of large, mostly unobservable, inactivation state spaces within a single ion channel. This raises the question of the relation between this multitude of inactivation states and the observed behavior. In this work we propose a minimal model for ion channel dynamics which does not assume any specific structure of the inactivation state space. The model is simple enough to render an analytical study possible. This leads to a clear and concise explanation of the experimentally observed exponential history-dependent relaxation in sodium channels in a voltage clamp setting, and shows that their recovery rate from slow inactivation must be voltage dependent. Furthermore, we predict that history-dependent relaxation cannot be created by overly sparse spiking activity. While the model was created with ion channel populations in mind, its simplicity and generality render it a good starting point for modeling similar effects in other systems, and for scaling up to higher levels such as single neurons, which are also known to exhibit multiple timescales.

Abstract:
In recent experiments, synaptically isolated neurons from rat cortical culture were stimulated with periodic extracellular fixed-amplitude current pulses for extended durations of days. The neuron’s response depended on its own history, as well as on the history of the input, and was classified into several modes. Interestingly, in one of the modes the neuron behaved intermittently, exhibiting irregular firing patterns that changed in a complex and variable manner over the entire range of experimental timescales, from seconds to days. With the aim of developing a minimal biophysical explanation for these results, we propose a general scheme that, given a few assumptions (mainly, a timescale separation in kinetics), closely describes the response of deterministic conductance-based neuron models under pulse stimulation using a discrete-time piecewise linear mapping, which is amenable to detailed mathematical analysis. Using this method we reproduce the basic modes exhibited by the neuron experimentally, as well as the mean response in each mode. Specifically, we derive precise closed-form input-output expressions for the transient timescale and firing rates, which are expressed in terms of experimentally measurable variables and conform with the experimental results. However, the mathematical analysis shows that the resulting firing patterns in these deterministic models are always regular and repeatable (i.e., no chaos), in contrast to the irregular and variable behavior displayed by the neuron in certain regimes. This fact, and the sensitive near-threshold dynamics of the model, indicate that intrinsic ion channel noise has a significant impact on the neuronal response and may help reproduce the experimentally observed variability, as we also demonstrate numerically. In a companion paper, we extend our analysis to stochastic conductance-based models, and show how these can be used to reproduce the details of the observed irregular and variable neuronal response.

Abstract:
In this paper we characterize irreducible generic representations of $\SO_{2n+1}(k)$ (where $k$ is a $p$-adic field) by means of twisted local gamma factors (the Local Converse Theorem). As applications, we prove that two irreducible generic cuspidal automorphic representations of $\SO_{2n+1}({\Bbb A})$ (where ${\Bbb A}$ is the ring of adeles of a number field) are equivalent if their local components are equivalent at almost all local places (the Rigidity Theorem); and we prove the Local Langlands Reciprocity Conjecture for generic supercuspidal representations of $\SO_{2n+1}(k)$.

Abstract:
The evolution of a continuous-time Markov process with a finite number of states is usually calculated via the Master equation, a linear differential equation with a singular generator matrix. We derive a general method for reducing the dimensionality of the Master equation by one using the probability normalization constraint, thus obtaining an affine differential equation with a (non-singular) stable generator matrix. Additionally, the reduced form yields a simple explicit expression for the stationary probability distribution, which is usually derived implicitly. Finally, we discuss the application of this method to stochastic differential equations.
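The reduction described above can be sketched in a few lines. Assume the row convention dp/dt = pQ, with generator rows summing to zero; substituting the constraint p_n = 1 - (p_1 + … + p_{n-1}) eliminates the last state and gives the affine system dq/dt = qA + b with A_ij = Q_ij - Q_nj and b_j = Q_nj, whose fixed point q* = -bA^{-1} is the stationary distribution of the first n-1 states. The 3-state generator below is an illustrative example, not from the paper.

```python
import numpy as np

# Example 3-state generator (row convention dp/dt = p @ Q, rows sum to zero).
Q = np.array([[-2.0,  1.0,  1.0],
              [ 0.5, -1.0,  0.5],
              [ 1.0,  2.0, -3.0]])

n = Q.shape[0]
# Eliminate the last state via sum(p) = 1:
A = Q[: n - 1, : n - 1] - Q[n - 1, : n - 1]   # reduced, non-singular generator
b = Q[n - 1, : n - 1]                          # affine term from normalization

# Stationary distribution falls out explicitly from q @ A + b = 0:
q_star = -b @ np.linalg.inv(A)
p_star = np.append(q_star, 1.0 - q_star.sum())
print(p_star)   # stationary probabilities; satisfies p_star @ Q = 0
```

The same fixed point is usually obtained implicitly as the null vector of Q; here it comes from a single non-singular solve.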

Abstract:
Many systems are modulated by unknown slow processes. This hinders analysis in highly non-linear systems, such as excitable systems. We show that for such systems, if the input matches the sparse `spiky' nature of the output, the spiking input-output relation can be derived. We use this relation to reproduce and interpret the irregular and complex 1/f response observed in isolated neurons stimulated over days. We decompose the neuronal response into contributions from its long history of internal noise and its short (few minutes) history of inputs, quantifying memory, noise and stability.

Abstract:
Cortical neurons include many sub-cellular processes, operating at multiple timescales, which may affect their response to stimulation through non-linear and stochastic interaction with ion channels and ionic concentrations. Since new processes are constantly being discovered, biophysical neuron models increasingly become "too complex to be useful" yet "too simple to be realistic". A fundamental open question in theoretical neuroscience pertains to how this deadlock may be resolved. In order to tackle this problem, we first define the notion of an "excitable neuron model". Then we analytically derive the input-output relation of such neuronal models, relating input spike trains to output spikes based on known biophysical properties. Thus we obtain closed-form expressions for the mean firing rates and all second order statistics (input-state-output correlations and spectra), and construct optimal linear estimators for the neuronal response and internal state. These results are guaranteed to hold, given a few generic assumptions, for any stochastic biophysical neuron model (with an arbitrary number of slow kinetic processes) under general sparse stimulation. This solution suggests that the common simplifying approach that ignores much of the complexity of the neuron might actually be unnecessary and even deleterious in some cases. Specifically, the stochasticity of ion channels and the temporal sparseness of inputs are exactly what rendered our analysis tractable, allowing us to incorporate slow kinetics.

Abstract:
Neurons fire irregularly on multiple timescales when stimulated with a periodic pulse train. This raises two questions: Does this irregularity imply significant intrinsic stochasticity? Can existing neuron models be readily extended to describe behavior at long timescales? We show here that for commonly studied neuronal models the dynamics are not chaotic and can only produce stable and periodic firing patterns. This is done by transforming the neuron model into an analytically tractable piecewise linear discrete map. Thus we answer "yes" and "no" to the above questions, respectively.

Abstract:
Significant success has been reported recently using deep neural networks for classification. Such large networks can be computationally intensive, even after training is over. Implementing these trained networks in hardware chips with a limited precision of synaptic weights may improve their speed and energy efficiency by several orders of magnitude, thus enabling their integration into small and low-power electronic devices. With this motivation, we develop a computationally efficient learning algorithm for multilayer neural networks with binary weights, assuming all the hidden neurons have a fan-out of one. This algorithm, derived within a Bayesian probabilistic online setting, is shown to work well for both synthetic and real-world problems, performing comparably to algorithms with real-valued weights, while retaining computational tractability.
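To illustrate what "Bayesian online learning of binary weights" means in the simplest setting, the toy sketch below maintains the exact posterior over all 2^d binary weight vectors of a single linear threshold unit and updates it after each labeled example. This exhaustive enumeration is exponential in d and is only feasible for tiny d; the paper's contribution is a tractable approximation of such updates for multilayer networks. The teacher, dimension, label-noise level and sample count here are all illustrative assumptions.

```python
from itertools import product

import numpy as np

rng = np.random.default_rng(1)
d = 5                                             # tiny dimension (assumed)
W = np.array(list(product([-1, 1], repeat=d)))    # all 2^d binary weight vectors
log_post = np.zeros(len(W))                       # uniform prior, log domain

w_true = W[rng.integers(len(W))]                  # hidden teacher weights
eps = 0.05                                        # assumed label-noise rate in the likelihood

for _ in range(200):
    x = rng.choice([-1, 1], size=d)
    y = np.sign(w_true @ x)                       # d odd => the sum is never 0
    match = np.sign(W @ x) == y
    # Bayes update: multiply posterior by P(y | w, x) for every candidate w
    log_post += np.log(np.where(match, 1.0 - eps, eps))
    log_post -= log_post.max()                    # rescale for numerical stability

w_map = W[log_post.argmax()]                      # MAP binary weight vector
print((w_map == w_true).all())
```

After a couple of hundred examples the posterior concentrates on the teacher. The point of the sketch is the update rule, not the enumeration: a practical algorithm must summarize this distribution compactly (e.g., per-weight marginals), which is the kind of approximation the abstract refers to.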