oalib

Search Results: 1 - 10 of 4798 matches for "Wolfgang Maass"
All listed articles are free for downloading (OA Articles)
Ensembles of Spiking Neurons with Noise Support Optimal Probabilistic Inference in a Dynamically Changing Environment
Robert Legenstein, Wolfgang Maass
PLOS Computational Biology, 2014, DOI: 10.1371/journal.pcbi.1003859
Abstract: It has recently been shown that networks of spiking neurons with noise can emulate simple forms of probabilistic inference through “neural sampling”, i.e., by treating spikes as samples from a probability distribution of network states that is encoded in the network. Deficiencies of the existing model are its reliance on single neurons for sampling from each random variable, and the resulting limitation in representing quickly varying probabilistic information. We show that both deficiencies can be overcome by moving to a biologically more realistic encoding of each salient random variable through the stochastic firing activity of an ensemble of neurons. The resulting model demonstrates that networks of spiking neurons with noise can easily track and carry out basic computational operations on rapidly varying probability distributions, such as the odds of getting rewarded for a specific behavior. We demonstrate the viability of this new approach towards neural coding and computation, which makes use of the inherent parallelism of generic neural circuits, by showing that this model can explain experimentally observed firing activity of cortical neurons for a variety of tasks that require rapid temporal integration of sensory information.
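The ensemble code described above can be illustrated with a small numerical sketch (not taken from the paper; the ensemble size, time resolution, and reward-odds profile below are made-up assumptions): each of K neurons in an ensemble fires independently in a time bin with the current probability p(t), so the fraction of active neurons tracks an abruptly changing p(t) within a single bin, whereas a read-out from a single neuron must average over many bins and lags behind.

```python
import numpy as np

rng = np.random.default_rng(0)

# Time-varying target probability, e.g. the odds of reward switching abruptly.
T = 200                      # number of time bins (hypothetical)
p_true = np.where(np.arange(T) < 100, 0.2, 0.8)

K = 100                      # ensemble size: neurons coding the same variable

# Each neuron emits a spike in a bin independently with probability p_true[t].
spikes = rng.random((T, K)) < p_true[:, None]

# Ensemble read-out: the fraction of active neurons per bin tracks p(t) instantly.
p_ensemble = spikes.mean(axis=1)

# Single-neuron read-out: one neuron needs a long running average (here 50 bins),
# so it lags behind the abrupt change at t = 100.
single = spikes[:, 0].astype(float)
p_single = np.convolve(single, np.ones(50) / 50, mode="same")

print("ensemble error     :", np.abs(p_ensemble - p_true).mean())
print("single-neuron error:", np.abs(p_single - p_true).mean())
```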
Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons
Dejan Pecevski, Lars Buesing, Wolfgang Maass
PLOS Computational Biology, 2011, DOI: 10.1371/journal.pcbi.1002294
Abstract: An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, these features enable them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, which occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.
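As a toy illustration of the "explaining away" inference problem mentioned in the abstract, the sketch below runs Gibbs sampling (an abstract stand-in for neural sampling, with made-up noisy-OR parameters) in a minimal Bayesian network with two causes and one observed effect; conditioning on one cause being active lowers the sampled probability of the other.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Bayesian network with "explaining away": two binary causes A, B and an
# observed effect C that follows a noisy-OR. All parameters are made up.
pA, pB = 0.3, 0.3
leak, wA, wB = 0.01, 0.8, 0.8

def p_c1(a, b):
    return 1.0 - (1.0 - leak) * (1.0 - wA) ** a * (1.0 - wB) ** b

def cond_prob(other, c_obs, prior):
    # Exact conditional P(X=1 | other cause, C) for the Gibbs update.
    p1 = prior * p_c1(1, other) if c_obs else prior * (1 - p_c1(1, other))
    p0 = (1 - prior) * p_c1(0, other) if c_obs else (1 - prior) * (1 - p_c1(0, other))
    return p1 / (p1 + p0)

# Gibbs sampling over (A, B) given the observation C = 1; each "spike" of a
# sampling unit corresponds to its variable being 1 in the current state.
a, b = 0, 0
samples = []
for _ in range(20000):
    a = int(rng.random() < cond_prob(b, 1, pA))
    b = int(rng.random() < cond_prob(a, 1, pB))
    samples.append((a, b))

samples = np.array(samples[2000:])          # discard burn-in
print("P(B=1 | C=1)      ~", samples[:, 1].mean())
print("P(B=1 | C=1, A=1) ~", samples[samples[:, 0] == 1, 1].mean())
# The second estimate is lower: observing A=1 "explains away" the effect C.
```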
Computational Aspects of Feedback in Neural Circuits
Wolfgang Maass, Prashant Joshi, Eduardo D. Sontag
PLOS Computational Biology, 2007, DOI: 10.1371/journal.pcbi.0020165
Abstract: It has previously been shown that generic cortical microcircuit models can perform complex real-time computations on continuous input streams, provided that these computations can be carried out with a rapidly fading memory. We investigate the computational capability of such circuits in the more realistic case where not only readout neurons, but in addition a few neurons within the circuit, have been trained for specific tasks. This is essentially equivalent to the case where the output of trained readout neurons is fed back into the circuit. We show that this new model overcomes the limitation of a rapidly fading memory. In fact, we prove that in the idealized case without noise it can carry out any conceivable digital or analog computation on time-varying inputs. But even with noise, the resulting computational model can perform a large class of biologically relevant real-time computations that require a nonfading memory. We demonstrate these computational implications of feedback both theoretically, and through computer simulations of detailed cortical microcircuit models that are subject to noise and have complex inherent dynamics. We show that the application of simple learning procedures (such as linear regression or perceptron learning) to a few neurons enables such circuits to represent time over behaviorally relevant long time spans, to integrate evidence from incoming spike trains over longer periods of time, and to process new information contained in such spike trains in diverse ways according to the current internal state of the circuit. In particular we show that such generic cortical microcircuits with feedback provide a new model for working memory that is consistent with a large set of biological constraints. Although this article examines primarily the computational role of feedback in circuits of neurons, the mathematical principles on which its analysis is based apply to a variety of dynamical systems. Hence they may also throw new light on the computational role of feedback in other complex biological dynamical systems, such as, for example, genetic regulatory networks.
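The role of a trained readout fed back into a generic circuit can be sketched with an echo-state-style toy model (an assumption-laden stand-in for the detailed cortical microcircuit models of the paper): a fixed random recurrent network, one linear readout trained by ridge regression with teacher forcing, and the readout's output fed back into the network. The task of holding the sign of the last input pulse requires a non-fading memory.

```python
import numpy as np

rng = np.random.default_rng(2)

# Echo-state-style sketch; all sizes and rates below are assumptions.
N = 200
W = rng.normal(0, 1, (N, N)) * (rng.random((N, N)) < 0.1)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius to 0.9
w_in = rng.normal(0, 1, N)
w_fb = rng.normal(0, 1, N)

def make_task(T):
    """Sparse +-1 input pulses; target = sign of the most recent pulse."""
    u = np.zeros(T)
    idx = rng.choice(np.arange(10, T), size=T // 50, replace=False)
    u[idx] = rng.choice([-1.0, 1.0], size=len(idx))
    y, last = np.zeros(T), 1.0
    for t in range(T):
        last = u[t] if u[t] != 0 else last
        y[t] = last
    return u, y

# Training with teacher forcing: the *target* signal is fed back.
u, y = make_task(4000)
x, X = np.zeros(N), np.zeros((len(u), N))
for t in range(len(u)):
    fb = y[t - 1] if t > 0 else 0.0
    x = np.tanh(W @ x + w_in * u[t] + w_fb * fb)
    X[t] = x
w_out = np.linalg.solve(X.T @ X + 1e-2 * np.eye(N), X.T @ y)   # ridge regression

# Closed-loop test: the readout's own output is fed back.
u, y = make_task(2000)
x, out, errors = np.zeros(N), 0.0, 0
for t in range(len(u)):
    x = np.tanh(W @ x + w_in * u[t] + w_fb * out)
    out = x @ w_out
    errors += int(np.sign(out) != np.sign(y[t]))
print("misclassified time steps:", errors, "of", len(u))
```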
A Learning Theory for Reward-Modulated Spike-Timing-Dependent Plasticity with Application to Biofeedback
Robert Legenstein, Dejan Pecevski, Wolfgang Maass
PLOS Computational Biology, 2008, DOI: 10.1371/journal.pcbi.1000180
Abstract: Reward-modulated spike-timing-dependent plasticity (STDP) has recently emerged as a candidate for a learning rule that could explain how behaviorally relevant adaptive changes in complex networks of spiking neurons could be achieved in a self-organizing manner through local synaptic plasticity. However, the capabilities and limitations of this learning rule could so far only be tested through computer simulations. This article provides tools for an analytic treatment of reward-modulated STDP, which allows us to predict under which conditions reward-modulated STDP will achieve a desired learning effect. These analytical results imply that neurons can learn through reward-modulated STDP to classify not only spatial but also temporal firing patterns of presynaptic neurons. They can also learn to respond to specific presynaptic firing patterns with particular spike patterns. Finally, the resulting learning theory predicts that even difficult credit-assignment problems, where it is very hard to tell which synaptic weights should be modified in order to increase the global reward for the system, can be solved in a self-organizing manner through reward-modulated STDP. This yields an explanation for a fundamental experimental result on biofeedback in monkeys by Fetz and Baker. In this experiment, monkeys were rewarded for increasing the firing rate of a particular neuron in the cortex and were able to solve this extremely difficult credit-assignment problem. Our model for this experiment relies on a combination of reward-modulated STDP with variable spontaneous firing activity. Hence it also provides a possible functional explanation for trial-to-trial variability, which is characteristic of cortical networks of neurons but has no analogue in currently existing artificial computing systems. In addition, our model demonstrates that reward-modulated STDP can be applied to all synapses in a large recurrent neural network without endangering the stability of the network dynamics.
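A minimal sketch of a reward-modulated STDP rule on a single synapse (illustrative only, not the paper's analytical model; time constants, rates, and the reward signal below are made up): an STDP-shaped eligibility trace is accumulated from pre/post spike pairings and converted into a weight change only when a global reward arrives, here loosely mimicking the Fetz–Baker setup by rewarding postsynaptic spikes.

```python
import numpy as np

rng = np.random.default_rng(3)

dt = 1.0                        # ms
tau_plus = tau_minus = 20.0     # STDP trace time constants (assumed)
tau_e = 500.0                   # eligibility-trace time constant (assumed)
A_plus, A_minus = 1.0, -1.0
eta = 0.01

w = 0.5
x_pre = x_post = 0.0            # low-pass-filtered spike traces
e = 0.0                         # eligibility trace

for t in range(10000):
    pre = rng.random() < 0.02   # ~20 Hz Poisson presynaptic spikes
    # Postsynaptic neuron fires with a probability that grows with w when pre fires.
    post = rng.random() < (0.01 + 0.05 * w * pre)

    x_pre += -dt / tau_plus * x_pre + pre
    x_post += -dt / tau_minus * x_post + post

    # STDP: potentiate on post-after-pre pairings, depress on pre-after-post.
    stdp = A_plus * x_pre * post + A_minus * x_post * pre
    e += -dt / tau_e * e + stdp

    # Sparse reward: delivered whenever the postsynaptic neuron fires,
    # a toy stand-in for the biofeedback task of rewarding a high firing rate.
    reward = 1.0 if post else 0.0
    w = np.clip(w + eta * reward * e, 0.0, 1.0)

print("final weight:", w)
```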
Stochastic Computations in Cortical Microcircuit Models
Stefan Habenschuss, Zeno Jonke, Wolfgang Maass
PLOS Computational Biology, 2013, DOI: 10.1371/journal.pcbi.1003311
Abstract: Experimental data from neuroscience suggest that a substantial amount of knowledge is stored in the brain in the form of probability distributions over network states and trajectories of network states. We provide a theoretical foundation for this hypothesis by showing that even very detailed models for cortical microcircuits, with diverse, data-based nonlinear neurons and synapses, have a stationary distribution of network states and trajectories of network states to which they converge exponentially fast from any initial state. We demonstrate that this convergence holds in spite of the non-reversibility of the stochastic dynamics of cortical microcircuits. We further show that, in the presence of background network oscillations, separate stationary distributions emerge for different phases of the oscillation, in accordance with experimentally reported phase-specific codes. We complement these theoretical results by computer simulations that investigate resulting computation times for typical probabilistic inference tasks on these internally stored distributions, such as marginalization or marginal maximum-a-posteriori estimation. Furthermore, we show that the inherent stochastic dynamics of generic cortical microcircuits enables them to quickly generate approximate solutions to difficult constraint satisfaction problems, where stored knowledge and current inputs jointly constrain possible solutions. This provides a powerful new computing paradigm for networks of spiking neurons, one that also throws new light on how networks of neurons in the brain could carry out complex computational tasks such as prediction, imagination, memory recall and problem solving.
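The idea of querying an internally stored distribution by counting network states can be sketched with simple Glauber dynamics on a small binary network (the paper's microcircuit models are far more detailed and non-reversible; the couplings below are random assumptions): samples drawn after a burn-in give estimates of marginals of the stationary distribution.

```python
import numpy as np

rng = np.random.default_rng(4)

# Small stochastic binary network with random symmetric couplings (assumed).
n = 8
J = rng.normal(0, 0.5, (n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
b = rng.normal(0, 0.5, n)

def glauber_step(s):
    """Resample one randomly chosen unit from its conditional distribution."""
    i = rng.integers(n)
    field = J[i] @ s + b[i]
    s[i] = 1 if rng.random() < 1 / (1 + np.exp(-field)) else 0

# Burn-in towards the stationary distribution.
s = rng.integers(0, 2, n)
for _ in range(5000):
    glauber_step(s)

# Marginalization by counting: estimate P(s_0 = 1) from samples.
count, n_samples = 0, 50000
for _ in range(n_samples):
    glauber_step(s)
    count += s[0]
print("estimated marginal P(s_0 = 1) ~", count / n_samples)
```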
STDP Installs in Winner-Take-All Circuits an Online Approximation to Hidden Markov Model Learning
David Kappel, Bernhard Nessler, Wolfgang Maass
PLOS Computational Biology, 2014, DOI: 10.1371/journal.pcbi.1003511
Abstract: In order to cross a street without being run over, we need to be able to rapidly extract the hidden causes of dynamically changing multi-modal sensory stimuli, and to predict their future evolution. We show here that a generic cortical microcircuit motif, pyramidal cells with lateral excitation and inhibition, provides the basis for this difficult but all-important information processing capability. This capability emerges automatically in the presence of noise through the effects of STDP on connections between pyramidal cells in Winner-Take-All circuits with lateral excitation. In fact, one can show that these motifs endow cortical microcircuits with functional properties of a hidden Markov model, a generic model for solving such tasks through probabilistic inference. Whereas in engineering applications this model is adapted to specific tasks through offline learning, we show here that a major portion of the functionality of hidden Markov models arises already from online applications of STDP, without any supervision or rewards. We demonstrate the emergent computing capabilities of the model through several computer simulations. The full power of hidden Markov model learning can be attained through reward-gated STDP, because this mechanism enables a rejection-sampling approximation to theoretically optimal learning. We investigate the possible performance gain that can be achieved with this more accurate learning method for an artificial grammar task.
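A rough functional analogue of the capability described above (not the neural STDP implementation itself, and with made-up parameters) is online learning of a hidden Markov model: a soft winner-take-all over hidden states performs forward filtering, and Hebbian-style count updates between the winning state, the previous winner, and the current observation adapt transition and emission probabilities online. At best this crude sketch recovers the structure up to a relabelling of the hidden states.

```python
import numpy as np

rng = np.random.default_rng(5)

H, O = 2, 3                            # hidden states, observation symbols

# Ground-truth HMM, used only to generate the observation stream.
A_true = np.array([[0.9, 0.1], [0.2, 0.8]])
B_true = np.array([[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]])

# Learned model, initialised near uniform.
A = np.full((H, H), 1.0 / H)
B = np.full((H, O), 1.0 / O)
belief = np.full(H, 1.0 / H)
z, lr = 0, 0.01

for t in range(50000):
    # Generate the next observation from the ground-truth HMM.
    z = rng.choice(H, p=A_true[z])
    o = rng.choice(O, p=B_true[z])

    # Forward filtering / soft WTA: posterior over hidden states.
    prior = belief @ A
    post = prior * B[:, o]
    post /= post.sum()

    # Sample a "winner" state (the unit that spikes) and update counts.
    k = rng.choice(H, p=post)
    prev = int(np.argmax(belief))
    A[prev] += lr * (np.eye(H)[k] - A[prev])       # transition update
    B[k] += lr * (np.eye(O)[o] - B[k])             # emission update
    belief = post

print("learned emission probabilities:\n", B.round(2))
```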
Perpendicular magnetic anisotropy in nanoclusters caused by surface segregation and shape
Stefan Heinrichs, Wolfgang Dieterich, Philipp Maass
Physics, 2005
Abstract: The growth of binary alloy clusters on a weakly interacting substrate through codeposition of two atomic species is studied by kinetic Monte Carlo simulation. Our model describes salient features of CoPt$_3$ nanoclusters, as obtained recently by the molecular-beam epitaxy technique. The clusters display perpendicular magnetic anisotropy (PMA) in a temperature window of growth favorable for applications. This temperature window is found to arise from the interplay of Pt surface segregation and the aspect ratio of the cluster shapes. Conclusions are drawn on how to optimize growth parameters with respect to PMA.
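A heavily simplified kinetic Monte Carlo sketch of codeposition (not the paper's CoPt$_3$ model; the fluxes, hop rate, and irreversible-attachment rule are assumptions, and segregation and magnetic anisotropy are omitted) illustrates the basic event loop of deposition and adatom diffusion:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two species deposited with total flux F onto an L x L lattice; free adatoms
# hop with rate D and attach irreversibly once they touch an occupied neighbor.
L, F, D = 50, 0.01, 100.0
lattice = np.zeros((L, L), dtype=int)   # 0 empty, 1 species A, 2 species B
mobile = []                              # positions of free adatoms

def neighbors(x, y):
    return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

target, deposited = 0.1 * L * L, 0       # stop at 10% coverage
while deposited < target:
    r_dep, r_hop = F * L * L, D * len(mobile)
    if rng.random() < r_dep / (r_dep + r_hop):
        # Deposition of a randomly chosen species on a random empty site.
        x, y = rng.integers(L), rng.integers(L)
        if lattice[x, y] == 0:
            lattice[x, y] = rng.choice([1, 2])
            mobile.append((x, y))
            deposited += 1
    else:
        # Hop of a randomly chosen free adatom to a random empty neighbor.
        i = rng.integers(len(mobile))
        x, y = mobile[i]
        nx, ny = neighbors(x, y)[rng.integers(4)]
        if lattice[nx, ny] == 0:
            lattice[nx, ny], lattice[x, y] = lattice[x, y], 0
            mobile[i] = (nx, ny)
    # Freeze adatoms that now have an occupied neighbor (irreversible attachment).
    mobile = [(x, y) for (x, y) in mobile
              if not any(lattice[n] != 0 for n in neighbors(x, y))]

frozen = int((lattice != 0).sum()) - len(mobile)
print(f"coverage {deposited / L**2:.2f}: {frozen} atoms bound in islands, {len(mobile)} free adatoms")
```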
Colloquium: Cluster growth on surfaces - densities, size distributions and morphologies
Mario Einax, Wolfgang Dieterich, Philipp Maass
Physics, 2014, DOI: 10.1103/RevModPhys.85.921
Abstract: Understanding and control of cluster and thin film growth on solid surfaces is a subject of intensive research to develop nanomaterials with new physical properties. In this Colloquium we review basic theoretical concepts to describe submonolayer growth kinetics under non-equilibrium conditions. It is shown how these concepts can be extended and further developed to treat self-organized cluster formation in material systems of current interest, such as nanoalloys and molecular clusters in organic thin film growth. The presentation is focused on ideal flat surfaces to limit the scope and to discuss key ideas in a transparent way. Open experimental and theoretical challenges are pointed out.
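The textbook mean-field rate equations for irreversible submonolayer island nucleation (critical nucleus size i = 1, capture numbers set to 1; a toy version of the theory reviewed in the Colloquium, not its refined form) can be integrated in a few lines and reproduce the classical scaling of the island density N with D/F:

```python
# dn1/dt = F - 2 D n1^2 - D n1 N      (adatom density)
# dN/dt  = D n1^2                     (island density)
def island_density(D_over_F, coverage=0.1, F=1.0, steps=200000):
    D = D_over_F * F
    dt = coverage / F / steps           # integrate up to the given coverage
    n1 = N = 0.0
    for _ in range(steps):
        dn1 = F - 2 * D * n1 * n1 - D * n1 * N
        dN = D * n1 * n1
        n1 += dt * dn1
        N += dt * dN
    return N

for DF in (1e5, 1e6, 1e7):
    print(f"D/F = {DF:.0e}:  N = {island_density(DF):.3e}")
# N scales roughly as (D/F)^(-1/3), the classical mean-field prediction.
```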
Self-consistent rate theory for submonolayer surface growth of multi-component systems
Mario Einax, Philipp Maass, Wolfgang Dieterich
Physics, 2014, DOI: 10.1103/PhysRevB.90.035441
Abstract: The self-consistent rate theory for surface growth in the submonolayer regime is generalized from mono- to multi-component systems, which are formed by codeposition of different types of atoms or molecules. As a new feature, the theory requires the introduction of pair density distributions to enable a symmetric treatment of reactions among different species. The approach is explicitly developed for binary systems and tested against kinetic Monte Carlo simulations. Using a reduced set of rate equations, only a few differential equations need to be solved to obtain good quantitative predictions for island and adatom densities, as well as densities of unstable clusters.
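A toy symmetric binary extension of those rate equations (not the paper's full self-consistent theory, which tracks pair densities and species-resolved capture numbers) is sketched below; both species are deposited with flux F/2, and dimers nucleate from AA, AB, and BB encounters:

```python
def binary_island_density(D_over_F, coverage=0.1, F=1.0, steps=200000):
    D = D_over_F * F
    dt = coverage / F / steps
    nA = nB = N = 0.0
    for _ in range(steps):
        # Nucleation from AA, AB and BB encounters (capture numbers set to 1).
        nuc = D * (nA * nA + nA * nB + nB * nB)
        # Each species is lost to nucleation and to capture by existing islands.
        dnA = 0.5 * F - D * nA * (2 * nA + nB) - D * nA * N
        dnB = 0.5 * F - D * nB * (2 * nB + nA) - D * nB * N
        nA += dt * dnA
        nB += dt * dnB
        N += dt * nuc
    return N

for DF in (1e5, 1e6, 1e7):
    print(f"D/F = {DF:.0e}:  N = {binary_island_density(DF):.3e}")
```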
Wall induced density profiles and density correlations in confined Takahashi lattice gases
Joachim Buschle, Philipp Maass, Wolfgang Dieterich
Physics, 1999, DOI: 10.1023/A:1018652808652
Abstract: We propose a general formalism to study the static properties of a system composed of particles with nearest neighbor interactions that are located on the sites of a one-dimensional lattice confined by walls ("confined Takahashi lattice gas"). Linear recursion relations for generalized partition functions are derived, from which thermodynamic quantities, as well as density distributions and correlation functions of arbitrary order can be determined in the presence of an external potential. Explicit results for density profiles and pair correlations near a wall are presented for various situations. As a special case of the Takahashi model we consider in particular the hard rod lattice gas, for which a system of nonlinear coupled difference equations for the occupation probabilities has been presented previously by Robledo and Varea. A solution of these equations is given in terms of the solution of a system of independent linear equations. Moreover, for zero external potential in the hard rod system we specify various central regions between the confining walls, where the occupation probabilities are constant and the correlation functions are translationally invariant in the canonical ensemble. In the grand canonical ensemble such regions do not exist.
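The recursion idea can be illustrated with a short transfer-matrix computation of the exact density profile of a one-dimensional lattice gas with nearest-neighbor interaction between two walls (the interaction strength, chemical potential, and wall potential below are illustrative values, not taken from the paper):

```python
import numpy as np

# Occupation probabilities from linear recursions for partial partition functions.
M = 20                    # lattice sites between the confining walls
beta = 1.0
eps = 2.0                 # nearest-neighbor repulsion (eps -> infinity: hard rods)
mu = 1.0                  # chemical potential
V = np.zeros(M)
V[0] = V[-1] = -0.5       # weak attraction to the walls, as an example

w = np.exp(-beta * (V - mu))                    # weight of an occupied site
bond = np.array([[1.0, 1.0],
                 [1.0, np.exp(-beta * eps)]])   # bond weight for states (s', s)

# Left partial partition functions: ZL[i, s] sums over sites 0..i with s_i = s.
ZL = np.zeros((M, 2))
ZL[0] = [1.0, w[0]]
for i in range(1, M):
    for s in (0, 1):
        ZL[i, s] = (w[i] if s else 1.0) * (ZL[i - 1] @ bond[:, s])

# Right partial partition functions: ZR[i, s] sums over sites i..M-1 with s_i = s.
ZR = np.zeros((M, 2))
ZR[-1] = [1.0, w[-1]]
for i in range(M - 2, -1, -1):
    for s in (0, 1):
        ZR[i, s] = (w[i] if s else 1.0) * (bond[s] @ ZR[i + 1])

# Glue left and right parts with s_i fixed; divide out the doubly counted site weight.
site_w = np.stack([np.ones(M), w], axis=1)
joint = ZL * ZR / site_w
rho = joint[:, 1] / joint.sum(axis=1)
print(np.round(rho, 3))   # density profile across the confined lattice
```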