
Search Results: 1 - 10 of 300896 matches for "Karl J. Friston"
All listed articles are free for downloading (OA Articles)
Page 1 /300896
Free-Energy and Illusions: The Cornsweet Effect
Harriet Brown, Karl J. Friston
Frontiers in Psychology, 2012, DOI: 10.3389/fpsyg.2012.00043
Abstract: In this paper, we review the nature of illusions using the free-energy formulation of Bayesian perception. We reiterate the notion that illusory percepts are, in fact, Bayes-optimal and represent the most likely explanation for ambiguous sensory input. This point is illustrated using perhaps the simplest of visual illusions; namely, the Cornsweet effect. By using plausible prior beliefs about the spatial gradients of illuminance and reflectance in visual scenes, we show that the Cornsweet effect emerges as a natural consequence of Bayes-optimal perception. Furthermore, we were able to simulate the appearance of secondary illusory percepts (Mach bands) as a function of stimulus contrast. The contrast-dependent emergence of the Cornsweet effect and subsequent appearance of Mach bands were simulated using a simple but plausible generative model. Because our generative model was inverted using a neurobiologically plausible scheme, we could use the inversion as a simulation of neuronal processing and implicit inference. Finally, we were able to verify the qualitative and quantitative predictions of this Bayes-optimal simulation psychophysically, using stimuli presented briefly to normal subjects at different contrast levels, in the context of a fixed alternative forced choice paradigm.
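The core claim — that the illusory percept is the Bayes-optimal explanation of ambiguous input — can be illustrated with a toy posterior. A minimal sketch (the priors and likelihoods below are illustrative numbers, not the paper's generative model): when two causes explain the same sensory data equally well, the prior over causes decides the percept.

```python
# Two candidate causes of an ambiguous luminance edge: a step change in
# surface reflectance vs a smooth gradient of illumination. Values are
# hypothetical, chosen only to illustrate Bayes-optimal inference.
def posterior(likelihoods, priors):
    """Posterior over causes by Bayes' rule (normalised joint)."""
    joint = [l * p for l, p in zip(likelihoods, priors)]
    z = sum(joint)
    return [j / z for j in joint]

like = [0.5, 0.5]    # both causes fit the sensory input equally well
prior = [0.7, 0.3]   # scenes change in reflectance more often than illuminance
post = posterior(like, prior)
# The reflectance explanation wins, so a reflectance step is perceived --
# an "illusion" that is nevertheless the most likely explanation.
```

Under equal likelihoods the posterior simply reproduces the prior, which is the sense in which the illusory percept is optimal rather than erroneous.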
Free Energy and Dendritic Self-Organization
Stefan J. Kiebel, Karl J. Friston
Frontiers in Systems Neuroscience, 2011, DOI: 10.3389/fnsys.2011.00080
Abstract: In this paper, we pursue recent observations that, through selective dendritic filtering, single neurons respond to specific sequences of presynaptic inputs. We try to provide a principled and mechanistic account of this selectivity by applying a recent free-energy principle to a dendrite that is immersed in its neuropil or environment. We assume that neurons self-organize to minimize a variational free-energy bound on the self-information or surprise of presynaptic inputs that are sampled. We model this as a selective pruning of dendritic spines that are expressed on a dendritic branch. This pruning occurs when postsynaptic gain falls below a threshold. Crucially, postsynaptic gain is itself optimized with respect to free energy. Pruning suppresses free energy as the dendrite selects presynaptic signals that conform to its expectations, specified by a generative model implicit in its intracellular kinetics. Not only does this provide a principled account of how neurons organize and selectively sample the myriad of potential presynaptic inputs they are exposed to, but it also connects the optimization of elemental neuronal (dendritic) processing to generic (surprise or evidence-based) schemes in statistics and machine learning, such as Bayesian model selection and automatic relevance determination.
Topological inference for EEG and MEG
James M. Kilner, Karl J. Friston
Statistics, 2010, DOI: 10.1214/10-AOAS337
Abstract: Neuroimaging produces data that are continuous in one or more dimensions. This calls for an inference framework that can handle data that approximate functions of space, for example, anatomical images, time-frequency maps and distributed source reconstructions of electromagnetic recordings over time. Statistical parametric mapping (SPM) is the standard framework for whole-brain inference in neuroimaging: SPM uses random field theory to furnish p-values that are adjusted to control family-wise error or false discovery rates, when making topological inferences over large volumes of space. Random field theory regards data as realizations of a continuous process in one or more dimensions. This contrasts with classical approaches like the Bonferroni correction, which consider images as collections of discrete samples with no continuity properties (i.e., the probabilistic behavior at one point in the image does not depend on other points). Here, we illustrate how random field theory can be applied to data that vary as a function of time, space or frequency. We emphasize how topological inference of this sort is invariant to the geometry of the manifolds on which data are sampled. This is particularly useful in electromagnetic studies that often deal with very smooth data on scalp or cortical meshes. This application illustrates the versatility and simplicity of random field theory and the seminal contributions of Keith Worsley (1951–2009), a key architect of topological inference.
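The discrete-sample view that the abstract contrasts with random field theory can be made concrete. A minimal sketch of the Bonferroni correction (illustrative only; this is not SPM's random-field procedure):

```python
import numpy as np

def bonferroni_threshold(p_values, alpha=0.05):
    """Classical family-wise error control: every voxel/sample is treated
    as an independent discrete test, ignoring spatial continuity."""
    m = len(p_values)
    return p_values < (alpha / m)   # boolean mask of surviving tests

rng = np.random.default_rng(0)
p = rng.uniform(size=1000)              # null p-values for 1000 "voxels"
significant = bonferroni_threshold(p)   # very severe: alpha/1000 per test
```

On smooth images, random field theory instead counts the effective number of independent observations (resels), so its adjusted threshold is far less severe than alpha/m — which is exactly why it suits the very smooth scalp and cortical-mesh data discussed above.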
Canonical Source Reconstruction for MEG
Jérémie Mattout, Richard N. Henson, Karl J. Friston
Computational Intelligence and Neuroscience, 2007, DOI: 10.1155/2007/67613
Abstract: We describe a simple and efficient solution to the problem of reconstructing electromagnetic sources into a canonical or standard anatomical space. Its simplicity rests upon incorporating subject-specific anatomy into the forward model in a way that eschews the need for cortical surface extraction. The forward model starts with a canonical cortical mesh, defined in a standard stereotactic space. The mesh is warped, in a nonlinear fashion, to match the subject's anatomy. This warping is the inverse of the transformation derived from spatial normalization of the subject's structural MRI image, using fully automated procedures that have been established for other imaging modalities. Electromagnetic lead fields are computed using the warped mesh, in conjunction with a spherical head model (which does not rely on individual anatomy). The ensuing forward model is inverted using an empirical Bayesian scheme that we have described previously in several publications. Critically, because anatomical information enters the forward model, there is no need to spatially normalize the reconstructed source activity. In other words, each source, comprising the mesh, has a predetermined and unique anatomical attribution within standard stereotactic space. This enables the pooling of data from multiple subjects and the reporting of results in stereotactic coordinates. Furthermore, it allows the graceful fusion of fMRI and MEG data within the same anatomical framework.
A Hierarchy of Time-Scales and the Brain
Stefan J. Kiebel, Jean Daunizeau, Karl J. Friston
PLOS Computational Biology, 2008, DOI: 10.1371/journal.pcbi.1000209
Abstract: In this paper, we suggest that cortical anatomy recapitulates the temporal hierarchy that is inherent in the dynamics of environmental states. Many aspects of brain function can be understood in terms of a hierarchy of temporal scales at which representations of the environment evolve. The lowest level of this hierarchy corresponds to fast fluctuations associated with sensory processing, whereas the highest levels encode slow contextual changes in the environment, under which faster representations unfold. First, we describe a mathematical model that exploits the temporal structure of fast sensory input to track the slower trajectories of their underlying causes. This model of sensory encoding or perceptual inference establishes a proof of concept that slowly changing neuronal states can encode the paths or trajectories of faster sensory states. We then review empirical evidence that suggests that a temporal hierarchy is recapitulated in the macroscopic organization of the cortex. This anatomic-temporal hierarchy provides a comprehensive framework for understanding cortical function: the specific time-scale that engages a cortical area can be inferred by its location along a rostro-caudal gradient, which reflects the anatomical distance from primary sensory areas. This is most evident in the prefrontal cortex, where complex functions can be explained as operations on representations of the environment that change slowly. The framework provides predictions about, and principled constraints on, cortical structure–function relationships, which can be tested by manipulating the time-scales of sensory input.
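The proof of concept — slow states encoding the trajectories of faster ones — can be caricatured in a few lines. A toy simulation (not the paper's model, which couples hierarchies of nonlinear attractors): a slowly drifting hidden cause sets the instantaneous time-scale of a fast sensory signal.

```python
import numpy as np

rng = np.random.default_rng(1)
T, dt = 2000, 0.01

slow = np.zeros(T)   # contextual level: slow random walk
fast = np.zeros(T)   # sensory level: oscillation whose frequency it controls
phase = 0.0
for t in range(1, T):
    slow[t] = slow[t - 1] + 0.05 * dt * rng.standard_normal()
    freq = 5.0 + 2.0 * np.tanh(slow[t])   # slow state sets the fast time-scale
    phase += 2 * np.pi * freq * dt
    fast[t] = np.sin(phase) + 0.1 * rng.standard_normal()
```

The slow trajectory changes over many seconds while the fast one changes every few samples; an observer who tracks `slow` has, implicitly, a compact representation of how `fast` will unfold — the sense in which higher levels of the hierarchy contextualize lower ones.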
Game Theory of Mind
Wako Yoshida, Ray J. Dolan, Karl J. Friston
PLOS Computational Biology, 2008, DOI: 10.1371/journal.pcbi.1000254
Abstract: This paper introduces a model of ‘theory of mind’, namely, how we represent the intentions and goals of others to optimise our mutual interactions. We draw on ideas from optimum control and game theory to provide a ‘game theory of mind’. First, we consider the representations of goals in terms of value functions that are prescribed by utility or rewards. Critically, the joint value functions and ensuing behaviour are optimised recursively, under the assumption that I represent your value function, your representation of mine, your representation of my representation of yours, and so on ad infinitum. However, if we assume that the degree of recursion is bounded, then players need to estimate the opponent's degree of recursion (i.e., sophistication) to respond optimally. This induces a problem of inferring the opponent's sophistication, given behavioural exchanges. We show it is possible to deduce whether players make inferences about each other and quantify their sophistication on the basis of choices in sequential games. This rests on comparing generative models of choices with, and without, inference. Model comparison is demonstrated using simulated and real data from a ‘stag-hunt’. Finally, we note that exactly the same sophisticated behaviour can be achieved by optimising the utility function itself (through prosocial utility), producing unsophisticated but apparently altruistic agents. This may be relevant ethologically in hierarchical game theory and coevolution.
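Bounded recursion of this kind is often formalised as level-k reasoning. A hypothetical sketch for a one-shot stag hunt (the payoffs and the level-0 rule are illustrative; this is not the paper's Bayesian inference scheme):

```python
import numpy as np

# Row player's payoffs in a one-shot stag hunt (illustrative numbers):
# hunting stag pays most if the partner cooperates, hare is safe.
payoff = np.array([[4, 0],    # I hunt stag: 4 if you do too, 0 otherwise
                   [3, 3]])   # I hunt hare: 3 regardless

def best_response(opponent_probs):
    """Best reply given beliefs about the opponent's action."""
    return int(np.argmax(payoff @ opponent_probs))

def level_k_action(k):
    """Level-0 best-responds to uniform beliefs; level-k best-responds to
    a level-(k-1) opponent -- a bounded 'I think that you think...'."""
    if k == 0:
        return best_response(np.array([0.5, 0.5]))
    beliefs = np.zeros(2)
    beliefs[level_k_action(k - 1)] = 1.0
    return best_response(beliefs)
```

With these payoffs the safe hare option is risk-dominant, so every level of sophistication best-responds with hare; coordinating on the stag requires confident beliefs about the partner's intentions — precisely the kind of inference about sophistication the abstract describes.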
Reinforcement Learning or Active Inference?
Karl J. Friston, Jean Daunizeau, Stefan J. Kiebel
PLOS ONE, 2009, DOI: 10.1371/journal.pone.0006421
Abstract: This paper questions the need for reinforcement learning or control theory when optimising behaviour. We show that it is fairly simple to teach an agent complicated and adaptive behaviours using a free-energy formulation of perception. In this formulation, agents adjust their internal states and sampling of the environment to minimize their free-energy. Such agents learn causal structure in the environment and sample it in an adaptive and self-supervised fashion. This results in behavioural policies that reproduce those optimised by reinforcement learning and dynamic programming. Critically, we do not need to invoke the notion of reward, value or utility. We illustrate these points by solving a benchmark problem in dynamic programming; namely the mountain-car problem, using active perception or inference under the free-energy principle. The ensuing proof-of-concept may be important because the free-energy formulation furnishes a unified account of both action and perception and may speak to a reappraisal of the role of dopamine in the brain.
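The benchmark itself is easy to state. A sketch of the classic mountain-car dynamics in the standard Sutton-and-Barto formulation (this is the benchmark environment, not the paper's active-inference agent):

```python
import math

def mountain_car_step(x, v, a):
    """One step of the classic mountain-car dynamics: a weak engine
    (action a in {-1, 0, +1}) and gravity along a cosine-shaped valley."""
    v = v + 0.001 * a - 0.0025 * math.cos(3 * x)
    v = max(-0.07, min(0.07, v))        # velocity bounds
    x = max(-1.2, min(0.6, x + v))      # position bounds; goal is x >= 0.5
    return x, v

# A naive "always push right" policy fails: the engine is too weak to
# climb directly, so the car must first swing backwards to gain momentum.
x, v = -0.5, 0.0
for _ in range(200):
    x, v = mountain_car_step(x, v, +1)
```

What makes the problem a good test case is exactly this need for a counter-intuitive policy (move away from the goal first), which the paper's agents acquire without any reward function.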
Predictive Coding or Evidence Accumulation? False Inference and Neuronal Fluctuations
Guido Hesselmann, Sepideh Sadaghiani, Karl J. Friston, Andreas Kleinschmidt
PLOS ONE, 2012, DOI: 10.1371/journal.pone.0009926
Abstract: Perceptual decisions can be made when sensory input affords an inference about what generated that input. Here, we report findings from two independent perceptual experiments conducted during functional magnetic resonance imaging (fMRI) with a sparse event-related design. The first experiment, in the visual modality, involved forced-choice discrimination of coherence in random dot kinematograms that contained either subliminal or periliminal motion coherence. The second experiment, in the auditory domain, involved free response detection of (non-semantic) near-threshold acoustic stimuli. We analysed fluctuations in ongoing neural activity, as indexed by fMRI, and found that neuronal activity in sensory areas (extrastriate visual and early auditory cortex) biases perceptual decisions towards correct inference and not towards a specific percept. Hits (detection of near-threshold stimuli) were preceded by significantly higher activity than both misses of identical stimuli or false alarms, in which percepts arise in the absence of appropriate sensory input. In accord with predictive coding models and the free-energy principle, this observation suggests that cortical activity in sensory brain areas reflects the precision of prediction errors and not just the sensory evidence or prediction errors per se.
Information and Efficiency in the Nervous System—A Synthesis
Biswa Sengupta, Martin B. Stemmler, Karl J. Friston
PLOS Computational Biology, 2013, DOI: 10.1371/journal.pcbi.1003157
Abstract: In systems biology, questions concerning the molecular and cellular makeup of an organism are of utmost importance, especially when trying to understand how unreliable components—like genetic circuits, biochemical cascades, and ion channels, among others—enable reliable and adaptive behaviour. The repertoire and speed of biological computations are limited by thermodynamic or metabolic constraints: an example can be found in neurons, where fluctuations in biophysical states limit the information they can encode—with almost 20–60% of the total energy allocated for the brain used for signalling purposes, either via action potentials or by synaptic transmission. Here, we consider the imperatives for neurons to optimise computational and metabolic efficiency, wherein benefits and costs trade-off against each other in the context of self-organised and adaptive behaviour. In particular, we try to link information theoretic (variational) and thermodynamic (Helmholtz) free-energy formulations of neuronal processing and show how they are related in a fundamental way through a complexity minimisation lemma.
A Bayesian Foundation for Individual Learning Under Uncertainty
Christoph Mathys, Jean Daunizeau, Karl J. Friston, Klaas E. Stephan
Frontiers in Human Neuroscience, 2011, DOI: 10.3389/fnhum.2011.00039
Abstract: Computational learning models are critical for understanding mechanisms of adaptive behavior. However, the two major current frameworks, reinforcement learning (RL) and Bayesian learning, both have certain limitations. For example, many Bayesian models are agnostic of inter-individual variability and involve complicated integrals, making online learning difficult. Here, we introduce a generic hierarchical Bayesian framework for individual learning under multiple forms of uncertainty (e.g., environmental volatility and perceptual uncertainty). The model assumes Gaussian random walks of states at all but the first level, with the step size determined by the next highest level. The coupling between levels is controlled by parameters that shape the influence of uncertainty on learning in a subject-specific fashion. Using variational Bayes under a mean-field approximation and a novel approximation to the posterior energy function, we derive trial-by-trial update equations which (i) are analytical and extremely efficient, enabling real-time learning, (ii) have a natural interpretation in terms of RL, and (iii) contain parameters representing processes which play a key role in current theories of learning, e.g., precision-weighting of prediction error. These parameters allow for the expression of individual differences in learning and may relate to specific neuromodulatory mechanisms in the brain. Our model is very general: it can deal with both discrete and continuous states and equally accounts for deterministic and probabilistic relations between environmental events and perceptual states (i.e., situations with and without perceptual uncertainty). These properties are illustrated by simulations and analyses of empirical time series. Overall, our framework provides a novel foundation for understanding normal and pathological learning that contextualizes RL within a generic Bayesian scheme and thus connects it to principles of optimality from probability theory.
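The generative model can be sketched forwards as a simulation (this is only the hypothesised data-generating process, not the variational inversion the paper derives; parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 500
kappa, omega, theta = 1.0, -4.0, 0.01   # illustrative coupling parameters

x3 = np.zeros(n_trials)   # volatility level: random walk with fixed step size
x2 = np.zeros(n_trials)   # tendency level: step size set by the level above
for t in range(1, n_trials):
    x3[t] = x3[t - 1] + np.sqrt(theta) * rng.standard_normal()
    step = np.exp(kappa * x3[t] + omega)   # variance from next-highest level
    x2[t] = x2[t - 1] + np.sqrt(step) * rng.standard_normal()

p = 1.0 / (1.0 + np.exp(-x2))             # trial-wise outcome probability
outcomes = rng.uniform(size=n_trials) < p  # observed binary events
```

When `x3` drifts upwards, the environment becomes volatile and `x2` jumps around; the subject-specific parameters (here `kappa`, `omega`, `theta`) govern how strongly that volatility should modulate learning, which is what the trial-by-trial update equations estimate.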

Copyright © 2008-2017 Open Access Library. All rights reserved.