
Search Results: 1 - 10 of 10540 matches for "Jonathan Grainger"
All listed articles are free for downloading (OA Articles)
The Lazy Visual Word Form Area: Computational Insights into Location-Sensitivity
Thomas Hannagan, Jonathan Grainger
PLOS Computational Biology , 2013, DOI: 10.1371/journal.pcbi.1003250
Abstract: In a recent study, Rauschecker et al. convincingly demonstrate that visual words evoke neural activation signals in the Visual Word Form Area (VWFA) that can be classified according to where the words were presented in the visual field. This result goes against the prevailing consensus and demands an explanation. We show that one of the simplest possible models of word recognition, a multilayer feedforward network, exhibits precisely the same behavior when trained to recognize words at different locations. The model suggests that the VWFA starts out with information about location, and that this information is suppressed during reading acquisition only as much as is needed to achieve location-invariant word recognition. Some new interpretations of Rauschecker et al.'s results are proposed, and three specific predictions are derived to be tested in further studies.
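As an illustration of the modelling logic summarized in this abstract, the sketch below is not the authors' code: the network size, toy stimuli, and training settings are assumptions. It trains a small feedforward network purely on word identification across locations, then checks that stimulus location can still be decoded from the hidden layer.

```python
# Minimal sketch, assuming toy stimuli: a feedforward net trained only to identify words,
# whose hidden layer nonetheless retains linearly decodable location information.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_words, n_locations, slot_dim = 20, 4, 30                  # assumed toy dimensions
words = rng.standard_normal((n_words, slot_dim)).astype(np.float32)

def make_input(w, loc):
    """Place the pattern for word w into one of n_locations retinotopic slots."""
    x = np.zeros(n_locations * slot_dim, dtype=np.float32)
    x[loc * slot_dim:(loc + 1) * slot_dim] = words[w]
    return x

X = np.stack([make_input(w, l) for w in range(n_words) for l in range(n_locations)])
y_word = np.array([w for w in range(n_words) for _ in range(n_locations)], dtype=np.int64)
y_loc = np.array([l for _ in range(n_words) for l in range(n_locations)], dtype=np.int64)

hidden = nn.Sequential(nn.Linear(X.shape[1], 50), nn.ReLU())
readout = nn.Linear(50, n_words)
opt = torch.optim.Adam(list(hidden.parameters()) + list(readout.parameters()), lr=1e-2)
Xt, yt = torch.from_numpy(X), torch.from_numpy(y_word)
for _ in range(500):                                        # train word identification only
    opt.zero_grad()
    nn.functional.cross_entropy(readout(hidden(Xt)), yt).backward()
    opt.step()

# Location was never a training target, yet it remains decodable from the hidden layer.
H = hidden(Xt).detach().numpy()
print("location decoding accuracy:",
      LogisticRegression(max_iter=1000).fit(H, y_loc).score(H, y_loc))
```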
Deciphering CAPTCHAs: What a Turing Test Reveals about Human Cognition
Thomas Hannagan, Maria Ktori, Myriam Chanceaux, Jonathan Grainger
PLOS ONE , 2012, DOI: 10.1371/journal.pone.0032121
Abstract: Turning Turing's logic on its head, we used widespread letter-based Turing Tests found on the internet (CAPTCHAs) to shed light on human cognition. We examined the basis of the human ability to solve CAPTCHAs, where machines fail. We asked whether this is due to our use of slow-acting inferential processes that would not be available to machines, or whether fast-acting automatic orthographic processing in humans has superior robustness to shape variations. A masked priming lexical decision experiment revealed efficient processing of CAPTCHA words in conditions that rule out the use of slow inferential processing. This shows that the human superiority in solving CAPTCHAs builds on a high degree of invariance to location and continuous transforms, which is achieved during the very early stages of visual word recognition in skilled readers.
The pupillary light response reflects eye-movement preparation
Sebastiaan Mathôt, Lotje van der Linden, Jonathan Grainger, Françoise Vitu
PeerJ , 2015, DOI: 10.7287/peerj.preprints.238v2
Abstract: When the eyes are exposed to an increased influx of light, the pupils constrict. The pupillary light response (PLR) is traditionally believed to be purely reflexive and not susceptible to cognitive influences. In contrast to this traditional view, we report here that preparation of a PLR occurs in parallel with preparation of a saccadic eye movement towards a bright (or dark) stimulus, even before the eyes set in motion. Participants fixated a central gray area and made a saccade towards a peripheral target. Using gaze-contingent display changes, we manipulated whether or not the brightness of the target background was the same during and after saccade preparation. More specifically, on some trials we changed the brightness of the target background as soon as the eyes set in motion, thus dissociating the preparatory PLR (i.e. to the brightness of the target background before the saccade) from the 'regular' PLR (i.e. to the brightness after the saccade). We show that a PLR to the brightness of the to-be-fixated target background is prepared before the eyes set in motion. This reduces the latency of the PLR by approximately 100 ms. We link our findings to the pre-saccadic shift of attention: The pupil prepares to adjust its size to the brightness of a to-be-fixated stimulus as soon as attention covertly shifts towards that stimulus, about 100 ms before a saccade is executed. Our findings illustrate that the PLR is a dynamic movement that is tightly linked to visual attention and eye-movement preparation.
Mindfulness as a Factor in the Relationship between Insecure Attachment Style, Neurotic Personality and Disordered Eating Behavior
Aileen Pidgeon, Alexandra Grainger
Open Journal of Medical Psychology (OJMP) , 2013, DOI: 10.4236/ojmp.2013.24B005
Abstract: Mindfulness, conceptualized as a dispositional trait that differs across individuals, may influence disordered eating behaviors. Previous research has independently identified insecure attachment style and neurotic personality traits as correlates of disordered eating behavior. The current study therefore investigated whether neurotic personality traits, insecure attachment style, and mindfulness predict disordered eating behavior while controlling for gender differences. Participants (N = 126) completed the Adult Attachment Scale [1], the Three Factor Eating Questionnaire – Revised 18 [2], the Cognitive and Affective Mindfulness Scale – Revised [3], and the International Personality Item Pool [4]. The results of this cross-sectional study indicated that neurotic personality traits, insecure attachment style, and mindfulness were all related to disordered eating behaviors. The variance in disordered eating behaviors accounted for by neurotic personality traits and insecure attachment style was significantly reduced when mindfulness was introduced. These results provide preliminary support for including mindfulness training in disordered-eating interventions for individuals exhibiting an insecure attachment style and neurotic personality traits. Limitations and implications for further research are discussed.
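As a rough illustration of the hierarchical-regression logic described in this abstract, the sketch below is not the authors' analysis script; the data file and column names are hypothetical. It fits the model with and without mindfulness to see how the explained variance shifts.

```python
# Hedged sketch, assuming a per-participant CSV with the named columns.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("eating_study.csv")          # hypothetical data file

step1 = smf.ols("disordered_eating ~ gender + neuroticism + insecure_attachment",
                data=df).fit()
step2 = smf.ols("disordered_eating ~ gender + neuroticism + insecure_attachment + mindfulness",
                data=df).fit()

print("R^2 without mindfulness:", round(step1.rsquared, 3))
print("R^2 with mindfulness:   ", round(step2.rsquared, 3))
# The pattern reported above corresponds to the neuroticism and attachment terms losing
# unique explanatory power once mindfulness enters the model.
```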

Deep Learning of Orthographic Representations in Baboons
Thomas Hannagan, Johannes C. Ziegler, Stéphane Dufau, Joël Fagot, Jonathan Grainger
PLOS ONE , 2014, DOI: 10.1371/journal.pone.0084843
Abstract: What is the origin of our ability to learn orthographic knowledge? We use deep convolutional networks to emulate the primate's ventral visual stream and explore the recent finding that baboons can be trained to discriminate English words from nonwords [1]. The networks were exposed to the exact same sequence of stimuli and reinforcement signals as the baboons in the experiment, and learned to map real visual inputs (pixels) of letter strings onto binary word/nonword responses. We show that the networks' highest levels of representations were indeed sensitive to letter combinations as postulated in our previous research. The model also captured the key empirical findings, such as generalization to novel words, along with some intriguing inter-individual differences. The present work shows the merits of deep learning networks that can simulate the whole processing chain all the way from the visual input to the response while allowing researchers to analyze the complex representations that emerge during the learning process.
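For readers who want a concrete picture of the kind of model this abstract describes, here is a minimal sketch, not the published network: the input size, layer sizes, and the trial-by-trial update are assumptions. It maps letter-string images onto a binary word/nonword response and is updated one stimulus at a time, mirroring the trial-by-trial training regime.

```python
# Minimal sketch (assumed architecture): a small CNN as a stand-in for a
# ventral-stream-like hierarchy, trained trial by trial on word/nonword decisions.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 32, 64), nn.ReLU(),    # assumes 32 x 128 grayscale input images
    nn.Linear(64, 2),                          # word vs. nonword
)
opt = torch.optim.SGD(net.parameters(), lr=1e-3)

def training_trial(image, is_word):
    """One update per stimulus and reinforcement signal, as in the baboon experiment."""
    x = image.unsqueeze(0)                     # (1, 1, 32, 128) float tensor
    target = torch.tensor([1 if is_word else 0])
    opt.zero_grad()
    loss = nn.functional.cross_entropy(net(x), target)
    loss.backward()
    opt.step()
    return loss.item()
```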
The Pupillary Light Response Reveals the Focus of Covert Visual Attention
Sebastiaan Mathôt, Lotje van der Linden, Jonathan Grainger, Françoise Vitu
PLOS ONE , 2013, DOI: 10.1371/journal.pone.0078168
Abstract: The pupillary light response is often assumed to be a reflex that is not susceptible to cognitive influences. In line with recent converging evidence, we show that this reflexive view is incomplete, and that the pupillary light response is modulated by covert visual attention: Covertly attending to a bright area causes a pupillary constriction, relative to attending to a dark area under identical visual input. This attention-related modulation of the pupillary light response predicts cuing effects in behavior, and can be used as an index of how strongly participants attend to a particular location. Therefore, we suggest that pupil size may offer a new way to continuously track the focus of covert visual attention, without requiring a manual response from the participant. The theoretical implication of this finding is that the pupillary light response is neither fully reflexive, nor under complete voluntary control, but is instead best characterized as a stereotyped response to a voluntarily selected target. In this sense, the pupillary light response is similar to saccadic and smooth pursuit eye movements. Together, eye movements and the pupillary light response maximize visual acuity, stabilize visual input, and selectively filter visual information as it enters the eye.
Evidence for Letter-Specific Position Coding Mechanisms
Stéphanie Massol, Jon Andoni Duñabeitia, Manuel Carreiras, Jonathan Grainger
PLOS ONE , 2013, DOI: 10.1371/journal.pone.0068460
Abstract: The perceptual matching (same-different judgment) paradigm was used to investigate precision in position coding for strings of letters, digits, and symbols. Reference and target stimuli were 6 characters long and could be identical or differ either by transposing two characters or substituting two characters. The distance separating the two characters was manipulated such that they could either be contiguous, separated by one intervening character, or separated by two intervening characters. Effects of type of character and distance were measured in terms of the difference between the transposition and substitution conditions (transposition cost). Error rates revealed that transposition costs were greater for letters than for digits, which in turn were greater than for symbols. Furthermore, letter stimuli showed a gradual decrease in transposition cost as the distance between the letters increased, whereas the only significant difference for digit and symbol stimuli arose between contiguous and non-contiguous changes, with no effect of distance on the non-contiguous changes. The results are taken as further evidence for letter-specific position coding mechanisms.
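The key dependent measure here, the transposition cost, is simply the error-rate difference between transposition and substitution trials, computed per character type and distance. The sketch below uses made-up numbers, not the published data, purely to show how the measure is derived.

```python
# Toy illustration of the transposition-cost measure (assumed error rates).
error_rates = {
    # (character_type, n_intervening_characters, change_type): error rate
    ("letters", 0, "transposition"): 0.30, ("letters", 0, "substitution"): 0.10,
    ("digits",  0, "transposition"): 0.20, ("digits",  0, "substitution"): 0.10,
    ("symbols", 0, "transposition"): 0.12, ("symbols", 0, "substitution"): 0.10,
}

def transposition_cost(char_type, distance):
    """Transposition minus substitution error rate for one cell of the design."""
    return (error_rates[(char_type, distance, "transposition")]
            - error_rates[(char_type, distance, "substitution")])

for kind in ("letters", "digits", "symbols"):
    print(kind, transposition_cost(kind, 0))
```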
ERP Evidence for Ultra-Fast Semantic Processing in the Picture–Word Interference Paradigm
Roberto Dell’Acqua, Paola Sessa, Francesca Peressotti, Claudio Mulatti, Eduardo Navarrete, Jonathan Grainger
Frontiers in Psychology , 2010, DOI: 10.3389/fpsyg.2010.00177
Abstract: We used the event-related potential (ERP) approach combined with a subtraction technique to explore the timecourse of activation of semantic and phonological representations in the picture–word interference paradigm. Subjects were exposed to to-be-named pictures superimposed on to-be-ignored semantically related, phonologically related, or unrelated words, and distinct ERP waveforms were generated time-locked to these different classes of stimuli. Difference ERP waveforms were generated in the semantic condition and in the phonological condition by subtracting ERP activity associated with unrelated picture–word stimuli from ERP activity associated with related picture–word stimuli. We measured both latency and amplitude of these difference ERP waveforms in a pre-articulatory time-window. The behavioral results showed standard interference effects in the semantic condition, and facilitatory effects in the phonological condition. The ERP results indicated a bimodal distribution of semantic effects, characterized by the extremely rapid onset (at about 100 ms) of a primary component followed by a later, distinct, component. Phonological effects in ERPs were characterized by components with later onsets and distinct scalp topography of ERP sources relative to semantic ERP components. Regression analyses revealed a covariation between semantic and phonological behavioral effect sizes and ERP component amplitudes, and no covariation between the behavioral effects and ERP component latency. The early effect of semantic distractors is thought to reflect very fast access to semantic representations from picture stimuli modulating on-going orthographic processing of distractor words.
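The subtraction technique described above amounts to computing related-minus-unrelated difference waves and reading onset latencies off them. The sketch below is a schematic illustration, not the authors' pipeline: the data files, array shapes, sampling rate, and threshold criterion are assumptions.

```python
# Schematic sketch of the ERP subtraction and onset-latency measurement (assumed data).
import numpy as np

# erp_related / erp_unrelated: (n_channels, n_samples) grand-average ERPs, 1 kHz sampling
erp_related = np.load("erp_semantic_related.npy")      # hypothetical files
erp_unrelated = np.load("erp_unrelated.npy")

difference_wave = erp_related - erp_unrelated           # semantic difference ERP

def onset_latency_ms(wave, threshold_uv=0.5, srate=1000):
    """First sample where the channel-averaged absolute difference exceeds a threshold."""
    trace = np.abs(wave.mean(axis=0))
    above = np.flatnonzero(trace > threshold_uv)
    return above[0] * 1000 / srate if above.size else None

print("semantic effect onset:", onset_latency_ms(difference_wave), "ms")
```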
The pupillary light response reflects exogenous attention and inhibition of return
Sebastiaan Mathôt, Edwin S. Dalmaijer, Jonathan Grainger, Stefan Van der Stigchel
PeerJ , 2015, DOI: 10.7287/peerj.preprints.422v1
Abstract: Here we show that the pupillary light response reflects exogenous (involuntary) shifts of attention and inhibition of return. Participants fixated in the center of a display that was divided into a bright and a dark half. An exogenous cue attracted attention to the bright or dark side of the display. Initially, the pupil constricted when the bright, as compared to the dark side of the display was cued, reflecting a shift of attention towards the exogenous cue. Crucially, this pattern reversed about one second after cue presentation. This later-occurring, relative dilation (when the bright side was cued) reflected disengagement from the previously attended location, analogous to the behavioral phenomenon of inhibition of return. Indeed, we observed a strong correlation between 'pupillary inhibition' and behavioral inhibition of return. We conclude that the pupillary light response is a complex eye movement that reflects how we selectively parse and interpret visual input.
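The reported link between 'pupillary inhibition' and behavioral inhibition of return is a per-participant correlation. The brief sketch below is not the authors' analysis; the precomputed indices and file names are hypothetical.

```python
# Hedged sketch: correlate a per-participant pupillary inhibition index with behavioral IOR.
import numpy as np
from scipy.stats import pearsonr

# One value per participant; both arrays are assumed to be precomputed elsewhere.
pupillary_inhibition = np.load("pupil_ior_index.npy")   # late bright-cued minus dark-cued pupil size
behavioral_ior = np.load("rt_ior_effect.npy")           # cued minus uncued RT at long cue-target intervals

r, p = pearsonr(pupillary_inhibition, behavioral_ior)
print(f"pupillary vs behavioral IOR: r = {r:.2f}, p = {p:.4f}")
```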
Invariant subspaces for polynomially compact almost superdiagonal operators on l(pi)
Arthur D. Grainger
International Journal of Mathematics and Mathematical Sciences , 2003, DOI: 10.1155/s0161171203209261
Abstract: It is shown that almost superdiagonal, polynomially compact operators on the sequence space l(pi) have nontrivial, closed invariant subspaces if the nonlocally convex linear topology τ(pi) is locally bounded.