
PLOS ONE, 2014

Deep Learning of Orthographic Representations in Baboons

DOI: 10.1371/journal.pone.0084843


Abstract:

What is the origin of our ability to learn orthographic knowledge? We use deep convolutional networks to emulate the primate ventral visual stream and explore the recent finding that baboons can be trained to discriminate English words from nonwords [1]. The networks were exposed to the exact same sequence of stimuli and reinforcement signals as the baboons in the experiment, and learned to map real visual inputs (pixels) of letter strings onto binary word/nonword responses. We show that the networks' highest levels of representation were indeed sensitive to letter combinations, as postulated in our previous research. The model also captured the key empirical findings, such as generalization to novel words, along with some intriguing inter-individual differences. The present work shows the merits of deep learning networks that can simulate the whole processing chain, all the way from the visual input to the response, while allowing researchers to analyze the complex representations that emerge during the learning process.
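To make the pipeline described above concrete, the sketch below shows a minimal convolutional network that maps a grayscale pixel image of a letter string onto a binary word/nonword decision. It is an illustrative assumption only, not the authors' architecture or training setup (their model was implemented with Theano [21]); the layer sizes, image dimensions, and the use of PyTorch are choices made purely for demonstration.

```python
# Minimal sketch (assumed, not the published model): a small convolutional
# network mapping pixel images of letter strings to a word/nonword decision.
import torch
import torch.nn as nn

class WordNonwordNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Two convolution + pooling stages stand in for a hierarchy of
        # increasingly complex, position-tolerant visual features.
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        # The top layers map the learned features onto the binary response.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(64), nn.ReLU(),
            nn.Linear(64, 2),  # logits for "word" vs. "nonword"
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example forward pass: one 32x96 grayscale image of a letter string.
model = WordNonwordNet()
dummy_input = torch.rand(1, 1, 32, 96)
logits = model(dummy_input)
print(logits.shape)  # torch.Size([1, 2])
```

In the study itself, such a network would be trained trial by trial on the same stimulus and reinforcement sequence given to each baboon, so that its internal representations can be inspected after learning.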

References

[1]  Grainger J, Dufau S, Montant M, Ziegler JC, Fagot J (2012) Orthographic Processing in Baboons (Papio papio). Science 336: 245–248.
[2]  Ziegler JC, Hannagan T, Dufau S, Montant M, Fagot J, et al. (2013) Transposed Letter Effects Reveal Orthographic Processing in Baboons. Psychological Science 24(8): 1609–1611.
[3]  Bains W (2012) Comment on “Orthographic Processing in Baboons (Papio papio)”. Science 337: 1173. Available: www.sciencemag.org/cgi/content/full/337/6099/1173-b.
[4]  Frost R, Keuleers E (2013) What can we learn from monkeys about orthographic processing in humans? A reply to Ziegler et al. Psychological Science 24(9): 1868–1869.
[5]  Ziegler JC, Goswami U (2005) Reading acquisition, developmental dyslexia, and skilled reading across languages: a psycholinguistic grain size theory. Psychological Bulletin 131(1): 3–29.
[6]  Ziegler JC, Perry C, Zorzi M (in press) Modelling reading development through phonological decoding and self-teaching: Implications for dyslexia. Philosophical Transactions of the Royal Society B.
[7]  Grainger J (2008) Cracking the orthographic code: An introduction. Language and Cognitive Processes 23(1): 1–35.
[8]  Hannagan T, Grainger J (2012) Protein Analysis Meets Visual Word Recognition: A Case for String Kernels in the Brain. Cognitive Science 36: 575–606.
[9]  Binder JR, Medler DA, Westbury CF, Liebenthal E, Buchanan L (2006) Tuning of the human left fusiform gyrus to sublexical orthographic structure. Neuroimage 33: 739–748.
[10]  Vinckier F, Dehaene S, Jobert A, Dubus JP, Sigman M, et al. (2007) Hierarchical coding of letter strings in the ventral stream: dissecting the inner organization of the visual word-form system. Neuron 55(1): 143–156.
[11]  Cohen L, Dehaene S, Naccache L, Lehéricy S, Dehaene-Lambertz G, et al. (2000) The visual word form area: Spatial and temporal characterization of an initial stage of reading in normal subjects and posterior split-brain patients. Brain 123: 291–307.
[12]  Brincat SL, Connor CE (2006) Dynamic Shape Synthesis in Posterior Inferotemporal Cortex. Neuron 49(1): 17–24.
[13]  Dehaene S, Cohen L, Sigman M, Vinckier F (2005) The neural code for written words: a proposal. Trends Cogn Sci 9: 335–341.
[14]  Blaizot X, Landeau B, Baron JC, Chavoix C (2000) Mapping the visual recognition memory network with PET in the behaving baboon. Journal of Cerebral Blood Flow and Metabolism 20: 213–219.
[15]  Fukushima K (1980) Neocognitron: A self-organizing neural network for a mechanism of pattern recognition unaffected by shift in position. Biological Cybernetics 36(4): 193–202.
[16]  LeCun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11): 2278–2324.
[17]  Van Essen DC, Gallant JL (1994) Neural mechanisms of form and motion processing in the primate visual system. Neuron 13: 1–10.
[18]  Ciresan DC, Meier U, Schmidhuber J (2012) Multi-column Deep Neural Networks for Image Classification. IEEE Conf. on Computer Vision and Pattern Recognition.
[19]  Farabet C, Couprie C, Najman L, LeCun Y (2013) Learning hierarchical features for scene labeling. IEEE Transactions on Pattern Analysis and Machine Intelligence.
[20]  Serre T, Oliva A, Poggio T (2007) A Feedforward Architecture Accounts for Rapid Categorization. Proc Natl Acad Sci USA 104(15): 6424–6429.
[21]  Bergstra J, Bastien F, Breuleux O, Lamblin P, Pascanu R, Delalleau O, Desjardins G, Warde-Farley D, Goodfellow I, Bergeron A, Bengio Y (2011) Theano: Deep learning on GPUs with python. In NIPS 2011, BigLearning Workshop, Granada, Spain.
[22]  Perea M, Lupker SJ (2004) Can CANISO activate CASINO? Transposed-letter similarity effects with nonadjacent letter positions. Journal of Memory and Language 51: 231–246.
[23]  Mechler F, Ringach DL (2002) On the classification of simple and complex cells. Vision Research 42: 1017–1033.
[24]  Hubel DH, Wiesel TN (1968) Receptive fields and functional architecture of monkey striate cortex. Journal of Physiology-London 195: 215–243.
[25]  Skottun BC, De Valois RL, Grosof DH, Movshon JA, Albrecht DG, et al. (1991) Classifying simple and complex cells on the basis of response modulation. Vision Research 31: 1079–1086.
[26]  Léveillé J, Hannagan T (2012) Learning spatial invariance with the trace rule in non-uniform distributions. Neural Computation 25(5): 1261–1276.
[27]  Erhan D, Courville A, Bengio Y (2010) Understanding Representations Learned in Deep Architectures. Technical Report 1355, Université de Montréal.
[28]  Stoianov I, Zorzi M (2012) Emergence of a “visual number sense” in hierarchical generative models. Nature neuroscience 15(2): 194–6.
[29]  Plaut DC, Shallice T (1993) Deep dyslexia: A case study of connectionist neuropsychology. Cognitive Neuropsychology 10: 377–500.
[30]  Cohen G, Johnston RA, Plunkett K (2000) Exploring cognition: Damaged brains and neural networks, readings in cognitive neuropsychology and connectionist modelling. Hove, UK: Psychology Press.
[31]  Thomas MSC, Purser HRM, Tomlinson S, Mareschal D (2011) Are imaging and lesioning convergent methods for assessing functional specialisation? Investigations using an artificial neural network. Brain and Cognition 78(1): 38–49.
[32]  Mutch J, Lowe DG (2008) Object class recognition and localization using sparse features with limited receptive fields. International Journal of Computer Vision 80(1): 45–57.
[33]  Hannagan T, Dandurand F, Grainger J (2011) Broken symmetries in a location-invariant word recognition network. Neural Comput 23: 251–283.
[34]  Di Bono MG, Zorzi M (2013) Deep generative learning of location-invariant visual word recognition. Front. Psychol 4: 635.
[35]  Rauschecker AM, Bowen RF, Parvizi J, Wandell BA (2012) Position sensitivity in the visual word form area. Proc Natl Acad Sci USA 109(24): 1568–1577.
[36]  Hannagan T, Grainger J (2013) The Lazy Visual Word Form Area: Computational Insights into Location-Sensitivity. PLoS Comput Biol 9(10): e1003250.
[37]  Pegado F, Nakamura K, Cohen L, Dehaene S (2011) Breaking the Symmetry: Mirror discrimination for single letters but not for pictures in the Visual Word Form Area. NeuroImage 55: 742–9.
[38]  Mozer M (1987) Early parallel processing in reading: A connectionist approach. In M. Coltheart (Ed.) Attention and Perfomance XII: The Psychology of Reading. 83–104. Hillsdale, NJ: Lawrence Erlbaum.
[39]  Whitney CS, Berndt RS (1999) A new model of letter string encoding: Simulating right neglect dyslexia. Progress in Brain Research 121: 143–163.
[40]  Grainger J, van Heuven W (2003) Modeling letter position coding in printed word perception. In Bonin P. (Ed.), Mental lexicon: “Some words to talk about words” 1–23. New York: Nova Science.
