Selection for Reinforcement-Free Learning Ability as an Organizing Factor in the Evolution of Cognition

DOI: 10.1155/2013/841646

Abstract:

This research explores the relation between environmental structure and neurocognitive structure. We hypothesize that selection pressure on abilities for efficient learning (especially in settings with limited or no reward information) translates into selection pressure on correspondence relations between neurocognitive and environmental structure, since such correspondence allows simple changes in the environment to be handled with simple learning updates in neurocognitive structure. We present a model in which a simple form of reinforcement-free learning is evolved in neural networks using neuromodulation and analyze the effect this selection for learning ability has on the virtual species' neural organization. We find a higher degree of organization than in a control population evolved without learning ability and discuss the relation between the observed neural structure and the environmental structure. We discuss our findings in the context of the environmental complexity thesis, the Baldwin effect, and other interactions between adaptation processes.

1. Introduction

This paper explores the relation between the structure of an environment and the structure of cognitions evolved in that environment. Intuitively, one would expect a strong relation between the two. In the past, some have taken this intuition very far. Spencer [1] viewed the evolution of life and mind as a process of internalization of progressively more intricate and abstract features of the environment. He traced the acquisition of such “correspondence” between the internal and the external from basic life processes (e.g., the shape of an enzyme molecule has a direct physical relation to the shape of the molecule whose reactions it evolved to catalyze) all the way up to cognitive processes (such as the acquisition of complex causal relations between entities removed in space and time). That a certain correspondence should exist between the shapes of enzyme and substrate is uncontroversial, but how far can this concept of correspondence take us where cognition is concerned? Certainly, when we hand-code an AI to function within a given environment, we can typically recognize much of the environmental organization in the structure of our AI's cognitions. However, as the history of connectionism demonstrates, fit behaviour does not necessarily involve intelligible neural structure. More often than not, the neural organization of evolved artificial neural networks (ANNs) allows little if any interpretation in terms of environmental structure. If we demand that models of the mind in
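The abstract describes evolving a simple form of reinforcement-free learning in neural networks via neuromodulation, i.e., weight plasticity gated by a modulatory signal rather than by an external reward (cf. Soltoggio et al. [14, 15]). Since the model's actual equations are not included in this excerpt, the following is only a minimal sketch of what such a neuromodulated Hebbian update could look like; the network shape, coefficients, and names are illustrative assumptions, not the authors' model.

import numpy as np

# Minimal sketch (assumption, not the authors' model): a neuromodulated
# Hebbian plasticity rule in the spirit of Soltoggio et al. [14, 15].
# A modulatory activation gates the weight update, so plasticity is driven
# by the network's own modulatory signal rather than by an external
# reward -- the "reinforcement-free" setting described above.

rng = np.random.default_rng(0)

N_IN, N_HID = 4, 3
W = rng.normal(scale=0.1, size=(N_HID, N_IN))   # plastic input-to-hidden weights
A, B, C, D = 1.0, 0.0, 0.0, 0.0                 # Hebbian coefficients (evolvable in principle)
ETA = 0.05                                      # learning rate (evolvable in principle)

def step(x, modulation):
    """Forward pass followed by a modulation-gated Hebbian update."""
    global W
    y = np.tanh(W @ x)                          # post-synaptic activations
    # Generalized Hebbian term: correlation plus pre-/post-synaptic terms.
    hebb = A * np.outer(y, x) + B * x[np.newaxis, :] + C * y[:, np.newaxis] + D
    W += ETA * modulation * hebb                # weights change only when modulated
    return y

x = np.array([1.0, 0.0, 0.5, -0.5])
w0 = W.copy()
step(x, modulation=0.0)
print("changed after unmodulated step:", not np.allclose(w0, W))   # False
step(x, modulation=1.0)
print("changed after modulated step:  ", not np.allclose(w0, W))   # True

In the paper's evolutionary setting it is the rule's coefficients and the modulatory wiring that selection would shape; the sketch only illustrates how gating by a modulatory neuron decouples learning from an explicit reward signal.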

References

[1]  H. Spencer, The Principles of Psychology, Appleton, New York, NY, USA, 3rd edition, 1885.
[2]  R. A. Brooks, “Intelligence without representation,” Artificial Intelligence, vol. 47, no. 1–3, pp. 139–159, 1991.
[3]  J. A. Fodor and Z. W. Pylyshyn, “Connectionism and cognitive architecture: a critical analysis,” Cognition, vol. 28, no. 1-2, pp. 3–71, 1988.
[4]  J. Fodor and B. P. McLaughlin, “Connectionism and the problem of systematicity: why Smolensky's solution doesn't work,” Cognition, vol. 35, no. 2, pp. 183–204, 1990.
[5]  B. P. McLaughlin, “Systematicity redux,” Synthese, vol. 170, no. 2, pp. 251–274, 2009.
[6]  R. A. Jacobs, “Computational studies of the development of functionally specialized neural modules,” Trends in Cognitive Sciences, vol. 3, no. 1, pp. 31–38, 1999.
[7]  J. A. Bullinaria, “Understanding the emergence of modularity in neural systems,” Cognitive Science, vol. 31, no. 4, pp. 673–695, 2007.
[8]  J. A. Bullinaria, “The importance of neurophysiological constraints for modelling the emergence of modularity,” in Computational Modelling in Behavioural Neuroscience: Closing the Gap Between Neurophysiology and Behaviour, D. Heinke and E. Mavritsaki, Eds., pp. 187–208, Psychology Press, 2009.
[9]  S. F. Arnold, R. Suzuki, and T. Arita, “Evolving learning ability in cyclically dynamic environments: the structuring force of environmental heterogeneity,” in Proceedings of Artificial Life XII, pp. 435–436, MIT Press, 2010.
[10]  P. M. Todd and G. F. Miller, “Exploring adaptive agency II: simulating the evolution of associative learning,” in Proceedings of the 1st International Conference on Simulation of Adaptive Behavior, pp. 306–315, 1991.
[11]  S. Nolfi, J. L. Elman, and D. Parisi, “Learning and evolution in neural networks,” Adaptive Behavior, vol. 3, no. 1, pp. 5–28, 1994.
[12]  S. Nolfi and D. Parisi, “Learning to adapt to changing environments in evolving neural networks,” Adaptive Behavior, vol. 5, no. 1, pp. 75–98, 1996.
[13]  E. Robinson and J. A. Bullinaria, “Neuroevolution of auto-teaching architectures,” in Connectionist Models of Behavior and Cognition II, J. Mayor, N. Ruh, and K. Plunkett, Eds., pp. 361–372, World Scientific, Singapore, 2009.
[14]  A. Soltoggio, P. Dürr, C. Mattiussi, and D. Floreano, “Evolving neuromodulatory topologies for reinforcement learning-like problems,” in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '07), pp. 2471–2478, September 2007.
[15]  A. Soltoggio, J. A. Bullinaria, C. Mattiussi, P. Dürr, and D. Floreano, “Evolutionary advantages of neuromodulated plasticity in dynamic, reward-based scenarios,” in Proceedings of Artificial Life XI, pp. 569–576, MIT Press, 2008.
[16]  L. J. Heyer, S. Kruglyak, and S. Yooseph, “Exploring expression data: identification and analysis of coexpressed genes,” Genome Research, vol. 9, no. 11, pp. 1106–1115, 1999.
[17]  P. Godfrey-Smith, “Spencer and Dewey on Life and Mind,” in Proceedings of Artificial Life IV, R. Brooks and P. Maes, Eds., pp. 80–89, MIT Press, 1994.
[18]  P. Godfrey-Smith, Complexity and the Function of Mind in Nature, Cambridge University Press, 1996.
[19]  P. Godfrey-Smith, “Environmental complexity and the evolution of cognition,” in The Evolution of Intelligence, R. Sternberg and J. Kaufman, Eds., pp. 233–249, Lawrence Erlbaum, Mahwah, NJ, USA, 2002.
[20]  J. M. Baldwin, “A new factor in evolution,” American Naturalist, vol. 30, pp. 441–451, 1896.
[21]  P. Turney, D. Whitley, and R. W. Anderson, “Evolution, learning, and instinct: 100 years of the Baldwin effect,” Evolutionary Computation, vol. 4, no. 3, pp. 4–8, 1996.
[22]  G. E. Hinton and S. J. Nowlan, “How learning can guide evolution,” Complex Systems, vol. 1, pp. 495–502, 1987.
[23]  R. Suzuki and T. Arita, “Repeated occurrences of the Baldwin effect can guide evolution on rugged fitness landscapes,” in Proceedings of the 1st IEEE Symposium on Artificial Life (IEEE-ALife'07), pp. 8–14, April 2007.
[24]  A. Crombach and P. Hogeweg, “Evolution of evolvability in gene regulatory networks,” PLoS Computational Biology, vol. 4, no. 7, article e1000112, 2008.
[25]  N. Kashtan and U. Alon, “Spontaneous evolution of modularity and network motifs,” Proceedings of the National Academy of Sciences of the United States of America, vol. 102, no. 39, pp. 13773–13778, 2005.
[26]  S. F. Arnold, R. Suzuki, and T. Arita, “Modelling mental representation as evolved second order learning,” in Proceedings of the 17th International Symposium on Artificial Life and Robotics, pp. 674–677, 2012.
[27]  S. F. Arnold, R. Suzuki, and T. Arita, “Second order learning and the evolution of mental representation,” in Proceedings of Artificial Life XIII, pp. 301–308, MIT Press, 2012.
