

Strategic Cognitive Sequencing: A Computational Cognitive Neuroscience Approach

DOI: 10.1155/2013/149329


Abstract:

We address strategic cognitive sequencing, the “outer loop” of human cognition: how the brain decides what cognitive process to apply at a given moment to solve complex, multistep cognitive tasks. We argue that this topic has been neglected, relative to its importance, for systematic reasons, but that recent work on how individual brain systems accomplish their computations has set the stage for productively addressing how brain regions coordinate over time to accomplish our most impressive thinking. We present four preliminary neural network models. The first addresses how the prefrontal cortex (PFC) and basal ganglia (BG) cooperate to perform trial-and-error learning of short sequences; the next, how several areas of PFC learn to make predictions of likely reward, and how this contributes to the BG making decisions at the level of strategies. The third model addresses how PFC, BG, parietal cortex, and hippocampus can work together to memorize sequences of cognitive actions from instruction (or “self-instruction”). The last shows how a constraint satisfaction process can find useful plans: the PFC maintains current and goal states and associates from both of these to find a “bridging” state, an abstract plan. We discuss how these processes could work together to produce strategic cognitive sequencing and outline future directions in this area.

1. Introduction

Weighing the merits of one scientific theory against another, deciding which plan of action to pursue, or considering whether a bill should become law all require many cognitive acts, in particular sequences [1, 2]. Humans use complex cognitive strategies to solve difficult problems, and understanding exactly how we do this is necessary to understand human intelligence. In these cases, different strategies composed of different sequences of cognitive acts are possible, and the choice of strategy is crucial in determining how we succeed and fail at particular cognitive challenges [3, 4].
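The trial-and-error sequence learning attributed to the PFC/BG system in the first model can be illustrated, at a much coarser grain, with a tabular reinforcement-learning sketch. This is our own simplification for illustration only (the `learn_sequence` function, the state coding, and all parameters are assumptions, not the authors' Leabra implementation); it shows the core credit-assignment problem the abstract alludes to: reward arrives only after the full sequence, so value must propagate backward across steps [42, 45].

```python
import random

# Illustrative sketch only: a tabular Q-learner discovering a short action
# sequence by trial and error. State = number of correct steps emitted so
# far; reward is delivered only when the whole sequence is completed.
def learn_sequence(target, actions, episodes=20000, alpha=0.1, epsilon=0.2):
    q = {}  # (state, action) -> estimated value
    for _ in range(episodes):
        state = 0
        while state < len(target):
            # Epsilon-greedy action selection (exploration vs. exploitation)
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda x: q.get((state, x), 0.0))
            if a == target[state]:
                next_state = state + 1
                reward = 1.0 if next_state == len(target) else 0.0
            else:
                next_state = len(target)  # wrong action ends the episode
                reward = 0.0
            # Terminal states have no future value
            best_next = 0.0 if next_state >= len(target) else max(
                q.get((next_state, x), 0.0) for x in actions)
            old = q.get((state, a), 0.0)
            q[(state, a)] = old + alpha * (reward + best_next - old)
            if a != target[state]:
                break
            state = next_state
    # Greedy readout of the learned sequence
    return tuple(max(actions, key=lambda x: q.get((s, x), 0.0))
                 for s in range(len(target)))

random.seed(0)
print(learn_sequence(("A", "C", "B"), ["A", "B", "C"]))  # ('A', 'C', 'B')
```

Even in this toy setting, early steps acquire value only after later steps do, which is one reason slow trial-and-error learning contrasts so sharply with the rapid instructed learning the third model addresses.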
Understanding strategic cognitive sequencing has important implications for reducing biases and thereby improving human decision making (e.g., [5, 6]). However, this aspect of cognition has been studied surprisingly little [7, 8] because it is complex. Tasks in which participants tend to use different strategies (and therefore sequences) necessarily produce data that is less clear and interpretable than that from a single process in a simple task [9]. Therefore, cognitive neuroscience tends to avoid such tasks, leaving the neural mechanisms of strategy selection and cognitive sequencing underexplored relative to the

References

[1]  A. M. Owen, “Tuning in to the temporal dynamics of brain activation using functional magnetic resonance imaging (fMRI),” Trends in Cognitive Sciences, vol. 1, no. 4, pp. 123–125, 1997.
[2]  T. Shallice, “Specific impairments of planning,” Philosophical Transactions of the Royal Society of London B, vol. 298, no. 1089, pp. 199–209, 1982.
[3]  L. R. Beach and T. R. Mitchell, “A contingency model for the selection of decision strategies,” The Academy of Management Review, vol. 3, no. 3, pp. 439–449, 1978.
[4]  P. Slovic, B. Fischhoff, and S. Lichtenstein, “Behavioral decision theory,” Annual Review of Psychology, vol. 28, pp. 1–39, 1977.
[5]  M. Chi and K. VanLehn, “Meta-cognitive strategy instruction in intelligent tutoring systems: how, when, and why,” Educational Technology & Society, vol. 13, no. 1, pp. 25–39, 2010.
[6]  J. M. Unterrainer, B. Rahm, R. Leonhart, C. C. Ruff, and U. Halsband, “The tower of London: the impact of instructions, cueing, and learning on planning abilities,” Cognitive Brain Research, vol. 17, no. 3, pp. 675–683, 2003.
[7]  A. Newell, “You can't play 20 questions with nature and win: projective comments on the papers of this symposium,” in Visual Information Processing, W. G. Chase, Ed., pp. 283–308, Academic Press, New York, NY, USA, 1973.
[8]  L. B. Smith, “A model of perceptual classification in children and adults,” Psychological Review, vol. 96, no. 1, pp. 125–144, 1989.
[9]  M. J. Roberts and E. J. Newton, “Understanding strategy selection,” International Journal of Human Computer Studies, vol. 54, no. 1, pp. 137–154, 2001.
[10]  J. Tanji and E. Hoshi, “Role of the lateral prefrontal cortex in executive behavioral control,” Physiological Reviews, vol. 88, no. 1, pp. 37–57, 2008.
[11]  A. Dagher, A. M. Owen, H. Boecker, and D. J. Brooks, “Mapping the network for planning: a correlational PET activation study with the tower of London task,” Brain, vol. 122, no. 10, pp. 1973–1987, 1999.
[12]  O. A. van den Heuvel, H. J. Groenewegen, F. Barkhof, R. H. C. Lazeron, R. van Dyck, and D. J. Veltman, “Frontostriatal system in planning complexity: a parametric functional magnetic resonance version of Tower of London task,” NeuroImage, vol. 18, no. 2, pp. 367–374, 2003.
[13]  A. Dagher, A. M. Owen, H. Boecker, and D. J. Brooks, “The role of the striatum and hippocampus in planning: a PET activation study in Parkinson's disease,” Brain, vol. 124, no. 5, pp. 1020–1032, 2001.
[14]  K. Shima, M. Isoda, H. Mushiake, and J. Tanji, “Categorization of behavioural sequences in the prefrontal cortex,” Nature, vol. 445, no. 7125, pp. 315–318, 2007.
[15]  R. C. O'Reilly and Y. Munakata, Computational Explorations in Cognitive Neuroscience: Understanding the Mind By Simulating the Brain, The MIT Press, Cambridge, Mass, USA, 2000.
[16]  R. C. O'Reilly, T. E. Hazy, and S. A. Herd, “The leabra cognitive architecture: how to play 20 principles with nature and win!,” in The Oxford Handbook of Cognitive Science, S. Chipman, Ed., Oxford University Press, In press.
[17]  A. Newell and H. A. Simon, Human Problem Solving, Prentice Hall, Englewood Cliffs, NJ, USA, 1972.
[18]  J. R. Anderson, Rules of the Mind, Lawrence Erlbaum Associates, Hillsdale, NJ, USA, 1993.
[19]  R. Morris and G. Ward, The Cognitive Psychology of Planning, Psychology Press, 2005.
[20]  C. Lebiere, J. R. Anderson, and D. Bothell, “Multi-tasking and cognitive workload in an ACT-R model of a simplified air traffic control task,” in Proceedings of the 10th Conference on Computer Generated Forces and Behavioral Representation, 2001.
[21]  T. Suddendorf and M. C. Corballis, “Behavioural evidence for mental time travel in nonhuman animals,” Behavioural Brain Research, vol. 215, no. 2, pp. 292–298, 2010.
[22]  S. J. Shettleworth, “Clever animals and killjoy explanations in comparative psychology,” Trends in Cognitive Sciences, vol. 14, no. 11, pp. 477–481, 2010.
[23]  D. Klahr, P. Langley, and R. Neches, Eds., Production System Models of Learning and Development, The MIT Press, Cambridge, Mass, USA, 1987.
[24]  D. J. Jilk, C. Lebiere, R. C. O'Reilly, and J. R. Anderson, “SAL: an explicitly pluralistic cognitive architecture,” Journal of Experimental and Theoretical Artificial Intelligence, vol. 20, no. 3, pp. 197–218, 2008.
[25]  J. R. Anderson, D. Bothell, M. D. Byrne, S. Douglass, C. Lebiere, and Y. Qin, “An integrated theory of the mind,” Psychological Review, vol. 111, no. 4, pp. 1036–1060, 2004.
[26]  J. R. Anderson, How Can the Human Mind Occur in the Physical Universe? Oxford University Press, New York, NY, USA, 2007.
[27]  H. H. Yin and B. J. Knowlton, “The role of the basal ganglia in habit formation,” Nature Reviews Neuroscience, vol. 7, no. 6, pp. 464–476, 2006.
[28]  M. J. Frank, B. Loughry, and R. C. O'Reilly, “Interactions between frontal cortex and basal ganglia in working memory: a computational model,” Cognitive, Affective and Behavioral Neuroscience, vol. 1, no. 2, pp. 137–160, 2001.
[29]  M. J. Frank, L. C. Seeberger, and R. C. O'Reilly, “By carrot or by stick: cognitive reinforcement learning in Parkinsonism,” Science, vol. 306, no. 5703, pp. 1940–1943, 2004.
[30]  T. E. Hazy, M. J. Frank, and R. C. O'Reilly, “Banishing the homunculus: making working memory work,” Neuroscience, vol. 139, no. 1, pp. 105–118, 2006.
[31]  T. E. Hazy, M. J. Frank, and R. C. O'Reilly, “Towards an executive without a homunculus: computational models of the prefrontal cortex/basal ganglia system,” Philosophical Transactions of the Royal Society B, vol. 362, no. 1485, pp. 1601–1613, 2007.
[32]  R. C. O'Reilly and M. J. Frank, “Making working memory work: a computational model of learning in the prefrontal cortex and basal ganglia,” Neural Computation, vol. 18, no. 2, pp. 283–328, 2006.
[33]  K. Sakai, “Task set and prefrontal cortex,” Annual Review of Neuroscience, vol. 31, pp. 219–245, 2008.
[34]  G. E. Alexander, M. R. DeLong, and P. L. Strick, “Parallel organization of functionally segregated circuits linking basal ganglia and cortex,” Annual Review of Neuroscience, vol. 9, pp. 357–381, 1986.
[35]  M. J. Frank, “Dynamic dopamine modulation in the basal ganglia: a neurocomputational account of cognitive deficits in medicated and nonmedicated Parkinsonism,” Journal of Cognitive Neuroscience, vol. 17, no. 1, pp. 51–72, 2005.
[36]  R. C. O'Reilly, M. J. Frank, T. E. Hazy, and B. Watz, “PVLV: the primary value and learned value Pavlovian learning algorithm,” Behavioral Neuroscience, vol. 121, no. 1, pp. 31–49, 2007.
[37]  T. E. Hazy, M. J. Frank, and R. C. O'Reilly, “Neural mechanisms of acquired phasic dopamine responses in learning,” Neuroscience and Biobehavioral Reviews, vol. 34, no. 5, pp. 701–720, 2010.
[38]  J. M. Fuster and A. A. Uyeda, “Reactivity of limbic neurons of the monkey to appetitive and aversive signals,” Electroencephalography and Clinical Neurophysiology, vol. 30, no. 4, pp. 281–293, 1971.
[39]  E. K. Miller, “The prefrontal cortex and cognitive control,” Nature Reviews Neuroscience, vol. 1, no. 1, pp. 59–65, 2000.
[40]  T. Ono, K. Nakamura, H. Nishijo, and M. Fukuda, “Hypothalamic neuron involvement in integration of reward, aversion, and cue signals,” Journal of Neurophysiology, vol. 56, no. 1, pp. 63–79, 1986.
[41]  S. A. Deadwyler, S. Hayashizaki, J. Cheer, and R. E. Hampson, “Reward, memory and substance abuse: functional neuronal circuits in the nucleus accumbens,” Neuroscience and Biobehavioral Reviews, vol. 27, no. 8, pp. 703–711, 2004.
[42]  W. Schultz, P. Dayan, and P. R. Montague, “A neural substrate of prediction and reward,” Science, vol. 275, no. 5306, pp. 1593–1599, 1997.
[43]  R. S. Sutton and A. G. Barto, “Time-derivative models of pavlovian reinforcement,” in Learning and Computational Neuroscience, J. W. Moore and M. Gabriel, Eds., pp. 497–537, MIT Press, Cambridge, Mass, USA, 1990.
[44]  J. P. O'Doherty, P. Dayan, K. Friston, H. Critchley, and R. J. Dolan, “Temporal difference models and reward-related learning in the human brain,” Neuron, vol. 38, no. 2, pp. 329–337, 2003.
[45]  R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction, MIT Press, Cambridge, Mass, USA, 1998.
[46]  R. S. Sutton, Temporal credit assignment in reinforcement learning [Ph.D. thesis], University of Massachusetts Amherst, Amherst, Mass, USA, 1984.
[47]  P. Dayan and B. W. Balleine, “Reward, motivation, and reinforcement learning,” Neuron, vol. 36, no. 2, pp. 285–298, 2002.
[48]  R. C. O'Reilly, S. A. Herd, and W. M. Pauli, “Computational models of cognitive control,” Current Opinion in Neurobiology, vol. 20, no. 2, pp. 257–261, 2010.
[49]  C. H. Chatham, S. A. Herd, A. M. Brant et al., “From an executive network to executive control: a computational model of the n-back task,” Journal of Cognitive Neuroscience, vol. 23, no. 11, pp. 3598–3619, 2011.
[50]  D. Joel, Y. Niv, and E. Ruppin, “Actor-critic models of the basal ganglia: new anatomical and computational perspectives,” Neural Networks, vol. 15, no. 4-6, pp. 535–547, 2002.
[51]  W.-T. Fu and J. R. Anderson, “Solving the credit assignment problem: explicit and implicit learning of action sequences with probabilistic outcomes,” Psychological Research, vol. 72, no. 3, pp. 321–330, 2008.
[52]  J. D. Wallis, “Orbitofrontal cortex and its contribution to decision-making,” Annual Review of Neuroscience, vol. 30, pp. 31–56, 2007.
[53]  M. P. Noonan, N. Kolling, M. E. Walton, and M. F. S. Rushworth, “Re-evaluating the role of the orbitofrontal cortex in reward and reinforcement,” European Journal of Neuroscience, vol. 35, no. 7, pp. 997–1010, 2012.
[54]  P. L. Croxson, M. E. Walton, J. X. O'Reilly, T. E. J. Behrens, and M. F. S. Rushworth, “Effort-based cost-benefit valuation and the human brain,” The Journal of Neuroscience, vol. 29, no. 14, pp. 4531–4541, 2009.
[55]  S. W. Kennerley and M. E. Walton, “Decision making and reward in frontal cortex: complementary evidence from neurophysiological and neuropsychological studies,” Behavioral Neuroscience, vol. 125, no. 3, pp. 297–317, 2011.
[56]  M. J. Frank and E. D. Claus, “Anatomy of a decision: striato-orbitofrontal interactions in reinforcement learning, decision making, and reversal,” Psychological Review, vol. 113, no. 2, pp. 300–326, 2006.
[57]  J. M. Hyman, L. Ma, E. Balaguer-Ballester, D. Durstewitz, and J. K. Seamans, “Contextual encoding by ensembles of medial prefrontal cortex neurons,” Proceedings of the National Academy of Sciences of the United States of America, vol. 109, no. 13, pp. 5086–5091, 2012.
[58]  J. J. Day, J. L. Jones, R. M. Wightman, and R. M. Carelli, “Phasic nucleus accumbens dopamine release encodes effort- and delay-related costs,” Biological Psychiatry, vol. 68, no. 3, pp. 306–309, 2010.
[59]  J. Duncan, M. Schramm, R. Thompson, and I. Dumontheil, “Task rules, working memory, and fluid intelligence,” Psychonomic Bulletin & Review, vol. 19, no. 5, pp. 864–870, 2012.
[60]  J. R. Anderson, The Architecture of Cognition, Harvard University Press, Cambridge, Mass, USA, 1983.
[61]  P. M. Fitts and M. I. Posner, Human Performance, Brooks/Cole, Belmont, Calif, USA, 1967.
[62]  P. Redgrave and K. Gurney, “The short-latency dopamine signal: a role in discovering novel actions?” Nature Reviews Neuroscience, vol. 7, no. 12, pp. 967–975, 2006.
[63]  G. Fernández, H. Weyerts, M. Schrader-Bölsche et al., “Successful verbal encoding into episodic memory engages the posterior hippocampus: a parametrically analyzed functional magnetic resonance imaging study,” The Journal of Neuroscience, vol. 18, no. 5, pp. 1841–1847, 1998.
[64]  D. Kumaran, J. J. Summerfield, D. Hassabis, and E. A. Maguire, “Tracking the emergence of conceptual knowledge during human decision making,” Neuron, vol. 63, no. 6, pp. 889–901, 2009.
[65]  D. C. Noelle and G. W. Cottrell, “A connectionist model of instruction following,” in Proceedings of the 17th Annual Conference of the Cognitive Science Society, J. D. Moore and J. F. Lehman, Eds., pp. 369–374, Lawrence Erlbaum Associates, Mahwah, NJ, USA, January 1995.
[66]  G. Biele, J. Rieskamp, and R. Gonzalez, “Computational models for the combination of advice and individual learning,” Cognitive Science, vol. 33, no. 2, pp. 206–242, 2009.
[67]  B. B. Doll, W. J. Jacobs, A. G. Sanfey, and M. J. Frank, “Instructional control of reinforcement learning: a behavioral and neurocomputational investigation,” Brain Research, vol. 1299, pp. 74–94, 2009.
[68]  J. Li, M. R. Delgado, and E. A. Phelps, “How instructed knowledge modulates the neural systems of reward learning,” Proceedings of the National Academy of Sciences of the United States of America, vol. 108, no. 1, pp. 55–60, 2011.
[69]  M. M. Walsh and J. R. Anderson, “Modulation of the feedback-related negativity by instruction and experience,” Proceedings of the National Academy of Sciences of the United States of America, vol. 108, no. 47, pp. 19048–19053, 2011.
[70]  T. T. Rogers and J. L. McClelland, Semantic Cognition: A Parallel Distributed Processing Approach, MIT Press, Cambridge, Mass, USA, 2004.
[71]  S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, Prentice Hall, 1995.
[72]  F. Bacchus, “AIPS '00 planning competition: the fifth international conference on Artificial Intelligence Planning and Scheduling systems,” AI Magazine, vol. 22, no. 3, pp. 47–56, 2001.
[73]  E. D. Sacerdoti, “Planning in a hierarchy of abstraction spaces,” Artificial Intelligence, vol. 5, no. 2, pp. 115–135, 1974.
[74]  M. B. Do and S. Kambhampati, “Planning as constraint satisfaction: solving the planning graph by compiling it into CSP,” Artificial Intelligence, vol. 132, no. 2, pp. 151–182, 2001.
[75]  P. Gregory, D. Long, and M. Fox, “Constraint based planning with composable substate graphs,” in Proceedings of the 19th European Conference on Artificial Intelligence (ECAI '10), H. Coelho, R. Studer, and M. Wooldridge, Eds., IOS Press, 2010.
[76]  A. L. Blum and M. L. Furst, “Fast planning through planning graph analysis,” Artificial Intelligence, vol. 90, no. 1-2, pp. 281–300, 1997.
[77]  E. Fink and M. M. Veloso, “Formalizing the prodigy planning algorithm,” Tech. Rep. 1-1-1996, 1996.
[78]  P. Dayan, “Bilinearity, rules, and prefrontal cortex,” Frontiers in Computational Neuroscience, vol. 1, no. 1, pp. 1–14, 2007.
[79]  S. Thrun and L. Pratt, “Learning to learn: introduction and overview,” in Learning To Learn, S. Thrun and L. Pratt, Eds., Springer, New York, NY, USA, 1998.
[80]  J. Baxter, “A bayesian/information theoretic model of learning to learn via multiple task sampling,” Machine Learning, vol. 28, no. 1, pp. 7–39, 1997.
[81]  G. Konidaris and A. Barto, “Building portable options: skill transfer in reinforcement learning,” in Proceedings of the 20th International Joint Conference on Artificial Intelligence, M. M. Veloso, Ed., pp. 895–900, 2006.
[82]  K. Ferguson and S. Mahadevan, “Proto-transfer learning in markov decision processes using spectral methods,” in Proceedings of the Workshop on Structural Knowledge Transfer for Machine Learning (ICML '06), 2006.
[83]  R. C. O'Reilly, D. Wyatte, S. Herd, B. Mingus, and D. J. Jilk, “Recurrent processing during object recognition,” Frontiers in Psychology, vol. 4, article 124, 2013.
