
PLOS ONE  2008 

Visual Learning in Multiple-Object Tracking

DOI: 10.1371/journal.pone.0002228


Abstract:

Background: Tracking moving objects in space is important for maintaining spatiotemporal continuity in everyday visual tasks. In the laboratory, this ability is tested with the Multiple Object Tracking (MOT) task, in which participants attentively track a subset of moving objects over an extended period of time. The ability to track multiple objects with attention is severely limited. Recent research has shown that this ability may improve with extensive practice (e.g., from action videogame playing). However, whether tracking also improves within a short training session with repeated trajectories has rarely been investigated. In this study we examine the role of visual learning in multiple-object tracking and characterize how varieties of attention interact with visual learning.

Methodology/Principal Findings: Participants first performed attentive tracking in a short session of trials with repeated motion trajectories. In a transfer phase we used the same motion trajectories but changed the roles of tracking targets and nontargets. Compared with novel trials, tracking was enhanced only when the target subset was the same as that used during training. Learning did not transfer when the previously trained targets and nontargets switched roles or were mixed. However, learning was not specific to the trained temporal order: it transferred to trials in which the motion was played backwards.

Conclusions/Significance: These findings suggest that the demanding task of tracking multiple objects can benefit from learning of repeated motion trajectories. Such learning potentially facilitates tracking in natural vision, although it is largely confined to the trajectories of attended objects. Furthermore, we showed that learning in attentive tracking relies on relational coding of all target trajectories. Surprisingly, learning was not specific to the trained temporal context, probably because observers learned the motion path of each trajectory independently of the exact temporal order.
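The transfer logic of the design can be made concrete with a small sketch: the same trained trajectories are reused while only the tracked subset (or the direction of playback) changes. This is an illustrative stand-in, not the authors' stimulus code; the trajectory generator, parameter values, and condition names (`same`, `switched`, `backward`) are all assumptions for exposition, and the "mixed" condition from the abstract is omitted for brevity.

```python
import random

def make_trajectories(n_objects=8, n_frames=60, seed=0):
    """Random-walk (x, y) paths, one per object: a toy stand-in
    for the repeated motion displays used during training."""
    rng = random.Random(seed)
    trajectories = []
    for _ in range(n_objects):
        x, y = rng.uniform(0.0, 1.0), rng.uniform(0.0, 1.0)
        path = [(x, y)]
        for _ in range(n_frames - 1):
            x += rng.uniform(-0.02, 0.02)  # small step per frame
            y += rng.uniform(-0.02, 0.02)
            path.append((x, y))
        trajectories.append(path)
    return trajectories

def transfer_trial(trajectories, trained_targets, condition):
    """Reuse the trained trajectories while manipulating which
    subset serves as tracking targets (or the temporal order)."""
    all_ids = set(range(len(trajectories)))
    if condition == "same":        # trained targets remain targets
        targets = set(trained_targets)
    elif condition == "switched":  # trained nontargets become targets
        targets = all_ids - set(trained_targets)
    elif condition == "backward":  # same targets, motion played in reverse
        trajectories = [path[::-1] for path in trajectories]
        targets = set(trained_targets)
    else:
        raise ValueError(f"unknown condition: {condition}")
    return trajectories, targets
```

Under this framing, the reported result is that tracking benefits survive the `backward` manipulation but not the `switched` one, even though the physical trajectories are identical in every condition.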

