
OALib Journal (ISSN: 2333-9721)

2018

Which Memory Store Does Implicit Learning of Nonlocal Dependencies Use? Evidence from Neural Network Simulations

Keywords: nonlocal dependencies, implicit learning, memory buffer, neural network simulations


Abstract:

Research has not yet settled how knowledge of nonlocal dependencies is learned implicitly. Using the same materials and procedures as human participants, this study examined whether the Simple Recurrent Network (SRN) can implicitly learn two nonlocal rules over Chinese tones: inversion and retrograde. The results showed that (1) across a wide range of parameter values, the SRN learned both the inversion and the retrograde rules, indicating that the model's memory buffer can simulate human implicit learning of nonlocal dependencies; and (2) the SRN learned inversions better than retrogrades, suggesting that, functionally, implicit learning of nonlocal rules may preferentially rely on a first-in-first-out memory store and its corresponding mode of information processing. These findings offer new evidence and a new perspective on the mechanisms of implicit learning of nonlocal dependencies.
Abstract: In the implicit-learning literature, a basic question about how knowledge of structures and regularities is acquired is whether the learning mechanism uses a temporary storage buffer and, if so, what the nature of that buffer is. Recently, Li et al. (2013) found that people acquired unconscious structural knowledge of both Chinese tonal retrogrades and inversions. Moreover, inversions were implicitly learnt more easily than retrogrades, a pattern predicted by implicit learning that uses a first-in-first-out buffer rather than a last-in-first-out buffer. However, because Chinese Tang poetry uses an inversion structure that participants were likely exposed to as children, it is not clear whether prior expectations of structures instantiating inversions could override the effect of the type of buffer the system uses. A network, by contrast, has no such prior knowledge. Accordingly, the present study investigated whether the Simple Recurrent Network (SRN), which uses a buffer to allow learning of nonlocal dependencies, could learn tonal inversions and retrogrades and replicate the advantage of inversions over retrogrades. The SRN was tested on the same materials and procedures as Li et al. (2013). The networks were assigned to the four cells of a 2 (training: trained vs. untrained) × 2 (rule: inversion vs. retrograde) design. The simulations were carried out using all possible permutations of the parameter values, resulting in 150 different models for each group. The materials were strings of tonal syllables. Each string consisted of 10 different tonal syllables, in which the tone types (pings and zes) of the first five syllables predicted the tone types of the following five by forming an inversion or a retrograde. In the training phase, 144 grammatical strings were presented to the two trained groups.
In the test phase, the four groups of networks were presented with 48 test sequences (half grammatical and half ungrammatical), and their ability to predict the next tone in the predictable second five elements was used as an index of performance. T-tests (with Bonferroni correction) showed that trained networks performed significantly better than untrained networks for both the inversion and retrograde groups, suggesting that the networks learnt the two rules. Moreover, for both trained and untrained groups, the inversion group
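The simulation design described above can be sketched minimally in code. The sketch below is an illustrative assumption, not the authors' implementation: the binary ping/ze coding, hidden-layer size, the exact definitions of "inversion" (each tone flipped, same order, so position i predicts position i+5, a FIFO-friendly fixed lag) and "retrograde" (first half repeated in reverse order, a LIFO-like pattern), and the output-layer-only weight update are all simplifications chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_string(rule):
    """First five tone types (ping=0, ze=1) are random; the last five are
    determined by the rule. These rule definitions are assumptions for
    illustration: inversion = flip each tone in the same order,
    retrograde = repeat the first half in reverse order."""
    head = rng.integers(0, 2, size=5)
    if rule == "inversion":
        tail = 1 - head        # position i predicts i+5: fixed lag (FIFO-friendly)
    else:                      # "retrograde"
        tail = head[::-1]      # position i predicts 11-i: stack-like (LIFO)
    return np.concatenate([head, tail])

class SRN:
    """Minimal Elman-style network: a hidden layer whose previous state
    feeds back as context, trained to predict the next tone type."""
    def __init__(self, n_in=2, n_hid=8, n_out=2, lr=0.1):
        self.Wxh = rng.normal(0, 0.5, (n_in, n_hid))
        self.Whh = rng.normal(0, 0.5, (n_hid, n_hid))   # context (recurrent) weights
        self.Why = rng.normal(0, 0.5, (n_hid, n_out))
        self.lr = lr
        self.n_hid = n_hid

    def forward(self, seq):
        """Return softmax next-tone predictions at each time step."""
        h = np.zeros(self.n_hid)
        probs = []
        for t in seq:
            x = np.eye(2)[t]
            h = np.tanh(x @ self.Wxh + h @ self.Whh)    # hidden + context layer
            z = h @ self.Why
            e = np.exp(z - z.max())
            probs.append(e / e.sum())
        return np.array(probs)

    def train_step(self, seq):
        """Cross-entropy training on next-tone prediction. For brevity,
        only the output weights are updated (a simplification of full
        Elman-network training). Returns the mean loss over the string."""
        h = np.zeros(self.n_hid)
        loss = 0.0
        for t in range(len(seq) - 1):
            x = np.eye(2)[seq[t]]
            h = np.tanh(x @ self.Wxh + h @ self.Whh)
            z = h @ self.Why
            e = np.exp(z - z.max())
            p = e / e.sum()
            target = seq[t + 1]
            loss -= np.log(p[target] + 1e-12)
            dz = p.copy()
            dz[target] -= 1.0                            # softmax cross-entropy gradient
            self.Why -= self.lr * np.outer(h, dz)
        return loss / (len(seq) - 1)
```

In a full simulation one would train a network on many grammatical strings (the abstract reports 144 per trained group), then score grammatical versus ungrammatical test strings by prediction accuracy on the second five elements; the FIFO/LIFO contrast is then the comparison of inversion-group and retrograde-group performance.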
