General Knowledge Prediction of Life Situations Based on Machine Reading Comprehension

DOI: 10.12677/AIRR.2022.112013, pp. 114-121

Keywords: Machine Learning, RNN, LSTM, BERT, General Knowledge of Life Scenarios


Abstract:

A long-term goal of machine learning research is to produce methods that handle reasoning and natural language and thereby support intelligent dialogue systems. This experiment evaluates reading comprehension through question answering about everyday life events. Models are trained on four task types from Facebook AI's bAbI tasks. The RNN, LSTM, and BERT models are configured with a sparse categorical cross-entropy loss over integer-encoded answers, and the single-label, multi-class categorical_accuracy function is used as the evaluation metric, measuring the number of correctly predicted samples in the dataset. The experimental results show that the answer-prediction accuracy of the RNN model is significantly higher than that of the LSTM and BERT models.
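To make the described training setup concrete, below is a minimal Keras sketch of one recurrent baseline. It is not the authors' code: the vocabulary size, sequence lengths, dual story/question encoder layout, hidden size, and optimizer are illustrative assumptions. It shows how integer-encoded bAbI answers pair with the sparse categorical cross-entropy loss; since labels stay integer-encoded, sparse_categorical_accuracy is used as the matching form of the categorical-accuracy metric. The BERT configuration is not shown.

# Minimal sketch, assuming bAbI stories/questions are already tokenized to
# padded integer sequences and each answer is a single integer word index.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

vocab_size = 50    # assumed vocabulary size for one bAbI task
story_len = 68     # assumed padded story length
question_len = 5   # assumed padded question length

# Encode the story and the question separately, then merge.
story_in = keras.Input(shape=(story_len,), dtype="int32")
question_in = keras.Input(shape=(question_len,), dtype="int32")

embed = layers.Embedding(vocab_size, 64)
story_vec = layers.LSTM(64)(embed(story_in))       # swap LSTM for SimpleRNN to get the RNN baseline
question_vec = layers.LSTM(64)(embed(question_in))

merged = layers.concatenate([story_vec, question_vec])
answer = layers.Dense(vocab_size, activation="softmax")(merged)

model = keras.Model([story_in, question_in], answer)
# Integer-encoded answer labels -> sparse categorical cross-entropy;
# sparse_categorical_accuracy is the integer-label counterpart of
# the categorical_accuracy metric named in the abstract.
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["sparse_categorical_accuracy"],
)

# Dummy data with the right shapes, only to demonstrate the training call.
x_story = np.random.randint(1, vocab_size, size=(32, story_len))
x_question = np.random.randint(1, vocab_size, size=(32, question_len))
y_answer = np.random.randint(1, vocab_size, size=(32,))
model.fit([x_story, x_question], y_answer, epochs=1, batch_size=8)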

