Research on Causal Relation Options in Reading Comprehension
2018
Abstract:
For causal relation options in multiple-choice reading comprehension questions, this paper proposes a causal relation support analysis method based on a causal relation network. First, causal event pairs are extracted from the reading material using clue phrases, the causal association strength between the events of each pair is computed, and the extracted event pairs together with their association strengths are combined into a causal relation network. Second, considering both the importance of each option word within the document and its discriminative power over the whole document, the TF-IDF (term frequency-inverse document frequency) method is used to retrieve from the source text the sentences related to the cause event and to the effect event of each option. Finally, the causal relation support of each option is computed from the causal relation network and the retrieved sentences. The method was evaluated on a test set of 769 mock-exam passages and 13 Beijing college entrance examination Chinese passages (including the source texts and the multiple-choice questions); the experimental results show an accuracy improvement of about 11% over the baseline method.
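To make the pipeline described in the abstract more concrete, the sketch below is a minimal, self-contained illustration of two of its steps: retrieving option-related sentences with TF-IDF, and combining a causal relation network with the retrieved sentences into a support score. The toy passage, the pre-segmented tokens, the `causal_net` entries, and the 0.5/0.5 weighting are all illustrative assumptions; the paper's actual clue-phrase extraction of causal event pairs and its scoring formula are not reproduced here.

```python
# Minimal sketch (not the paper's implementation): TF-IDF sentence retrieval
# for an option's cause/effect events plus a naive causal support score.
import math
from collections import Counter

# Toy passage, pre-segmented into space-separated tokens (a real system would
# run a Chinese word segmenter first).
passage = [
    "持续 干旱 导致 农作物 大幅 减产",
    "因为 减产 , 粮食 价格 明显 上涨",
    "当地 政府 随后 调拨 储备粮 平抑 价格",
]
docs = [s.split() for s in passage]
df = Counter(w for d in docs for w in set(d))  # document frequency per word
N = len(docs)

def tfidf(tokens):
    """TF-IDF weights of one tokenized sentence against the passage statistics."""
    tf = Counter(tokens)
    return {w: (tf[w] / len(tokens)) * math.log(N / (1 + df[w])) for w in tf}

def cosine(u, v):
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

doc_vecs = [tfidf(d) for d in docs]

def retrieve(event_text, k=1):
    """Return the k passage sentences most similar to a cause/effect event phrase."""
    q = tfidf(event_text.split())
    ranked = sorted(zip(passage, doc_vecs),
                    key=lambda p: cosine(q, p[1]), reverse=True)
    return [s for s, _ in ranked[:k]]

# Hypothetical causal relation network: (cause, effect) -> association strength,
# as could be built from event pairs extracted with clue phrases such as "导致"/"因为".
causal_net = {("干旱", "减产"): 0.8, ("减产", "价格 上涨"): 0.7}

def causal_support(cause, effect):
    """Naively combine network strength with retrieved-sentence similarity."""
    strength = max((w for (c, e), w in causal_net.items()
                    if c in cause and e in effect), default=0.0)
    sim = cosine(tfidf(retrieve(cause)[0].split()),
                 tfidf(retrieve(effect)[0].split()))
    return 0.5 * strength + 0.5 * sim  # arbitrary weighting, for illustration only

# Score an option claiming that reduced crop yields caused grain prices to rise.
print(causal_support("农作物 减产", "粮食 价格 上涨"))
```

In the paper, the cause and effect events of each option are matched against the passage separately and the network is built from automatically extracted event pairs weighted by causal association strength; here both structures are hard-coded simply to keep the example runnable.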