OALib Journal
ISSN: 2333-9721

Knowledge Base Question Answering Method Based on Semantic Fusion

DOI: 10.12677/AIRR.2023.124031, PP. 281-291

Keywords: Knowledge Base Question Answering, Contrastive Learning, Semantic Fusion, Similarity


Abstract:

Knowledge base question answering (KBQA) is an important component of question answering systems. However, most existing KBQA systems focus on simple questions answerable by a single triple query, and their accuracy drops on complex questions involving multiple entities and relations. To improve accuracy, this paper adopts contrastive learning to compute semantic similarity and proposes a semantic joint modeling framework that models entity disambiguation and relation matching as a single task, avoiding the error propagation of a two-stage pipeline. The method is evaluated on the CCKS2019 KBQA dataset. Experimental results show that the contrastive learning model outperforms the BERT model at computing semantic similarity, and that joint semantic modeling is more effective than performing entity disambiguation first and relation matching second.
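The abstract describes two ideas: scoring candidates by contrastive semantic similarity, and scoring entity disambiguation and relation matching jointly rather than as a pipeline. The paper's actual models are not given here, so the sketch below is only a minimal illustration of those two ideas: a cosine-similarity scorer, an InfoNCE-style contrastive loss over one positive and several negative candidates, and a joint ranker in which each candidate embedding is assumed to encode the concatenated entity + relation text (all function names and the embedding format are hypothetical, not from the paper).

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce_loss(q_emb, pos_emb, neg_embs, tau=0.05):
    """InfoNCE-style contrastive loss: pull the gold candidate toward the
    question embedding, push negative candidates away. tau is a temperature."""
    logits = [cosine(q_emb, pos_emb) / tau]
    logits += [cosine(q_emb, n) / tau for n in neg_embs]
    m = max(logits)  # stable log-sum-exp
    log_z = m + math.log(sum(math.exp(s - m) for s in logits))
    return log_z - logits[0]  # -log softmax probability of the positive

def rank_joint_candidates(q_emb, candidates):
    """Rank (text, embedding) candidates jointly. Each candidate embedding is
    assumed to encode the concatenated entity + relation text, so entity
    disambiguation and relation matching are scored in one pass instead of
    two pipelined stages."""
    return sorted(candidates, key=lambda c: cosine(q_emb, c[1]), reverse=True)
```

For example, with a question embedding and two candidate (entity | relation) strings, `rank_joint_candidates` returns the candidate whose joint embedding is closest to the question, and `info_nce_loss` is small when the positive candidate is already the nearest one. In practice the embeddings would come from a trained encoder (e.g. a BERT-style model fine-tuned with the contrastive objective), not from hand-written vectors.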

