
OALib Journal
ISSN: 2333-9721


Prediction of Peer-Reviewed Paper Acceptance Based on Multi-Task Learning

DOI: 10.12677/mos.2024.133254, PP. 2804-2814

Keywords: Peer Review, BERT, Multi-Task Learning, LSTM, CNN, Attention Mechanisms


Abstract:

Peer-reviewed paper acceptance prediction is a task of great significance, as it can effectively improve the efficiency and quality of peer review. Most previous acceptance-prediction methods are formulated as a single task: they neither make full use of auxiliary information such as paper ratings nor effectively extract the semantic features of peer-review text. To address these problems, this paper proposes a multi-task peer-review text analysis model, BCLJ (BERT-CNN-LSTM Joint Model). First, BERT is used to obtain a word-vector matrix representation of the text; then, a Convolutional Neural Network (CNN) and a Long Short-Term Memory network (LSTM) are introduced to extract semantic features, and an attention mechanism is applied to strengthen the model's understanding of the textual information; finally, separate fully connected layers perform multi-task learning, producing two outputs, paper acceptance prediction and rating prediction, with the rating-prediction task used to optimize the main classification task. Experimental results show that the multi-task model outperforms the baseline models on both tasks: on acceptance prediction it achieves an accuracy of 0.7117 and an F1 score of 0.7101, and on rating prediction its MSE, RMSE, and MAE are 1.3690, 1.1700, and 0.9324, respectively.
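The abstract describes joint training on two heads: a classification head for acceptance and a regression head for rating, with the rating task regularizing the main task. A minimal sketch of such a joint objective is shown below; the task-weighting factor `lam` and the two-class accept/reject setup are assumptions for illustration, since the abstract does not specify the paper's actual loss weighting.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def joint_loss(cls_logits, cls_labels, score_preds, score_targets, lam=0.5):
    """Weighted sum of the acceptance-classification loss (cross-entropy)
    and the rating-regression loss (MSE). `lam` is a hypothetical weight,
    not taken from the paper."""
    probs = softmax(cls_logits)
    # cross-entropy of the probability assigned to the true class
    ce = -np.mean(np.log(probs[np.arange(len(cls_labels)), cls_labels] + 1e-12))
    mse = np.mean((score_preds - score_targets) ** 2)
    return ce + lam * mse

# Toy batch of 2 papers: accept/reject logits and predicted review scores
logits = np.array([[2.0, 0.5], [0.2, 1.5]])
labels = np.array([0, 1])          # 0 = accept, 1 = reject (illustrative)
preds  = np.array([6.1, 3.9])      # predicted mean review scores
truth  = np.array([6.0, 4.0])
loss = joint_loss(logits, labels, preds, truth)
```

In practice both heads would share the BERT-CNN-LSTM-attention encoder, so gradients from the rating term flow back through the shared layers and shape the features used by the classifier.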

References

[1]  Fernandes, G.L. and Vaz-de-Melo, P.O.S. (2022) Between Acceptance and Rejection: Challenges for an Automatic Peer Review Process. Proceedings of the 22nd ACM/IEEE Joint Conference on Digital Libraries, Cologne, 20-24 June 2022, 1-12.
https://doi.org/10.1145/3529372.3530935
[2]  Pang, B. and Lee, L. (2005) Seeing Stars: Exploiting Class Relationships for Sentiment Categorization with Respect to Rating Scales. Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), Ann Arbor, 25-30 June 2005, 115-124.
https://doi.org/10.3115/1219840.1219855
[3]  Keith, B., Fuentes, E. and Meneses, C. (2017) A Hybrid Approach for Sentiment Analysis Applied to Paper Reviews. Proceedings of ACM SIGKDD Conference, Halifax, August 2017, 10.
[4]  Ribeiro, A.C., Sizo, A., Lopes Cardoso, H., et al. (2021) Acceptance Decision Prediction in Peer-Review through Sentiment Analysis. Springer International Publishing, Cham, 766-777.
https://doi.org/10.1007/978-3-030-86230-5_60
[5]  Li, S., Zhao, W.X., Yin, E.J., et al. (2019) A Neural Citation Count Prediction Model Based on Peer Review Text. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, November 2019, 4914-4924.
https://doi.org/10.18653/v1/D19-1497
[6]  Leng, Y., Yu, L. and Xiong, J. (2019) DeepReviewer: Collaborative Grammar and Innovation Neural Network for Automatic Paper Review. 2019 International Conference on Multimodal Interaction, Suzhou, October 2019, 395-403.
https://doi.org/10.1145/3340555.3353766
[7]  Deng, Z., Peng, H., Xia, C., et al. (2020) Hierarchical Bi-Directional Self-Attention Networks for Paper Review Rating Recommendation. Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, November 2020, 6302-6314.
https://doi.org/10.18653/v1/2020.coling-main.555
[8]  Chen, H.Y., Hu, W.J. and Lu, Y.H. (2024) Research on Automatic Review Classification in Open Peer Review. Journal of Modern Information, 44(5), 95-106. (in Chinese)
[9]  Lin, Y., Wang, K.Q. and Ding, K. (2021) Research on Quantifying the Qualitative Evaluation of Academic Papers. Information Studies: Theory & Application, 44(8), 28-34. (in Chinese)
[10]  Li, J., Sato, A., Shimura, K., et al. (2020) Multi-Task Peer-Review Score Prediction. Proceedings of the First Workshop on Scholarly Document Processing, Stroudsburg, November 2020, 121-126.
https://doi.org/10.18653/v1/2020.sdp-1.14
[11]  Zhu, J.Q., Tan, J., Han, B.B., et al. (2024) A Fine-Grained Sentiment Analysis Model for Peer Review Based on Multi-Task Learning. Journal of North University of China (Natural Science Edition), 45(1), 105-113. (in Chinese)
[12]  Vandenhende, S., Georgoulis, S., Van Gansbeke, W., et al. (2021) Multi-Task Learning for Dense Prediction Tasks: A Survey. IEEE Transactions on Pattern Analysis & Machine Intelligence, 44, 3614-3633.
https://doi.org/10.1109/TPAMI.2021.3054719
[13]  Devlin, J., Chang, M.W., Lee, K. and Toutanova, K. (2019) BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. Proceedings of NAACL-HLT 2019, Minneapolis, June 2019, 4171-4186.
[14]  Hinton, G.E., Srivastava, N., Krizhevsky, A., et al. (2012) Improving Neural Networks by Preventing Co-Adaptation of Feature Detectors.
https://doi.org/10.48550/arXiv.1207.0580
[15]  Kim, Y. (2014) Convolutional Neural Networks for Sentence Classification. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, October 2014, 1746-1751.
https://doi.org/10.3115/v1/D14-1181
[16]  Zhou, P., Shi, W., Tian, J., et al. (2016) Attention-Based Bidirectional Long Short-Term Memory Networks for Relation Classification. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, Berlin, August 2016, 207-212.
https://doi.org/10.18653/v1/P16-2034
[17]  Yang, P., Sun, X., Li, W., et al. (2018) Automatic Academic Paper Rating Based on Modularized Hierarchical Convolutional Neural Network. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Melbourne, July 2018, 496-502.
https://doi.org/10.18653/v1/P18-2079
