OALib Journal, ISSN: 2333-9721

Aspect-Based Sentiment Analysis Based on Multi-Layer Features of BERT

DOI: 10.12677/CSA.2020.1012226, pp. 2147-2158

Keywords: Deep Learning, Aspect-Level Sentiment Analysis, BERT, Fine-Tuning, Aspect Feature, Classification Feature, Convolutional Neural Networks


Abstract:

Compared with traditional word vectors (one-hot, word2vec, etc.), the BERT (Bidirectional Encoder Representations from Transformers) pretrained language model represents words dynamically in context and achieved state-of-the-art results on 11 downstream tasks. Fine-tuning BERT on a task-specific corpus has become a widely used approach in sentiment analysis and performs well; however, such models typically use only the output features of BERT's last encoder layer for classification, ignoring the semantic features learned by the other layers. Unlike previous BERT-based classification models, BERT-MLF fuses the aspect features output by every encoder layer of BERT and extracts the key semantic features across layers through a convolutional layer, reducing the influence of redundant information and making full use of the information learned by each encoder layer. Extensive experiments on the Laptop and Restaurant datasets of SemEval-2014 Task 4 show that the method achieves good classification performance.
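
The abstract describes the BERT-MLF pipeline in enough detail to sketch it: collect the aspect-token features from every BERT encoder layer, stack them, run a convolution across the layer dimension to extract key cross-layer semantics, and classify the fused vector. Below is a minimal PyTorch sketch of that reading, assuming the Hugging Face transformers API; the class name, the mean-pooling of aspect tokens, the kernel size, and the single-channel convolution are all illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn
from transformers import BertModel

class BertMLFSketch(nn.Module):
    # Hypothetical sketch of BERT-MLF: fuse per-layer aspect features
    # with a convolution over the layer dimension, then classify.
    def __init__(self, num_classes=3, num_layers=12, hidden=768):
        super().__init__()
        self.bert = BertModel.from_pretrained(
            "bert-base-uncased", output_hidden_states=True)
        # Layers act as input channels; the convolution picks out key
        # semantic features shared across encoder layers.
        self.conv = nn.Conv1d(num_layers, 1, kernel_size=3, padding=1)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, input_ids, attention_mask, aspect_mask):
        # hidden_states is a tuple of (num_layers + 1) tensors of shape
        # (batch, seq_len, hidden); index 0 is the embedding output.
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        layers = torch.stack(out.hidden_states[1:], dim=1)     # (B, L, S, H)
        # Mean-pool the aspect tokens in every layer; aspect_mask (B, S)
        # marks the positions of the aspect term with 1s.
        mask = aspect_mask.unsqueeze(1).unsqueeze(-1).float()  # (B, 1, S, 1)
        aspect = (layers * mask).sum(2) / mask.sum(2).clamp(min=1)  # (B, L, H)
        fused = self.conv(aspect).squeeze(1)                   # (B, H)
        return self.classifier(fused)

Treating the encoder layers as convolution channels is one plausible reading of "extracts the key semantic features across layers"; the published model may pool or weight the layers differently, so this sketch only fixes the shape of the idea.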
