%0 Journal Article
%T Multimodal Sentiment Analysis Based on Improved Self-MM Model
%A 马健兵
%A 沈琪瀚
%A 崔翔浩
%J Computer Science and Application
%P 923-931
%@ 2161-881X
%D 2023
%I Hans Publishing
%R 10.12677/CSA.2023.134090
%X Early sentiment analysis relied on neural networks applied to a single modality such as text, image, or audio. Although these approaches achieved good results within their respective modalities, sentiment analysis on a single modality alone cannot fully capture people's emotions, so this paper combines information from multiple modalities for sentiment analysis. In this field, the Self-MM model has achieved good experimental results, but it still has room for improvement at the optimizer level. Building on Self-MM, this paper adopts the more advanced AdamW optimizer and validates the change on the public CMU-MOSI dataset; the experimental results show improvements of 0.12% and 0.43% in Acc-7 and Acc-2 classification accuracy, respectively.
%K Multimodal
%K Sentiment Analysis
%K Neural Networks
%U http://www.hanspub.org/journal/PaperInformation.aspx?PaperID=64737