%0 Journal Article %T A Method Incorporating Syntax Attention for Sentence Compression %A 郝志峰 %A 陈诚 %A 蔡瑞初 %A 温雯 %A 王丽娟 %J Computer Science and Application %P 564-574 %@ 2161-881X %D 2020 %I Hans Publishing %R 10.12677/CSA.2020.103058 %X
English sentence compression with deep learning is constrained by vocabulary size, among other limits, so compression models are prone to deleting keywords by mistake, changing the meaning of the original sentence and, to some extent, damaging its grammatical structure. To address this problem, this paper proposes a sentence compression method that incorporates syntax attention. First, two encoder-decoder pairs encode and decode words and part-of-speech tags separately; in the decoding stage, a long short-term memory network with syntax gates (Syntax-LSTM) fuses the word and part-of-speech information into a syntax attention mechanism that guides the output toward grammatical compressions. Experimental results show that the method's F1 reaches 0.7742 on the in-domain dataset and 0.4186 on the cross-domain dataset, demonstrating that its outputs are more readable and more robust than those of existing methods.
%K Sentence Compression %K Syntax Attention Mechanism %K Long Short-Term Memory %K Robustness %U http://www.hanspub.org/journal/PaperInformation.aspx?PaperID=34795
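The abstract above describes a dual-stream architecture: words and part-of-speech tags are encoded separately, and a Syntax-LSTM decoder uses syntax gates to fuse the two streams into a syntax attention signal. The following is a minimal, hypothetical PyTorch sketch of that idea, framed as token-level keep/delete labeling (a common formulation of deletion-based compression); the class name, the sigmoid gate, the attention form, and all dimensions are assumptions for illustration, not the paper's specification.

```python
# Hypothetical sketch of the dual-encoder idea in the abstract: words and
# POS tags are encoded by separate BiLSTMs, and at each decoding step a
# learned "syntax gate" (an assumption; the paper's exact Syntax-LSTM gating
# is not given in the abstract) blends attention contexts from the two
# streams before predicting a keep/delete label for each token.
import torch
import torch.nn as nn


class SyntaxAttnCompressor(nn.Module):
    def __init__(self, vocab_size, pos_size, emb=128, hid=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb)
        self.pos_emb = nn.Embedding(pos_size, emb)
        # Two separate encoders: one over words, one over POS tags.
        self.word_enc = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.pos_enc = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.dec = nn.LSTMCell(emb + 4 * hid, hid)
        self.attn_w = nn.Linear(hid, 2 * hid)  # bilinear scores, word states
        self.attn_p = nn.Linear(hid, 2 * hid)  # bilinear scores, POS states
        self.gate = nn.Linear(hid, 1)          # hypothetical "syntax gate"
        self.out = nn.Linear(hid, 2)           # keep / delete per token

    def forward(self, words, pos):
        B, T = words.shape
        hw, _ = self.word_enc(self.word_emb(words))  # (B, T, 2*hid)
        hp, _ = self.pos_enc(self.pos_emb(pos))      # (B, T, 2*hid)
        h = words.new_zeros(B, self.dec.hidden_size, dtype=torch.float)
        c = torch.zeros_like(h)
        logits = []
        for t in range(T):
            # Attention over each stream, conditioned on the decoder state.
            aw = torch.softmax((self.attn_w(h).unsqueeze(1) * hw).sum(-1), -1)
            ap = torch.softmax((self.attn_p(h).unsqueeze(1) * hp).sum(-1), -1)
            cw = (aw.unsqueeze(-1) * hw).sum(1)      # word context
            cp = (ap.unsqueeze(-1) * hp).sum(1)      # POS context
            g = torch.sigmoid(self.gate(h))          # blend weight in (0, 1)
            ctx = torch.cat([g * cw, (1 - g) * cp], dim=-1)
            step_in = torch.cat([self.word_emb(words[:, t]), ctx], dim=-1)
            h, c = self.dec(step_in, (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)            # (B, T, 2)
```

The sigmoid gate is only a plausible stand-in for the paper's syntax gates: it lets the decoder decide, per step, how much to weight syntactic context against lexical context when choosing which tokens to keep.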