Application of Cross-Lingual Acoustic Models in Uyghur Speech Recognition (2018)
Abstract: Only a small amount of Uyghur speech data is available for training acoustic models, owing to various difficulties in data acquisition and annotation. This paper describes a cross-lingual acoustic modeling method based on long short-term memory (LSTM) networks. A deep neural network acoustic model is first trained on a large Chinese corpus. The weights of the Chinese output layer are then discarded and replaced by randomly initialized weights corresponding to the Uyghur output layer, and all weights are updated by backpropagation on Uyghur speech data to train the Uyghur acoustic model. Experiments show that this method reduces the word error rates of Uyghur transcription and dictation recognition by a relative 20% and 30%, respectively, compared with the baseline system. By using the large Chinese corpus to train the hidden layers of the network, the method lets the Uyghur acoustic model start from a good set of initial weights and improves the robustness of the network.
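The transfer step described in the abstract can be illustrated with a short sketch. The following minimal example uses PyTorch and assumes an LSTM acoustic model that maps acoustic feature frames to senone posteriors; the layer sizes, senone counts, file name, and optimizer settings are hypothetical placeholders, not the configuration reported in the paper.

# Minimal sketch of the cross-lingual transfer described in the abstract.
# All sizes, names, and optimizer choices are illustrative assumptions.
import torch
import torch.nn as nn

class LSTMAcousticModel(nn.Module):
    """LSTM acoustic model: feature frames in, senone posteriors out."""
    def __init__(self, feat_dim, hidden_dim, num_layers, num_senones):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, num_layers, batch_first=True)
        self.output = nn.Linear(hidden_dim, num_senones)  # language-specific layer

    def forward(self, x):
        h, _ = self.lstm(x)
        return self.output(h)

# 1. Source model, assumed to be trained on the large Chinese corpus.
src_model = LSTMAcousticModel(feat_dim=40, hidden_dim=512, num_layers=3,
                              num_senones=6000)   # hypothetical Chinese senone count
# src_model.load_state_dict(torch.load("chinese_am.pt"))  # hypothetical checkpoint

# 2. Build the Uyghur model: copy the hidden (LSTM) layers, discard the Chinese
#    output layer, and keep a randomly initialized output layer sized for the
#    Uyghur senone set.
uyg_model = LSTMAcousticModel(feat_dim=40, hidden_dim=512, num_layers=3,
                              num_senones=3000)   # hypothetical Uyghur senone count
uyg_model.lstm.load_state_dict(src_model.lstm.state_dict())
# uyg_model.output keeps its default random initialization.

# 3. Fine-tune ALL weights (hidden layers plus the new output layer) with
#    backpropagation on the Uyghur speech data.
optimizer = torch.optim.SGD(uyg_model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(feats, senone_labels):
    # feats: (batch, frames, feat_dim); senone_labels: (batch, frames)
    optimizer.zero_grad()
    logits = uyg_model(feats)                      # (batch, frames, num_senones)
    loss = criterion(logits.reshape(-1, logits.size(-1)),
                     senone_labels.reshape(-1))
    loss.backward()
    optimizer.step()
    return loss.item()

The key point mirrored here is that only the output layer is re-created at random; the hidden LSTM layers start from the Chinese-trained weights, and backpropagation then updates every parameter on the Uyghur data.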
[1] ROBINSON A J. An application of recurrent nets to phone probability estimation[J]. IEEE Transactions on Neural Networks, 1994, 5(2): 298-305.
[2] MAIMAITIAILI T, DAI L R. Deep neural network based Uyghur large vocabulary continuous speech recognition[J]. Journal of Data Acquisition and Processing, 2015, 30(2): 365-371. (in Chinese)
[3] QIMIKE B, HUANG H, WANG X H. Uyghur speech recognition based on deep neural network[J]. Computer Engineering and Design, 2015, 36(8): 2239-2244. (in Chinese)
[4] LIU L Q, ZHENG F, WU W H. Small dataset-based acoustic modeling for dialectal Chinese speech recognition[J]. Journal of Tsinghua University (Science and Technology), 2008, 48(4): 604-607. (in Chinese)
[5] SCHULTZ T, WAIBEL A. Experiments on cross-language acoustic modeling[C]//The 7th European Conference on Speech Communication and Technology. Aalborg, Denmark, 2001: 2721-2724.
[6] POVEY D, BURGET L, AGARWAL M, et al. The subspace Gaussian mixture model: A structured model for speech recognition[J]. Computer Speech & Language, 2011, 25(2): 404-439.
[7] BURGET L, SCHWARZ P, AGARWAL M, et al. Multilingual acoustic modeling for speech recognition based on subspace Gaussian mixture models[C]//IEEE International Conference on Acoustics, Speech and Signal Processing. Dallas, USA, 2010: 4334-4337.
[8] STOLCKE A, GREZL F, HWANG M Y, et al. Cross-domain and cross-language portability of acoustic features estimated by multilayer perceptron[C]//IEEE International Conference on Acoustics, Speech and Signal Processing. Toulouse, France, 2006: 321-324.
[9] VESELÝ K, KARAFIÁT M, GRÉZL F, et al. The language-independent bottleneck features[C]//2012 Workshop on Spoken Language Technology. Miami, USA, 2012: 336-341.
[10] SWIETOJANSKI P, GHOSHAL A, RENALS S. Unsupervised cross-lingual knowledge transfer in DNN-based LVCSR[C]//2012 Workshop on Spoken Language Technology. Miami, USA, 2012: 246-251.
[11] SIM K C, LI H. Context-sensitive probabilistic phone mapping model for cross-lingual speech recognition[C]//9th Annual Conference of the International Speech Communication Association. Brisbane, Australia, 2008: 2715-2718.
[12] DO V H, XIAO X, CHNG E S, et al. Context dependant phone mapping for cross-lingual acoustic modeling[C]//2012 8th International Symposium on Chinese Spoken Language Processing. Hong Kong, China, 2012: 16-20.
[13] HUANG J T, LI J, YU D, et al. Cross-language knowledge transfer using multilingual deep neural network with shared hidden layers[C]//IEEE International Conference on Acoustics, Speech and Signal Processing. Vancouver, Canada, 2013: 7304-7308.