Journal of Software (软件学报), 2005

Text-To-Visual Speech in Chinese Based on Data-Driven Approach

Keywords: text-to-speech (TTS), text-to-visual speech (TTVS), viseme, co-articulation


Abstract:

Text-to-visual speech (TTVS) synthesis by computer can increase speech intelligibility and make human-computer interaction interfaces friendlier. This paper describes a Chinese text-to-visual speech synthesis system based on a data-driven (sample-based) approach, realized by concatenating short video segments. An effective method is developed to construct two visual confusion trees, one for Chinese initials and one for finals. A co-articulation model based on visual distance and a hardness factor is proposed; it is used both to select recording-corpus sentences in the analysis phase and to select units in the synthesis phase. Obvious differences between the boundary images of concatenated video segments are smoothed by image morphing. Combined with acoustic text-to-speech (TTS) synthesis, a complete Chinese text-to-visual speech synthesis system is realized.
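To make the unit-selection step described in the abstract more concrete, the following Python sketch illustrates one way a boundary cost combining a visual distance and a hardness factor could drive the choice of video segments for concatenation. This is not the authors' implementation: the Unit structure, the distance table, the hardness weights, and the greedy left-to-right search are all assumptions introduced here for illustration.

# Minimal sketch (assumptions only, not the paper's actual algorithm) of
# unit selection for video-segment concatenation, where the join cost is
# a viseme-to-viseme visual distance weighted by a hardness factor.

from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Unit:
    """A candidate recorded video segment for one target syllable."""
    viseme_start: str   # viseme class at the segment's first frame
    viseme_end: str     # viseme class at the segment's last frame
    clip_id: int        # index of the clip in the recorded corpus

def visual_distance(v1: str, v2: str,
                    dist_table: Dict[Tuple[str, str], float]) -> float:
    """Distance between two viseme classes, e.g. as could be derived from
    visual confusion trees for Chinese initials and finals."""
    return dist_table.get((v1, v2), dist_table.get((v2, v1), 1.0))

def concat_cost(prev: Unit, nxt: Unit,
                dist_table: Dict[Tuple[str, str], float],
                hardness: Dict[str, float]) -> float:
    """Cost of joining two segments: the visual mismatch at the boundary,
    weighted by how strongly the adjacent visemes resist co-articulation
    (their 'hardness')."""
    d = visual_distance(prev.viseme_end, nxt.viseme_start, dist_table)
    w = 0.5 * (hardness.get(prev.viseme_end, 1.0)
               + hardness.get(nxt.viseme_start, 1.0))
    return w * d

def select_units(candidates: List[List[Unit]],
                 dist_table: Dict[Tuple[str, str], float],
                 hardness: Dict[str, float]) -> List[Unit]:
    """Greedy left-to-right selection: for each target syllable, pick the
    candidate whose start viseme joins most cheaply to the previously
    chosen segment. Each candidate list is assumed non-empty; a dynamic
    programming (Viterbi) search over the whole lattice would be the
    usual refinement."""
    chosen: List[Unit] = []
    for options in candidates:
        if not chosen:
            chosen.append(options[0])
            continue
        best = min(options,
                   key=lambda u: concat_cost(chosen[-1], u,
                                             dist_table, hardness))
        chosen.append(best)
    return chosen

In a real system the chosen segments would then be concatenated, with any remaining mismatch between the boundary frames smoothed by image morphing, as the abstract describes.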
