Research
Research projects, papers, books, etc.
- Technical report
[Domestic conference] Using Continuous Representation of Various Linguistic Units for Recurrent Neural Network based TTS Synthesis
- #Speech Processing
- #Speech Synthesis
Information Processing Society of Japan (IPSJ), 110th Spoken Language Processing (SIG-SLP) Research Meeting
Building high-quality text-to-speech (TTS) systems without expert knowledge of the target language and/or time-consuming manual annotation of speech and text data is an important and challenging research topic in speech synthesis. Recently, the distributed representation of raw word inputs, called “word embedding”, has been used successfully in various natural language processing tasks. Moreover, word-embedding vectors have recently been used as additional or alternative linguistic input features to neural-network-based acoustic models for TTS systems. Since word-embedding approaches may provide a means of obtaining effective linguistic representations from text without requiring specialized knowledge of the language and/or time-consuming manual annotation, we further investigated the use of word embeddings for neural-network-based TTS systems in two new directions. First, in addition to the standard word-embedding vectors, we attempted to use phoneme, syllable, and phrase embedding vectors to verify whether continuous representations of these linguistic units may improve the segmental and suprasegmental quality of synthetic speech. Second, we examined the impact of normalization methods applied to the embedding vectors before they were fed into the neural-network-based acoustic model.
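The abstract does not specify which normalization methods were compared, but two common choices for scaling input features before a neural acoustic model are per-dimension z-score standardization and min-max rescaling. The sketch below is a hypothetical illustration of these two options applied to a matrix of embedding vectors; the function names and the 0.01–0.99 target range are assumptions, not taken from the paper.

```python
import numpy as np

def zscore_normalize(vecs):
    # Standardize each embedding dimension to zero mean and unit variance,
    # a common preprocessing step for neural-network input features.
    mean = vecs.mean(axis=0)
    std = vecs.std(axis=0)
    return (vecs - mean) / np.maximum(std, 1e-8)  # guard against zero variance

def minmax_normalize(vecs, lo=0.01, hi=0.99):
    # Rescale each embedding dimension into [lo, hi]; the 0.01-0.99 range
    # is one convention seen in DNN-based speech synthesis pipelines.
    vmin = vecs.min(axis=0)
    vmax = vecs.max(axis=0)
    span = np.maximum(vmax - vmin, 1e-8)  # avoid division by zero
    return lo + (vecs - vmin) / span * (hi - lo)

# Example: normalize a batch of 100 hypothetical 8-dimensional embeddings.
embeddings = np.random.RandomState(0).randn(100, 8)
z = zscore_normalize(embeddings)
m = minmax_normalize(embeddings)
```

Either normalized matrix would then replace the raw embeddings as input to the acoustic model; the paper's contribution is measuring how this choice affects synthesis quality.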