Research

Research projects, papers, books, etc.

  • Paper

End-to-End Text-to-Speech using Latent Duration based on VQ-VAE

Authors: Yusuke Yasuda, Xin Wang, Junichi Yamagishi

  • #Speech Processing
  • #Speech Synthesis

2021 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2021)

Explicit duration modeling is key to achieving robust and efficient alignment in text-to-speech synthesis (TTS). We propose a new TTS framework with explicit duration modeling that incorporates duration into TTS as a discrete latent variable and enables joint optimization of all modules from scratch. We formulate our method as a conditional VQ-VAE so that discrete durations can be handled within a variational autoencoder, and we provide a theoretical explanation to justify the approach. In our framework, a connectionist temporal classification (CTC)-based forced aligner acts as the approximate posterior, and a text-to-duration model serves as the prior of the variational autoencoder. We evaluated the proposed method with a listening test and compared it with other TTS methods based on soft attention or explicit duration modeling. The results showed that our systems were rated between the soft-attention-based methods (Transformer-TTS, Tacotron 2) and explicit-duration-modeling-based methods (FastSpeech).
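As a reading aid (not taken verbatim from the paper), the setup described in the abstract corresponds to a standard conditional evidence lower bound (ELBO) over a discrete duration latent. A minimal sketch with assumed notation: x denotes the speech features, y the input text, and d the discrete durations; the CTC-based forced aligner instantiates the approximate posterior q(d | x, y), and the text-to-duration model instantiates the prior p(d | y):

    \log p(x \mid y) \;\ge\; \mathbb{E}_{q(d \mid x,\, y)}\!\left[ \log p(x \mid d, y) \right] \;-\; D_{\mathrm{KL}}\!\left( q(d \mid x, y) \,\|\, p(d \mid y) \right)

In the usual VQ-VAE treatment the approximate posterior is (near-)deterministic over the quantized codes, so the KL term reduces to a cross-entropy that trains the text-to-duration prior to match the aligner's durations, while the reconstruction term trains the decoder, allowing all modules to be optimized jointly as the abstract describes.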