AutoTTS: End-to-End Text-to-Speech Synthesis through Differentiable Duration Modeling

Author

  • Bac Nguyen, Fabien Cardinaux, Stefan Uhlich

Company

  • Sony Europe B.V.

Venue

  • ICASSP

Date

  • 2023


Abstract

Parallel text-to-speech (TTS) models have recently enabled fast and highly natural speech synthesis. However, they typically require external alignment models, which are not necessarily optimized for the decoder since they are not trained jointly with it. In this paper, we propose a differentiable duration method for learning monotonic alignments between input and output sequences. Our method is based on a soft-duration mechanism that optimizes a stochastic process in expectation. Using this differentiable duration method, we introduce AutoTTS, a direct text-to-waveform speech synthesis model. AutoTTS enables high-fidelity speech synthesis through a combination of adversarial training and matching the total ground-truth duration. Experimental results show that our model obtains competitive results while enjoying a much simpler training pipeline. Audio samples are available online.
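To make the soft-duration idea concrete, the sketch below shows one way a differentiable duration-based expansion can work, in the spirit of Gaussian upsampling: each input token is spread over output frames around a centre determined by its predicted duration, so the alignment is monotonic and gradients flow back into the durations. The function name, the Gaussian weighting, and the `sigma` parameter are illustrative assumptions for this sketch, not the paper's exact stochastic-process formulation.

```python
# Minimal sketch (assumed formulation, not the paper's exact method):
# differentiable "soft duration" upsampling in PyTorch. Predicted
# per-token durations define a soft, monotonic alignment by placing a
# Gaussian at each token's centre frame.
import torch

def soft_duration_upsample(h, durations, sigma=1.0):
    """
    h:         (B, N, C) encoder outputs, one vector per input token
    durations: (B, N)    predicted positive, real-valued durations in frames
    returns:   (B, T, C) frame-level features, T ~ total predicted duration
    """
    ends = torch.cumsum(durations, dim=1)         # (B, N) cumulative end frame
    centers = ends - 0.5 * durations              # (B, N) centre of each token's span
    T = int(ends[:, -1].max().round().item())     # number of output frames
    t = torch.arange(T, device=h.device).float()  # (T,) frame indices
    # Squared distance of every frame to every token centre: (B, T, N)
    dist = (t[None, :, None] - centers[:, None, :]) ** 2
    # Soft, monotonic alignment weights over tokens for each frame
    w = torch.softmax(-dist / (2.0 * sigma ** 2), dim=2)
    return w @ h                                  # (B, T, C) expected frame features

# Toy usage: 4 input tokens expanded to ~10 frames
h = torch.randn(1, 4, 8)
d = torch.tensor([[2.0, 3.0, 1.5, 3.5]], requires_grad=True)
frames = soft_duration_upsample(h, d)
frames.sum().backward()                           # gradients reach the durations
print(frames.shape, d.grad)
```

Because the alignment weights are a smooth function of the predicted durations, the duration predictor can be optimized end-to-end with the decoder, which is precisely what pipelines with external alignment models cannot do.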
