AutoTTS: End-to-End Text-to-Speech Synthesis through Differentiable Duration Modeling
Parallel text-to-speech (TTS) models have recently enabled fast and highly natural speech synthesis. However, they typically require external alignment models, which are not necessarily optimal for the decoder because they are not jointly trained with it. In this paper, we propose a differentiable duration method for learning monotonic alignments between input and output sequences. Our method is based on a soft-duration mechanism that optimizes a stochastic process in expectation. Using this differentiable duration method, we introduce AutoTTS, a direct text-to-waveform speech synthesis model. AutoTTS enables high-fidelity speech synthesis through a combination of adversarial training and matching the total ground-truth duration. Experimental results show that our model obtains competitive results while enjoying a much simpler training pipeline. Audio samples are available online.
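The abstract does not spell out the soft-duration mechanism, but a common way to make duration modeling differentiable is to turn predicted per-token durations into a soft, monotonic alignment matrix and expand the token states through it. The sketch below illustrates one such construction using sigmoid-difference windows around cumulative duration boundaries; the function name `soft_alignment` and the `temperature` parameter are illustrative assumptions, not the exact AutoTTS formulation.

```python
import torch

def soft_alignment(durations: torch.Tensor, num_frames: int,
                   temperature: float = 1.0) -> torch.Tensor:
    """Map per-token durations to a soft monotonic alignment matrix.

    durations: (batch, num_tokens), positive, differentiable predictions.
    Returns:   (batch, num_frames, num_tokens) soft alignment weights.

    Illustrative construction only; the actual AutoTTS mechanism
    (a stochastic process optimized in expectation) may differ.
    """
    # Cumulative token end positions c_i = d_1 + ... + d_i, shape (batch, num_tokens).
    ends = torch.cumsum(durations, dim=-1)
    starts = ends - durations  # token start positions c_{i-1}

    # Frame-center time grid t = 0.5, 1.5, ..., shape (1, num_frames, 1).
    t = torch.arange(num_frames, dtype=durations.dtype,
                     device=durations.device) + 0.5
    t = t.view(1, -1, 1)

    # Soft indicator that frame t falls inside token i's interval [start_i, end_i):
    # sigmoid((t - start)/tau) - sigmoid((t - end)/tau), differentiable in the durations.
    w = (torch.sigmoid((t - starts.unsqueeze(1)) / temperature)
         - torch.sigmoid((t - ends.unsqueeze(1)) / temperature))

    # Normalize over tokens so each frame's weights sum to one.
    return w / w.sum(dim=-1, keepdim=True).clamp_min(1e-8)
```

With this, token hidden states `h` of shape `(batch, num_tokens, dim)` can be expanded to frame-rate features via `frames = torch.bmm(soft_alignment(durations, num_frames), h)`, keeping the whole pipeline differentiable so the duration predictor trains jointly with the decoder.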