Polyphone Disambiguation and Accent Prediction Using Pre-Trained Language Models in Japanese TTS Front-End
Although end-to-end text-to-speech (TTS) models can generate natural speech, challenges remain in estimating sentence-level phonetic and prosodic information from raw text in Japanese TTS systems. In this paper, we propose a method for polyphone disambiguation (PD) and accent prediction (AP). The proposed method incorporates explicit features extracted from morphological analysis and implicit features extracted from pre-trained language models (PLMs). We use BERT and Flair embeddings as implicit features and examine how to combine them with explicit features. Our objective evaluation results showed that the proposed method improved accuracy by 5.7 points in PD and 6.0 points in AP. Moreover, perceptual listening test results confirmed that a TTS system employing our proposed model as a front-end achieved a mean opinion score for naturalness close to that of speech synthesized with ground-truth pronunciation and accent.
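The core idea of combining implicit PLM features with explicit morphological features can be sketched as simple per-token feature concatenation. The dimensions, feature set, and function names below are illustrative assumptions, not the paper's actual configuration; real BERT/Flair lookups and the downstream classifier are omitted.

```python
# Hypothetical sketch: concatenating implicit (PLM) and explicit
# (morphological-analysis) features for a token, as one plausible way
# to realize the combination described in the abstract.
import random

PLM_DIM = 8  # stand-in for a 768-d BERT token embedding (assumption)
POS_TAGS = ["noun", "verb", "particle", "other"]  # toy explicit feature set

def plm_embedding(token: str) -> list[float]:
    """Placeholder for a contextual embedding lookup (e.g. BERT or Flair).
    Deterministic pseudo-random vector keyed on the token, for illustration."""
    rng = random.Random(sum(map(ord, token)))
    return [rng.uniform(-1.0, 1.0) for _ in range(PLM_DIM)]

def explicit_features(pos_tag: str) -> list[float]:
    """One-hot encoding of a morphological-analysis output (e.g. a POS tag)."""
    return [1.0 if t == pos_tag else 0.0 for t in POS_TAGS]

def combined_features(token: str, pos_tag: str) -> list[float]:
    """Concatenate implicit and explicit features into one input vector."""
    return plm_embedding(token) + explicit_features(pos_tag)

feats = combined_features("端", "noun")  # e.g. a polyphonic character
print(len(feats))  # 8 PLM dims + 4 one-hot dims = 12
```

In practice the concatenated vector would feed a PD or AP classifier; this sketch only shows the feature-level fusion step.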