Diffiner: A Versatile Diffusion-based Generative Refiner for Speech Enhancement
Although deep neural network (DNN)-based speech enhancement (SE) methods outperform previous non-DNN-based ones, they often degrade the perceptual quality of their outputs. To tackle this problem, we introduce a DNN-based generative refiner, Diffiner, which improves the perceptual quality of speech pre-processed by an SE method. We train a diffusion-based generative model on a dataset consisting of clean speech only. Our refiner then effectively mixes clean parts, newly generated via denoising diffusion restoration, into the degraded and distorted parts caused by the preceding SE method, yielding refined speech. Once trained on a set of clean speech, the refiner can be applied to various SE methods without additional training specialized for each SE module. It can therefore serve as a versatile post-processing module w.r.t. SE methods and has high potential in terms of modularity. Experimental results show that our method improves perceptual speech quality regardless of the preceding SE method used. Our code is available at https://github.com/sony/diffiner.
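To make the refinement pipeline concrete, below is a minimal Python sketch of a diffusion-based refinement loop in the spirit of the abstract. It uses a plain DDPM-style reverse process rather than the paper's actual denoising diffusion restoration formulation; the `denoiser` placeholder, the linear noise schedule, and the choice to initialize the reverse chain from a noised version of the SE output are all illustrative assumptions (see the repository linked above for the real implementation).

```python
import numpy as np

# Hypothetical pretrained denoiser: predicts the noise component eps in a
# noisy input x_t at diffusion step t. In Diffiner this role is played by a
# DNN trained on clean speech only; a zero-returning stub stands in here.
def denoiser(x_t: np.ndarray, t: int) -> np.ndarray:
    return np.zeros_like(x_t)  # placeholder: a real model returns predicted noise

def refine(se_output: np.ndarray, num_steps: int = 50, seed: int = 0) -> np.ndarray:
    """Toy DDPM-style reverse process seeded by an SE method's output.

    `se_output` is the (possibly distorted) enhanced speech feature, e.g. a
    magnitude spectrogram. The reverse diffusion gradually replaces distorted
    content with samples drawn from the clean-speech prior.
    """
    rng = np.random.default_rng(seed)
    # Linear noise schedule (an assumption; the paper's schedule may differ).
    betas = np.linspace(1e-4, 0.02, num_steps)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    # Start the chain from a noised version of the SE output rather than from
    # pure noise, so the refinement stays anchored to the observed speech.
    t0 = num_steps - 1
    x = (np.sqrt(alpha_bars[t0]) * se_output
         + np.sqrt(1.0 - alpha_bars[t0]) * rng.standard_normal(se_output.shape))

    for t in range(t0, -1, -1):
        eps = denoiser(x, t)
        # Standard DDPM posterior mean for x_{t-1} given x_t and predicted eps.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x
```

Because the refiner only needs the SE output as its starting point, it does not depend on the internals of the preceding SE module, which is what makes this kind of post-processing applicable across different SE methods without retraining.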