Author

  • Junghyun Koo, Marco A. Martínez-Ramírez, Wei-Hsiang Liao, Stefan Uhlich, Kyogu Lee*, Yuki Mitsufuji
  • * External authors

Company

  • Sony Group Corporation

Venue

  • ICASSP

Date

  • 2023


Music Mixing Style Transfer: A Contrastive Learning Approach to Disentangle Audio Effects


Abstract

We propose an end-to-end music mixing style transfer system that converts the mixing style of an input multitrack to that of a reference song. This is achieved with an encoder pre-trained with a contrastive objective to extract only audio-effects-related information from a reference music recording. All of our models are trained in a self-supervised manner on an already-processed wet multitrack dataset, using an effective data preprocessing method that alleviates the scarcity of unprocessed dry data. We analyze the proposed encoder's ability to disentangle audio effects and validate its performance on mixing style transfer through both objective and subjective evaluations. The results show that the proposed system not only converts the mixing style of multitrack audio close to that of a reference, but is also robust for mixture-wise style transfer when combined with a music source separation model.
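
As a rough sketch of the contrastive objective described in the abstract, the snippet below shows an InfoNCE-style loss in PyTorch. It is an illustrative assumption rather than the paper's exact formulation: the function name info_nce_loss, the placeholder effects_encoder, the temperature value, and the pairing scheme (two segments processed with the same effects chain form a positive pair, while the other items in the batch act as negatives) are all hypothetical.

```python
import torch
import torch.nn.functional as F


def info_nce_loss(z_anchor: torch.Tensor, z_positive: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style contrastive loss over audio-effects embeddings.

    z_anchor and z_positive are (batch, dim) embeddings of two different
    audio segments processed with the same effects chain; every other
    item in the batch (a different effects chain) serves as a negative.
    """
    z_anchor = F.normalize(z_anchor, dim=-1)
    z_positive = F.normalize(z_positive, dim=-1)
    # Cosine similarity between every anchor and every candidate, scaled by temperature.
    logits = z_anchor @ z_positive.t() / temperature  # (batch, batch)
    # The matching pair sits on the diagonal, so the target for row i is index i.
    targets = torch.arange(z_anchor.size(0), device=z_anchor.device)
    return F.cross_entropy(logits, targets)


# Hypothetical usage: `effects_encoder` stands in for the pre-trained reference
# encoder, and `seg_a`, `seg_b` are two segments sharing one mixing style.
# loss = info_nce_loss(effects_encoder(seg_a), effects_encoder(seg_b))
```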
