Conference paper, 2024

A fully differentiable model for unsupervised singing voice separation

Abstract

A novel model was recently proposed by Schulze-Forster et al. in [1] for unsupervised music source separation. This model tackles some of the major shortcomings of existing source separation frameworks: it eliminates the need for isolated sources during training, performs efficiently with limited data, and can handle homogeneous sources (such as singing voices). However, it relies on an external multipitch estimator and incorporates an ad hoc voice assignment procedure. In this paper, we extend this framework into a fully differentiable model by integrating a multipitch estimator and a novel differentiable assignment module within the core model. We show the merits of our approach through a set of experiments, and in particular highlight its potential for processing diverse and unseen data.
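To make the idea of a differentiable assignment concrete, the sketch below is a minimal illustration (an assumption for exposition, not the authors' actual module): a hard, ad hoc voice assignment is replaced by softmax weights over pitch candidates, so that gradients from the separation loss can flow back into the multipitch estimator. The class name, tensor shapes, and feature dimension are all hypothetical.

# Minimal sketch (hypothetical, not the paper's implementation): soft,
# differentiable assignment of multipitch candidates to sources.
import torch
import torch.nn as nn

class SoftVoiceAssignment(nn.Module):
    def __init__(self, n_sources: int, feat_dim: int):
        super().__init__()
        # Scores each pitch candidate against each source (illustrative choice).
        self.scorer = nn.Linear(feat_dim, n_sources)

    def forward(self, pitch_features: torch.Tensor) -> torch.Tensor:
        # pitch_features: (batch, frames, candidates, feat_dim)
        logits = self.scorer(pitch_features)      # (B, T, C, S)
        weights = torch.softmax(logits, dim=2)    # normalize over candidates
        # Each source receives a convex combination of the candidates,
        # so the assignment stays differentiable end to end.
        assigned = torch.einsum("btcs,btcf->btsf", weights, pitch_features)
        return assigned                           # (B, T, S, feat_dim)

if __name__ == "__main__":
    assign = SoftVoiceAssignment(n_sources=2, feat_dim=8)
    feats = torch.randn(1, 100, 4, 8, requires_grad=True)
    out = assign(feats)        # (1, 100, 2, 8)
    out.sum().backward()       # gradients reach the upstream pitch features

In contrast with a hard argmax assignment, the softmax weights let the whole pipeline be trained jointly, which is the property the abstract refers to as "fully differentiable".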

Dates and versions

hal-04356813, version 1 (20-12-2023)
hal-04356813, version 2 (29-01-2024)

Identifiers

HAL Id: hal-04356813

Cite

Gaël Richard, Pierre Chouteau, Bernardo Torres. A fully differentiable model for unsupervised singing voice separation. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Apr 2024, Seoul, South Korea. ⟨hal-04356813v2⟩