arXiv:2103.12864

Learned complex masks for multi-instrument source separation

Published on Mar 23, 2021
Abstract

Music source separation in the time-frequency domain is commonly achieved by applying a soft or binary mask to the magnitude component of (complex) spectrograms. The phase component is usually not estimated, but instead copied from the mixture and applied to the magnitudes of the estimated isolated sources. While this method has several practical advantages, it imposes an upper bound on the performance of the system, where the estimated isolated sources inherently exhibit audible "phase artifacts". In this paper we address these shortcomings by directly estimating masks in the complex domain, extending recent work from the speech enhancement literature. The method is particularly well suited for multi-instrument musical source separation since residual phase artifacts are more pronounced for spectrally overlapping instrument sources, a common scenario in music. We show that complex masks result in better separation than masks that operate solely on the magnitude component.
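The sketch below illustrates the distinction the abstract draws, not the paper's actual model: a real-valued soft mask applied to the mixture magnitude with the mixture phase copied over, versus a complex mask that can rescale and rotate each time-frequency bin. The masks here are random placeholders standing in for the outputs of a learned network, and the signal is a synthetic two-tone mixture; all of that is illustrative assumption.

```python
# Minimal sketch: magnitude mask + mixture phase vs. a complex mask.
# The masks below are random placeholders for learned network outputs.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
t = np.arange(fs) / fs
# Toy "mixture" of two spectrally close tones (placeholder signal).
mixture = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)

# Complex spectrogram of the mixture.
_, _, X = stft(mixture, fs=fs, nperseg=1024)

# Conventional approach: real-valued soft mask on the magnitude,
# phase copied from the mixture (the upper bound discussed above).
soft_mask = np.clip(np.random.rand(*X.shape), 0.0, 1.0)
est_conventional = (soft_mask * np.abs(X)) * np.exp(1j * np.angle(X))

# Complex-mask approach: the mask itself is complex, so it can
# adjust the phase of each bin as well as scale its magnitude.
complex_mask = np.random.rand(*X.shape) + 1j * np.random.rand(*X.shape)
est_complex = complex_mask * X

# Back to the time domain for either estimate.
_, y_conventional = istft(est_conventional, fs=fs, nperseg=1024)
_, y_complex = istft(est_complex, fs=fs, nperseg=1024)
```

In the conventional path the estimated source is forced to share the mixture's phase, which is exactly where the audible phase artifacts come from when instruments overlap in frequency; the complex mask removes that constraint.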
