<!--Copyright 2021 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# UniSpeech-SAT

## Overview

The UniSpeech-SAT model was proposed in [UniSpeech-SAT: Universal Speech Representation Learning with Speaker Aware
Pre-Training](https://arxiv.org/abs/2110.05752) by Sanyuan Chen, Yu Wu, Chengyi Wang, Zhengyang Chen, Zhuo Chen,
Shujie Liu, Jian Wu, Yao Qian, Furu Wei, Jinyu Li, Xiangzhan Yu.

The abstract from the paper is the following:

*Self-supervised learning (SSL) is a long-standing goal for speech processing, since it utilizes large-scale unlabeled
data and avoids extensive human labeling. Recent years witness great successes in applying self-supervised learning in
speech recognition, while limited exploration was attempted in applying SSL for modeling speaker characteristics. In
this paper, we aim to improve the existing SSL framework for speaker representation learning. Two methods are
introduced for enhancing the unsupervised speaker information extraction. First, we apply the multi-task learning to
the current SSL framework, where we integrate the utterance-wise contrastive loss with the SSL objective function.
Second, for better speaker discrimination, we propose an utterance mixing strategy for data augmentation, where
additional overlapped utterances are created unsupervisely and incorporate during training. We integrate the proposed
methods into the HuBERT framework. Experiment results on SUPERB benchmark show that the proposed system achieves
state-of-the-art performance in universal representation learning, especially for speaker identification oriented
tasks. An ablation study is performed verifying the efficacy of each proposed method. Finally, we scale up training
dataset to 94 thousand hours public audio data and achieve further performance improvement in all SUPERB tasks.*

Tips:

- UniSpeechSat is a speech model that accepts a float array corresponding to the raw waveform of the speech signal.
  Please use [`Wav2Vec2Processor`] for the feature extraction.
- The UniSpeechSat model can be fine-tuned using connectionist temporal classification (CTC), so the model output has to be
  decoded using [`Wav2Vec2CTCTokenizer`] (see the ASR sketch below).
- UniSpeechSat performs especially well on speaker verification, speaker identification, and speaker diarization tasks
  (see the speaker verification sketch below).
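
As a minimal sketch of the first two points, the snippet below runs greedy CTC decoding with a fine-tuned checkpoint. The checkpoint name and the demo dataset are only examples; any UniSpeechSat CTC checkpoint from the Hub follows the same API:

```python
import torch
from datasets import load_dataset
from transformers import UniSpeechSatForCTC, Wav2Vec2Processor

# example checkpoint; substitute any fine-tuned UniSpeechSat CTC checkpoint
processor = Wav2Vec2Processor.from_pretrained("microsoft/unispeech-sat-base-100h-libri-ft")
model = UniSpeechSatForCTC.from_pretrained("microsoft/unispeech-sat-base-100h-libri-ft")

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")

# the model expects a raw float waveform sampled at 16 kHz
inputs = processor(dataset[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# greedy CTC decoding through the tokenizer wrapped in the processor
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription[0])
```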

This model was contributed by [patrickvonplaten](https://huggingface.co/patrickvonplaten). The authors' code can be
found [here](https://github.com/microsoft/UniSpeech/tree/main/UniSpeech-SAT).
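
For the speaker-oriented tasks, a similar sketch using [`UniSpeechSatForXVector`] compares two utterances with cosine similarity. The checkpoint name, dataset, and decision threshold below are illustrative only:

```python
import torch
from datasets import load_dataset
from transformers import UniSpeechSatForXVector, Wav2Vec2FeatureExtractor

# example x-vector checkpoint; any UniSpeechSat speaker-verification checkpoint works the same way
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("microsoft/unispeech-sat-base-plus-sv")
model = UniSpeechSatForXVector.from_pretrained("microsoft/unispeech-sat-base-plus-sv")

dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
audio = [dataset[0]["audio"]["array"], dataset[1]["audio"]["array"]]

inputs = feature_extractor(audio, sampling_rate=16_000, padding=True, return_tensors="pt")

with torch.no_grad():
    embeddings = model(**inputs).embeddings

# cosine similarity between the two speaker embeddings; the threshold is checkpoint dependent
embeddings = torch.nn.functional.normalize(embeddings, dim=-1)
similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=-1)
is_same_speaker = bool(similarity > 0.85)  # illustrative threshold
```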

## Documentation resources

- [Audio classification task guide](../tasks/audio_classification)
- [Automatic speech recognition task guide](../tasks/asr)

## UniSpeechSatConfig

[[autodoc]] UniSpeechSatConfig

## UniSpeechSat specific outputs

[[autodoc]] models.unispeech_sat.modeling_unispeech_sat.UniSpeechSatForPreTrainingOutput

## UniSpeechSatModel

[[autodoc]] UniSpeechSatModel
    - forward

## UniSpeechSatForCTC

[[autodoc]] UniSpeechSatForCTC
    - forward

## UniSpeechSatForSequenceClassification

[[autodoc]] UniSpeechSatForSequenceClassification
    - forward

## UniSpeechSatForAudioFrameClassification

[[autodoc]] UniSpeechSatForAudioFrameClassification
    - forward

## UniSpeechSatForXVector

[[autodoc]] UniSpeechSatForXVector
    - forward

## UniSpeechSatForPreTraining

[[autodoc]] UniSpeechSatForPreTraining
    - forward