patrickvonplaten committed
Commit 375a671 · 1 Parent(s): e517d24

Update README.md

Files changed (1):
  1. README.md +24 -10

README.md CHANGED
@@ -1,6 +1,10 @@
 ---
 language: en
 datasets:
+- libri_light
+- common_voice
+- switchboard
+- fisher
 - librispeech_asr
 tags:
 - speech
@@ -14,22 +18,32 @@ widget:
 license: apache-2.0
 ---
 
-# Wav2Vec2-Large-960h-Lv60 + Self-Training
+# Wav2Vec2-Large-Robust finetuned on Librispeech
 
-[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
+[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/).
 
-The large model pretrained and fine-tuned on 960 hours of Libri-Light and Librispeech on 16kHz sampled speech audio. Model was trained with [Self-Training objective](https://arxiv.org/abs/2010.11430). When using the model make sure that your speech input is also sampled at 16Khz.
+This model is a fine-tuned version of the [wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) model.
+It has been pretrained on:
 
-[Paper](https://arxiv.org/abs/2006.11477)
+- [Libri-Light](https://github.com/facebookresearch/libri-light): open-source audio books from the LibriVox project; clean, read-out audio data
+- [CommonVoice](https://huggingface.co/datasets/common_voice): crowd-sourced audio data; read-out text snippets
+- [Switchboard](https://catalog.ldc.upenn.edu/LDC97S62): telephone speech corpus; noisy telephone data
+- [Fisher](https://catalog.ldc.upenn.edu/LDC2004T19): conversational telephone speech; noisy telephone data
 
-Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
+and has subsequently been finetuned on 960 hours of
 
-**Abstract**
+- [Librispeech](https://huggingface.co/datasets/librispeech_asr): open-source read-out audio data.
 
-We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
+When using the model, make sure that your speech input is also sampled at 16kHz.
 
-The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
+[Paper Robust Wav2Vec2](https://arxiv.org/abs/2104.01027)
+
+Authors: Wei-Ning Hsu, Anuroop Sriram, Alexei Baevski, Tatiana Likhomanenko, Qiantong Xu, Vineel Pratap, Jacob Kahn, Ann Lee, Ronan Collobert, Gabriel Synnaeve, Michael Auli
 
+**Abstract**
+Self-supervised learning of speech representations has been a very active research area but most work is focused on a single domain such as read audio books for which there exist large quantities of labeled and unlabeled data. In this paper, we explore more general setups where the domain of the unlabeled data for pre-training data differs from the domain of the labeled data for fine-tuning, which in turn may differ from the test data domain. Our experiments show that using target domain data during pre-training leads to large performance improvements across a variety of setups. On a large-scale competitive setup, we show that pre-training on unlabeled in-domain data reduces the gap between models trained on in-domain and out-of-domain labeled data by 66%-73%. This has obvious practical implications since it is much easier to obtain unlabeled target domain data than labeled data. Moreover, we find that pre-training on multiple domains improves generalization performance on domains not seen during training. Code and models will be made available at this https URL.
+
+The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
 
 # Usage
 
@@ -42,8 +56,8 @@ To transcribe audio files the model can be used as a standalone acoustic model a
 import torch
 
 # load model and processor
-processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
-model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-960h-lv60-self")
+processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-robust-ft-libri-960h")
+model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-robust-ft-libri-960h")
 
 # define function to read in sound file
 def map_to_array(batch):
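
The added card text requires speech input sampled at 16 kHz, but this commit does not show how to get audio into that form. A minimal sketch, assuming `torchaudio` and a hypothetical `example.wav` (neither appears in this commit), of resampling input audio before it is passed to the processor:

```python
import torchaudio

# hypothetical input file; torchaudio is an assumption, not part of the card
speech, sampling_rate = torchaudio.load("example.wav")  # shape: (channels, num_samples)

# resample to the 16 kHz rate the model expects
if sampling_rate != 16_000:
    speech = torchaudio.functional.resample(speech, orig_freq=sampling_rate, new_freq=16_000)

# the processor expects a mono, 1-D float array sampled at 16 kHz
speech = speech.mean(dim=0).numpy()
```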
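The Usage hunk above only contains the two changed checkpoint lines plus a few context lines, so the rest of the snippet is not visible in this diff. A hedged end-to-end sketch of how the shown pieces fit together, assuming the `datasets` and `soundfile` libraries and a small LibriSpeech dummy split (the dataset name is an assumption, not taken from this commit):

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# load model and processor (checkpoint name as introduced in this commit)
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-large-robust-ft-libri-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-robust-ft-libri-960h")

# define function to read in sound file
def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch

# load a small 16 kHz LibriSpeech sample (dataset name is an assumption)
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)

# tokenize, run the acoustic model, and greedily decode the CTC output
input_values = processor(ds["speech"][:2], return_tensors="pt", padding="longest").input_values
with torch.no_grad():
    logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```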