---
language:
- ky
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
---

# UniSpeech-Large-plus Kyrgyz

[Microsoft's UniSpeech](https://www.microsoft.com/en-us/research/publication/unispeech-unified-speech-representation-learning-with-labeled-and-unlabeled-data/)

The large model pretrained on 16kHz sampled speech audio with phonetic labels and subsequently fine-tuned on 1h of Kyrgyz phonemes.
When using the model, make sure that your speech input is also sampled at 16kHz and that your text is converted into a sequence of phonemes.

[Paper: UniSpeech: Unified Speech Representation Learning with Labeled and Unlabeled Data](https://arxiv.org/abs/2101.07597)

Authors: Chengyi Wang, Yu Wu, Yao Qian, Kenichi Kumatani, Shujie Liu, Furu Wei, Michael Zeng, Xuedong Huang

**Abstract**
*In this paper, we propose a unified pre-training approach called UniSpeech to learn speech representations with both unlabeled and labeled data, in which supervised phonetic CTC learning and phonetically-aware contrastive self-supervised learning are conducted in a multi-task learning manner. The resultant representations can capture information more correlated with phonetic structures and improve the generalization across languages and domains. We evaluate the effectiveness of UniSpeech for cross-lingual representation learning on public CommonVoice corpus. The results show that UniSpeech outperforms self-supervised pretraining and supervised transfer learning for speech recognition by a maximum of 13.4% and 17.8% relative phone error rate reductions respectively (averaged over all testing languages). The transferability of UniSpeech is also demonstrated on a domain-shift speech recognition task, i.e., a relative word error rate reduction of 6% against the previous approach.*

The original model can be found at https://github.com/microsoft/UniSpeech/tree/main/UniSpeech.

# Usage

This is a speech model that has been fine-tuned for phoneme classification.

## Inference

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F

model_id = "microsoft/unispeech-1350-en-17h-ky-ft-1h"

# Load a single test sample from the Kyrgyz Common Voice split
sample = next(iter(load_dataset("common_voice", "ky", split="test", streaming=True)))

# Common Voice audio is sampled at 48kHz; the model expects 16kHz input
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

input_values = processor(resampled_audio, sampling_rate=16_000, return_tensors="pt").input_values

with torch.no_grad():
    logits = model(input_values).logits

# Greedy CTC decoding: pick the most likely phoneme at each frame
prediction_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(prediction_ids)
```

# Contribution

The model was contributed by [cywang](https://huggingface.co/cywang) and [patrickvonplaten](https://huggingface.co/patrickvonplaten).

# License

The official license can be found [here](https://github.com/microsoft/UniSpeech/blob/main/LICENSE).

# Official Results

See *UniSpeech-L^{+}* - *ky*:

![design](https://raw.githubusercontent.com/patrickvonplaten/scientific_images/master/unispeech_results.png)