nullonesix committed on
Commit 39c6739 · verified · 1 Parent(s): 2acde3e

End of training

README.md CHANGED
@@ -1,569 +1,91 @@
1
  ---
2
  language:
3
  - en
4
- tags:
5
- - audio
6
- - automatic-speech-recognition
7
- - transformers.js
8
- inference: false
9
- widget:
10
- - src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
11
- example_title: Librispeech sample 1
12
- output:
13
- text: going along slushy country roads and speaking to damp audiences in draughty schoolrooms day after day for a fortnight he'll have to put in an appearance at some place of worship on sunday morning and he can come to us immediately afterwards
14
- - src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
15
- example_title: Librispeech sample 2
16
- output:
17
- text: before he had time to answer a much-encumbered vera burst into the room with the question i say can i leave these here these were a small black pig and a lusty specimen of black-red game-cock
18
- pipeline_tag: automatic-speech-recognition
19
  license: mit
20
- library_name: transformers
21
  ---
22
 
23
- # Distil-Whisper: distil-small.en
24
-
25
- Distil-Whisper was proposed in the paper [Robust Knowledge Distillation via Large-Scale Pseudo Labelling](https://arxiv.org/abs/2311.00430).
26
- It is a distilled version of the Whisper model that is **6 times faster**, 49% smaller, and performs **within 1% WER**
27
- on out-of-distribution evaluation sets.
28
-
29
- This is the repository for distil-small.en, a distilled variant of [Whisper small.en](https://huggingface.co/openai/whisper-small.en).
30
- It is the **smallest Distil-Whisper checkpoint**, with just 166M parameters, making it the ideal choice for memory
31
- constrained applications (e.g. on-device).
32
-
33
- For most other applications, the [distil-medium.en](https://huggingface.co/distil-whisper/distil-medium.en)
34
- or [distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2) checkpoints are recommended, since they are
35
- both faster and achieve better WER results:
36
-
37
- | Model | Params / M | Rel. Latency ↑ | Short-Form WER ↓ | Long-Form WER ↓ |
38
- |----------------------------------------------------------------------------|------------|----------------|------------------|-----------------|
39
- | [large-v3](https://huggingface.co/openai/whisper-large-v3) | 1550 | 1.0 | **8.4** | 11.0 |
40
- | [large-v2](https://huggingface.co/openai/whisper-large-v2) | 1550 | 1.0 | 9.1 | 11.7 |
41
- | | | | | |
42
- | [distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3) | 756 | 6.3 | 9.7 | **10.8** |
43
- | [distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2) | 756 | 5.8 | 10.1 | 11.6 |
44
- | [distil-medium.en](https://huggingface.co/distil-whisper/distil-medium.en) | 394 | **6.8** | 11.1 | 12.4 |
45
- | [distil-small.en](https://huggingface.co/distil-whisper/distil-small.en) | **166** | 5.6 | 12.1 | 12.8 |
46
-
47
- **Note:** Distil-Whisper is currently only available for English speech recognition. We are working with the community
48
- to distill Whisper on other languages. If you are interested in distilling Whisper in your language, check out the
49
- provided [training code](https://github.com/huggingface/distil-whisper/tree/main/training). We will update the
50
- [Distil-Whisper repository](https://github.com/huggingface/distil-whisper/) with multilingual checkpoints when ready!
51
-
52
- ### Why is distil-small.en slower than distil-large-v2?
53
-
54
- While [distil-medium.en](https://huggingface.co/distil-whisper/distil-medium.en) and [distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2)
55
- use two decoder layers each, distil-small.en uses four. Using more decoder layers improves the WER performance of the
56
- model, at the expense of slower inference speed. We found that four layers was the minimum required to get reasonable
57
- WER performance for `distil-small.en`, where it performs to within 3% WER of Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
58
- while being 5.6x faster. When we tried distilling with just two layers, the model was over 5% worse than large-v2, albeit
59
- 7.8x faster. We leave distilling a two-layer small.en model as future work.
60
-
61
- ## Usage
62
-
63
- Distil-Whisper is supported in Hugging Face 🤗 Transformers from version 4.35 onwards. To run the model, first
64
- install the latest version of the Transformers library. For this example, we'll also install 🤗 Datasets to load a toy
65
- audio dataset from the Hugging Face Hub:
66
-
67
- ```bash
68
- pip install --upgrade pip
69
- pip install --upgrade transformers accelerate datasets[audio]
70
- ```
71
-
72
- ### Short-Form Transcription
73
-
74
- The model can be used with the [`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
75
- class to transcribe short-form audio files (< 30 seconds) as follows:
76
-
77
- ```python
78
- import torch
79
- from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
80
- from datasets import load_dataset
81
-
82
-
83
- device = "cuda:0" if torch.cuda.is_available() else "cpu"
84
- torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
85
-
86
- model_id = "distil-whisper/distil-small.en"
87
-
88
- model = AutoModelForSpeechSeq2Seq.from_pretrained(
89
- model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
90
- )
91
- model.to(device)
92
-
93
- processor = AutoProcessor.from_pretrained(model_id)
94
-
95
- pipe = pipeline(
96
- "automatic-speech-recognition",
97
- model=model,
98
- tokenizer=processor.tokenizer,
99
- feature_extractor=processor.feature_extractor,
100
- max_new_tokens=128,
101
- torch_dtype=torch_dtype,
102
- device=device,
103
- )
104
-
105
- dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
106
- sample = dataset[0]["audio"]
107
-
108
- result = pipe(sample)
109
- print(result["text"])
110
- ```
111
-
112
- To transcribe a local audio file, simply pass the path to your audio file when you call the pipeline:
113
- ```diff
114
- - result = pipe(sample)
115
- + result = pipe("audio.mp3")
116
- ```
117
-
118
- ### Long-Form Transcription
119
-
120
- Distil-Whisper uses a chunked algorithm to transcribe long-form audio files (> 30 seconds). In practice, this chunked long-form algorithm
121
- is 9x faster than the sequential algorithm proposed by OpenAI in the Whisper paper (see Table 7 of the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430)).
122
-
123
- To enable chunking, pass the `chunk_length_s` parameter to the `pipeline`. For Distil-Whisper, a chunk length of 15 seconds
124
- is optimal. To activate batching, pass the argument `batch_size`:
125
-
126
- ```python
127
- import torch
128
- from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline
129
- from datasets import load_dataset
130
-
131
-
132
- device = "cuda:0" if torch.cuda.is_available() else "cpu"
133
- torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
134
-
135
- model_id = "distil-whisper/distil-small.en"
136
-
137
- model = AutoModelForSpeechSeq2Seq.from_pretrained(
138
- model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
139
- )
140
- model.to(device)
141
-
142
- processor = AutoProcessor.from_pretrained(model_id)
143
-
144
- pipe = pipeline(
145
- "automatic-speech-recognition",
146
- model=model,
147
- tokenizer=processor.tokenizer,
148
- feature_extractor=processor.feature_extractor,
149
- max_new_tokens=128,
150
- chunk_length_s=15,
151
- batch_size=16,
152
- torch_dtype=torch_dtype,
153
- device=device,
154
- )
155
-
156
- dataset = load_dataset("distil-whisper/librispeech_long", "default", split="validation")
157
- sample = dataset[0]["audio"]
158
-
159
- result = pipe(sample)
160
- print(result["text"])
161
- ```
162
-
163
- <!---
164
- **Tip:** The pipeline can also be used to transcribe an audio file from a remote URL, for example:
165
-
166
- ```python
167
- result = pipe("https://huggingface.co/datasets/sanchit-gandhi/librispeech_long/resolve/main/audio.wav")
168
- ```
169
- --->
170
-
171
- ### Speculative Decoding
172
-
173
- Distil-Whisper can be used as an assistant model to Whisper for [speculative decoding](https://huggingface.co/blog/whisper-speculative-decoding).
174
- Speculative decoding mathematically ensures the exact same outputs as Whisper are obtained while being 2 times faster.
175
- This makes it the perfect drop-in replacement for existing Whisper pipelines, since the same outputs are guaranteed.
176
-
177
- In the following code-snippet, we load the Distil-Whisper assistant model alongside the main Whisper model. We then
178
- specify it as the "assistant model" for generation:
179
-
180
- ```python
181
- from transformers import pipeline, AutoModelForSpeechSeq2Seq, AutoProcessor
182
- import torch
183
- from datasets import load_dataset
184
-
185
- device = "cuda:0" if torch.cuda.is_available() else "cpu"
186
- torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
187
-
188
- assistant_model_id = "distil-whisper/distil-small.en"
189
-
190
- assistant_model = AutoModelForSpeechSeq2Seq.from_pretrained(
191
- assistant_model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
192
- )
193
- assistant_model.to(device)
194
-
195
- model_id = "openai/whisper-medium.en"
196
-
197
- model = AutoModelForSpeechSeq2Seq.from_pretrained(
198
- model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
199
- )
200
- model.to(device)
201
-
202
- processor = AutoProcessor.from_pretrained(model_id)
203
-
204
- pipe = pipeline(
205
- "automatic-speech-recognition",
206
- model=model,
207
- tokenizer=processor.tokenizer,
208
- feature_extractor=processor.feature_extractor,
209
- max_new_tokens=128,
210
- generate_kwargs={"assistant_model": assistant_model},
211
- torch_dtype=torch_dtype,
212
- device=device,
213
- )
214
-
215
- dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
216
- sample = dataset[0]["audio"]
217
-
218
- result = pipe(sample)
219
- print(result["text"])
220
- ```
221
-
222
- ## Additional Speed & Memory Improvements
223
-
224
- You can apply additional speed and memory improvements to Distil-Whisper, which we cover in the following sections.
225
-
226
- ### Flash Attention
227
-
228
- We recommend using [Flash-Attention 2](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#flashattention-2) if your GPU allows for it.
229
- To do so, you first need to install [Flash Attention](https://github.com/Dao-AILab/flash-attention):
230
-
231
- ```
232
- pip install flash-attn --no-build-isolation
233
- ```
234
-
235
- and then all you have to do is to pass `use_flash_attention_2=True` to `from_pretrained`:
236
-
237
- ```diff
238
- - model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True)
239
- + model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True, use_flash_attention_2=True)
240
- ```
241
-
242
- ### Torch Scaled Dot-Product Attention (SDPA)
243
-
244
- If your GPU does not support Flash Attention, we recommend making use of [BetterTransformer](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#bettertransformer).
245
- To do so, you first need to install optimum:
246
-
247
- ```
248
- pip install --upgrade optimum
249
- ```
250
-
251
- And then convert your model to a "BetterTransformer" model before using it:
252
-
253
- ```diff
254
- model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True)
255
- + model = model.to_bettertransformer()
256
- ```
257
-
258
- ### Running Distil-Whisper in `openai-whisper`
259
-
260
- To use the model in the original Whisper format, first ensure you have the [`openai-whisper`](https://pypi.org/project/openai-whisper/) package installed:
261
-
262
- ```bash
263
- pip install --upgrade openai-whisper
264
- ```
265
-
266
- The following code-snippet demonstrates how to transcribe a sample file from the LibriSpeech dataset loaded using
267
- 🤗 Datasets:
268
-
269
- ```python
270
- import torch
271
- from datasets import load_dataset
272
- from huggingface_hub import hf_hub_download
273
- from whisper import load_model, transcribe
274
-
275
- distil_small_en = hf_hub_download(repo_id="distil-whisper/distil-small.en", filename="original-model.bin")
276
- model = load_model(distil_small_en)
277
-
278
- dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
279
- sample = dataset[0]["audio"]["array"]
280
- sample = torch.from_numpy(sample).float()
281
-
282
- pred_out = transcribe(model, audio=sample)
283
- print(pred_out["text"])
284
- ```
285
-
286
- Note that the model weights will be downloaded and saved to your cache the first time you run the example. Subsequently,
287
- you can re-use the same example, and the weights will be loaded directly from your cache without having to download them
288
- again.
289
-
290
- To transcribe a local audio file, simply pass the path to the audio file as the `audio` argument to transcribe:
291
-
292
- ```python
293
- pred_out = transcribe(model, audio="audio.mp3")
294
- ```
295
-
296
- ### Whisper.cpp
297
-
298
- Distil-Whisper can be run from the [Whisper.cpp](https://github.com/ggerganov/whisper.cpp) repository with the original
299
- sequential long-form transcription algorithm. In a [provisional benchmark](https://github.com/ggerganov/whisper.cpp/pull/1424#issuecomment-1793513399)
300
- on Mac M1, `distil-small.en` is over 4x faster than `large-v2`, while performing to within 1.4% WER over long-form audio.
301
-
302
- Steps for getting started:
303
- 1. Clone the Whisper.cpp repository:
304
- ```
305
- git clone https://github.com/ggerganov/whisper.cpp.git
306
- cd whisper.cpp
307
- ```
308
- 2. Download the ggml weights for `distil-small.en` from the Hugging Face Hub:
309
-
310
- ```bash
311
- python -c "from huggingface_hub import hf_hub_download; hf_hub_download(repo_id='distil-whisper/distil-small.en', filename='ggml-distil-small.en.bin', local_dir='./models')"
312
- ```
313
-
314
- Note that if you do not have the `huggingface_hub` package installed, you can also download the weights with `wget`:
315
-
316
- ```bash
317
- wget https://huggingface.co/distil-whisper/distil-small.en/resolve/main/ggml-distil-small.en.bin -P ./models
318
- ```
319
-
320
- 3. Run inference using the provided sample audio:
321
-
322
- ```bash
323
- make -j && ./main -m models/ggml-distil-small.en.bin -f samples/jfk.wav
324
- ```
325
-
326
- ### Transformers.js
327
-
328
- Distil-Whisper can even run completely in your web browser with [Transformers.js](http://github.com/xenova/transformers.js):
329
-
330
- 1. Install Transformers.js from [NPM](https://www.npmjs.com/package/@xenova/transformers):
331
- ```bash
332
- npm i @xenova/transformers
333
- ```
334
-
335
- 2. Import the library and perform inference with the pipeline API.
336
- ```js
337
- import { pipeline } from '@xenova/transformers';
338
-
339
- const transcriber = await pipeline('automatic-speech-recognition', 'distil-whisper/distil-small.en');
340
-
341
- const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav';
342
- const output = await transcriber(url);
343
- // { text: " And so my fellow Americans, ask not what your country can do for you. Ask what you can do for your country." }
344
- ```
345
-
346
- Check out the online [Distil-Whisper Web demo](https://huggingface.co/spaces/Xenova/distil-whisper-web) to try it out yourself. As you'll see, it runs locally in your browser: no server required!
347
-
348
- See the [docs](https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.AutomaticSpeechRecognitionPipeline) for more information.
349
-
350
- ### Candle
351
-
352
- Coming soon!
353
-
354
- <!---
355
-
356
- Through an integration with Hugging Face [Candle](https://github.com/huggingface/candle/tree/main) 🕯️, Distil-Whisper is
357
- now available in the Rust library 🦀
358
-
359
- Benefit from:
360
- * Optimised CPU backend with optional MKL support for x86 and Accelerate for Macs
361
- * CUDA backend for efficiently running on GPUs, multiple GPU distribution via NCCL
362
- * WASM support: run Distil-Whisper in a browser
363
-
364
- Steps for getting started:
365
- 1. Install [`candle-core`](https://github.com/huggingface/candle/tree/main/candle-core) as explained [here](https://huggingface.github.io/candle/guide/installation.html)
366
- 2. Clone the `candle` repository locally:
367
- ```
368
- git clone https://github.com/huggingface/candle.git
369
- ```
370
- 3. Enter the example directory for [Whisper](https://github.com/huggingface/candle/tree/main/candle-examples/examples/whisper):
371
- ```
372
- cd candle/candle-examples/examples/whisper
373
- ```
374
- 4. Run an example:
375
- ```
376
- cargo run --example whisper --release -- --model distil-small.en
377
- ```
378
- 5. To specify your own audio file, add the `--input` flag:
379
- ```
380
- cargo run --example whisper --release -- --model distil-small.en --input audio.wav
381
- ```
382
-
383
- --->
384
-
385
- ### 8bit & 4bit Quantization
386
-
387
- Coming soon!
388
-
389
- ## Model Details
390
-
391
- Distil-Whisper inherits the encoder-decoder architecture from Whisper. The encoder maps a sequence of speech vector
392
- inputs to a sequence of hidden-state vectors. The decoder auto-regressively predicts text tokens, conditional on all
393
- previous tokens and the encoder hidden-states. Consequently, the encoder is only run forward once, whereas the decoder
394
- is run as many times as the number of tokens generated. In practice, this means the decoder accounts for over 90% of
395
- total inference time. Thus, to optimise for latency, the focus is on minimising the inference time of the decoder.
396
-
397
- To distill the Whisper model, we reduce the number of decoder layers while keeping the encoder fixed.
398
- The encoder (shown in green) is entirely copied from the teacher to the student and frozen during training.
399
- The student's decoder consists of a subset of the teacher decoder layers, which are initialised from maximally spaced layers.
400
- The model is then trained on a weighted sum of the KL divergence and pseudo-label loss terms.
401
-
402
- <p align="center">
403
- <img src="https://huggingface.co/datasets/distil-whisper/figures/resolve/main/architecture.png?raw=true" width="600"/>
404
- </p>
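As a rough illustration of this scheme, the sketch below copies the encoder, selects maximally spaced decoder layers, and combines the two loss terms. It is not the official training code (which lives in the linked Distil-Whisper repository); the exact layer-selection rule and loss weights used for distil-small.en are assumptions here.

```python
# Illustrative distillation sketch only, not the official Distil-Whisper training code.
import copy

import torch.nn as nn
import torch.nn.functional as F
from transformers import WhisperForConditionalGeneration

teacher = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small.en")

num_teacher_layers = teacher.config.decoder_layers  # 12 for small.en
num_student_layers = 4                              # distil-small.en keeps 4 decoder layers

# Maximally spaced teacher layer indices, e.g. [0, 4, 7, 11] for 12 -> 4 (assumed rule)
layer_ids = [
    round(i * (num_teacher_layers - 1) / (num_student_layers - 1))
    for i in range(num_student_layers)
]

student = copy.deepcopy(teacher)
student.config.decoder_layers = num_student_layers
student.model.decoder.layers = nn.ModuleList(
    [copy.deepcopy(teacher.model.decoder.layers[i]) for i in layer_ids]
)

# The encoder is copied unchanged from the teacher and frozen during training
for param in student.model.encoder.parameters():
    param.requires_grad = False

def distillation_loss(student_logits, teacher_logits, labels, kl_weight=1.0, ce_weight=1.0):
    """Weighted sum of the KL divergence to the teacher and the pseudo-label cross-entropy."""
    kl = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1), ignore_index=-100
    )
    return kl_weight * kl + ce_weight * ce
```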
405
-
406
- ## Evaluation
407
-
408
- The following code-snippet demonstrates how to evaluate the Distil-Whisper model on the LibriSpeech validation.clean
409
- dataset with [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet), meaning no
410
- audio data has to be downloaded to your local device.
411
-
412
- First, we need to install the required packages, including 🤗 Datasets to stream and load the audio data, and 🤗 Evaluate to
413
- perform the WER calculation:
414
-
415
- ```bash
416
- pip install --upgrade pip
417
- pip install --upgrade transformers datasets[audio] evaluate jiwer
418
- ```
419
-
420
- Evaluation can then be run end-to-end with the following example:
421
-
422
- ```python
423
- from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor
424
- from transformers.models.whisper.english_normalizer import EnglishTextNormalizer
425
- from datasets import load_dataset
426
- from evaluate import load
427
- import torch
428
- from tqdm import tqdm
429
-
430
- # define our torch configuration
431
- device = "cuda:0" if torch.cuda.is_available() else "cpu"
432
- torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32
433
-
434
- model_id = "distil-whisper/distil-small.en"
435
-
436
- # load the model + processor
437
- model = AutoModelForSpeechSeq2Seq.from_pretrained(model_id, torch_dtype=torch_dtype, use_safetensors=True, low_cpu_mem_usage=True)
438
- model = model.to(device)
439
- processor = AutoProcessor.from_pretrained(model_id)
440
-
441
- # load the dataset with streaming mode
442
- dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)
443
-
444
- # define the evaluation metric
445
- wer_metric = load("wer")
446
- normalizer = EnglishTextNormalizer(processor.tokenizer.english_spelling_normalizer)
447
-
448
- def inference(batch):
449
- # 1. Pre-process the audio data to log-mel spectrogram inputs
450
- audio = [sample["array"] for sample in batch["audio"]]
451
- input_features = processor(audio, sampling_rate=batch["audio"][0]["sampling_rate"], return_tensors="pt").input_features
452
- input_features = input_features.to(device, dtype=torch_dtype)
453
-
454
- # 2. Auto-regressively generate the predicted token ids
455
- pred_ids = model.generate(input_features, max_new_tokens=128)
456
-
457
- # 3. Decode the token ids to the final transcription
458
- batch["transcription"] = processor.batch_decode(pred_ids, skip_special_tokens=True)
459
- batch["reference"] = batch["text"]
460
- return batch
461
-
462
- dataset = dataset.map(function=inference, batched=True, batch_size=16)
463
-
464
- all_transcriptions = []
465
- all_references = []
466
-
467
- # iterate over the dataset and run inference
468
- for i, result in tqdm(enumerate(dataset), desc="Evaluating..."):
469
- all_transcriptions.append(result["transcription"])
470
- all_references.append(result["reference"])
471
-
472
- # normalize predictions and references
473
- all_transcriptions = [normalizer(transcription) for transcription in all_transcriptions]
474
- all_references = [normalizer(reference) for reference in all_references]
475
-
476
- # compute the WER metric
477
- wer = 100 * wer_metric.compute(predictions=all_transcriptions, references=all_references)
478
- print(wer)
479
-
480
- ```
481
- **Print Output:**
482
- ```
483
- 3.4326070294536297
484
- ```
485
-
486
- ## Intended Use
487
-
488
- Distil-Whisper is intended to be a drop-in replacement for Whisper on English speech recognition. In particular, it
489
- achieves comparable WER results over out-of-distribution test data, while being 6x faster over both short and long-form
490
- audio.
491
-
492
- ## Data
493
-
494
- Distil-Whisper is trained on 22,000 hours of audio data from 9 open-source, permissively licensed speech datasets on the
495
- Hugging Face Hub:
496
-
497
- | Dataset | Size / h | Speakers | Domain | Licence |
498
- |-----------------------------------------------------------------------------------------|----------|----------|-----------------------------|-----------------|
499
- | [People's Speech](https://huggingface.co/datasets/MLCommons/peoples_speech) | 12,000 | unknown | Internet Archive | CC-BY-SA-4.0 |
500
- | [Common Voice 13](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0) | 3,000 | unknown | Narrated Wikipedia | CC0-1.0 |
501
- | [GigaSpeech](https://huggingface.co/datasets/speechcolab/gigaspeech) | 2,500 | unknown | Audiobook, podcast, YouTube | apache-2.0 |
502
- | Fisher | 1,960 | 11,900 | Telephone conversations | LDC |
503
- | [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) | 960 | 2,480 | Audiobooks | CC-BY-4.0 |
504
- | [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) | 540 | 1,310 | European Parliament | CC0 |
505
- | [TED-LIUM](https://huggingface.co/datasets/LIUM/tedlium) | 450 | 2,030 | TED talks | CC-BY-NC-ND 3.0 |
506
- | SwitchBoard | 260 | 540 | Telephone conversations | LDC |
507
- | [AMI](https://huggingface.co/datasets/edinburghcstr/ami) | 100 | unknown | Meetings | CC-BY-4.0 |
508
- ||||||
509
- | **Total** | 21,770 | 18,260+ | | |
510
-
511
- The combined dataset spans 10 distinct domains and over 50k speakers. The diversity of this dataset is crucial to ensuring
512
- the distilled model is robust to audio distributions and noise.
513
-
514
- The audio data is then pseudo-labelled using the Whisper large-v2 model: we use Whisper to generate predictions for all
515
- the audio in our training set and use these as the target labels during training. Using pseudo-labels ensures that the
516
- transcriptions are consistently formatted across datasets and provides a sequence-level distillation signal during training.
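As a rough illustration, pseudo-labels can be generated with the large-v2 checkpoint through the `pipeline` API. This is a minimal sketch on a toy dataset, not the actual pseudo-labelling scripts from the training code:

```python
# Minimal pseudo-labelling sketch (toy dataset, single pass); the real scripts
# live in the Distil-Whisper training repository.
import torch
from datasets import load_dataset
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v2",
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device="cuda:0" if torch.cuda.is_available() else "cpu",
)

dataset = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")

# Use the Whisper large-v2 transcription of each example as its training target
pseudo_labels = [asr(sample["audio"])["text"] for sample in dataset]
print(pseudo_labels[0])
```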
517
 
518
- ## WER Filter
519
 
520
- The Whisper pseudo-label predictions are subject to mis-transcriptions and hallucinations. To ensure we only train on
521
- accurate pseudo-labels, we employ a simple WER heuristic during training. First, we normalise the Whisper pseudo-labels
522
- and the ground truth labels provided by each dataset. We then compute the WER between these labels. If the WER exceeds
523
- a specified threshold, we discard the training example. Otherwise, we keep it for training.
524
 
525
- Section 9.2 of the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430) demonstrates the effectiveness of this filter for improving downstream performance
526
- of the distilled model. We also partially attribute Distil-Whisper's robustness to hallucinations to this filter.
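A minimal sketch of this heuristic is shown below; the normaliser mirrors the evaluation example above, and the threshold value is illustrative rather than the exact one used for this checkpoint:

```python
# Sketch of the WER filter: keep an example only if the normalised WER between the
# dataset transcript and the Whisper pseudo-label is below a threshold (assumed value).
from evaluate import load
from transformers import WhisperTokenizer
from transformers.models.whisper.english_normalizer import EnglishTextNormalizer

tokenizer = WhisperTokenizer.from_pretrained("distil-whisper/distil-small.en")
normalizer = EnglishTextNormalizer(tokenizer.english_spelling_normalizer)
wer_metric = load("wer")

def keep_example(ground_truth: str, pseudo_label: str, threshold: float = 10.0) -> bool:
    reference = normalizer(ground_truth)
    prediction = normalizer(pseudo_label)
    if len(reference) == 0 or len(prediction) == 0:
        return False
    wer = 100 * wer_metric.compute(predictions=[prediction], references=[reference])
    return wer < threshold
```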
527
 
528
- ## Training
529
 
530
- The model was trained for 50,000 optimisation steps (or 12 epochs) with batch size 2056. The Tensorboard training logs can
531
- be found under: https://huggingface.co/distil-whisper/distil-small.en/tensorboard?params=scalars#frame
532
 
533
- ## Results
534
 
535
- The distilled model performs to within 1% WER of Whisper on out-of-distribution (OOD) short-form audio, and outperforms Whisper
536
- by 0.1% on OOD long-form audio. This performance gain is attributed to lower hallucinations.
537
 
538
- For a detailed per-dataset breakdown of the evaluation results, refer to Tables 16 and 17 of the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430).
539
 
540
- Distil-Whisper is also evaluated on the [ESB benchmark](https://arxiv.org/abs/2210.13352) datasets as part of the [OpenASR leaderboard](https://huggingface.co/spaces/hf-audio/open_asr_leaderboard),
541
- where it performs to within 0.2% WER of Whisper.
542
 
543
- ## Reproducing Distil-Whisper
544
 
545
- Training and evaluation code to reproduce Distil-Whisper is available under the Distil-Whisper repository: https://github.com/huggingface/distil-whisper/tree/main/training
546
 
547
- ## License
548
 
549
- Distil-Whisper inherits the [MIT license](https://github.com/huggingface/distil-whisper/blob/main/LICENSE) from OpenAI's Whisper model.
550
 
551
- ## Citation
552
 
553
- If you use this model, please consider citing the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430):
554
- ```
555
- @misc{gandhi2023distilwhisper,
556
- title={Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling},
557
- author={Sanchit Gandhi and Patrick von Platen and Alexander M. Rush},
558
- year={2023},
559
- eprint={2311.00430},
560
- archivePrefix={arXiv},
561
- primaryClass={cs.CL}
562
- }
563
- ```
564
 
565
- ## Acknowledgements
566
- * OpenAI for the Whisper [model](https://huggingface.co/openai/whisper-large-v2) and [original codebase](https://github.com/openai/whisper)
567
- * Hugging Face 🤗 [Transformers](https://github.com/huggingface/transformers) for the model integration
568
- * Google's [TPU Research Cloud (TRC)](https://sites.research.google/trc/about/) programme for Cloud TPU v4s
569
- * [`@rsonavane`](https://huggingface.co/rsonavane/distil-whisper-large-v2-8-ls) for releasing an early iteration of Distil-Whisper on the LibriSpeech dataset
 
1
  ---
2
  language:
3
  - en
4
  license: mit
5
+ base_model: distil-whisper/distil-small.en
6
+ tags:
7
+ - generated_from_trainer
8
+ datasets:
9
+ - atc
10
+ metrics:
11
+ - wer
12
+ model-index:
13
+ - name: Whisper Large v3 1500 Epochs 2 - nullonesix
14
+ results:
15
+ - task:
16
+ name: Automatic Speech Recognition
17
+ type: automatic-speech-recognition
18
+ dataset:
19
+ name: atc
20
+ type: atc
21
+ args: 'config: en, split: test'
22
+ metrics:
23
+ - name: Wer
24
+ type: wer
25
+ value: 39.23487544483986
26
  ---
27
 
28
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
29
+ should probably proofread and complete it, then remove this comment. -->
30
 
31
+ # Whisper Large v3 1500 Epochs 2 - nullonesix
32
 
33
+ This model is a fine-tuned version of [distil-whisper/distil-small.en](https://huggingface.co/distil-whisper/distil-small.en) on the atc dataset.
34
+ It achieves the following results on the evaluation set:
35
+ - Loss: 1.4151
36
+ - Wer: 39.2349
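A minimal transcription sketch for this checkpoint is shown below; the repository id is a placeholder (substitute the actual id of this model repo), and a local 16 kHz mono audio file is assumed:

```python
# Minimal usage sketch; "nullonesix/your-repo-name" is a placeholder repo id.
import torch
from transformers import pipeline

model_id = "nullonesix/your-repo-name"  # placeholder: the id of this fine-tuned checkpoint

asr = pipeline(
    "automatic-speech-recognition",
    model=model_id,
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device="cuda:0" if torch.cuda.is_available() else "cpu",
)

print(asr("atc_sample.wav")["text"])  # path to a local ATC audio file (illustrative)
```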
37
 
38
+ ## Model description
 
39
 
40
+ More information needed
41
 
42
+ ## Intended uses & limitations
 
43
 
44
+ More information needed
45
 
46
+ ## Training and evaluation data
 
47
 
48
+ More information needed
49
 
50
+ ## Training procedure
 
51
 
52
+ ### Training hyperparameters
53
 
54
+ The following hyperparameters were used during training (an illustrative configuration sketch follows this list):
55
+ - learning_rate: 1e-05
56
+ - train_batch_size: 16
57
+ - eval_batch_size: 8
58
+ - seed: 42
59
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
60
+ - lr_scheduler_type: linear
61
+ - lr_scheduler_warmup_steps: 500
62
+ - training_steps: 1500
63
+ - mixed_precision_training: Native AMP
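The listed values map onto `Seq2SeqTrainingArguments` roughly as sketched below; the output directory, evaluation cadence, and `predict_with_generate` flag are assumptions inferred from the results table rather than taken from the exact training script:

```python
# Hedged reconstruction of the training configuration from the list above; values
# marked "assumed" are not stated in this model card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-atc",        # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=1500,
    fp16=True,                         # "Native AMP" mixed precision
    eval_strategy="steps",             # assumed: the results table shows eval every 100 steps
    eval_steps=100,
    predict_with_generate=True,        # assumed, typical for Whisper fine-tuning
)
```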
64
 
65
+ ### Training results
66
 
67
+ | Training Loss | Epoch | Step | Validation Loss | Wer |
68
+ |:-------------:|:-------:|:----:|:---------------:|:-------:|
69
+ | 2.8313 | 3.5714 | 100 | 2.7177 | 74.1548 |
70
+ | 1.1366 | 7.1429 | 200 | 1.6407 | 63.0338 |
71
+ | 0.4394 | 10.7143 | 300 | 1.4737 | 47.4644 |
72
+ | 0.1686 | 14.2857 | 400 | 1.4481 | 46.3968 |
73
+ | 0.0761 | 17.8571 | 500 | 1.3707 | 40.8808 |
74
+ | 0.0452 | 21.4286 | 600 | 1.4051 | 38.5231 |
75
+ | 0.0188 | 25.0 | 700 | 1.4044 | 36.7883 |
76
+ | 0.0167 | 28.5714 | 800 | 1.4217 | 38.8345 |
77
+ | 0.0084 | 32.1429 | 900 | 1.4120 | 48.5765 |
78
+ | 0.0033 | 35.7143 | 1000 | 1.4151 | 39.2349 |
79
+ | 0.0022 | 39.2857 | 1100 | 1.4401 | 39.7242 |
80
+ | 0.0008 | 42.8571 | 1200 | 1.4591 | 39.5907 |
81
+ | 0.0007 | 46.4286 | 1300 | 1.4679 | 39.5907 |
82
+ | 0.0006 | 50.0 | 1400 | 1.4724 | 39.8577 |
83
+ | 0.0007 | 53.5714 | 1500 | 1.4737 | 39.7242 |
84
 
 
85
 
86
+ ### Framework versions
87
 
88
+ - Transformers 4.42.3
89
+ - Pytorch 2.3.0+cu121
90
+ - Datasets 2.20.0
91
+ - Tokenizers 0.19.1
 
generation_config.json CHANGED
@@ -85,6 +85,10 @@
85
  "decoder_start_token_id": 50257,
86
  "eos_token_id": 50256,
87
  "is_multilingual": false,
88
  "max_initial_timestamp_index": 50,
89
  "max_length": 448,
90
  "no_timestamps_token_id": 50362,
@@ -183,6 +187,10 @@
183
  50360,
184
  50361
185
  ],
186
- "transformers_version": "4.36.0.dev0",
187
  "use_scan": false
188
  }
 
85
  "decoder_start_token_id": 50257,
86
  "eos_token_id": 50256,
87
  "is_multilingual": false,
88
+ "lang_to_id": {
89
+ "<|en|>": 0
90
+ },
91
+ "language": "english",
92
  "max_initial_timestamp_index": 50,
93
  "max_length": 448,
94
  "no_timestamps_token_id": 50362,
 
187
  50360,
188
  50361
189
  ],
190
+ "task": "transcribe",
191
+ "task_to_id": {
192
+ "transcribe": 0
193
+ },
194
+ "transformers_version": "4.42.3",
195
  "use_scan": false
196
  }
model.safetensors CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:81ed9edca1db4cc49a7ea9d879e3c1c2c2af9eceb30020e73cfe5f059255de34
3
  size 664561848
 
1
  version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b6f29197bb7365816ae0e312c7110ec407521b7195856dee92302c7bc07a468b
3
  size 664561848
runs/Jul11_09-17-52_dpm4/events.out.tfevents.1720706132.dpm4.408810.1 ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:0c20abdcac1250f77e146581c94694d94db9a931f477aab3f767b1cae00cc1b6
3
+ size 406