---
license: mit
---

# Whisper large-v3 turbo model for CTranslate2

This repository contains the conversion of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.

This model can be used in CTranslate2 or in projects based on CTranslate2 models such as [faster-whisper](https://github.com/systran/faster-whisper). It is loaded automatically by the [Mobius Labs fork of faster-whisper](https://github.com/mobiusml/faster-whisper).

## Example

```python
from faster_whisper import WhisperModel

model = WhisperModel("faster-whisper-large-v3-turbo")

segments, info = model.transcribe("audio.mp3")
for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```

Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [compute_type option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
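
For example, a sketch of loading the model with a different compute type (here `int8` on CPU; the model name and file layout are assumed to match this repository):

```python
from faster_whisper import WhisperModel

# compute_type="int8" asks CTranslate2 to quantize the saved FP16 weights
# to 8-bit integers at load time, reducing memory use on CPU.
model = WhisperModel("faster-whisper-large-v3-turbo",
                     device="cpu",
                     compute_type="int8")
```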

## Conversion details

The OpenAI model was converted with the following command:

```
ct2-transformers-converter --model openai/whisper-large-v3-turbo --output_dir faster-whisper-large-v3-turbo \
    --copy_files tokenizer.json preprocessor_config.json --quantization float16
```

### More Information

For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-large-v3-turbo).