Commit c30fd2d · Parent(s): 73e8987

Update README.md

README.md CHANGED
@@ -41,7 +41,7 @@ apt-get install git-lfs
 
 ```
 git lfs install
-git clone https://huggingface.co/kurianbenoy/vegam-whisper-medium-ml-fp16
+git clone https://huggingface.co/kurianbenoy/vegam-whisper-medium-ml-int16
 ```
 
 ## Usage
@@ -49,7 +49,7 @@ git clone https://huggingface.co/kurianbenoy/vegam-whisper-medium-ml-fp16
 ```
 from faster_whisper import WhisperModel
 
-model_path = "vegam-whisper-medium-ml-fp16"
+model_path = "vegam-whisper-medium-ml-int16"
 
 # Run on GPU with FP16
 model = WhisperModel(model_path, device="cuda", compute_type="float16")
@@ -67,7 +67,7 @@ for segment in segments:
 ```
 from faster_whisper import WhisperModel
 
-model_path = "vegam-whisper-medium-ml-fp16"
+model_path = "vegam-whisper-medium-ml-int16"
 
 model = WhisperModel(model_path, device="cuda", compute_type="float16")
 
@@ -90,8 +90,8 @@ Note: The audio file [00b38e80-80b8-4f70-babf-566e848879fc.webm](https://hugging
 This conversion was possible with wonderful [CTranslate2 library](https://github.com/OpenNMT/CTranslate2) leveraging the [Transformers converter for OpenAI Whisper](https://opennmt.net/CTranslate2/guides/transformers.html#whisper).The original model was converted with the following command:
 
 ```
-ct2-transformers-converter --model thennal/whisper-medium-ml --output_dir vegam-whisper-medium-ml-fp16 \
- --quantization float16
+ct2-transformers-converter --model thennal/whisper-medium-ml --output_dir vegam-whisper-medium-ml-int16 \
+ --quantization int16
 ```
 
 ## Many Thanks to
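
For reference, the Usage hunks above change only the model directory name; the surrounding README code loads the converted model with faster-whisper and then iterates over transcription segments (hence the `for segment in segments:` context line). Below is a minimal, self-contained sketch of that flow, assuming the renamed vegam-whisper-medium-ml-int16 repository has been cloned into the working directory; the audio file name and the beam_size value are illustrative placeholders, not content from this commit.

```
from faster_whisper import WhisperModel

# Local directory produced by the `git clone` step above (renamed to -int16 in this commit)
model_path = "vegam-whisper-medium-ml-int16"

# Run on GPU with FP16 compute, as in the README snippet
model = WhisperModel(model_path, device="cuda", compute_type="float16")

# "audio.webm" is a placeholder for your own recording; transcribe() returns a
# lazy generator of segments plus metadata about the detected language
segments, info = model.transcribe("audio.webm", beam_size=5)

for segment in segments:
    print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```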
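
A related note on the conversion hunk: the renamed output directory stores int16 weights, but the compute type chosen at load time is independent of the stored quantization. The sketch below loads this int16 conversion on CPU with a matching compute type; the device="cpu" and compute_type="int16" settings are assumptions for illustration (the README's own example keeps device="cuda" with compute_type="float16").

```
from faster_whisper import WhisperModel

# Load the int16-quantized conversion on CPU. compute_type="int16" asks CTranslate2
# to compute in the stored precision; with other compute types the weights are
# converted on load (e.g. to float16 on GPU, as in the README example).
model = WhisperModel("vegam-whisper-medium-ml-int16", device="cpu", compute_type="int16")
```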