Update inference times
README.md CHANGED
@@ -47,15 +47,15 @@ For evaluation, Common Voice 16.0 yue Test set is used.
 ||`alvanlii/distil-whisper-small-cantonese`|`alvanlii/whisper-small-cantonese`|
 |--|--|--|
 |CER (lower is better)|0.097|0.089|
-|GPU Inference time (sdpa) [s/sample]|0.
-|GPU Inference (regular) [s/sample]|0.
-|CPU Inference [s/sample]|1.
+|GPU Inference time (sdpa) [s/sample]|0.027|0.055|
+|GPU Inference (regular) [s/sample]|0.027|0.308|
+|CPU Inference [s/sample]|1.3|2.57|
 |Params [M]|157|242|

 Note: inference time is calculated by taking the average inference time for the CV16 yue test set

 ## Using the Model
-```
+```
 import librosa

 import torch
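
The diff hunk ends right after the `import librosa` / `import torch` lines, so the rest of the README's usage snippet is not shown here. Below is a minimal sketch of how a checkpoint like `alvanlii/whisper-small-cantonese` from the table above can be run, assuming the standard Hugging Face `transformers` Whisper API; the file name `sample.wav` is a placeholder, and this is not necessarily the exact code that follows in the README.

```python
import librosa
import torch
from transformers import WhisperProcessor, WhisperForConditionalGeneration

# Whisper checkpoints expect 16 kHz mono audio; "sample.wav" is a placeholder path
audio, _ = librosa.load("sample.wav", sr=16000)

# Checkpoint name taken from the comparison table; swap in the distil variant if preferred
model_id = "alvanlii/whisper-small-cantonese"
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

# Convert the waveform to log-mel input features and decode the generated token ids
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    predicted_ids = model.generate(inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```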