# CTranslate2 Conversion of whisper-large-v3-turbo (INT8 Quantization)

This model is a conversion of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) to the CTranslate2 format using INT8 quantization, intended primarily for use with [faster-whisper](https://github.com/SYSTRAN/faster-whisper).
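A minimal usage sketch with faster-whisper is shown below; the audio file name is a placeholder, and the repository ID matches this model's Hub name.

```python
from faster_whisper import WhisperModel

# Load the INT8-quantized CTranslate2 model from the Hugging Face Hub
# (a local directory path works as well).
model = WhisperModel("Zoont/faster-whisper-large-v3-turbo-int8-ct2", compute_type="int8")

# Transcribe an audio file (path is a placeholder).
segments, info = model.transcribe("audio.mp3")
print(f"Detected language: {info.language} (p={info.language_probability:.2f})")
for segment in segments:
    print(f"[{segment.start:.2f}s -> {segment.end:.2f}s] {segment.text}")
```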
## Model Details
For more details about the model, see its [original model card](https://huggingface.co/openai/whisper-large-v3-turbo).
## Conversion Details
The original model was converted using the following command:
```
ct2-transformers-converter --model whisper-large-v3-turbo --copy_files tokenizer.json preprocessor_config.json --output_dir faster-whisper-large-v3-turbo-int8-ct2 --quantization int8
```
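Equivalently, the conversion can be scripted through the CTranslate2 Python converter API. The sketch below mirrors the command above; it assumes the `ctranslate2` package is installed and that the model argument refers to the Hub ID `openai/whisper-large-v3-turbo`.

```python
from ctranslate2.converters import TransformersConverter

# Python equivalent of the ct2-transformers-converter command above
# (model ID assumed to be the Hugging Face Hub repository).
converter = TransformersConverter(
    "openai/whisper-large-v3-turbo",
    copy_files=["tokenizer.json", "preprocessor_config.json"],
)
converter.convert("faster-whisper-large-v3-turbo-int8-ct2", quantization="int8")
```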