CTranslate2 int8 version of turbcat 8b

This is an int8_float16 quantization of turbcat 8b.
See more on CTranslate2: Docs | GitHub

This model and its dataset were created by Kaltcit, an admin of the Exllama Discord server.

This model was converted to ct2 format using the following command:

ct2-transformers-converter --model kat_turbcat --output_dir turbcat-ct2 --quantization int8_float16 --low_cpu_mem_usage

No conversion is needed when using the model from this repository, as it is already in ct2 format.
