Mistral 7B Instruct v0.2 Turkish

Description

This repo (sayhan/Mistral-7B-Instruct-v0.2-turkish-GGUF) contains GGUF-format model files for malhajar's Mistral 7B Instruct v0.2 Turkish.

Original model: malhajar's Mistral 7B Instruct v0.2 Turkish

Quantization methods

| Quantization method | Bits | Size | Use case | Recommended |
|---|---|---|---|---|
| Q2_K | 2 | 2.72 GB | smallest, significant quality loss | ❌ |
| Q3_K_S | 3 | 3.16 GB | very small, high quality loss | ❌ |
| Q3_K_M | 3 | 3.52 GB | very small, high quality loss | ❌ |
| Q3_K_L | 3 | 3.82 GB | small, substantial quality loss | ❌ |
| Q4_0 | 4 | 4.11 GB | legacy; small, very high quality loss | ❌ |
| Q4_K_S | 4 | 4.14 GB | small, greater quality loss | ❌ |
| Q4_K_M | 4 | 4.37 GB | medium, balanced quality | ✅ |
| Q5_0 | 5 | 5.00 GB | legacy; medium, balanced quality | ❌ |
| Q5_K_S | 5 | 5.00 GB | large, low quality loss | ✅ |
| Q5_K_M | 5 | 5.13 GB | large, very low quality loss | ✅ |
| Q6_K | 6 | 5.94 GB | very large, extremely low quality loss | ❌ |
| Q8_0 | 8 | 7.70 GB | very large, extremely low quality loss | ❌ |
| FP16 | 16 | 14.5 GB | enormous, minuscule quality loss | ❌ |
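
A single quantized file can be fetched with the huggingface_hub library. The sketch below is a minimal example; the filename is an assumption (check the repo's file list for the exact name of the quant you want):

```python
# Minimal sketch: download one quantized file from this repo.
# The filename below is an assumed name; verify it in the repo's file list.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="sayhan/Mistral-7B-Instruct-v0.2-turkish-GGUF",
    filename="mistral-7b-instruct-v0.2-turkish.Q4_K_M.gguf",  # assumed name
)
print(model_path)  # local path to the cached GGUF file
```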

Prompt Template

```
### Instruction:
<prompt> (without the <>)
### Response:
```
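
As a minimal sketch of local inference with this template, using llama-cpp-python; the filename and the Turkish example prompt are illustrative assumptions, not part of this repo:

```python
# Minimal sketch: run a GGUF file from this repo with llama-cpp-python
# (pip install llama-cpp-python), using the prompt template above.
from llama_cpp import Llama

llm = Llama(model_path="mistral-7b-instruct-v0.2-turkish.Q4_K_M.gguf")  # assumed filename

prompt = (
    "### Instruction:\n"
    "Türkiye'nin başkenti neresidir?\n"  # "What is the capital of Türkiye?"
    "### Response:\n"
)
output = llm(prompt, max_tokens=128, stop=["### Instruction:"])
print(output["choices"][0]["text"])
```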
Model details

Format: GGUF
Model size: 7.24B params
Architecture: llama
