# euclaise/Echo-3B-GGUF

Quantized GGUF model files for Echo-3B from euclaise.

| Name | Quant method | Size |
| --- | --- | --- |
| echo-3b.fp16.gguf | fp16 | 5.59 GB |
| echo-3b.q2_k.gguf | q2_k | 1.20 GB |
| echo-3b.q3_k_m.gguf | q3_k_m | 1.39 GB |
| echo-3b.q4_k_m.gguf | q4_k_m | 1.71 GB |
| echo-3b.q5_k_m.gguf | q5_k_m | 1.99 GB |
| echo-3b.q6_k.gguf | q6_k | 2.30 GB |
| echo-3b.q8_0.gguf | q8_0 | 2.97 GB |
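All of the files above use the GGUF container format. As a quick sanity check after downloading, a valid GGUF file begins with the 4-byte magic `GGUF` followed by a little-endian uint32 format version. A minimal sketch of that check (the sample header bytes below are synthetic, not taken from these files):

```python
import struct

def read_gguf_header(data: bytes):
    """Return (magic, version) parsed from the first 8 bytes of a GGUF file."""
    magic = data[:4]
    if magic != b"GGUF":
        raise ValueError(f"not a GGUF file (magic={magic!r})")
    # Version is a little-endian unsigned 32-bit integer right after the magic.
    (version,) = struct.unpack("<I", data[4:8])
    return magic.decode("ascii"), version

# Synthetic example header: magic "GGUF" + version 3.
sample = b"GGUF" + struct.pack("<I", 3)
print(read_gguf_header(sample))  # -> ('GGUF', 3)
```

In practice you would pass the first 8 bytes of the downloaded file (e.g. `open("echo-3b.q4_k_m.gguf", "rb").read(8)`) instead of the synthetic sample.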


Model size: 2.8B params
Architecture: stablelm


Model tree for afrideva/Echo-3B-GGUF:
Base model: euclaise/Echo-3B (this model is a quantized version of it)
