Update README.md
README.md CHANGED
@@ -112,20 +112,20 @@ Refer to the Provided Files table below to see what files use which methods, and
<!-- README_GGUF.md-provided-files start -->
## Provided files

-| Name | Quant method | Bits | Size |
-| ---- | ---- | ---- | ---- |
-| NorskGPT-Llama3-8b.Q2_K.gguf | Q2_K | 2 | |
-| NorskGPT-Llama3-8b.Q3_K_S.gguf | Q3_K_S | 3 | |
-| NorskGPT-Llama3-8b.Q3_K_M.gguf | Q3_K_M | 3 | |
-| NorskGPT-Llama3-8b.Q3_K_L.gguf | Q3_K_L | 3 | |
-| NorskGPT-Llama3-8b.Q4_0.gguf | Q4_0 | 4 | 4.11 GB |
-| NorskGPT-Llama3-8b.Q4_K_S.gguf | Q4_K_S | 4 | |
-| NorskGPT-Llama3-8b.Q4_K_M.gguf | Q4_K_M | 4 | |
-| NorskGPT-Llama3-8b.Q5_0.gguf | Q5_0 | 5 | |
-| NorskGPT-Llama3-8b.Q5_K_S.gguf | Q5_K_S | 5 | |
-| NorskGPT-Llama3-8b.Q5_K_M.gguf | Q5_K_M | 5 | |
-| NorskGPT-Llama3-8b.Q6_K.gguf | Q6_K | 6 | |
-| NorskGPT-Llama3-8b.Q8_0.gguf | Q8_0 | 8 | |
+| Name | Quant method | Bits | Size | Use case |
+| ---- | ---- | ---- | ---- | ---- |
+| NorskGPT-Llama3-8b.Q2_K.gguf | Q2_K | 2 | | significant quality loss - not recommended for most purposes |
+| NorskGPT-Llama3-8b.Q3_K_S.gguf | Q3_K_S | 3 | | very small, high quality loss |
+| NorskGPT-Llama3-8b.Q3_K_M.gguf | Q3_K_M | 3 | | very small, high quality loss |
+| NorskGPT-Llama3-8b.Q3_K_L.gguf | Q3_K_L | 3 | | small, substantial quality loss |
+| NorskGPT-Llama3-8b.Q4_0.gguf | Q4_0 | 4 | 4.11 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+| NorskGPT-Llama3-8b.Q4_K_S.gguf | Q4_K_S | 4 | | small, greater quality loss |
+| NorskGPT-Llama3-8b.Q4_K_M.gguf | Q4_K_M | 4 | | medium, balanced quality - recommended |
+| NorskGPT-Llama3-8b.Q5_0.gguf | Q5_0 | 5 | | legacy; medium, balanced quality - prefer using Q4_K_M |
+| NorskGPT-Llama3-8b.Q5_K_S.gguf | Q5_K_S | 5 | | large, low quality loss - recommended |
+| NorskGPT-Llama3-8b.Q5_K_M.gguf | Q5_K_M | 5 | | large, very low quality loss - recommended |
+| NorskGPT-Llama3-8b.Q6_K.gguf | Q6_K | 6 | | very large, extremely low quality loss |
+| NorskGPT-Llama3-8b.Q8_0.gguf | Q8_0 | 8 | | very large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
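The RAM note above refers to llama.cpp-style GPU offloading. As a minimal sketch of how one of the files listed in the table might be loaded with partial GPU offload, assuming the llama-cpp-python bindings are installed and the file has already been downloaded locally (the file path, layer count, and prompt below are illustrative, not taken from this README):

```python
# Sketch only: assumes llama-cpp-python is installed and the GGUF file below
# was downloaded from this repository. Values are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="./NorskGPT-Llama3-8b.Q4_K_M.gguf",  # one of the files listed above
    n_ctx=4096,        # context window
    n_gpu_layers=33,   # layers offloaded to the GPU; 0 keeps everything in system RAM
)

out = llm(
    "Skriv en kort introduksjon om fjordene i Norge.",  # "Write a short introduction about the fjords of Norway."
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```

With n_gpu_layers set to 0, the model runs entirely from system RAM, which is the scenario the RAM note describes; raising it shifts memory use from RAM to VRAM.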