Commit 31ed7f2 by HiroseKoichi (parent: 19a2c6c): Update README.md
Unfortunately, I can't compare it with 70B models because they're too slow on my machine, but this is the best sub-70B model I have used so far; I haven't felt the need to regenerate any responses, which hasn't happened with any other model. This is my first attempt at any kind of merge, and I want to share what I've learned, but this section is already longer than I wanted, so I've decided to place the rest at the bottom of the page.

# Quantization Formats

**GGUF**
- Static:
    - https://huggingface.co/HiroseKoichi/Llama-Salad-4x8B-GGUF
    - https://huggingface.co/mradermacher/Llama-Salad-4x8B-GGUF
- Imatrix:
    - https://huggingface.co/mradermacher/Llama-Salad-4x8B-i1-GGUF
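For reference, a typical way to fetch and run one of these quants locally with llama.cpp. The exact `.gguf` filename below is an assumption, not taken from the repos; check each repository's file list and substitute a real filename and the quant level your hardware can handle.

```shell
# Download a single quant file from the static GGUF repo.
# NOTE: the filename here is illustrative; pick an actual .gguf
# file from the repository's "Files" tab.
huggingface-cli download HiroseKoichi/Llama-Salad-4x8B-GGUF \
    Llama-Salad-4x8B.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp's CLI:
#   -m  path to the GGUF file
#   -c  context window size
#   -n  maximum number of tokens to generate
llama-cli -m Llama-Salad-4x8B.Q4_K_M.gguf -c 4096 -n 256 \
    -p "Write a short opening scene for a fantasy role-play."
```

Imatrix quants generally preserve quality better than static quants at the same bit width, at no extra runtime cost, so they are usually the better pick when both are available.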
# Details
- **License**: [llama3](https://llama.meta.com/llama3/license/)