![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/QkbFYjmpqCKfCyWnF-rwf.png)
(Image credit goes to [NeuralNovel](https://huggingface.co/NeuralNovel))
# Making frankenMoEs more than just a meme... (this is the GGUF version, meant to be run with a llama.cpp-based loader, e.g. through Oobabooga's text-generation-webui or another GGUF-compatible backend)
I was approached with the idea of making a merge focused on storytelling, and considering frankenMoEs' tendency to hallucinate, I thought that was a wonderful idea. However, I wanted it to be more than just a "meme model"; I wanted to make something that would actually work. So we decided to use [SanjiWatsuki/Loyal-Macaroni-Maid-7B](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B) as the base, [cognitivecomputations/dolphin-2.6-mistral-7b](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b) as two of the four experts to stabilize it, [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B) to improve its logical reasoning, and [NeuralNovel/Panda-7B-v0.1](https://huggingface.co/NeuralNovel/Panda-7B-v0.1) to improve its creativity and nuanced storytelling mechanics.
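
To illustrate what a four-expert layout like this looks like in practice, here is a minimal, hypothetical sketch in the style of a mergekit-moe configuration. The gate mode, dtype, and especially the `positive_prompts` routing hints are illustrative assumptions, not the actual recipe used to build this model.

```python
# Hypothetical sketch of a mergekit-moe style configuration for the expert
# layout described above. The positive_prompts routing hints are illustrative
# guesses, not the hints actually used for this merge.
import yaml  # pip install pyyaml

config = {
    "base_model": "SanjiWatsuki/Loyal-Macaroni-Maid-7B",
    "gate_mode": "hidden",   # route tokens by hidden-state similarity to the prompt hints
    "dtype": "bfloat16",
    "experts": [
        # dolphin appears twice to stabilize the mixture
        {"source_model": "cognitivecomputations/dolphin-2.6-mistral-7b",
         "positive_prompts": ["assistant", "explain", "answer the question"]},
        {"source_model": "cognitivecomputations/dolphin-2.6-mistral-7b",
         "positive_prompts": ["chat", "conversation", "instruction"]},
        # Silicon-Maid for logical reasoning
        {"source_model": "SanjiWatsuki/Silicon-Maid-7B",
         "positive_prompts": ["reason", "logic", "step by step"]},
        # Panda for creativity and storytelling
        {"source_model": "NeuralNovel/Panda-7B-v0.1",
         "positive_prompts": ["story", "creative writing", "narrative"]},
    ],
}

# Write the config out; mergekit-moe consumes it as YAML,
# roughly: mergekit-moe frankenmoe.yml ./output-model
with open("frankenmoe.yml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)
```

This only sketches the structure of such a merge; the files provided in this repo are the already-merged and quantized GGUF outputs.
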
We believe that this model, while it may not be better at logical reasoning than the base Mixtral Instruct, is definitely more creative. Special thanks to [NeuralNovel](https://huggingface.co/NeuralNovel) for collaborating with me on this project.
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mixtral-8x7b-v0.1.Q2_K.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q2_K.gguf) | Q2_K | 2 | 7.87 GB| 9.94 GB | smallest, significant quality loss - not recommended for most purposes |
| [mixtral-8x7b-v0.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q3_K_M.gguf) | Q3_K_M | 3 | 10.28 GB| 12.47 GB | very small, high quality loss |
| [mixtral-8x7b-v0.1.Q4_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q4_0.gguf) | Q4_0 | 4 | 13.30 GB| 15.43 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mixtral-8x7b-v0.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q4_K_M.gguf) | Q4_K_M | 4 | 13.32 GB| 15.73 GB | medium, balanced quality - recommended |
| [mixtral-8x7b-v0.1.Q5_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q5_0.gguf) | Q5_0 | 5 | 16.24 GB| 18.64 GB | legacy; large, balanced quality |
| [mixtral-8x7b-v0.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q5_K_M.gguf) | Q5_K_M | 5 | 16.25 GB| ~18.64 GB | large, balanced quality - recommended |
| [mixtral-8x7b-v0.1.Q6_K.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q6_K.gguf) | Q6_K | 6 | 19.35 GB| 21.52 GB | very large, extremely low quality loss |
| [mixtral-8x7b-v0.1.Q8_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q8_0.gguf) | Q8_0 | 8 | 25.06 GB| 27.43 GB | very large, extremely low quality loss - not recommended |

**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
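
As a quick illustration of loading one of these GGUF files with partial GPU offloading, here is a minimal sketch using the llama-cpp-python bindings. The file name, context size, layer count, and prompt are placeholders; adjust them to whichever quant you downloaded and how much VRAM you have.

```python
# Minimal sketch: load a downloaded GGUF quant with llama-cpp-python.
# model_path, n_gpu_layers, and the prompt are placeholder values.
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="./mixtral-8x7b-v0.1.Q4_K_M.gguf",  # whichever quant you downloaded
    n_ctx=4096,       # context window
    n_gpu_layers=20,  # layers offloaded to VRAM; 0 keeps everything in system RAM
)

output = llm(
    "Write the opening paragraph of a short story about a lighthouse keeper.",
    max_tokens=256,
    temperature=0.8,
)
print(output["choices"][0]["text"])
```

In Oobabooga's text-generation-webui, the equivalent is to select the llama.cpp model loader and raise its n-gpu-layers setting as far as your VRAM allows.
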
# "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
### (From the MistralAI papers... click the quoted question above to navigate to it directly.)