![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/QkbFYjmpqCKfCyWnF-rwf.png)
(Image credit goes to [NeuralNovel](https://huggingface.co/NeuralNovel))
# Making frankenMoEs more than just a meme...(this is the GGUF version, to be loaded with the llama.cpp model loader in text-generation-webui (Ooba), vLLM, or a similar backend)
I was approached with the idea to make a merge based on storytelling, and considering frankenMoEs' tendency to be hallucinatory, I thought that was a wonderful idea. However, I wanted it to be more than just a "meme model" - I wanted to make something that would actually work...so we decided to use [SanjiWatsuki/Loyal-Macaroni-Maid-7B](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B) as the base, [cognitivecomputations/dolphin-2.6-mistral-7b](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b) as two of the four experts in order to stabilize it, [SanjiWatsuki/Silicon-Maid-7B](https://huggingface.co/SanjiWatsuki/Silicon-Maid-7B) to improve its logical reasoning, and [NeuralNovel/Panda-7B-v0.1](https://huggingface.co/NeuralNovel/Panda-7B-v0.1) to improve its creativity and nuanced storytelling mechanics.
We believe that, while it might not be better logically than Mixtral base instruct, this model is definitely more creative. Special thanks to [NeuralNovel](https://huggingface.co/NeuralNovel) for collaborating with me on this project.
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [mixtral-8x7b-v0.1.Q2_K.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q2_K.gguf) | Q2_K | 2 | 7.87 GB| 9.94 GB | smallest, significant quality loss - not recommended for most purposes |
| [mixtral-8x7b-v0.1.Q3_K_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q3_K_M.gguf) | Q3_K_M | 3 | 10.28 GB| 12.47 GB | very small, high quality loss |
| [mixtral-8x7b-v0.1.Q4_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q4_0.gguf) | Q4_0 | 4 | 13.30 GB| 15.43 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [mixtral-8x7b-v0.1.Q4_K_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q4_K_M.gguf) | Q4_K_M | 4 | 13.32 GB| 15.73 GB | medium, balanced quality - recommended |
| [mixtral-8x7b-v0.1.Q5_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q5_0.gguf) | Q5_0 | 5 | 16.24 GB| 18.64 GB | legacy; large, balanced quality |
| [mixtral-8x7b-v0.1.Q5_K_M.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q5_K_M.gguf) | Q5_K_M | 5 | 16.25 GB| ~18.64 GB | large, balanced quality - recommended |
| [mixtral-8x7b-v0.1.Q6_K.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q6_K.gguf) | Q6_K | 6 | 19.35 GB| 21.52 GB | very large, extremely low quality loss |
| [mixtral-8x7b-v0.1.Q8_0.gguf](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF/blob/main/mixtral-8x7b-v0.1.Q8_0.gguf) | Q8_0 | 8 | 25.06 GB| 27.43 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
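If you just want to try one of these files locally, here is a minimal sketch using llama-cpp-python (one of several llama.cpp-based loaders). The file name, context size, and sampling settings below are placeholders rather than part of this repo; pick whichever quant from the table above fits your hardware.

```python
# Minimal sketch: load a GGUF quant with llama-cpp-python (pip install llama-cpp-python).
# n_gpu_layers > 0 offloads layers to the GPU, trading RAM for VRAM as described in the note above.
from llama_cpp import Llama

llm = Llama(
    model_path="./model.Q4_K_M.gguf",  # placeholder path to whichever file you downloaded
    n_ctx=4096,                        # context window
    n_gpu_layers=20,                   # set to 0 for CPU-only inference
)

out = llm(
    "Write the opening paragraph of a short story about a lighthouse keeper.",
    max_tokens=256,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```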
# "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
### (adapted from the Hugging Face MoE blog post...click the quoted question above to read it directly.)
The scale of a model is one of the most important axes for better model quality. Given a fixed computing budget, training a larger model for fewer steps is better than training a smaller model for more steps.
Mixture of Experts enable models to be pretrained with far less compute, which means you can dramatically scale up the model or dataset size with the same compute budget as a dense model. In particular, a MoE model should achieve the same quality as its dense counterpart much faster during pretraining.
So, what exactly is a MoE? In the context of transformer models, a MoE consists of two main elements:
- **Sparse MoE layers** are used instead of dense feed-forward network (FFN) layers. MoE layers have a certain number of “experts” (e.g. 32 in my "frankenMoE"), where each expert is a neural network. In practice, the experts are FFNs, but they can also be more complex networks or even a MoE itself, leading to hierarchical MoEs!
- **A gate network or router** that determines which tokens are sent to which expert. For example, in the image below, the token “More” is sent to the second expert, and the token “Parameters” is sent to the first network. As we’ll explore later, we can send a token to more than one expert. How to route a token to an expert is one of the big decisions when working with MoEs - the router is composed of learned parameters and is pretrained at the same time as the rest of the network.
At every layer, for every token, a router network chooses two of these groups (the “experts”) to process the token and combines their outputs additively.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/up_I0R2TQGjqTShZp_1Sz.png)
Switch Layer - MoE layer from the [Switch Transformers paper](https://arxiv.org/abs/2101.03961)
So, to recap, in MoEs we replace every FFN layer of the transformer model with an MoE layer, which is composed of a gate network and a certain number of experts.
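To make the "gate network plus experts" idea concrete, here is a small, illustrative top-2 MoE layer in PyTorch. This is not the actual Mixtral (or frankenMoE) implementation - the sizes, the softmax-then-top-k routing, and the simple per-expert loop are assumptions chosen for readability.

```python
# Illustrative sketch of a sparse MoE layer with top-2 routing (not production code).
import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    def __init__(self, d_model=64, d_ff=256, n_experts=4, top_k=2):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)    # router: learned parameters
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
             for _ in range(n_experts)]               # each "expert" is just an FFN
        )
        self.top_k = top_k

    def forward(self, x):                             # x: (n_tokens, d_model)
        probs = self.gate(x).softmax(dim=-1)          # routing probabilities per token
        weights, idx = torch.topk(probs, self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                 # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out                                    # experts' outputs combined additively

x = torch.randn(8, 64)                                # 8 token embeddings
print(TinyMoE()(x).shape)                             # torch.Size([8, 64])
```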
Although MoEs provide benefits like efficient pretraining and faster inference compared to dense models, they also come with challenges:
- **Training:** MoEs enable significantly more compute-efficient pretraining, but they’ve historically struggled to generalize during fine-tuning, leading to overfitting.
- **Inference:** Although a MoE might have many parameters, only some of them are used during inference. This leads to much faster inference compared to a dense model with the same number of parameters. However, all parameters need to be loaded in RAM, so memory requirements are high. For example, [given a MoE like Mixtral 8x7B](https://huggingface.co/blog/moe), we’ll need enough VRAM to hold a dense 47B parameter model. Why 47B parameters and not 8 x 7B = 56B? That’s because in MoE models, only the FFN layers are treated as individual experts, and the rest of the model parameters are shared. At the same time, assuming just two experts are being used per token, the inference speed (FLOPs) is like using a 12B model (as opposed to a 14B model), because it computes 2x7B matrix multiplications, but with some layers shared (see the back-of-the-envelope sketch after this list).
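The 47B vs. 56B (and roughly 12-13B active) numbers come down to which parameters are per-expert and which are shared. The split below is an assumed, back-of-the-envelope breakdown purely to show the arithmetic, not the exact Mixtral parameter counts.

```python
# Rough, assumed breakdown of a Mixtral-8x7B-style model (illustrative numbers only).
n_experts, experts_per_token = 8, 2
ffn_params_per_expert = 5.6e9   # assumption: FFN (expert) parameters per expert
shared_params         = 2.0e9   # assumption: attention, embeddings, norms shared by all experts

loaded = shared_params + n_experts * ffn_params_per_expert           # must all sit in (V)RAM
active = shared_params + experts_per_token * ffn_params_per_expert   # parameters touched per token
print(f"loaded: ~{loaded / 1e9:.0f}B, active per token: ~{active / 1e9:.0f}B")
# loaded: ~47B, active per token: ~13B (which the blog rounds to "like a 12B model")
```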
If all our tokens are sent to just a few popular experts, that will make training inefficient. In a normal MoE training, the gating network converges to mostly activate the same few experts. This self-reinforces as favored experts are trained quicker and hence selected more. To mitigate this, an auxiliary loss is added to encourage giving all experts equal importance. This loss ensures that all experts receive a roughly equal number of training examples. The following sections will also explore the concept of expert capacity, which introduces a threshold of how many tokens can be processed by an expert. In transformers, the auxiliary loss is exposed via the `aux_loss` parameter.
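As a sketch of what such an auxiliary loss can look like, here is a small function in the spirit of the Switch Transformers load-balancing loss; the exact formulation, scaling coefficient, and the `aux_loss` plumbing inside the transformers library may differ from this simplified version.

```python
# Simplified load-balancing auxiliary loss: penalizes routing that favors a few experts.
import torch

def load_balancing_loss(router_logits, expert_idx, n_experts):
    """router_logits: (n_tokens, n_experts); expert_idx: (n_tokens,) top-1 expert per token."""
    probs = router_logits.softmax(dim=-1)
    # f_i: fraction of tokens dispatched to expert i
    frac_tokens = torch.bincount(expert_idx, minlength=n_experts).float() / expert_idx.numel()
    # P_i: mean routing probability assigned to expert i
    mean_probs = probs.mean(dim=0)
    # Minimized when both distributions are uniform (1 / n_experts each)
    return n_experts * torch.sum(frac_tokens * mean_probs)

logits = torch.randn(16, 4)                                    # 16 tokens, 4 experts
print(load_balancing_loss(logits, logits.argmax(dim=-1), 4))   # added to the main loss with a small weight
```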
## "Wait...but you called this a frankenMoE?"
The difference between a MoE and a "frankenMoE" lies in the fact that the router layer in a model like the one in this repo is not trained at the same time as the experts. There are rumors about someone developing a way for us to unscuff these frankenMoE models by actually training the router layer. For now, frankenMoEs remain psychotic. Raiden does improve upon the base heegyu/WizardVicuna-Uncensored-3B-0719, though.
## "Are there at least any datasets or plans for this model, in any way?"
There are many datasets involved as a result of merging four models. For one, Silicon Maid is a merge of xDan, which is trained on the [OpenOrca Dataset](https://huggingface.co/datasets/Open-Orca/OpenOrca) and the [OpenOrca DPO pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs). Loyal-Macaroni-Maid uses OpenChat-3.5, Starling, and NeuralChat, which draw on so many datasets that I'm not going to list them all here. Dolphin 2.6 Mistral also uses a large variety of datasets. Panda-7B-v0.1 was fine-tuned by my collaborator on this project from base Mistral using a private dataset. Panda gives the model its creativity while the rest act as support.
# Results
## Some examples of the model's performance.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/gPOIVGSeqsTFiT_0QWGlr.png)
Most models answer eternal life...this model gave a compelling argument. At lower quants, this model will lean towards eternal life.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/Zj45g_V_e5VH95SlPUVZC.png)
Considerably better than MythoMax in my opinion...
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/dzfP1qZrOtCCpLmH7U1JP.png)
It actually wrote a perfect haiku. This model is so much better than my other frankenMoEs...
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/FosMQSQIieUv0fzS8XP0x.png)
![image/gif](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/KNiQIxuGnBzKWU7xrJWqi.gif)
There's a reason I pushed this straight to GGUF right away: I lack the compute to make EXL2 or other quantizations, but perhaps someone else would be interested in doing that.