---
license: apache-2.0
tags:
- mixtral
- dense
- mistral
- expert
---
|
|
|
# Unmixtraled 22B expert 1
|
|
|
> [!WARNING]
> This model outputs gibberish because it was never trained in this dense configuration. Fine-tuning or merging is required to make it useful.
|
|
|
This is a 22B Mistral model recycling weights from [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1).

The model was adapted from the Mixtral mixture-of-experts architecture to a dense Mistral architecture with the same number of layers, attention heads, and hidden dimensions.

Embedding, attention, layer norm, and LM head weights were taken directly from the 8x22B model; all MLP weights were taken from expert 1.
|
|
|
The following named weight correspondence was used:
|
|
|
| Mistral weight | Mixtral weight |
|----------------|----------------|
| `gate_proj`    | `experts.1.w1` |
| `down_proj`    | `experts.1.w2` |
| `up_proj`      | `experts.1.w3` |
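
For illustration, the conversion can be reproduced by instantiating a dense Mistral model from the Mixtral config and copying weights according to the table above. The sketch below is a minimal outline, not the exact script used for this model; it assumes the Hugging Face `transformers` parameter names for `MixtralForCausalLM` (`block_sparse_moe.experts.N.w1/w2/w3`) and `MistralForCausalLM` (`mlp.gate_proj/down_proj/up_proj`), and the `EXPERT` index and output path are illustrative.

```python
# Minimal sketch: extract one Mixtral expert into a dense Mistral model.
# Assumes enough memory to hold both checkpoints; names are illustrative.
import torch
from transformers import AutoModelForCausalLM, MistralConfig, MistralForCausalLM

EXPERT = 1  # which Mixtral expert supplies the dense MLP weights

mixtral = AutoModelForCausalLM.from_pretrained(
    "mistral-community/Mixtral-8x22B-v0.1", torch_dtype=torch.bfloat16
)
cfg = mixtral.config

# Dense Mistral config with the same layer count, heads and hidden dimensions
dense_cfg = MistralConfig(
    vocab_size=cfg.vocab_size,
    hidden_size=cfg.hidden_size,
    intermediate_size=cfg.intermediate_size,
    num_hidden_layers=cfg.num_hidden_layers,
    num_attention_heads=cfg.num_attention_heads,
    num_key_value_heads=cfg.num_key_value_heads,
    max_position_embeddings=cfg.max_position_embeddings,
    rms_norm_eps=cfg.rms_norm_eps,
    rope_theta=cfg.rope_theta,
    sliding_window=None,  # Mixtral 8x22B uses full attention
)
dense = MistralForCausalLM(dense_cfg).to(torch.bfloat16)

src = mixtral.state_dict()
dst = {}
for name in dense.state_dict():
    # MLP weights come from the chosen expert; everything else is copied as-is
    if ".mlp.gate_proj." in name:
        dst[name] = src[name.replace(".mlp.gate_proj.", f".block_sparse_moe.experts.{EXPERT}.w1.")]
    elif ".mlp.down_proj." in name:
        dst[name] = src[name.replace(".mlp.down_proj.", f".block_sparse_moe.experts.{EXPERT}.w2.")]
    elif ".mlp.up_proj." in name:
        dst[name] = src[name.replace(".mlp.up_proj.", f".block_sparse_moe.experts.{EXPERT}.w3.")]
    else:
        dst[name] = src[name]  # embeddings, attention, layer norms, lm_head

dense.load_state_dict(dst)
dense.save_pretrained("Unmixtraled-22B-v0.1-expert-1")
```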
|
|
|
## Unmixtraled models |
|
| Expert | Source | Wikitext perplexity |
|--------|--------|---------------------|
| [Unmixtraled-22B-v0.1-expert-0](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-0) | Mixtral 8x22B embed, attn, layernorm, lm_head + expert 0 MLPs | 696.69 |
| [**Unmixtraled-22B-v0.1-expert-1**](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-1) | **Mixtral 8x22B embed, attn, layernorm, lm_head + expert 1 MLPs** | **6853.04** |
| [Unmixtraled-22B-v0.1-expert-2](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-2) | Mixtral 8x22B embed, attn, layernorm, lm_head + expert 2 MLPs | 4689.18 |
| [Unmixtraled-22B-v0.1-expert-3](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-3) | Mixtral 8x22B embed, attn, layernorm, lm_head + expert 3 MLPs | 782.38 |
| [Unmixtraled-22B-v0.1-expert-4](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-4) | Mixtral 8x22B embed, attn, layernorm, lm_head + expert 4 MLPs | 2844.94 |
| [Unmixtraled-22B-v0.1-expert-5](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-5) | Mixtral 8x22B embed, attn, layernorm, lm_head + expert 5 MLPs | 1099.32 |
| [Unmixtraled-22B-v0.1-expert-6](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-6) | Mixtral 8x22B embed, attn, layernorm, lm_head + expert 6 MLPs | 341.53 |
| [Unmixtraled-22B-v0.1-expert-7](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-expert-7) | Mixtral 8x22B embed, attn, layernorm, lm_head + expert 7 MLPs | 2099.64 |
| [Unmixtraled-22B-v0.1-lerp](https://huggingface.co/thomasgauthier/Unmixtraled-22B-v0.1-lerp) | Mixtral 8x22B embed, attn, layernorm, lm_head + linear merge of expert 0-7 MLPs | 1873.99 |
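
The Wikitext perplexities above can be approximated with a standard strided evaluation. The sketch below is a rough outline only: the exact dataset split, context length, and stride used for the table are not documented here, so the numbers it produces may differ. The `wikitext-2-raw-v1` test split and a 2048-token window are assumptions.

```python
# Rough Wikitext perplexity sketch; split, context length, and stride are assumptions.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "thomasgauthier/Unmixtraled-22B-v0.1-expert-1"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
ids = tok(text, return_tensors="pt").input_ids

max_len, stride = 2048, 2048
nlls, n_tokens = [], 0
for start in range(0, ids.size(1) - 1, stride):
    chunk = ids[:, start : start + max_len].to(model.device)
    with torch.no_grad():
        # CausalLM loss is the mean NLL over the chunk's predicted tokens
        loss = model(chunk, labels=chunk.clone()).loss
    nlls.append(loss * (chunk.size(1) - 1))
    n_tokens += chunk.size(1) - 1

print("perplexity:", torch.exp(torch.stack(nlls).sum() / n_tokens).item())
```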