---
license: llama3
language:
- en
tags:
- moe
---

Testing... 8bpw/h8 exl2 quantization of [xxx777xxxASD/ChaoticSoliloquy-4x8B](https://huggingface.co/xxx777xxxASD/ChaoticSoliloquy-4x8B), using the [PIPPA](https://huggingface.co/datasets/royallab/PIPPA-cleaned) calibration dataset (calibration length l = 8192, rows r = 200).

---

**ORIGINAL CARD:**

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/jgyhmI451GRXri5hEj3lh.png)

(Maybe I'll change the waifu picture later.)

Experimental RP-oriented MoE; the idea was to get a model that would be equal to or better than Mixtral 8x7B and its finetunes in RP/ERP tasks.

[GGUF, Exl2](https://huggingface.co/collections/xxx777xxxASD/chaoticsoliloquy-4x8b-6628a759b5a60d8d3f51ed62)

### ChaoticSoliloquy-4x8B

```
base_model: jeiku_Chaos_RP_l3_8B
gate_mode: random
dtype: bfloat16
experts_per_token: 2
experts:
  - source_model: ChaoticNeutrals_Poppy_Porpoise-v0.6-L3-8B
  - source_model: jeiku_Chaos_RP_l3_8B
  - source_model: openlynn_Llama-3-Soliloquy-8B
  - source_model: Sao10K_L3-Solana-8B-v1
```

## Models used

- [ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.6-L3-8B)
- [jeiku/Chaos_RP_l3_8B](https://huggingface.co/jeiku/Chaos_RP_l3_8B)
- [openlynn/Llama-3-Soliloquy-8B](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B)
- [Sao10K/L3-Solana-8B-v1](https://huggingface.co/Sao10K/L3-Solana-8B-v1)

## Vision

[llama3_mmproj](https://huggingface.co/ChaoticNeutrals/Llava_1.5_Llama3_mmproj)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f5e51289c121cb864ba464/yv4C6NalqORLjvY3KKZk8.png)

## Prompt format: Llama 3
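For reference, this is the standard Llama 3 Instruct turn template (the system block is optional):

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{assistant response}<|eot_id|>
```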
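As a usage note, below is a minimal sketch of loading this exl2 quant with the ExLlamaV2 Python API. The local directory name and sampler settings are placeholders, not part of this repo:

```python
# Minimal ExLlamaV2 loading sketch (assumed local path and sampler settings).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./ChaoticSoliloquy-4x8B-8bpw-h8-exl2"  # wherever the quant was downloaded
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate the KV cache as layers are loaded
model.load_autosplit(cache)               # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

# Prompt already wrapped in the Llama 3 format shown above.
prompt = (
    "<|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n"
    "Hello!<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
)
print(generator.generate_simple(prompt, settings, 200))
```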