---
base_model:
- mistralai/Mixtral-8x7B-v0.1
- jondurbin/bagel-dpo-8x7b-v0.2
- Sao10K/Sensualize-Mixtral-bf16
- mistralai/Mixtral-8x7B-v0.1
- Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
- mistralai/Mixtral-8x7B-Instruct-v0.1
tags:
- mergekit
- merge
---

# BagelMIsteryTour-v2-8x7B

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) as the base model. For an illustrative sketch of how the two stages relate to the `density` and `weight` parameters in the configuration, see the example after the configuration below.

### Models Merged

The following models were included in the merge:

* [jondurbin/bagel-dpo-8x7b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-8x7b-v0.2)
* [Sao10K/Sensualize-Mixtral-bf16](https://huggingface.co/Sao10K/Sensualize-Mixtral-bf16)
* [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) + [Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora)
* [mistralai/Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: mistralai/Mixtral-8x7B-v0.1
models:
  - model: mistralai/Mixtral-8x7B-v0.1+Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
    parameters:
      density: 0.5
      weight: 0.2
  - model: Sao10K/Sensualize-Mixtral-bf16
    parameters:
      density: 0.5
      weight: 0.1
  - model: mistralai/Mixtral-8x7B-Instruct-v0.1
    parameters:
      density: 0.6
      weight: 1.0
  - model: jondurbin/bagel-dpo-8x7b-v0.2
    parameters:
      density: 0.6
      weight: 0.5
merge_method: dare_ties
dtype: bfloat16
```
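The `density` and `weight` values above map onto the two stages of DARE TIES: each finetune's task vector (its weights minus the base model's) is randomly pruned down to the given `density` and rescaled, and the pruned vectors are then combined by TIES-style sign election, scaled by their `weight`s. The following is a toy sketch of that idea only; it is not mergekit's implementation, and it simplifies the final combination step (the real method takes a weighted disjoint mean over agreeing parameters):

```python
import torch

def dare_prune(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Drop And REscale: keep each element of a task vector with
    probability `density`, rescaling survivors by 1/density so the
    expected value of the vector is preserved."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

def ties_combine(deltas: list[torch.Tensor], weights: list[float]) -> torch.Tensor:
    """TIES sign election: sum the weighted task vectors to elect a
    majority sign per parameter, then keep only the contributions that
    agree with it. (Simplified: no normalization over agreeing weights.)"""
    stacked = torch.stack([w * d for d, w in zip(deltas, weights)])
    elected = stacked.sum(dim=0).sign()
    agree = (stacked.sign() == elected) & (elected != 0)
    return torch.where(agree, stacked, torch.zeros_like(stacked)).sum(dim=0)

# Toy usage on a single 4-element "parameter tensor" with two finetunes.
base = torch.zeros(4)
deltas = [torch.tensor([0.4, -0.2, 0.1, 0.0]),
          torch.tensor([0.3, 0.5, -0.1, 0.2])]
pruned = [dare_prune(d, density=0.6) for d in deltas]
merged = base + ties_combine(pruned, weights=[1.0, 0.5])
```

In this picture, raising a model's `density` keeps more of its task vector intact before merging, while its `weight` scales both its vote in the sign election and its contribution to the merged delta.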
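To reproduce the merge, save the configuration above (e.g. as `config.yml`) and run it through mergekit, either via the `mergekit-yaml` command-line tool or from Python. The sketch below follows the Python entry point shown in mergekit's README at the time of writing; the exact names (`MergeConfiguration`, `MergeOptions`, `run_merge`) are assumptions and may differ between mergekit versions:

```python
import torch
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (assumed saved as config.yml).
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the DARE TIES merge and write the merged model to ./merged.
run_merge(
    merge_config,
    out_path="./merged",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # carry the base tokenizer over
    ),
)
```

The command-line equivalent is `mergekit-yaml config.yml ./merged` (with `--cuda` to merge on GPU).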