---
base_model:
- rityak/NM-Vikhr-Magnum-dare-12B
- rityak/Magnum-DareLinearAbliterated-Instruct-DPO
- Aleteian/base-ground-2
- rityak/Vikhr-Nemo-dostoevsky-saiga-12b
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [rityak/Vikhr-Nemo-dostoevsky-saiga-12b](https://huggingface.co/rityak/Vikhr-Nemo-dostoevsky-saiga-12b) as the base model.

### Models Merged

The following models were included in the merge:

* [rityak/NM-Vikhr-Magnum-dare-12B](https://huggingface.co/rityak/NM-Vikhr-Magnum-dare-12B)
* [rityak/Magnum-DareLinearAbliterated-Instruct-DPO](https://huggingface.co/rityak/Magnum-DareLinearAbliterated-Instruct-DPO)
* [Aleteian/base-ground-2](https://huggingface.co/Aleteian/base-ground-2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: rityak/Vikhr-Nemo-dostoevsky-saiga-12b
  - model: rityak/NM-Vikhr-Magnum-dare-12B
    parameters:
      weight: 0.48
      density: [0.6, 0.5, 0.8]
  - model: rityak/Magnum-DareLinearAbliterated-Instruct-DPO
    parameters:
      weight: 0.21
      density: [0.3, 0.3, 0.2]
  - model: Aleteian/base-ground-2
    parameters:
      weight: 0.21
      density: [0.3, 0.4, 0.4]
merge_method: dare_ties
tokenizer_source: union
base_model: rityak/Vikhr-Nemo-dostoevsky-saiga-12b
dtype: float16
```
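
To reproduce the merge, the YAML above can be saved as `config.yml` and passed to mergekit's `mergekit-yaml` entry point (e.g. `mergekit-yaml config.yml ./merged-model`). Below is a minimal sketch of loading the resulting checkpoint with the transformers library; `your-namespace/merged-model` is a placeholder, since this card does not state the final repository id.

```python
# Minimal sketch: load the merged model with transformers.
# "your-namespace/merged-model" is a placeholder, not a real repo id;
# substitute the actual repository id or local path of this merge.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/merged-model"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the dtype in the merge config
    device_map="auto",
)

prompt = "Расскажи короткую историю о Петербурге."  # Russian prompt, matching the Vikhr lineage
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```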