---
base_model:
- IlyaGusev/saiga_nemo_12b
- elinas/Chronos-Gold-12B-1.0
- Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24
- MarinaraSpaghetti/NemoMix-Unleashed-12B
library_name: transformers
tags:
- mergekit
- merge
- rp
- role-play
- mistral
language:
- ru
- en
---
# SAINEMO-reMIX

![SAINEMO-reMIX](./remixwife.webp)

# GGUF: thanks to team mradermacher
https://huggingface.co/mradermacher/SAINEMO-reMIX-GGUF

# GGUF imatrix
https://huggingface.co/mradermacher/SAINEMO-reMIX-i1-GGUF

# Leaderboard
![SAINEMO-reMIX](./learderboard.png)

# Presets
These presets work well with this model:
https://huggingface.co/MarinaraSpaghetti/SillyTavern-Settings/tree/main/Customized/Mistral%20Improved

# Sampler
```
Temp: 0.7-1.2 (approximately)
Top A: 0.1
DRY: 0.8 / 1.75 / 2 / 0
```
I also recommend trying the stock presets from SillyTavern, such as simple-1.

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the della_linear merge method, with IlyaGusev/saiga_nemo_12b as the base.
### Models Merged

The following models were included in the merge:

* elinas/Chronos-Gold-12B-1.0
* Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24
* MarinaraSpaghetti/NemoMix-Unleashed-12B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: E:\Programs\TextGen\text-generation-webui\models\IlyaGusev_saiga_nemo_12b
    parameters:
      weight: 0.55 # Main emphasis on Russian
      density: 0.4
  - model: E:\Programs\TextGen\text-generation-webui\models\MarinaraSpaghetti_NemoMix-Unleashed-12B
    parameters:
      weight: 0.2 # RP model, slightly lower weight due to its English focus
      density: 0.4
  - model: E:\Programs\TextGen\text-generation-webui\models\elinas_Chronos-Gold-12B-1.0
    parameters:
      weight: 0.15 # Second RP model
      density: 0.4
  - model: E:\Programs\TextGen\text-generation-webui\models\Vikhrmodels_Vikhr-Nemo-12B-Instruct-R-21-09-24
    parameters:
      weight: 0.25 # Russian-language support and balance
      density: 0.4
merge_method: della_linear
base_model: E:\Programs\TextGen\text-generation-webui\models\IlyaGusev_saiga_nemo_12b
parameters:
  epsilon: 0.05
  lambda: 1
dtype: float16
tokenizer_source: base
```
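For intuition, here is a rough Python sketch of the kind of weighted delta merge that della_linear performs. This is **not** mergekit's implementation and the function names are hypothetical: real DELLA prunes delta parameters stochastically with magnitude-dependent probabilities and rescales the survivors, whereas this toy version simply keeps the top `density` fraction of each model's delta by magnitude and adds the weighted results onto the base parameters.

```python
def prune_delta(delta, density):
    """Toy stand-in for DELLA's density pruning: keep only the
    top `density` fraction of delta entries by magnitude."""
    k = max(1, round(len(delta) * density))
    threshold = sorted((abs(d) for d in delta), reverse=True)[k - 1]
    return [d if abs(d) >= threshold else 0.0 for d in delta]

def della_linear_sketch(base, finetuned, weights, density=0.4):
    """Simplified merge: merged = base + sum_i(weight_i * pruned(delta_i)),
    where delta_i = finetuned_i - base. Each argument is a flat list of
    parameters; real merges operate tensor by tensor."""
    merged = list(base)
    for params, w in zip(finetuned, weights):
        delta = [p - b for p, b in zip(params, base)]
        for i, d in enumerate(prune_delta(delta, density)):
            merged[i] += w * d
    return merged

# Tiny illustration with one "finetuned" model at weight 0.55:
base = [1.0, 2.0, 3.0, 4.0]
tuned = [[1.5, 2.0, 3.0, 5.0]]
print(della_linear_sketch(base, tuned, [0.55], density=0.5))
```

In the config above, `density: 0.4` controls how aggressively each model's delta is sparsified before the weighted sum, and the per-model `weight` values set each model's contribution to that sum.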