---
base_model:
- IlyaGusev/saiga_nemo_12b
- elinas/Chronos-Gold-12B-1.0
- Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24
- MarinaraSpaghetti/NemoMix-Unleashed-12B
library_name: transformers
tags:
- mergekit
- merge
- rp
- role-play
- mistral
language:
- ru
- en
---
# SAINEMO-reMIX
![SAINEMO-reMIX](./remixwife.webp)

# GGUF: thanks to team mradermacher
https://huggingface.co/mradermacher/SAINEMO-reMIX-GGUF

# GGUF imatrix
https://huggingface.co/mradermacher/SAINEMO-reMIX-i1-GGUF
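
For local inference with the GGUF quants, a minimal llama-cpp-python sketch is shown below; the quant filename and context size are placeholders, so substitute whichever file you actually downloaded.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The model_path filename is a placeholder -- use the quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="SAINEMO-reMIX.Q4_K_M.gguf",  # hypothetical quant filename
    n_ctx=8192,                              # assumed context window
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful role-play assistant."},
        {"role": "user", "content": "Привет! Расскажи о себе."},
    ],
    temperature=0.9,
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```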
# Leaderboard
![SAINEMO-reMIX](./learderboard.png)

# Presets
These presets work well with this model:
https://huggingface.co/MarinaraSpaghetti/SillyTavern-Settings/tree/main/Customized/Mistral%20Improved
# Sampler

```
Temp: 0.7 to 1.2
Top A: 0.1
DRY: multiplier 0.8, base 1.75, allowed length 2, penalty range 0
```

I also recommend trying the stock presets from SillyTavern, such as simple-1.
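
If your backend exposes these samplers over an API, the settings map onto request fields roughly as in the sketch below. The endpoint, port, and field names follow KoboldCpp's generate API and are assumptions here; check your backend's documentation before relying on them.

```python
# Sketch: applying the recommended samplers via a KoboldCpp-style
# /api/v1/generate endpoint. URL and field names are assumptions --
# verify them against your backend's API documentation.
import requests

payload = {
    "prompt": "You are a role-play assistant.\nUser: Hi!\nAssistant:",
    "max_length": 256,
    "temperature": 0.9,       # recommended range is roughly 0.7 to 1.2
    "top_a": 0.1,
    "dry_multiplier": 0.8,    # DRY: 0.8 / 1.75 / 2 / 0
    "dry_base": 1.75,
    "dry_allowed_length": 2,
    "dry_penalty_range": 0,
}

resp = requests.post("http://127.0.0.1:5001/api/v1/generate", json=payload)
print(resp.json()["results"][0]["text"])
```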

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the della_linear merge method, with E:\Programs\TextGen\text-generation-webui\models\IlyaGusev_saiga_nemo_12b as the base model.

### Models Merged

The following models were included in the merge:
* E:\Programs\TextGen\text-generation-webui\models\elinas_Chronos-Gold-12B-1.0
* E:\Programs\TextGen\text-generation-webui\models\Vikhrmodels_Vikhr-Nemo-12B-Instruct-R-21-09-24
* E:\Programs\TextGen\text-generation-webui\models\MarinaraSpaghetti_NemoMix-Unleashed-12B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: E:\Programs\TextGen\text-generation-webui\models\IlyaGusev_saiga_nemo_12b
    parameters:
      weight: 0.55  # Primary emphasis on the Russian language
      density: 0.4
  - model: E:\Programs\TextGen\text-generation-webui\models\MarinaraSpaghetti_NemoMix-Unleashed-12B
    parameters:
      weight: 0.2  # RP model, slightly lower weight due to its English focus
      density: 0.4
  - model: E:\Programs\TextGen\text-generation-webui\models\elinas_Chronos-Gold-12B-1.0
    parameters:
      weight: 0.15  # Second RP model
      density: 0.4
  - model: E:\Programs\TextGen\text-generation-webui\models\Vikhrmodels_Vikhr-Nemo-12B-Instruct-R-21-09-24
    parameters:
      weight: 0.25  # Russian-language support and balance
      density: 0.4

merge_method: della_linear
base_model: E:\Programs\TextGen\text-generation-webui\models\IlyaGusev_saiga_nemo_12b
parameters:
  epsilon: 0.05
  lambda: 1
dtype: float16
tokenizer_source: base

```
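
To reproduce the merge, the YAML above can be run either with the mergekit-yaml CLI or through mergekit's Python API, as in the sketch below (patterned on the example in the mergekit README; file paths are placeholders and the exact API may vary between mergekit versions).

```python
# Sketch: running the merge config above with mergekit's Python API.
# Paths are placeholders; `mergekit-yaml config.yaml ./SAINEMO-reMIX --cuda`
# is the equivalent CLI invocation.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("sainemo-remix.yaml", "r", encoding="utf-8") as fp:  # the YAML above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./SAINEMO-reMIX",                   # output directory (placeholder)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when available
        copy_tokenizer=True,             # tokenizer_source: base is in the config
    ),
)
```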