---
base_model:
- PocketDoc/Dans-PersonalityEngine-V1.1.0-12b
- LatitudeGames/Wayfarer-12B
- mistralai/Mistral-Nemo-Base-2407
- Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24
- bamec66557/MISCHIEVOUS-12B-Mix_0.6v
- aixonlab/Grey-12b
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# nemoties

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [mistralai/Mistral-Nemo-Base-2407](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407) as the base.
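
For intuition: TIES builds a "task vector" for each model (its weights minus the base), trims each vector to its highest-magnitude entries (controlled by `density` below), elects a majority sign per parameter, and averages only the deltas that agree with that sign before adding the result back onto the base. The sketch below is a simplified illustration on plain tensors, not mergekit's implementation; the function name and the scalar `density` are assumptions.

```python
import torch

def ties_merge(base: torch.Tensor, finetuned: list[torch.Tensor],
               weights: list[float], density: float = 0.5) -> torch.Tensor:
    # 1. Trim: keep the top-`density` fraction of each weighted task
    #    vector (finetuned minus base) by magnitude; zero the rest.
    deltas = []
    for ft, w in zip(finetuned, weights):
        delta = (ft - base) * w
        k = max(1, int(density * delta.numel()))
        if k < delta.numel():
            threshold = delta.abs().flatten().topk(k).values.min()
            delta = torch.where(delta.abs() >= threshold, delta,
                                torch.zeros_like(delta))
        deltas.append(delta)
    stacked = torch.stack(deltas)

    # 2. Elect sign: per parameter, the sign with the larger total mass wins.
    elected = torch.sign(stacked.sum(dim=0))

    # 3. Disjoint merge: average only the deltas that agree with the elected
    #    sign, then apply the merged task vector to the base weights.
    agree = torch.sign(stacked) == elected
    kept = stacked * agree
    count = agree.sum(dim=0).clamp(min=1)
    return base + kept.sum(dim=0) / count
```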

### Models Merged

The following models were included in the merge:
* [PocketDoc/Dans-PersonalityEngine-V1.1.0-12b](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.1.0-12b)
* [LatitudeGames/Wayfarer-12B](https://huggingface.co/LatitudeGames/Wayfarer-12B)
* [Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24](https://huggingface.co/Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24)
* [bamec66557/MISCHIEVOUS-12B-Mix_0.6v](https://huggingface.co/bamec66557/MISCHIEVOUS-12B-Mix_0.6v)
* [aixonlab/Grey-12b](https://huggingface.co/aixonlab/Grey-12b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
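# Note: list-valued density/weight entries are mergekit "gradients":
# the values are interpolated linearly across the model's layers.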
models:
  - model: PocketDoc/Dans-PersonalityEngine-V1.1.0-12b # strong on IFEval, MuSR; weak on MATH, GPQA
    parameters:
      density: [0.1, 0.25, 1.0, 0.5, 1.0]
      weight: 1.0
  - model: LatitudeGames/Wayfarer-12B # strong on personality, bias, sentiment; weaknesses unknown
    parameters:
      density: [1.0, 0.5, 0.01]
      weight: [1.0, 0.5]
  - model: aixonlab/Grey-12b # strong on BBH, MMLU; weak on IFEval, GPQA
    parameters:
      density: [0.75, 1.0, 0.6, 0.25, 0.01]
      weight: [1.0, 0.5]
  - model: Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24 # strong on MATH; weak on MuSR, GPQA
    parameters:
      density: [0.1, 0.1, 0.25, 1.0, 0.01]
      weight: 1.0
  - model: bamec66557/MISCHIEVOUS-12B-Mix_0.6v # decent GPQA and other reasoning; mediocre IFEval
    parameters:
      density: [0.1, 0.5, 1.0, 0.5, 0.01]
      weight: 1.0
merge_method: ties
base_model: mistralai/Mistral-Nemo-Base-2407
tokenizer_source: PocketDoc/Dans-PersonalityEngine-V1.1.0-12b
parameters:
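  # normalize rescales the merge weights so they sum to 1;
  # int8_mask stores intermediate masks as int8 to reduce memory use.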
  normalize: true
  int8_mask: true
dtype: float16

```
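
To reproduce the merge, save the configuration above as `config.yaml` and run it through mergekit's `mergekit-yaml` CLI; the resulting directory loads like any other `transformers` checkpoint. A minimal sketch, assuming a local output path of `./nemoties` and a placeholder prompt:

```python
# Merge step (shell), with mergekit installed (pip install mergekit):
#   mergekit-yaml config.yaml ./nemoties --cuda
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("./nemoties",
                                             torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("./nemoties")

prompt = "Write the opening scene of a mystery set aboard a derelict starship."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```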