---
base_model:
- Fizzarolli/L3-8b-Rosier-v1
- NousResearch/Meta-Llama-3-8B
- Sao10K/L3-8B-Stheno-v3.2
library_name: transformers
tags:
- mergekit
- merge

---
![By NovelAI](https://huggingface.co/inflatebot/helide-alpha-r2/resolve/main/img.png)
`"Helide" (say HE-lied) is an ion of helium -- famously a very unreactive element, which doesn't form ions in most conditions.`
# Helide

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

An experimental merge of the legendary L3-8B-Stheno with Fizzarolli's Rosier. The aim is to improve Stheno's "ball-rolling" capabilities and reduce its awkwardness with more niche content. For a first attempt, I'm surprised at how well it's doing so far, but given that this is literally my first LLM project, temper your expectations accordingly.

Revision history:

- Since R1: Changed to task arithmetic. Snazzy new model card image.
- Since R2: Fixed an unnecessary conversion.
- Since R3: Tweaked ratios; Rosier's influence cut in half.
- Since R4: Scrubbin' it down. +0.08 to Rosier (pre-normalization). Closing in on a good ratio.
- Since R5: Doubled both ratios; since normalization is enabled, this *should* be essentially the same as R5, but it makes the numbers nicer to work with, as they can now be read as a ratio against 1. (Edit: the two produce identical SHA-256 sums, so they're literally the same model. A quick sanity check of the arithmetic is sketched below.)
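
Assuming `normalize: true` simply rescales the task weights so they sum to 1 (which matches my understanding of mergekit's behavior), the equivalence is easy to verify:

```python
# Hypothetical sanity check, not part of the merge itself: if
# normalization rescales weights to sum to 1, then the R6 weights
# (1, 0.66) and the R5 weights (0.5, 0.33) describe the same merge.
def normalize(weights):
    total = sum(weights)
    return [w / total for w in weights]

print(normalize([1.0, 0.66]))  # [0.6024..., 0.3975...]
print(normalize([0.5, 0.33]))  # same values
```
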
### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as a base.
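
In essence, task arithmetic subtracts the base model from each fine-tune to get a "task vector," scales each vector by its configured weight, and adds the weighted sum back onto the base. A minimal sketch of the idea with toy tensors (not mergekit's actual implementation; the normalization step is an assumption based on the `normalize: true` setting below):

```python
import torch

# Toy stand-ins for real checkpoints: parameter name -> tensor.
base   = {"w": torch.tensor([1.0, 2.0])}
stheno = {"w": torch.tensor([1.5, 1.0])}
rosier = {"w": torch.tensor([0.0, 3.0])}

# Weights from the config below; assumed to be rescaled to sum to 1
# when normalization is enabled.
weights = {"stheno": 1.0, "rosier": 0.66}
total = sum(weights.values())

merged = {}
for name, base_t in base.items():
    # Task vector = fine-tuned weights minus base weights.
    tau_s = stheno[name] - base_t
    tau_r = rosier[name] - base_t
    merged[name] = base_t + (weights["stheno"] * tau_s
                             + weights["rosier"] * tau_r) / total

print(merged["w"])  # base nudged toward both fine-tunes
```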

### Models Merged

The following models were included in the merge:
* [Fizzarolli/L3-8b-Rosier-v1](https://huggingface.co/Fizzarolli/L3-8b-Rosier-v1)
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Sao10K/L3-8B-Stheno-v3.2
    parameters:
      weight: 1
  - model: Fizzarolli/L3-8b-Rosier-v1
    parameters:
      weight: 0.66

merge_method: task_arithmetic
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  normalize: true
dtype: bfloat16

```
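
To reproduce the merge, save the config above to a file and pass it to mergekit's command-line entry point, e.g. `mergekit-yaml config.yaml ./output-model-directory` (the file and directory names here are arbitrary).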