---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
- llama 3
- smaug
- lumimaid
- abliterated
- gradient
- instruct
- arimas
- breadcrumbs
---
# model

Any feedback regarding the model and its behaviour is very welcome.
In V1 I experimented with a Giraffe base, but I noticed the Giraffe version quickly lost its long-context ability.

So in V1.5 I switched back to a Gradient base.


## Merge Details
### Merge Method

This model was merged using the breadcrumbs_ties merge method, with \Llama-3-70B-Instruct-Gradient-262k as the base.
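
In breadcrumbs_ties, each contributing model is reduced to a task vector (its weights minus the base weights); the `gamma` fraction of largest-magnitude entries is dropped as outliers, the smallest entries are dropped until only `density` of the vector survives, and the remaining deltas are combined TIES-style (sign consensus, then a weighted sum) on top of the base model. A minimal PyTorch sketch of the masking step, as an illustration rather than mergekit's actual implementation:

```python
import torch

def breadcrumbs_mask(delta: torch.Tensor, density: float = 0.90, gamma: float = 0.01) -> torch.Tensor:
    """Sparsify a task vector: drop the `gamma` fraction of largest-magnitude
    entries (outliers), keep the next `density` fraction, and zero the rest.
    Simplified illustration only, not mergekit's code."""
    n = delta.numel()
    order = delta.abs().flatten().argsort(descending=True)  # indices, largest magnitude first
    top_cut = int(gamma * n)   # outliers removed from the top
    keep = int(density * n)    # entries retained below the cut
    mask = torch.zeros(n, dtype=torch.bool)
    mask[order[top_cut:top_cut + keep]] = True
    return (delta.flatten() * mask).view_as(delta)

# Example: sparsify one model's delta before the TIES combination step.
base = torch.randn(1024, 1024)
finetuned = base + 0.01 * torch.randn(1024, 1024)
sparse_delta = breadcrumbs_mask(finetuned - base, density=0.90, gamma=0.01)
```

With the values used in this merge (density 0.90, gamma 0.01), roughly the top 1% and bottom 9% of each task vector are zeroed out.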

### Models Merged

The following models were included in the merge:
* \Llama-3-Lumimaid-70B-v0.1-OAS
* \Smaug-Llama-3-70B-Instruct
* \Llama-3-70B-Instruct-abliterated-v3

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: \Llama-3-70B-Instruct-Gradient-262k
    parameters:
      weight: 0.40
      density: 0.90
      gamma: 0.01
  - model: \Llama-3-70B-Instruct-abliterated-v3
    parameters:
      weight: 0.20
      density: 0.90
      gamma: 0.01
  - model: \Smaug-Llama-3-70B-Instruct
    parameters:
      weight: 0.40
      density: 0.90
      gamma: 0.01
  - model: \Llama-3-Lumimaid-70B-v0.1-OAS
    parameters:
      weight: 0.20
      density: 0.90
      gamma: 0.01
merge_method: breadcrumbs_ties
base_model: \Llama-3-70B-Instruct-Gradient-262k
dtype: bfloat16
```
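
To reproduce the merge, save this configuration to a file (e.g. `config.yaml`) and run it through mergekit's CLI, for example `mergekit-yaml config.yaml ./output-model-directory --cuda` (the `--cuda` flag is optional and only speeds up the merge on a GPU). Note that the model paths above are truncated local paths and would need to be replaced with the corresponding Hugging Face repository IDs or local checkouts.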