---
base_model:
- rhysjones/phi-2-orange-v2
- Weyaxi/Einstein-v4-phi2
library_name: transformers
tags:
- mergekit
- merge
license: mit
---
## L-MChat-Small
<div style="text-align:center;width:250px;height:250px;">
    <img src="https://cdn.lauche.eu/L-MChat-Series-Logo.jpeg" alt="L-MChat-Series-Logo">
</div>
This model is an experiment in how well small merges perform: there are plenty of merges at 7B parameters and above, but far fewer in the ~2B range.

### Merge Method

This model was merged using the SLERP merge method.
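
SLERP (spherical linear interpolation) blends two models by interpolating along the great-circle arc between their weight vectors rather than along a straight line, which better preserves the geometry of the weights. The snippet below is a minimal, self-contained sketch of the idea in PyTorch; it illustrates the technique and is not mergekit's actual implementation.

```python
# Minimal SLERP sketch: interpolate between two weight tensors.
# Illustrative only -- not mergekit's implementation.
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """t=0 returns a, t=1 returns b, intermediate t follows the arc between them."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two normalized weight vectors.
    omega = torch.acos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        out = (1.0 - t) * a_flat + t * b_flat
    else:
        out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)
```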

### Models Merged

The following models were included in the merge:
* [rhysjones/phi-2-orange-v2](https://huggingface.co/rhysjones/phi-2-orange-v2)
* [Weyaxi/Einstein-v4-phi2](https://huggingface.co/Weyaxi/Einstein-v4-phi2)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: Weyaxi/Einstein-v4-phi2
        layer_range: [0, 32]
      - model: rhysjones/phi-2-orange-v2
        layer_range: [0, 32]
merge_method: slerp
base_model: rhysjones/phi-2-orange-v2
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
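
Here `t` is the SLERP interpolation factor (0 keeps the base model's weights, 1 takes the other model's); per mergekit's conventions, the five-element lists define a gradient across the layer stack, applied separately to self-attention and MLP parameters, with 0.5 used everywhere else. To reproduce the merge, the configuration can be passed to mergekit's CLI, e.g. `mergekit-yaml config.yaml ./L-MChat-Small`.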

## Usage

Use the ChatML prompt format; the model can also be used through the Hugging Face Inference API.
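
Below is a minimal sketch of chatting with the model via `transformers`. The repository id is a placeholder (replace it with this model's actual repo id), and it assumes the tokenizer ships a ChatML chat template, as ChatML-tuned merges usually do; otherwise, format the prompt manually with `<|im_start|>`/`<|im_end|>` markers.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/L-MChat-Small"  # hypothetical repo id -- replace with the real one
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# ChatML-style turns; apply_chat_template renders them with the
# <|im_start|>/<|im_end|> markers if a ChatML template is present.
messages = [{"role": "user", "content": "Explain what a model merge is in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```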