Update README.md

# Padma_SLM_7b_v3_folder

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with bophades-mistral-truthy-DPO-7B as the base model.
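
For context, task arithmetic builds the merged checkpoint by adding each fine-tuned model's parameter delta from the base (its "task vector") back onto the base weights, scaled by a per-model weight; the actual weights come from the merge configuration and are not shown in this excerpt:

$$\theta_{\text{merged}} = \theta_{\text{base}} + \sum_i \lambda_i \, (\theta_i - \theta_{\text{base}})$$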
### Models Merged

The following models were included in the merge:
* multi_verse_model
* Calme-7B-Instruct-v0.9
* YamshadowExperiment28-7B

### Configuration
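
The exact YAML used for this merge is not included in this excerpt. As a rough sketch only, a mergekit `task_arithmetic` configuration consistent with the details above could look like the following; the model paths, per-model weights, `normalize` setting, and `dtype` are illustrative assumptions, not values taken from this repository:

```yaml
# Hypothetical config.yml sketch matching the merge details above.
# Paths, weights, normalize, and dtype are assumptions, not the actual
# values used to build Padma_SLM_7b_v3_folder.
models:
  - model: multi_verse_model
    parameters:
      weight: 0.3            # assumed per-model weight
  - model: Calme-7B-Instruct-v0.9
    parameters:
      weight: 0.3            # assumed per-model weight
  - model: YamshadowExperiment28-7B
    parameters:
      weight: 0.3            # assumed per-model weight
merge_method: task_arithmetic
base_model: bophades-mistral-truthy-DPO-7B
parameters:
  normalize: false           # assumed
dtype: bfloat16              # assumed
```

A configuration like this is normally run with `mergekit-yaml config.yml ./output-model-directory`, which writes the merged weights to the given output directory.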