T145 committed on
Commit c7da6c6 · verified · 1 parent: b4f99e7

Update README.md

Files changed (1)
  1. README.md +57 -57
README.md CHANGED
@@ -1,57 +1,57 @@
- ---
- base_model:
- - SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA
- - arcee-ai/Llama-3.1-SuperNova-Lite
- - unsloth/Meta-Llama-3.1-8B-Instruct
- - akjindal53244/Llama-3.1-Storm-8B
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # Untitled Model (1)
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) as the base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA)
- * [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite)
- * [akjindal53244/Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- base_model: unsloth/Meta-Llama-3.1-8B-Instruct
- dtype: bfloat16
- merge_method: dare_ties
- slices:
- - sources:
-   - layer_range: [0, 32]
-     model: akjindal53244/Llama-3.1-Storm-8B
-     parameters:
-       density: 0.8
-       weight: 0.25
-   - layer_range: [0, 32]
-     model: arcee-ai/Llama-3.1-SuperNova-Lite
-     parameters:
-       density: 0.8
-       weight: 0.33
-   - layer_range: [0, 32]
-     model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA
-     parameters:
-       density: 0.8
-       weight: 0.42
-   - layer_range: [0, 32]
-     model: unsloth/Meta-Llama-3.1-8B-Instruct
- tokenizer_source: base
- ```
 
+ ---
+ base_model:
+ - SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA
+ - arcee-ai/Llama-3.1-SuperNova-Lite
+ - unsloth/Meta-Llama-3.1-8B-Instruct
+ - akjindal53244/Llama-3.1-Storm-8B
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ license: llama3.1
+ ---
+ # Untitled Model (1)
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [unsloth/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/unsloth/Meta-Llama-3.1-8B-Instruct) as the base.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA)
+ * [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite)
+ * [akjindal53244/Llama-3.1-Storm-8B](https://huggingface.co/akjindal53244/Llama-3.1-Storm-8B)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ base_model: unsloth/Meta-Llama-3.1-8B-Instruct
+ dtype: bfloat16
+ merge_method: dare_ties
+ slices:
+ - sources:
+   - layer_range: [0, 32]
+     model: akjindal53244/Llama-3.1-Storm-8B
+     parameters:
+       density: 0.8
+       weight: 0.25
+   - layer_range: [0, 32]
+     model: arcee-ai/Llama-3.1-SuperNova-Lite
+     parameters:
+       density: 0.8
+       weight: 0.33
+   - layer_range: [0, 32]
+     model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA
+     parameters:
+       density: 0.8
+       weight: 0.42
+   - layer_range: [0, 32]
+     model: unsloth/Meta-Llama-3.1-8B-Instruct
+ tokenizer_source: base
+ ```
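
For context on the `dare_ties` method named in the card: DARE randomly drops a fraction (1 − density) of each model's task vector (its parameter delta from the base) and rescales the survivors by 1/density, and TIES then elects a per-parameter sign and discards components that disagree before the weighted deltas are summed onto the base. The Python sketch below is a toy illustration of that idea for a single tensor, not mergekit's actual implementation:

```python
import torch

def dare_ties(base, deltas, weights, density, seed=0):
    """Toy DARE-TIES for one tensor: sparsify each task vector (DARE),
    elect a per-parameter sign (TIES), sum the agreeing parts onto the base."""
    gen = torch.Generator().manual_seed(seed)
    pruned = []
    for delta in deltas:
        # DARE: keep each entry with probability `density`, rescale survivors.
        mask = (torch.rand(delta.shape, generator=gen) < density).to(delta.dtype)
        pruned.append(delta * mask / density)
    stacked = torch.stack([w * d for w, d in zip(weights, pruned)])
    # TIES: elect the majority sign per parameter from the weighted deltas,
    # then keep only the components that agree with it.
    elected = torch.sign(stacked.sum(dim=0))
    agree = (torch.sign(stacked) == elected).to(stacked.dtype)
    return base + (stacked * agree).sum(dim=0)

# Toy tensors standing in for one weight matrix from each fine-tune.
base = torch.zeros(4, 4)
deltas = [torch.randn(4, 4) for _ in range(3)]
merged = dare_ties(base, deltas, weights=[0.25, 0.33, 0.42], density=0.8)
```

The real merge is driven entirely by the YAML config above, typically through mergekit's `mergekit-yaml` CLI (e.g. `mergekit-yaml config.yml ./merged-model`).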
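Since the card sets `library_name: transformers`, the merged checkpoint should load like any Llama 3.1 model. A minimal loading sketch, assuming a hypothetical repo id `T145/untitled-model-1` for the published weights (the card does not name one):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "T145/untitled-model-1"  # hypothetical repo id, for illustration only

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

prompt = "Explain model merging in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```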