---
license: apache-2.0
base_model:
- rombodawg/Rombos-LLM-V2.5-Qwen-32b
library_name: transformers
tags:
- merge
- mergekit
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method

This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method (`dare_ties` in the configuration below), with [rombodawg/Rombos-LLM-V2.5-Qwen-32b](https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-32b) as the base model.
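As a rough intuition for what `dare_ties` does, here is a minimal toy sketch in plain Python (not mergekit's actual implementation, which operates on whole tensors): each model's delta from the base is randomly pruned down to the configured `density` and rescaled (DARE), then per-parameter contributions are combined only where their sign agrees with the weighted majority (TIES sign election).

```python
import random

def dare_prune(delta, density, rng):
    # DARE: randomly keep a `density` fraction of delta entries,
    # rescaling survivors by 1/density so the expected sum is preserved.
    return [d / density if rng.random() < density else 0.0 for d in delta]

def dare_ties_merge(base, deltas, weights, density, seed=0):
    # Toy merge over flat lists of floats; `deltas` are (model - base)
    # parameter differences, one list per donor model.
    rng = random.Random(seed)
    pruned = [dare_prune(d, density, rng) for d in deltas]
    merged = []
    for i, b in enumerate(base):
        signed = sum(w * p[i] for w, p in zip(weights, pruned))
        sign = 1.0 if signed >= 0 else -1.0
        # TIES sign election: only contributions matching the majority sign survive.
        num = sum(w * p[i] for w, p in zip(weights, pruned) if p[i] * sign > 0)
        den = sum(w for w, p in zip(weights, pruned) if p[i] * sign > 0)
        merged.append(b + (num / den if den else 0.0))
    return merged
```

With `density: [0.55]` as in this config, roughly 45% of each model's delta entries are dropped at random before the sign-consensus combination, which is what lets several finetunes be merged without their edits interfering everywhere at once.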
### Models Merged

The following models were included in the merge:
* [rombodawg/Rombos-LLM-V2.5-Qwen-32b](https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-32b)
* [EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2)
* [allura-org/Qwen2.5-32b-RP-Ink](https://huggingface.co/allura-org/Qwen2.5-32b-RP-Ink)
* [nbeerbower/Qwen2.5-Gutenberg-Doppel-32B](https://huggingface.co/nbeerbower/Qwen2.5-Gutenberg-Doppel-32B)
### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: rombodawg/Rombos-LLM-V2.5-Qwen-32b
parameters:
  int8_mask: true
  rescale: false
  normalize: true
dtype: bfloat16
tokenizer_source: union
merge_method: dare_ties
models:
  - model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
    parameters:
      weight: [0.4]
      density: [0.55]
  - model: nbeerbower/Qwen2.5-Gutenberg-Doppel-32B
    parameters:
      weight: [0.3]
      density: [0.55]
  - model: allura-org/Qwen2.5-32b-RP-Ink
    parameters:
      weight: [0.4]
      density: [0.55]
```
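One detail worth noting: the raw `weight` values in the config sum to 1.1, and `normalize: true` tells mergekit to rescale them to sum to 1 before combining the deltas. The effective proportions can be computed directly (a worked example, not mergekit output):

```python
# Raw per-model weights from the YAML config above; with normalize: true
# they are rescaled so they sum to 1 before the deltas are combined.
raw = {
    "EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2": 0.4,
    "nbeerbower/Qwen2.5-Gutenberg-Doppel-32B": 0.3,
    "allura-org/Qwen2.5-32b-RP-Ink": 0.4,
}
total = sum(raw.values())  # 1.1
effective = {model: w / total for model, w in raw.items()}
for model, w in effective.items():
    print(f"{model}: {w:.3f}")
```

So the two 0.4-weighted models each contribute about 36% of the merged delta and the Gutenberg model about 27%, rather than the nominal 40/30/40.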