ehartford committed
Commit
3339210
1 Parent(s): c8cc0a4

Update README.md

Files changed (1)
  1. README.md +78 -0
README.md CHANGED
 
---
license: apache-2.0
base_model:
- cognitivecomputations/dolphin-2.2-70b
- WizardLM/WizardMath-70B-V1.0
- migtissera/SynthIA-70B-v1.2b
- epfl-llm/meditron-70b
tags:
- mergekit
- merge
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/9OI19I3DhuPp_i8Uhp6ss.png)

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
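
Conceptually, a linear merge takes a weighted average of the corresponding weight tensors of the input models, so a slice holding a single model at weight 1.0 passes that model's layers through unchanged. A minimal sketch of the idea (illustrative only, not mergekit's actual code):

```python
import torch

def linear_merge(tensors: list[torch.Tensor], weights: list[float]) -> torch.Tensor:
    """Weighted average of matching tensors: sum(w_i * t_i) / sum(w_i)."""
    merged = sum(w * t for w, t in zip(weights, tensors))
    return merged / sum(weights)

# A model at weight 1.0 averaged with a dummy model at weight 0 reproduces
# the first model exactly -- the "no-op like passthrough" trick used in the
# config below.
a, b = torch.randn(4, 4), torch.randn(4, 4)
assert torch.allclose(linear_merge([a, b], [1.0, 0.0]), a)
```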

### Models Merged

The following models were included in the merge:
* [cognitivecomputations/dolphin-2.2-70b](https://huggingface.co/cognitivecomputations/dolphin-2.2-70b)
* [WizardLM/WizardMath-70B-V1.0](https://huggingface.co/WizardLM/WizardMath-70B-V1.0)
* [migtissera/SynthIA-70B-v1.2b](https://huggingface.co/migtissera/SynthIA-70B-v1.2b)
* [epfl-llm/meditron-70b](https://huggingface.co/epfl-llm/meditron-70b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: linear # use linear so we can include multiple models, albeit at a zero weight
parameters:
  weight: 1.0 # weight everything as 1 unless specified otherwise - linear with one model weighted at 1 is a no-op like passthrough
slices:
  - sources:
      - model: cognitivecomputations/dolphin-2.2-70b # embed_tokens comes along for the ride with whatever is the first layer
        layer_range: [0, 1]
      - model: migtissera/SynthIA-70B-v1.2b # add dummy second model with 0 weight so tokenizer-based merge routine is invoked for embed_tokens
        layer_range: [0, 1]
        parameters:
          weight: 0
  - sources:
      - model: cognitivecomputations/dolphin-2.2-70b
        layer_range: [1, 20]
  - sources:
      - model: migtissera/SynthIA-70B-v1.2b
        layer_range: [10, 30]
  - sources:
      - model: WizardLM/WizardMath-70B-V1.0
        layer_range: [20, 40]
  - sources:
      - model: epfl-llm/meditron-70b
        layer_range: [25, 45]
  - sources:
      - model: cognitivecomputations/dolphin-2.2-70b
        layer_range: [30, 50]
  - sources:
      - model: migtissera/SynthIA-70B-v1.2b
        layer_range: [40, 60]
  - sources:
      - model: WizardLM/WizardMath-70B-V1.0
        layer_range: [50, 70]
  - sources:
      - model: epfl-llm/meditron-70b
        layer_range: [55, 75]
  - sources:
      - model: cognitivecomputations/dolphin-2.2-70b
        layer_range: [60, 79]
  - sources: # same as above, but for lm_head with the last layer
      - model: cognitivecomputations/dolphin-2.2-70b
        layer_range: [79, 80]
      - model: migtissera/SynthIA-70B-v1.2b
        layer_range: [79, 80]
        parameters:
          weight: 0
dtype: float16
tokenizer_source: model:cognitivecomputations/dolphin-2.2-70b # keep exact tokenizer used by dolphin - or you could use `union` if you add all of the input models to the first/last slice, but they would need to be non-zero weight or you'll get NaNs in your embeddings
```
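
For a sense of scale, summing the slice ranges shows how deep the stacked network is. A quick tally (ranges copied from the config above):

```python
# Decoder layers contributed by each slice; the zero-weight dummy entries
# share a slice with another model and add no extra layers.
layer_ranges = [
    (0, 1),                                           # dolphin + SynthIA dummy (embed_tokens)
    (1, 20), (10, 30), (20, 40), (25, 45), (30, 50),
    (40, 60), (50, 70), (55, 75), (60, 79),
    (79, 80),                                         # dolphin + SynthIA dummy (lm_head)
]
print(sum(end - start for start, end in layer_ranges))  # 180 layers, vs. 80 in one 70B source model
```

Assuming a standard mergekit install, a config like this is run with the `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yml ./merged-model` (exact flags vary by mergekit version).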

# Example Output

> Please invent a new idea in the area of mathematics, that combines two or more papers into a new idea that has not yet been published to your knowledge