Steelskull committed
Commit: cd310f0
Parent(s): 9467ed2
Update README.md
README.md CHANGED
@@ -64,13 +64,13 @@ AGIEval 31.95
 GPT4All 70.81
 TruthfulQA 48.39
 ```
-
+```
 Loss or Increase:
 Avg. -0.44
 AGIEval -2.31
 GPT4All -1.33
 TruthfulQA +1.90
-
+```
 
 Example of loss:
 [Steelskull/Etheria-55b-v0.1](https://huggingface.co/Steelskull/Etheria-55b-v0.1)
@@ -96,7 +96,7 @@ TruthfulQA 56.31
 Winogrande 82.79
 GSM8k 65.43
 ```
-
+```
 Merge Loss (Yi-34B-200K-DARE-megamerge-v8 compared to Etheria-55b-v0.1):
 Avg. -7.87
 AI2 Reasoning Challenge -2.65
@@ -105,7 +105,7 @@ MMLU -3.37
 TruthfulQA +0.15
 Winogrande -6.70
 GSM8k -30.25
-
+```
 In the example comparing Etheria-55b-v0.1 and Yi-34B-200K-DARE-megamerge-v8, there is a significant decrease in performance across nearly all metrics, with the average score decreasing by 7.87 points. The most notable drop is in the GSM8k benchmark, where Yi-34B-200K-DARE-megamerge-v8 outperforms Etheria-55b-v0.1 by 30.25 points.
 
 This method is still in active development, and I am currently tweaking the algorithm to improve the layer selection process,
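For reference, the "Loss or Increase" and "Merge Loss" figures above read as simple per-benchmark differences between the merged model's reported scores and those of the model it is compared against (positive = gain, negative = loss), with "Avg." treated as just another reported score that gets differenced. A minimal sketch of that bookkeeping; the score values below are hypothetical placeholders, not the actual leaderboard numbers for these models:

```python
# Minimal sketch: per-benchmark "loss or increase" between two models.
# All scores below are hypothetical placeholders, not real leaderboard results.

def score_deltas(merged, baseline):
    """Return merged-minus-baseline difference for every shared benchmark."""
    return {name: round(merged[name] - baseline[name], 2)
            for name in merged if name in baseline}

baseline = {"Avg.": 62.00, "AGIEval": 34.00, "GPT4All": 72.00, "TruthfulQA": 46.00}
merged   = {"Avg.": 61.50, "AGIEval": 32.00, "GPT4All": 70.50, "TruthfulQA": 48.00}

print(score_deltas(merged, baseline))
# -> {'Avg.': -0.5, 'AGIEval': -2.0, 'GPT4All': -1.5, 'TruthfulQA': 2.0}
```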