Update README.md
README.md
CHANGED
@@ -174,11 +174,11 @@ neural-chat-7b-v3-1 6.812500
 
 ### Language Model evaluation Harness:
 Compared to Aleph Alpha Luminous Models
-![Harness](
+![Harness](https://vago-solutions.de/wp-content/uploads/2023/11/luminouscompare.png "SauerkrautLM-7b-HerO Harness")
 
 *performed with newest Language Model Evaluation Harness
 ### BBH:
-![BBH](
+![BBH](https://vago-solutions.de/wp-content/uploads/2023/11/bbh.png "SauerkrautLM-7b-HerO BBH")
 *performed with newest Language Model Evaluation Harness
 ### GPT4ALL:
 Compared to Aleph Alpha Luminous Models, LeoLM and EM_German