|Average Total | 47.17| 51.42| 52.38| +5.21| +0.96|
```

**HumanEval:**

On code tasks, I first set out to make a Hermes-2 coder, but found that code training can also bring generalist improvements to the model, so I settled for slightly less code capability in exchange for maximum generalist capability. That said, code performance made a decent jump alongside the model's overall capabilities:

HumanEval: 50.7% @ Pass1
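For context on the metric, Pass@1 is the fraction of HumanEval problems whose generated solution passes the unit tests; with multiple samples per problem it is usually computed with the unbiased pass@k estimator from the Codex paper. A minimal sketch (the pass/fail flags below are illustrative, not this model's actual per-problem results):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: probability that at least one of k
    samples, drawn without replacement from n generations of which c
    are correct, passes the tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Illustrative: one generation per problem, so with n = k = 1 the
# estimator reduces to the per-problem pass flag (0 or 1).
correct = [1, 0, 1, 1]  # hypothetical pass/fail flags, one per problem
score = sum(pass_at_k(1, c, 1) for c in correct) / len(correct)
print(f"Pass@1 = {score:.1%}")  # Pass@1 = 75.0%
```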
# Prompt Format

OpenHermes 2.5 now uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue.
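As a minimal sketch of what that structure looks like, each ChatML turn is wrapped in `<|im_start|>{role}` / `<|im_end|>` tokens, and an open assistant turn is appended so the model continues as the assistant (the system and user messages below are illustrative, not from the model card):

```python
def chatml_prompt(messages, add_generation_prompt=True):
    """Assemble a ChatML prompt string from (role, content) pairs.

    Each turn becomes "<|im_start|>{role}\n{content}<|im_end|>"; when
    add_generation_prompt is True, an open "<|im_start|>assistant" turn
    is appended so the model generates the assistant's reply.
    """
    parts = [f"<|im_start|>{role}\n{content}<|im_end|>" for role, content in messages]
    if add_generation_prompt:
        parts.append("<|im_start|>assistant")
    return "\n".join(parts)

# Illustrative multi-turn setup
prompt = chatml_prompt([
    ("system", "You are a helpful assistant."),
    ("user", "Hello, who are you?"),
])
print(prompt)
```

In practice the tokenizer's chat template handles this assembly; the sketch just makes the token layout explicit.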