Commit c693c8a · Parent(s): e0e60ef

Adding Evaluation Results (#1)

- Adding Evaluation Results (0f84cc045760a41a54be36fabb1b101c37a99bd8)

Co-authored-by: Open LLM Leaderboard PR Bot <[email protected]>
README.md CHANGED

```diff
@@ -39,4 +39,17 @@ You are a helpful AI assistant.
 
 USER: <prompt>
 ASSISTANT:
-```
+```
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ajibawa-2023__Python-Code-13B)
+
+| Metric                | Value |
+|-----------------------|-------|
+| Avg.                  | 47.16 |
+| ARC (25-shot)         | 58.79 |
+| HellaSwag (10-shot)   | 81.66 |
+| MMLU (5-shot)         | 54.78 |
+| TruthfulQA (0-shot)   | 42.83 |
+| Winogrande (5-shot)   | 74.03 |
+| GSM8K (5-shot)        | 9.55  |
+| DROP (3-shot)         | 8.5   |
```
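The diff context above shows the model's prompt template (a system line followed by `USER:` / `ASSISTANT:` markers). As a minimal sketch of how a caller might assemble a prompt in that template — the `build_prompt` helper is illustrative and not part of the model card:

```python
def build_prompt(user_message: str) -> str:
    """Format a single-turn prompt in the SYSTEM/USER/ASSISTANT template
    shown in the README diff. The exact spacing around the markers is an
    assumption based on the diff context, not a documented specification."""
    system = "You are a helpful AI assistant."
    return f"{system}\n\nUSER: {user_message}\nASSISTANT:"

print(build_prompt("Write a Python function to reverse a string."))
```

The trailing `ASSISTANT:` is left open so the model's generation continues from that marker.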