---
license: apache-2.0
datasets:
- totally-not-an-llm/EverythingLM-data-V2-sharegpt
language:
- en
library_name: transformers
---

Trained for 3 epochs on the `totally-not-an-llm/EverythingLM-data-V2-sharegpt` dataset.

Prompt format:
```
### HUMAN:
{prompt}

### RESPONSE:
<leave a newline for the model to answer>
```
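
For reference, a minimal generation sketch using `transformers`. The repo id below is inferred from the leaderboard details link further down and may need adjusting:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, inferred from the Open LLM Leaderboard details link.
model_id = "harborwater/open-llama-3b-everything-v2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Build the prompt in the "### HUMAN: / ### RESPONSE:" format shown above,
# leaving a newline after "### RESPONSE:" for the model to complete.
prompt = "### HUMAN:\nWhat is the capital of France?\n\n### RESPONSE:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7
)
# Decode only the newly generated tokens, skipping the prompt.
print(
    tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
)
```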


Note: I changed a few of the fine-tuning parameters this time around. I have no idea if it's any good, but feel free to give it a try!

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_harborwater__open-llama-3b-everything-v2).

| Metric              | Value |
|---------------------|-------|
| Avg.                | 36.29 |
| ARC (25-shot)       | 42.83 |
| HellaSwag (10-shot) | 73.28 |
| MMLU (5-shot)       | 26.87 |
| TruthfulQA (0-shot) | 37.26 |
| Winogrande (5-shot) | 66.61 |
| GSM8K (5-shot)      | 1.59  |
| DROP (3-shot)       | 5.61  |