# MusePy-1-2
---
datasets:
  - breadlicker45/musenet-encoders-12k
---

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric               | Value |
|----------------------|-------|
| Avg.                 | 25.26 |
| ARC (25-shot)        | 25.77 |
| HellaSwag (10-shot)  | 25.94 |
| MMLU (5-shot)        | 25.22 |
| TruthfulQA (0-shot)  | 49.33 |
| Winogrande (5-shot)  | 50.51 |
| GSM8K (5-shot)       | 0.0   |
| DROP (3-shot)        | 0.07  |
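The reported average is the unweighted mean of the seven per-task scores. A minimal sketch recomputing it from the values above (the dictionary below is illustrative, not part of any leaderboard API):

```python
# Per-task scores copied from the table above.
scores = {
    "ARC (25-shot)": 25.77,
    "HellaSwag (10-shot)": 25.94,
    "MMLU (5-shot)": 25.22,
    "TruthfulQA (0-shot)": 49.33,
    "Winogrande (5-shot)": 50.51,
    "GSM8K (5-shot)": 0.0,
    "DROP (3-shot)": 0.07,
}

# Unweighted mean across tasks, rounded to two decimals.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 25.26
```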