# Sparse0.5_OPT-1.3
---
license: apache-2.0
---

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                | Value |
|-----------------------|------:|
| Avg.                  | 29.14 |
| ARC (25-shot)         | 27.13 |
| HellaSwag (10-shot)   | 48.69 |
| MMLU (5-shot)         | 25.6  |
| TruthfulQA (0-shot)   | 39.11 |
| Winogrande (5-shot)   | 58.56 |
| GSM8K (5-shot)        | 0.08  |
| DROP (3-shot)         | 4.83  |
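As a sanity check, the "Avg." row is the unweighted mean of the seven benchmark scores. A minimal sketch (the score list is transcribed from the table above; the rounding convention of two decimal places is an assumption):

```python
# Unweighted mean of the seven Open LLM Leaderboard benchmark scores
# listed in the table above.
scores = {
    "ARC (25-shot)": 27.13,
    "HellaSwag (10-shot)": 48.69,
    "MMLU (5-shot)": 25.6,
    "TruthfulQA (0-shot)": 39.11,
    "Winogrande (5-shot)": 58.56,
    "GSM8K (5-shot)": 0.08,
    "DROP (3-shot)": 4.83,
}

avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 29.14, matching the "Avg." row
```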