test-3b / README.md

# Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 28.25 |
| ARC (25-shot) | 27.65 |
| HellaSwag (10-shot) | 44.79 |
| MMLU (5-shot) | 23.53 |
| TruthfulQA (0-shot) | 41.42 |
| Winogrande (5-shot) | 55.49 |
| GSM8K (5-shot) | 0.3 |
| DROP (3-shot) | 4.61 |