# L2-7b-Base-WVG-Test

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|--------|-------|
| Avg. | 45.35 |
| ARC (25-shot) | 54.27 |
| HellaSwag (10-shot) | 77.81 |
| MMLU (5-shot) | 51.07 |
| TruthfulQA (0-shot) | 46.28 |
| Winogrande (5-shot) | 73.56 |
| GSM8K (5-shot) | 6.97 |
| DROP (3-shot) | 7.48 |