# mpt-7b

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.
| Metric               | Value |
|----------------------|------:|
| Avg.                 | 38.74 |
| ARC (25-shot)        | 47.7  |
| HellaSwag (10-shot)  | 77.57 |
| MMLU (5-shot)        | 30.8  |
| TruthfulQA (0-shot)  | 33.44 |
| Winogrande (5-shot)  | 72.14 |
| GSM8K (5-shot)       | 4.02  |
| DROP (3-shot)        | 5.55  |
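As a sanity check, the headline average can be reproduced from the per-task scores. This sketch assumes the leaderboard average is an unweighted mean of the seven task scores, truncated (not rounded) to two decimal places, which matches the 38.74 shown above:

```python
# Hypothetical sketch: recompute the leaderboard average from the table above.
# Assumption: equal weighting across tasks, value truncated to 2 decimals.
scores = {
    "ARC (25-shot)": 47.7,
    "HellaSwag (10-shot)": 77.57,
    "MMLU (5-shot)": 30.8,
    "TruthfulQA (0-shot)": 33.44,
    "Winogrande (5-shot)": 72.14,
    "GSM8K (5-shot)": 4.02,
    "DROP (3-shot)": 5.55,
}

avg = sum(scores.values()) / len(scores)   # ~38.7457
truncated = int(avg * 100) / 100           # 38.74, matching the table
print(truncated)
```

Note the wide spread across tasks: strong HellaSwag and Winogrande scores pull the mean up, while the low GSM8K and DROP scores pull it down.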