---
license: apache-2.0
language:
- en
datasets:
- c4
---
# mpt-125m-c4

## Model Description

An MPT-125M model pretrained on the C4 dataset.
## Training data

Trained on the Hugging Face C4 dataset.
## Training procedure

This model was trained on C4 for ~2.5B tokens. Training took ~1 hour on 104 A100-40GB GPUs.
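For rough context, a back-of-envelope calculation of the per-GPU throughput implied by those figures (a sketch only; the token count and wall-clock time above are approximate):

```python
# Approximate per-GPU training throughput implied by the numbers above:
# ~2.5B tokens over ~1 hour (3600 s) on 104 GPUs.
tokens = 2.5e9
seconds = 3600
gpus = 104
print(f"{tokens / seconds / gpus:,.0f} tokens/sec/GPU")  # ~6,677
```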
## Intended Use and Limitations

This model is primarily intended for generating text from a prompt; it exists to explore pretraining small models for research.
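A minimal generation sketch with Hugging Face Transformers. The repo id `mosaicml/mpt-125m-c4` is hypothetical (substitute this model's actual hub id), and, as with other MPT checkpoints, `trust_remote_code=True` is assumed to be required to load the custom architecture:

```python
# Sketch: load the checkpoint and generate text from a prompt.
# "mosaicml/mpt-125m-c4" is a hypothetical repo id; replace it with the real one.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mosaicml/mpt-125m-c4"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

inputs = tokenizer("The C4 dataset is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```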
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 17.2 |
| ARC (25-shot) | 22.7 |
| HellaSwag (10-shot) | 25.04 |
| MMLU (5-shot) | 23.12 |
| TruthfulQA (0-shot) | 0.0 |
| Winogrande (5-shot) | 49.57 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 0.0 |
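The leaderboard computes these scores with EleutherAI's lm-evaluation-harness. A minimal sketch of re-running one task locally, assuming lm-eval >= 0.4 (`pip install lm-eval`) and the same hypothetical repo id as above; the leaderboard pins a specific harness version, so local numbers may differ slightly:

```python
# Sketch: evaluate one leaderboard task (ARC, 25-shot) with lm-evaluation-harness.
# The repo id is hypothetical; replace it with this model's actual hub id.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",  # Hugging Face transformers backend
    model_args="pretrained=mosaicml/mpt-125m-c4,trust_remote_code=True",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])
```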