UNA-POLAR-10.7B-InstructMath-v2
Model description
It is a UNA model fine-tuned with DPO on the MathPILE Books subset, built on top of UNA-SOLAR-10.7B-Instruct-1.0.
I used the outstanding MathPILE dataset of high-quality mathematical material to produce this model :)
Intended uses & limitations
If your model incorporates UNA technology, please cite it.
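Below is a minimal local-inference sketch using transformers. The prompt format, dtype, and device placement are assumptions, not specifications from this card.

```python
# Minimal sketch: load the model with transformers and generate a reply.
# Prompt format, bfloat16 dtype, and device_map="auto" are assumed, not
# taken from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fblgit/UNA-POLAR-10.7B-InstructMath-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Solve step by step: what is the sum of the first 50 positive integers?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```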
Training and evaluation data
UNA-DPO applied over the attention and MLP layers (a hedged adapter-configuration sketch is shown below).
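The exact UNA recipe is not published on this card. As a rough illustration only, the sketch below shows a PEFT adapter configuration that targets the attention and MLP projection modules of a LLaMA/SOLAR-style model; the rank, alpha, and module names are assumptions, not values from the training run.

```python
# Illustrative sketch only: a PEFT adapter config targeting attention and MLP
# projections, as one might do before DPO training. Hyperparameters and module
# names are assumed (LLaMA/SOLAR-style naming), not the actual UNA settings.
from peft import LoraConfig

peft_config = LoraConfig(
    r=16,            # adapter rank (assumed)
    lora_alpha=32,   # scaling (assumed)
    target_modules=[
        # attention projections
        "q_proj", "k_proj", "v_proj", "o_proj",
        # MLP projections
        "gate_proj", "up_proj", "down_proj",
    ],
    task_type="CAUSAL_LM",
)
```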
Framework versions
- PEFT 0.7.1
- Transformers 4.36.2-UNA
- Pytorch 2.1.2+cu121
- Datasets 2.16.0
- Tokenizers 0.15.
Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric | Value |
|---|---|
| Avg. | 74.07 |
| AI2 Reasoning Challenge (25-Shot) | 70.73 |
| HellaSwag (10-Shot) | 88.20 |
| MMLU (5-Shot) | 66.03 |
| TruthfulQA (0-shot) | 71.73 |
| Winogrande (5-shot) | 82.95 |
| GSM8k (5-shot) | 64.75 |
Evaluation results
- AI2 Reasoning Challenge (25-shot), test set: 70.73 normalized accuracy (Open LLM Leaderboard)
- HellaSwag (10-shot), validation set: 88.20 normalized accuracy (Open LLM Leaderboard)
- MMLU (5-shot), test set: 66.03 accuracy (Open LLM Leaderboard)
- TruthfulQA (0-shot), validation set: 71.73 mc2 (Open LLM Leaderboard)
- Winogrande (5-shot), validation set: 82.95 accuracy (Open LLM Leaderboard)
- GSM8k (5-shot), test set: 64.75 accuracy (Open LLM Leaderboard)