---
license: apache-2.0
datasets:
- abacusai/MetaMathFewshot
---
Finetune of the DPO Bagel model (https://huggingface.co/jondurbin/nontoxic-bagel-34b-v0.2) on the MetaMathFewshot (https://huggingface.co/datasets/abacusai/MetaMathFewshot) dataset.
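
A minimal inference sketch using `transformers`, for illustration only. The repo id below is a placeholder assumption, not confirmed by this card; substitute this model's actual Hub id.

```python
# Minimal inference sketch with Hugging Face transformers.
# NOTE: the model id below is a placeholder assumption -- replace it with this
# model's actual Hub repo id before running.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/your-metamath-bagel-finetune"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 34B model; bf16 + device_map keeps memory manageable
    device_map="auto",
)

prompt = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```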
### Evaluation Results

| Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K |
| --- | --- | --- | --- | --- | --- | --- |
|  |  |  |  |  |  |  |
For comparison, the GSM8K score for the original `nontoxic-bagel-34b-v0.2` model was 58.45 and its average score was 74.69.