# Model Card for dhanushreddy29/BrokenKeyboard
Just testing out LLM fine-tuning: upstage/SOLAR-10.7B-Instruct-v1.0, fine-tuned with Direct Preference Optimization (DPO) on the argilla/distilabel-intel-orca-dpo-pairs dataset. Followed the Google Colab notebook from this article: https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac
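For context, the linked article's recipe is roughly "trl's `DPOTrainer` plus a LoRA adapter". Below is a minimal sketch of that kind of run, not the exact script used for this model: the column mapping, hyperparameters, and trl argument names are assumptions (trl's API has shifted across versions).

```python
# Minimal DPO fine-tuning sketch (assumed setup, not the exact script used).
# Roughly follows the linked article's recipe: trl DPOTrainer + peft LoRA.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "upstage/SOLAR-10.7B-Instruct-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# The Orca DPO pairs ship with "system"/"input"/"chosen"/"rejected" columns;
# DPOTrainer expects "prompt"/"chosen"/"rejected", so fold system into the prompt.
dataset = load_dataset("argilla/distilabel-intel-orca-dpo-pairs", split="train")
dataset = dataset.map(
    lambda row: {
        "prompt": row["system"] + "\n" + row["input"],
        "chosen": row["chosen"],
        "rejected": row["rejected"],
    },
    remove_columns=dataset.column_names,
)

peft_config = LoraConfig(r=16, lora_alpha=16, task_type="CAUSAL_LM")
training_args = DPOConfig(
    output_dir="solar-dpo",
    beta=0.1,  # DPO temperature; 0.1 is the common default
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,
    learning_rate=5e-5,
    max_steps=200,  # illustrative, not the actual training budget
)
trainer = DPOTrainer(
    model,
    ref_model=None,  # with a PEFT adapter, trl derives the frozen reference model
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,  # named `tokenizer=` in older trl releases
    peft_config=peft_config,
)
trainer.train()
```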
## Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Benchmark                         | Metric              | Split      | Value |
|-----------------------------------|---------------------|------------|------:|
| Avg.                              |                     |            | 74.08 |
| AI2 Reasoning Challenge (25-Shot) | normalized accuracy | test       | 71.25 |
| HellaSwag (10-Shot)               | normalized accuracy | validation | 88.34 |
| MMLU (5-Shot)                     | accuracy            | test       | 66.04 |
| TruthfulQA (0-Shot)               | mc2                 | validation | 71.36 |
| Winogrande (5-Shot)               | accuracy            | validation | 83.19 |
| GSM8k (5-Shot)                    | accuracy            | test       | 64.29 |
## Inference Providers
This model is not currently available via any supported third-party Inference Provider, nor is it deployed on the HF Inference API.
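It can still be run locally with transformers. A minimal sketch, with illustrative generation settings; the chat template is assumed to be inherited from the SOLAR instruct base:

```python
# Local inference sketch; the model id is from this card, settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "dhanushreddy29/BrokenKeyboard"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# SOLAR-Instruct derivatives ship a chat template, so apply_chat_template should work.
messages = [{"role": "user", "content": "Explain DPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```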
## Model tree for dhanushreddy29/BrokenKeyboard
- Base model: upstage/SOLAR-10.7B-v1.0
- Fine-tuned from: upstage/SOLAR-10.7B-Instruct-v1.0
## Dataset used to train dhanushreddy29/BrokenKeyboard
- argilla/distilabel-intel-orca-dpo-pairs