Trained for 1 epoch on the WizardLM_evol_instruct_v2_196k dataset.
Link to GGUF formats.
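For local inference on a GGUF quantization, a minimal llama-cpp-python sketch is shown below. The GGUF file name is a hypothetical placeholder (this card does not list the exact files), and the generation settings are assumptions, not recommendations from the card.

```python
# Minimal sketch, assuming llama-cpp-python and a downloaded GGUF file.
# The file name and n_ctx below are placeholder assumptions, not from this card.
from llama_cpp import Llama

llm = Llama(
    model_path="open-llama-3b-v2-wizard-evol-instruct-v2-196k.Q4_K_M.gguf",  # assumed name
    n_ctx=2048,  # assumed context window; adjust as needed
)

# Format the request with the prompt template described below.
prompt = "### HUMAN:\nName three uses for a paperclip.\n\n### RESPONSE:\n"
out = llm(prompt, max_tokens=256, stop=["### HUMAN:"])
print(out["choices"][0]["text"])
```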
Prompt template:

```
### HUMAN:
{prompt}
### RESPONSE:
```

Leave a newline after `### RESPONSE:` for the model to answer.
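Below is a minimal sketch of the template in use with Hugging Face transformers; the generation parameters are illustrative assumptions, not values from this card.

```python
# Minimal sketch using Hugging Face transformers; max_new_tokens is an
# assumption, not a recommendation from this card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "harborwater/open-llama-3b-v2-wizard-evol-instuct-v2-196k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

question = "What is the capital of France?"
# Build the card's prompt template, ending with a newline after
# "### RESPONSE:" so the model continues from there.
text = f"### HUMAN:\n{question}\n\n### RESPONSE:\n"

inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=256)
# Strip the prompt tokens and decode only the generated answer.
answer = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(answer)
```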
Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 41.46 |
| AI2 Reasoning Challenge (25-shot, acc_norm, test) | 41.81 |
| HellaSwag (10-shot, acc_norm, validation) | 73.01 |
| MMLU (5-shot, acc, test) | 26.36 |
| TruthfulQA (0-shot, mc2, validation) | 38.99 |
| Winogrande (5-shot, acc, validation) | 66.69 |
| GSM8k (5-shot, acc, test) | 1.90 |