Trained for 3 epochs on Norquinal's claude_multiround_chat_30k dataset.
Note: this is another experiment; feel free to give it a try!
Prompt template:

```
### HUMAN:
{prompt}
### RESPONSE:
<leave a newline for the model to answer>
```
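A minimal sketch of applying this template with the transformers library. The example question and the generation settings are illustrative assumptions, not values recommended by the model author:

```python
# Sketch: format the "### HUMAN / ### RESPONSE" template and generate a reply.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "harborwater/open-llama-3b-claude-30k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "What is the tallest mountain on Earth?"  # hypothetical example prompt
# Build the template; the trailing newline leaves room for the model's answer.
text = f"### HUMAN:\n{prompt}\n### RESPONSE:\n"

inputs = tokenizer(text, return_tensors="pt")
# max_new_tokens is an arbitrary choice for this sketch.
outputs = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```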
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 40.93 |
| AI2 Reasoning Challenge (25-Shot) | 41.72 |
| HellaSwag (10-Shot) | 72.64 |
| MMLU (5-Shot) | 24.03 |
| TruthfulQA (0-shot) | 38.46 |
| Winogrande (5-shot) | 66.54 |
| GSM8k (5-shot) | 2.20 |