---
license: other
language:
- en
---
|
## Model Details

This is an unofficial implementation of "[AlpaGasus: Training a Better Alpaca with Fewer Data](https://github.com/Lichang-Chen/AlpaGasus)" with [LLaMA2](https://huggingface.co/meta-llama/Llama-2-13b-hf) & QLoRA! Training code is available at our [repo](https://github.com/gauss5930/AlpaGasus2-QLoRA).

- **Developed by:** [Yunsang Yoo](https://huggingface.co/ryan0712) and [Hyunwoo Ko](https://huggingface.co/Cartinoe5930)
- **Model type:** Auto-regressive model
- **Language(s):** English
- **Base Model:** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)
- **License:** Non-Commercial Creative Commons license ([CC BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/))
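For reference, a minimal loading sketch with the Hugging Face `transformers` library. The checkpoint id `StudentLLM/Alpagasus-2-13b-QLoRA-merged` is the merged model named in the Training Dataset section; the `load_alpagasus2` helper name is illustrative, and calling it downloads the full 13B-parameter weights.

```python
def load_alpagasus2(model_id: str = "StudentLLM/Alpagasus-2-13b-QLoRA-merged"):
    """Load the merged AlpaGasus2-QLoRA checkpoint and its tokenizer.

    Imports are deferred so the function can be defined without
    transformers installed; calling it downloads the 13B weights.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",   # requires the `accelerate` package
        torch_dtype="auto",  # use the dtype stored in the checkpoint
    )
    return tokenizer, model
```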
|
### Benchmark Metrics

| Metric     | Value |
|------------|-------|
| MMLU       | 55.27 |
| ARC        | 61.09 |
| HellaSwag  | 82.46 |
| TruthfulQA | 38.53 |
| Avg.       | 59.34 |
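The reported average is the arithmetic mean of the four benchmark scores, which can be checked in a few lines of Python:

```python
# Benchmark scores from the table above.
scores = {"MMLU": 55.27, "ARC": 61.09, "HellaSwag": 82.46, "TruthfulQA": 38.53}

average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # 59.34
```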
|
### Training Dataset

`StudentLLM/Alpagasus-2-13b-QLoRA-merged` was trained on [gpt4life](https://github.com/gpt4life/alpagasus)'s gpt-3.5-turbo-filtered dataset, `alpaca_t45.json`.
|
### Prompt Template

```
### Instruction:

<prompt> (without the <>)

### Response:
```
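The template above can be applied programmatically; a minimal sketch, where the `build_prompt` helper name is illustrative rather than part of the released code:

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the AlpaGasus2 prompt template."""
    return f"### Instruction:\n\n{instruction}\n\n### Response:\n"

print(build_prompt("Name three primary colors."))
```

The model's completion is generated after the `### Response:` marker.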
|