---
license: other
language:
  - en
---

## Model Details

This is an unofficial implementation of "AlpaGasus: Training a Better Alpaca with Fewer Data" using LLaMA 2 and QLoRA! Training code is available at our repo.

## Training dataset

"StudentLLM/Alpagasus-2-13b-QLoRA-merged" was trained on 'alpaca_t45.json', gpt4life's gpt-3.5-turbo-filtered dataset.

The schema of the dataset is as follows:

```
{
    'instruction': the instruction describing the task,
    'input': optional further context for the task (present only for some examples),
    'output': the answer to the instruction
}
```
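
For illustration, a record in an Alpaca-style dataset looks roughly like the following. This example is hypothetical, not an actual entry from 'alpaca_t45.json', and the local file path is an assumption:

```python
import json

# Hypothetical Alpaca-style record, illustrating the schema above
example = {
    "instruction": "Classify the following sentence as positive or negative.",
    "input": "I really enjoyed the movie.",  # empty string when the task needs no input
    "output": "Positive",
}

# Loading the filtered dataset (assumes alpaca_t45.json is in the working directory)
with open("alpaca_t45.json") as f:
    data = json.load(f)
print(len(data), list(data[0].keys()))
```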

## Prompt Template: Alpaca-style prompt

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
<prompt> (without the <>)

### Input:
<prompt> (if input exists)

### Response:
```
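
The template can also be applied programmatically. Here is a minimal sketch; the helper name `build_prompt` is ours, not part of the training repo:

```python
def build_prompt(instruction: str, input_text: str = "") -> str:
    """Format an instruction (and optional input) into the Alpaca-style prompt."""
    header = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
    )
    if input_text:
        return (
            f"{header}### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n### Response:\n"
        )
    return f"{header}### Instruction:\n{instruction}\n\n### Response:\n"
```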

## Fine-tuning Procedure

Our model was fine-tuned using QLoRA on a single A100 80GB GPU. Training details are described in our repo; a representative setup is sketched below.
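
For reference, a QLoRA setup with the `peft` and `bitsandbytes` libraries typically looks like the following. The hyperparameters shown (r, lora_alpha, target_modules) are illustrative assumptions, not our exact configuration; see the repo for the real values.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters trained on top of the quantized weights;
# these values are illustrative, not our exact config
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```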

## Benchmark Metrics

The performance of "StudentLLM/Alpagasus-2-13b-QLoRA-merged" is reported on Hugging Face's Open LLM Leaderboard. The model was evaluated on the tasks specified there (ARC, HellaSwag, MMLU, TruthfulQA).

| Metric | Value |
|---|---|
| Avg. | 59.34 |
| MMLU | 55.27 |
| ARC | 61.09 |
| HellaSwag | 82.46 |
| TruthfulQA | 38.53 |

## LLM Evaluation

We tried to follow the evaluation method introduced in the AlpaGasus paper, consulting gpt4life's code during the process. We used OpenAI's gpt-3.5-turbo as the evaluator model and Alpaca2-LoRA-13B (no longer available) as the comparison model. For more detailed information, please refer to our GitHub repo.
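
In outline, this evaluation asks a judge model to score two candidate responses to the same instruction. Below is a minimal sketch using the OpenAI Python client; the prompt wording and the `judge_pair` helper are our simplification, not the exact code from gpt4life:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def judge_pair(instruction: str, answer_a: str, answer_b: str) -> str:
    """Ask gpt-3.5-turbo to score two answers to the same instruction."""
    prompt = (
        f"[Instruction]\n{instruction}\n\n"
        f"[Assistant 1]\n{answer_a}\n\n"
        f"[Assistant 2]\n{answer_b}\n\n"
        "Rate each assistant's response on a scale of 1 to 10 and "
        "output the two scores on the first line, then a short explanation."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content
```

Scoring each pair in both response orders and averaging helps offset the judge's positional bias, as the AlpaGasus paper also does.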

The evaluation results of AlpaGasus2-QLoRA are shown in the 'results' figure in our GitHub repo.

## How to use

To use "StudentLLM/Alpagasus-2-13b-QLoRA-merged", follow the code below. Usage of the 7B model is the same!

```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the gated LLaMA 2 base model, then attach the AlpaGasus2 QLoRA adapter
config = PeftConfig.from_pretrained("StudentLLM/Alpagasus-2-13B-QLoRA")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    use_auth_token="your_HuggingFace_token",
).to(device)
model = PeftModel.from_pretrained(model, "StudentLLM/Alpagasus-2-13B-QLoRA")

tokenizer = AutoTokenizer.from_pretrained("StudentLLM/Alpagasus-2-13B-QLoRA")
tokenizer.pad_token = tokenizer.eos_token  # LLaMA 2 defines no pad token by default

input_data = "Please tell me 3 ways to relieve stress."  # You can enter any question!

model_inputs = tokenizer(input_data, return_tensors="pt").to(device)
model_output = model.generate(**model_inputs, max_length=256)
model_output = tokenizer.decode(model_output[0], skip_special_tokens=True)
print(model_output)
```
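
Since this card describes the merged checkpoint, the weights should also load directly, without attaching the adapter. A sketch, assuming the merged weights are published under the model ID in the card title:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumes "StudentLLM/Alpagasus-2-13b-QLoRA-merged" hosts the merged weights
model = AutoModelForCausalLM.from_pretrained("StudentLLM/Alpagasus-2-13b-QLoRA-merged")
tokenizer = AutoTokenizer.from_pretrained("StudentLLM/Alpagasus-2-13b-QLoRA-merged")
```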

## Citations

```bibtex
@article{chen2023alpagasus,
  title={AlpaGasus: Training a Better Alpaca with Fewer Data},
  author={Chen, Lichang and Li, Shiyang and Yan, Jun and Wang, Hai and Gunaratna, Kalpa and Yadav, Vikas and Tang, Zheng and Srinivasan, Vijay and Zhou, Tianyi and Huang, Heng and Jin, Hongxia},
  journal={arXiv preprint arXiv:2307.08701},
  year={2023}
}
```

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|---|---|
| Avg. | 45.55 |
| ARC (25-shot) | 58.28 |
| HellaSwag (10-shot) | 80.98 |
| MMLU (5-shot) | 54.14 |
| TruthfulQA (0-shot) | 34.21 |
| Winogrande (5-shot) | 75.93 |
| GSM8K (5-shot) | 9.25 |
| DROP (3-shot) | 6.07 |