---
language:
- en
tags:
- upstage
- llama-2
- instruct
- instruction
pipeline_tag: text-generation
---
# LLaMa-2-70b-instruct-v2 model card

## Model Details
- Developed by: Upstage
- Backbone Model: LLaMA-2
- Language(s): English
- Library: HuggingFace Transformers
- License: Fine-tuned checkpoints are licensed under the Non-Commercial Creative Commons license (CC BY-NC-4.0)
- Where to send comments: Feedback or comments on the model can be submitted by opening an issue in the Community tab of the model's Hugging Face repository
- Contact: For questions and comments about the model, please email [email protected]
## Dataset Details

### Used Datasets
- Orca-style dataset
- Alpaca-style dataset
### Prompt Template

```
### System:
{System}

### User:
{User}

### Assistant:
{Assistant}
```
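For reference, a full prompt string following this template can be assembled as in the short sketch below; the system and user messages are placeholders, and the blank-line spacing mirrors the usage example further down.

```python
# Minimal sketch of filling in the template above; the messages are illustrative.
system = "You are a helpful, respectful and honest assistant."
user = "Thomas is very healthy, but he has to go to the hospital every day. What could be the reasons?"

prompt = f"### System:\n{system}\n\n### User:\n{user}\n\n### Assistant:\n"
print(prompt)
```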
## Usage

- Tested on A100 80GB
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("upstage/Llama-2-70b-instruct-v2")
model = AutoModelForCausalLM.from_pretrained(
    "upstage/Llama-2-70b-instruct-v2",
    device_map="auto",
    torch_dtype=torch.float16,
    load_in_8bit=True,  # 8-bit quantization to reduce memory usage (requires bitsandbytes)
    rope_scaling={"type": "dynamic", "factor": 2},  # allows handling of longer inputs
)

prompt = "### User:\nThomas is very healthy, but he has to go to the hospital every day. What could be the reasons?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
del inputs["token_type_ids"]  # the model's forward pass does not accept token_type_ids
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tokens=float("inf"))
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
```
Our model can handle more than 10k input tokens thanks to the `rope_scaling` option.
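As a rough, unofficial illustration of that extended context, the token count of a long prompt can be checked with the same tokenizer before calling `generate`; the repeated sentence below is just a stand-in for a long document.

```python
# Rough sketch: confirm a long prompt exceeds 10k tokens (placeholder text only).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("upstage/Llama-2-70b-instruct-v2")
long_document = "The quick brown fox jumps over the lazy dog. " * 1500  # stand-in for real content
prompt = f"### User:\nSummarize the following text.\n\n{long_document}\n\n### Assistant:\n"
print(len(tokenizer(prompt).input_ids))  # well over 10k tokens for this placeholder
```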
## Hardware and Software

- Hardware: We utilized an A100 x 8 * 4 setup (four nodes of eight A100 GPUs) for training our model
- Training Factors: We fine-tuned this model using a combination of the DeepSpeed library and the HuggingFace trainer / HuggingFace Accelerate
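The training script itself is not published in this card; as a loose sketch only, the bullet above corresponds to handing a DeepSpeed config to the HuggingFace `TrainingArguments`, which the Trainer then uses. Every value below is an illustrative assumption, not Upstage's actual configuration.

```python
# Illustrative only: wiring a DeepSpeed config into the HuggingFace Trainer.
# Requires the deepspeed package; none of these values are Upstage's real settings.
from transformers import TrainingArguments

ds_config = {
    "zero_optimization": {"stage": 3},         # shard params/optimizer states across GPUs (assumed)
    "train_micro_batch_size_per_gpu": "auto",  # filled in from TrainingArguments
    "gradient_accumulation_steps": "auto",
}

training_args = TrainingArguments(
    output_dir="./llama-2-70b-instruct-ft",    # hypothetical output path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    deepspeed=ds_config,                       # Trainer delegates sharding/offloading to DeepSpeed
)
# A Trainer(model=..., args=training_args, train_dataset=...) run would then be
# launched across the nodes with `deepspeed` or `accelerate launch`.
```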
## Evaluation Results

### Overview
- We conducted a performance evaluation based on the tasks being evaluated on the Open LLM Leaderboard.
- We evaluated our model on four benchmark datasets: ARC-Challenge, HellaSwag, MMLU, and TruthfulQA. We used the lm-evaluation-harness repository, specifically commit b281b0921b636bc36ad05c0b0b0763bd6dd43463.
### Main Results

| Model | H4 Average | ARC | HellaSwag | MMLU | TruthfulQA | MT_Bench |
|---|---|---|---|---|---|---|
| Llama-2-70b-instruct-v2 (Ours, Local Reproduction) | 72.7 | 71.6 | 87.7 | 69.7 | 61.6 | 7.440625 |
| Llama-2-70b-instruct (Ours, Local Reproduction) | 72.0 | 70.7 | 87.4 | 69.3 | 60.7 | 7.24375 |
| llama-65b-instruct (Ours, Local Reproduction) | 69.4 | 67.6 | 86.5 | 64.9 | 58.8 | |
| Llama-2-70b-hf | 67.3 | 67.3 | 87.3 | 69.8 | 44.9 | |
| llama-30b-instruct-2048 (Ours, Open LLM Leaderboard) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 | |
| llama-30b-instruct-2048 (Ours, Local Reproduction) | 67.0 | 64.9 | 85.0 | 61.9 | 56.0 | 6.88125 |
| llama-30b-instruct (Ours, Open LLM Leaderboard) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 | |
| llama-65b | 64.2 | 63.5 | 86.1 | 63.9 | 43.4 | |
| falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 | |
### Scripts
- Prepare evaluation environments:
```sh
# clone the repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git
# change to the repository directory
cd lm-evaluation-harness
# check out the specific commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
```
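The exact evaluation commands are not listed in this card. As a sketch only, a single Open LLM Leaderboard-style task (25-shot ARC-Challenge) could be run against the harness at that commit roughly as follows; the batch size and output path are illustrative choices:

```sh
# Illustrative invocation only; flags follow the harness CLI at the commit above
# and the Open LLM Leaderboard's 25-shot setting for ARC-Challenge.
python main.py \
    --model hf-causal-experimental \
    --model_args pretrained=upstage/Llama-2-70b-instruct-v2 \
    --tasks arc_challenge \
    --num_fewshot 25 \
    --batch_size 1 \
    --output_path results/arc_challenge.json
```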
## Ethical Issues

### Ethical Considerations
- There were no ethical issues involved, as we did not include the benchmark test set or the training set in the model's training process.
## Contact Us

### Why Upstage LLM?

- Upstage's LLM research has yielded remarkable results. Our 30B model has outperformed models from around the world, positioning itself as the leading performer on the Open LLM Leaderboard. Recognizing the immense potential of applying private LLMs to real businesses, we invite you to adopt a private LLM easily and fine-tune it with your own data. For a seamless and tailored solution, please do not hesitate to reach out to us. ► click here to contact.