---
language:
- en
tags:
- llama
- instruct
- instruction
pipeline_tag: text-generation
---
# LLaMa-65b-instruct model card

## Model Details

* **Developed by**: [Upstage](https://en.upstage.ai)
* **Backbone Model**: [LLaMA](https://github.com/facebookresearch/llama/tree/llama_v1)
* **Variations**: It comes in several parameter sizes and sequence lengths: [30B/1024](https://huggingface.co/upstage/llama-30b-instruct), [30B/2048](https://huggingface.co/upstage/llama-30b-instruct-2048), [65B/1024](https://huggingface.co/upstage/llama-65b-instruct)
* **Language(s)**: English
* **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
* **License**: This model is under a **Non-commercial** Bespoke License and governed by the Meta license. Use this repository only if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform) but have either lost your copy of the weights or encountered issues converting them to the Transformers format
* **Where to send comments**: To provide feedback or comments on the model, open an issue in the [model repository's discussions](https://huggingface.co/upstage/llama-65b-instruct/discussions)
* **Contact**: For questions and comments about the model, please email [[email protected]](mailto:[email protected])

## Dataset Details

### Used Datasets

- Orca-style dataset
- No other data was used beyond the dataset mentioned above

### Prompt Template
```
### System:
{System}

### User:
{User}

### Assistant:
{Assistant}
```
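
For illustration, here is a minimal helper that fills this template; the function name and default system message are my own, not part of any released code:

```python
def build_prompt(user: str, system: str = "You are a helpful assistant.") -> str:
    """Fill the Orca-style prompt template this model was trained on."""
    return f"### System:\n{system}\n\n### User:\n{user}\n\n### Assistant:\n"

print(build_prompt("What is the capital of France?"))
```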

## Usage

- Tested on an A100 80GB GPU
- The model can handle 10k+ input tokens thanks to the `rope_scaling` option (see the length-check sketch after the example below)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("upstage/llama-65b-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "upstage/llama-65b-instruct",
    device_map="auto",            # spread layers across available GPUs
    torch_dtype=torch.float16,
    load_in_8bit=True,            # 8-bit quantization to reduce memory usage
    rope_scaling={"type": "dynamic", "factor": 2},  # allows handling of longer inputs
)

prompt = "### User:\nThomas is healthy, but he has to go to the hospital. What could be the reasons?\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
inputs.pop("token_type_ids", None)  # LLaMA has no token type embeddings; drop the key if present
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# max_new_tokens must be a finite integer (the original float('inf') is rejected
# by recent transformers releases); generation still stops early at the EOS token
output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tokens=4096)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
```
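
Since the usable context depends on the `rope_scaling` factor, it can help to check the tokenized prompt length before generating. A minimal sketch, reusing the `tokenizer` from the example above; the 10,240-token budget is an illustrative assumption, not a published limit:

```python
# Assumes `tokenizer` from the usage example above is already loaded.
long_prompt = "### User:\n" + "Summarize this document. " * 2000 + "\n\n### Assistant:\n"
n_tokens = len(tokenizer(long_prompt)["input_ids"])

TOKEN_BUDGET = 10_240  # illustrative assumption; leave headroom for generated tokens
if n_tokens > TOKEN_BUDGET:
    print(f"Prompt is {n_tokens} tokens; consider truncating or chunking it.")
else:
    print(f"Prompt is {n_tokens} tokens; within the assumed budget.")
```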

## Hardware and Software

* **Hardware**: We utilized four nodes of 8× A100 GPUs (32 A100s in total) for training our model
* **Training Factors**: We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace Trainer](https://huggingface.co/docs/transformers/main_classes/trainer); a configuration sketch follows
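
The exact training configuration is not published. Below is a minimal sketch of how DeepSpeed is typically wired into the HuggingFace Trainer; the toy dataset, hyperparameters, and `ds_config_zero3.json` path are all hypothetical stand-ins, not Upstage's actual setup:

```python
import torch
from torch.utils.data import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments

class ToyInstructDataset(Dataset):
    """Stands in for the (unreleased) Orca-style training data."""
    def __init__(self, tokenizer):
        text = "### System:\nYou are helpful.\n\n### User:\nHi\n\n### Assistant:\nHello!"
        self.input_ids = tokenizer(text, return_tensors="pt")["input_ids"][0]
    def __len__(self):
        return 8
    def __getitem__(self, idx):
        # Standard causal-LM fine-tuning: labels are a copy of the inputs
        return {"input_ids": self.input_ids, "labels": self.input_ids.clone()}

tokenizer = AutoTokenizer.from_pretrained("upstage/llama-65b-instruct")
model = AutoModelForCausalLM.from_pretrained("upstage/llama-65b-instruct", torch_dtype=torch.bfloat16)

args = TrainingArguments(
    output_dir="llama-65b-instruct-ft",  # hypothetical
    per_device_train_batch_size=1,       # hypothetical hyperparameters throughout
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    bf16=True,
    deepspeed="ds_config_zero3.json",    # hypothetical DeepSpeed ZeRO config file
)

Trainer(model=model, args=args, train_dataset=ToyInstructDataset(tokenizer)).train()
```

In practice a run like this would be launched with the `deepspeed` or `torchrun` launcher across all nodes rather than invoked as a plain script.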

## Evaluation Results

### Overview
- We conducted a performance evaluation following the tasks evaluated on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
We evaluated the model on four benchmark datasets: `ARC-Challenge`, `HellaSwag`, `MMLU`, and `TruthfulQA`,
using the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-evaluation-harness), specifically commit [b281b0921b636bc36ad05c0b0b0763bd6dd43463](https://github.com/EleutherAI/lm-evaluation-harness/tree/b281b0921b636bc36ad05c0b0b0763bd6dd43463).
- We used [MT-Bench](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge), a set of challenging multi-turn open-ended questions, to evaluate the models.

### Main Results
| Model | H4 (Avg) | ARC | HellaSwag | MMLU | TruthfulQA | MT-Bench |
|-------|----------|-----|-----------|------|------------|----------|
| **[Llama-2-70b-instruct-v2](https://huggingface.co/upstage/Llama-2-70b-instruct-v2)** (Ours, Open LLM Leaderboard) | **73.0** | **71.1** | **87.9** | **70.6** | **62.2** | **7.44063** |
| [Llama-2-70b-instruct](https://huggingface.co/upstage/Llama-2-70b-instruct) (Ours, Open LLM Leaderboard) | 72.3 | 70.9 | 87.5 | 69.8 | 61.0 | 7.24375 |
| [llama-65b-instruct](https://huggingface.co/upstage/llama-65b-instruct) (Ours, Open LLM Leaderboard) | 69.4 | 67.6 | 86.5 | 64.9 | 58.8 | |
| Llama-2-70b-hf | 67.3 | 67.3 | 87.3 | 69.8 | 44.9 | |
| [llama-30b-instruct-2048](https://huggingface.co/upstage/llama-30b-instruct-2048) (Ours, Open LLM Leaderboard) | 67.0 | 64.9 | 84.9 | 61.9 | 56.3 | |
| [llama-30b-instruct](https://huggingface.co/upstage/llama-30b-instruct) (Ours, Open LLM Leaderboard) | 65.2 | 62.5 | 86.2 | 59.4 | 52.8 | |
| llama-65b | 64.2 | 63.5 | 86.1 | 63.9 | 43.4 | |
| falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 | |

H4 (Avg) is the mean of the four benchmark scores, e.g. (70.9 + 87.5 + 69.8 + 61.0) / 4 = 72.3 for Llama-2-70b-instruct.

### Scripts for H4 Score Reproduction
- Prepare the evaluation environment (the checkout must happen inside the cloned repository, so `cd` before `git checkout`):
```bash
# clone the repository
git clone https://github.com/EleutherAI/lm-evaluation-harness.git

# change to the repository directory
cd lm-evaluation-harness

# check out the specific commit
git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463
```
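
The card stops at environment preparation. For completeness, here is one plausible way to run the four tasks with the harness's `main.py`, using the standard Open LLM Leaderboard few-shot settings (25/10/5/0); the flag values and task names should be double-checked against the harness README at this commit:

```python
# Run from inside the lm-evaluation-harness checkout prepared above.
import subprocess

# Open LLM Leaderboard few-shot settings: ARC 25, HellaSwag 10, MMLU 5, TruthfulQA 0.
TASKS = [
    ("arc_challenge", 25),
    ("hellaswag", 10),
    ("hendrycksTest-*", 5),  # MMLU; this harness version may need the full comma-separated subtask list
    ("truthfulqa_mc", 0),
]

for task, shots in TASKS:
    subprocess.run(
        [
            "python", "main.py",
            "--model", "hf-causal-experimental",
            "--model_args", "pretrained=upstage/llama-65b-instruct",
            "--tasks", task,
            "--num_fewshot", str(shots),
            "--batch_size", "1",
        ],
        check=True,
    )
```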

## Ethical Issues

### Ethical Considerations
- We did not include any benchmark test sets, or their training splits, in the model's training data, so no contamination-related ethical issues arise

## Contact Us

### Why Upstage LLM?
- [Upstage](https://en.upstage.ai)'s LLM research has yielded remarkable results. As of August 1st, our 70B model reached the top spot on the Open LLM Leaderboard, making it the leading performer worldwide at the time. Recognizing the immense potential of bringing private LLMs to real businesses, we invite you to deploy a private LLM and fine-tune it with your own data. For a seamless and tailored solution, please do not hesitate to reach out to us. ► [click here to contact](https://www.upstage.ai/private-llm?utm_source=huggingface&utm_medium=link&utm_campaign=privatellm)