rwmasood committed · Commit c261d05 · verified · 1 Parent(s): 4b6ddbb

Update README.md

Files changed (1):
  1. README.md +32 -31
README.md CHANGED
@@ -7,43 +7,29 @@ tags:
  - instruction
  - empirischtech
  pipeline_tag: text-generation
  ---
  # LLaMa-10b-instruct model card

  ## Model Details

- * **Developed by**: [EmpirischTech](https://en.upstage.ai)/[ChaperoneAI](https://en.upstage.ai)
- * **Backbone Model**: [LLaMA](https://github.com/facebookresearch/llama/tree/llama_v1)
- * **Variations**: It has different model parameter sizes and sequence lengths: [30B/1024](https://huggingface.co/upstage/llama-30b-instruct), [30B/2048](https://huggingface.co/upstage/llama-30b-instruct-2048), [65B/1024](https://huggingface.co/upstage/llama-65b-instruct)
  * **Language(s)**: English
  * **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
  * **License**: This model is under a **Non-commercial** Bespoke License and governed by the Meta license. You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform), but have either lost your copy of the weights or encountered issues converting them to the Transformers format
  * **Where to send comments**: Instructions on how to provide feedback or comments on a model can be found by opening an issue in the [Hugging Face community's model repository](https://huggingface.co/upstage/llama-30b-instruct-2048/discussions)
- * **Contact**: For questions and comments about the model, please email [contact@upstage.ai](mailto:contact@upstage.ai)

- ## Dataset Details

- ### Used Datasets

- - Orca-style dataset
- - No other data was used except for the dataset mentioned above
-
- ### Prompt Template
- ```
- ### System:
- {System}
-
- ### User:
- {User}
-
- ### Assistant:
- {Assistant}
- ```

  ## Usage

  - Tested on A100 80GB
- - Our model can handle up to 10k+ input tokens, thanks to the `rope_scaling` option

  ```python
  import torch
@@ -69,8 +55,8 @@ output_text = tokenizer.decode(output[0], skip_special_tokens=True)

  ## Hardware and Software

- * **Hardware**: We utilized an A100x8 * 4 for training our model
- * **Training Factors**: We fine-tuned this model using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace Trainer](https://huggingface.co/docs/transformers/main_classes/trainer)

  ## Evaluation Results

@@ -93,17 +79,32 @@ We used the [lm-evaluation-harness repository](https://github.com/EleutherAI/lm-
  | falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 | | |


- ### Scripts for H4 Score Reproduction
  - Prepare evaluation environments:
  ```
- # clone the repository
- git clone https://github.com/EleutherAI/lm-evaluation-harness.git

- # check out the specific commit
- git checkout b281b0921b636bc36ad05c0b0b0763bd6dd43463

- # change to the repository directory
- cd lm-evaluation-harness
  ```

  ## Ethical Issues
@@ -114,4 +115,4 @@ cd lm-evaluation-harness
  ## Contact Us

  ### Why Upstage LLM?
- - [Upstage](https://en.upstage.ai)'s LLM research has yielded remarkable results. As of August 1st, our 70B model has reached the top spot in openLLM rankings, marking itself as the current leading performer globally. Recognizing the immense potential in implementing private LLM to actual businesses, we invite you to easily apply private LLM and fine-tune it with your own data. For a seamless and tailored solution, please do not hesitate to reach out to us. ► [click here to contact](https://www.upstage.ai/private-llm?utm_source=huggingface&utm_medium=link&utm_campaign=privatellm)

  - instruction
  - empirischtech
  pipeline_tag: text-generation
+ base_model:
+ - meta-llama/Llama-3.1-8B-Instruct
  ---
  # LLaMa-10b-instruct model card

  ## Model Details

+ * **Developed by**: [EmpirischTech](https://empirischtech.at)/[ChaperoneAI](https://chaperoneai.net)
+ * **Backbone Model**: [LLaMA](https://github.com/meta-llama/llama3)
  * **Language(s)**: English
  * **Library**: [HuggingFace Transformers](https://github.com/huggingface/transformers)
  * **License**: This model is under a **Non-commercial** Bespoke License and governed by the Meta license. You should only use this repository if you have been granted access to the model by filling out [this form](https://docs.google.com/forms/d/e/1FAIpQLSfqNECQnMkycAp2jP4Z9TFX0cGR4uf7b_fBxjY_OjhJILlKGA/viewform), but have either lost your copy of the weights or encountered issues converting them to the Transformers format
  * **Where to send comments**: Instructions on how to provide feedback or comments on a model can be found by opening an issue in the [Hugging Face community's model repository](https://huggingface.co/upstage/llama-30b-instruct-2048/discussions)
+ * **Contact**: For questions and comments about the model, please email [contact@empirischtech.at](mailto:contact@empirischtech.at)

+ ## Training

  ## Usage

  - Tested on A100 80GB
+ - Our model can handle up to 128k input tokens, the context length supported by the Llama-3.1 architecture.

  ```python
  import torch
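
The README's full loading-and-generation snippet is elided in this diff view; the sketch below is a minimal stand-in (the dtype, device placement, prompt, and generation arguments are illustrative assumptions, not the exact code from the card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "rwmasood/llama-3.1-10b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # bf16 weights fit comfortably on a single A100 80GB
    device_map="auto",
)

prompt = "Explain the difference between pretraining and fine-tuning in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```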
  ## Hardware and Software

+ * **Hardware**: We utilized 8x NVIDIA A100 GPUs for training our model
+ * **Training Factors**: The model was pretrained using a combination of the [DeepSpeed library](https://github.com/microsoft/DeepSpeed) and the [HuggingFace Trainer](https://huggingface.co/docs/transformers/main_classes/trainer)
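
For orientation, here is a rough sketch of how a DeepSpeed-backed Trainer run of this kind is typically wired up; the dataset file, sequence length, batch sizes, and the `ds_zero3.json` config referenced below are placeholders, not the actual training recipe used for this model:

```python
# Hypothetical Trainer + DeepSpeed setup; all paths and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "meta-llama/Llama-3.1-8B-Instruct"  # backbone listed in the card metadata

tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Placeholder corpus with a "text" column.
dataset = load_dataset("json", data_files="train.jsonl")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=4096)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="llama-10b-ckpt",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    bf16=True,
    num_train_epochs=1,
    logging_steps=10,
    deepspeed="ds_zero3.json",  # DeepSpeed ZeRO config consumed by the Trainer
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

A script like this would normally be launched with `deepspeed train.py` or `torchrun --nproc_per_node=8 train.py`, so the same ZeRO partitioning covers all eight GPUs.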

  ## Evaluation Results

  | falcon-40b-instruct | 63.4 | 61.6 | 84.3 | 55.4 | 52.5 | | |


+ ### Scripts to generate evaluation results
  - Prepare evaluation environments:
  ```
+ # Install the evaluation harness from https://github.com/EleutherAI/lm-evaluation-harness
+ # pip install "lm-eval>=0.4.7"
+
+ import json
+
+ from lm_eval import evaluator
+
+ tasks_list = ["arc_challenge", "gpqa", "ifeval", "mmlu_pro", "hellaswag"]  # Benchmark datasets
+
+ model_path = 'rwmasood/llama-3.1-10b-instruct'
+
+ # Run evaluation
+ results = evaluator.simple_evaluate(
+     model="hf",  # Hugging Face model backend
+     cache_requests=False,
+     model_args=f"pretrained={model_path}",
+     tasks=tasks_list,
+     batch_size=4,
+     device="cuda:0"
+ )

+ # Extract the per-task results and serialize them
+ results = results['results']
+ json_string = json.dumps(results, indent=4)
+ print(json_string)
  ```
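
Note that `results["results"]` maps each task name to its metric dictionary; depending on the harness version the keys take forms such as `acc,none` or `acc_norm,none`, so dumping the JSON as above is the simplest way to check exactly which metrics were produced for each benchmark before copying them into the table.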

  ## Ethical Issues

  ## Contact Us

  ### Why Upstage LLM?
+ - [EmpirischTech](https://empirischtech.at)/[ChaperoneAI](https://chaperoneai.net): Unlock the full potential of private LLMs for your business with ease. Customize and fine-tune them using your own data for a solution that fits your unique needs. Want a seamless integration? Let’s connect! ► [Get in touch](https://chaperoneai.net/contact)