add model card
README.md
CHANGED
---
library_name: transformers
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- HuggingFaceH4/ultrachat_200k
- hkust-nlp/deita-10k-v0
- Open-Orca/SlimOrca-Dedup
- cognitivecomputations/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
- HuggingFaceH4/capybara
- meta-math/MetaMathQA
- argilla/ultrafeedback-binarized-preferences-cleaned
- Intel/orca_dpo_pairs
- alexredna/oasst2_dpo_pairs
pipeline_tag: text-generation
---

## Model Details

With great enthusiasm, we unveil the Prem-1B series: open-source, multipurpose large language models developed by Prem AI. These small language models (SLMs) give the open community and enterprises capabilities that were once available only through closed model APIs, empowering them to build their own advanced applications. Our objective is a model that excels at Retrieval-Augmented Generation (RAG). While large language models store vast amounts of information within their parameters, RAG ingests information at runtime, which suggests that RAG applications may not require models of immense size. With this initiative, we aim to build an SLM with an extended context length of 8192 tokens, enabling it to handle multi-turn conversations and long retrieved contexts effectively. This is our first attempt to craft an SLM tailored for RAG tasks.
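
As a quick sanity check of the advertised context window, you can read it off the model config without downloading any weights (a minimal sketch; `max_position_embeddings` is the standard attribute on Llama-style configs):

```py
from transformers import AutoConfig

# Load only the configuration (no weights) and inspect the context window.
config = AutoConfig.from_pretrained("premai-io/prem-1B-chat")
print(config.max_position_embeddings)  # expected: 8192, per the card
```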

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub.

- **Developed by:** [Prem AI](https://premai.io/)
- **Model type:** Llama
- **Language(s) (NLP):** English
- **License:** Apache License 2.0

## Uses

The Prem-1B language models are designed for commercial and research applications involving the English language. The instruction-tuned versions are tailored for conversational interactions akin to a virtual assistant, while the pretrained variants can be fine-tuned and adapted for various natural language generation tasks beyond dialogue; a hedged fine-tuning sketch follows below.
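
As an illustration of that second path, here is a minimal causal-LM fine-tuning loop with the 🤗 `Trainer`. This is not the procedure used to train Prem-1B (see Training Details for the blog post); the base-model repo id, dataset, and hyperparameters below are illustrative assumptions.

```py
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Assumed repo id for the pretrained (non-chat) variant.
tokenizer = AutoTokenizer.from_pretrained("premai-io/prem-1B")
model = AutoModelForCausalLM.from_pretrained("premai-io/prem-1B")
if tokenizer.pad_token is None:  # Llama-style tokenizers often lack a pad token
    tokenizer.pad_token = tokenizer.eos_token

# Placeholder corpus; swap in your own text dataset.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="prem-1b-finetuned",
                           per_device_train_batch_size=2,
                           num_train_epochs=1,
                           learning_rate=2e-5),
    train_dataset=tokenized,
    # mlm=False -> plain causal language modeling (labels are shifted inputs)
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```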
### Out-of-Scope Use

The model must not be used in any manner that violates applicable laws or regulations, including trade compliance laws, or in any way prohibited by the Acceptable Use Policy and the Prem-1B Community License. While the base model is intended for English, developers may fine-tune the Prem-1B models for other languages, provided they comply with the Prem-1B Community License and the Acceptable Use Policy.

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.

## How to Get Started with the Model

Using `AutoModelForCausalLM` and `AutoTokenizer`:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("premai-io/prem-1B-chat")
model = AutoModelForCausalLM.from_pretrained("premai-io/prem-1B-chat")
model = model.to("cuda")

# Set up terminators
terminators = [tokenizer.eos_token_id, tokenizer.encode("<|eot_id|>", add_special_tokens=False)[0]]

# Prepare the prompt
messages = [
    {
        "role": "system",
        "content": "You are a helpful AI assistant. You should give concise responses to very simple questions, but provide thorough responses to more complex and open-ended questions."
    },
    {
        "role": "user",
        "content": "Help me understand machine learning."
    }
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate
inputs = tokenizer(prompt, return_attention_mask=False, return_tensors="pt", add_special_tokens=False)
input_ids = inputs["input_ids"].to(model.device)
res = model.generate(input_ids=input_ids, max_new_tokens=400, pad_token_id=tokenizer.pad_token_id, eos_token_id=terminators)
generated_text = tokenizer.decode(res[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(generated_text)
```
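
If no GPU is available, drop the `model.to("cuda")` line; generation will then run, more slowly, on the CPU.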

Using pipelines:
```py
import torch
from transformers import pipeline

# Load the pipeline
pipe = pipeline("text-generation", model="premai-io/prem-1B-chat", torch_dtype=torch.bfloat16, device=0)

# Prepare the prompt
messages = [
    {
        "role": "system",
        "content": "You are a helpful AI assistant. You should give concise responses to very simple questions, but provide thorough responses to more complex and open-ended questions."
    },
    {
        "role": "user",
        "content": "Help me understand machine learning."
    }
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Set up terminators
terminators = [pipe.tokenizer.eos_token_id, pipe.tokenizer.encode("<|eot_id|>", add_special_tokens=False)[0]]

# Generate
outputs = pipe(prompt, max_new_tokens=400, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, pad_token_id=pipe.tokenizer.pad_token_id, eos_token_id=terminators)
print(outputs[0]["generated_text"][len(prompt):])
```
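
Because the series targets RAG, here is a hedged sketch of folding retrieved passages into the 8192-token context via the chat template, reusing `pipe` and `terminators` from the snippet above. Retrieval itself is out of scope: `retrieved_docs` stands in for your retriever's output, and the "context in the system message" format is our assumption, not an official prompt format.

```py
# Retrieved passages would normally come from a vector store or search index.
retrieved_docs = [
    "Prem-1B is a small language model with an 8192-token context window.",
    "RAG supplies documents at runtime instead of relying on parametric memory.",
]
context = "\n\n".join(retrieved_docs)

messages = [
    {"role": "system", "content": f"Answer using only the context below.\n\nContext:\n{context}"},
    {"role": "user", "content": "What is the context window of Prem-1B?"},
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipe(prompt, max_new_tokens=200, do_sample=False,
               pad_token_id=pipe.tokenizer.pad_token_id, eos_token_id=terminators)
print(outputs[0]["generated_text"][len(prompt):])
```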
## Training Details

### Training Data

Described in the blog post: https://blog.premai.io/p/e4168cd0-36f2-4a7f-b810-50393dd65601/

### Training Procedure

Described in the blog post: https://blog.premai.io/p/e4168cd0-36f2-4a7f-b810-50393dd65601/

#### Training Hyperparameters

Listed in the blog post: https://blog.premai.io/p/e4168cd0-36f2-4a7f-b810-50393dd65601/

## Evaluation

### Results

| Model                    | Avg   | Arc-c | Arc-e | Hellaswag | MMLU  | Obqa  | Piqa  | Winogrande |
|--------------------------|-------|-------|-------|-----------|-------|-------|-------|------------|
| prem-1B                  | 42.64 | 24.74 | 57.40 | 42.01     | 24.75 | 21.00 | 72.14 | 56.43      |
| prem-1B-chat             | 41.76 | 24.48 | 53.32 | 40.28     | 25.27 | 22.20 | 70.89 | 55.88      |
| TinyLlama-1.1B-Chat-v1.0 | 46.16 | 30.03 | 61.53 | 46.56     | 24.72 | 25.80 | 74.21 | 60.29      |
| opt-1.3b                 | 42.94 | 23.37 | 57.44 | 41.49     | 24.86 | 23.20 | 71.49 | 58.72      |
| pythia-1b                | 40.71 | 24.31 | 56.90 | 37.72     | 23.20 | 18.80 | 70.62 | 53.43      |

![Radar chart of benchmark results](images/radar_chart.png)
## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** H100 GPUs
- **Hours used:** 8500
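
The card does not report carbon emitted. As a rough back-of-the-envelope in the spirit of the calculator, with loudly assumed constants (GPU power draw, datacenter overhead, and grid intensity are not stated in the card):

```py
# Every constant below except gpu_hours is an assumption, not a reported figure.
gpu_hours = 8500        # from the card
gpu_power_kw = 0.7      # assumed ~700 W TDP per H100 (SXM)
pue = 1.1               # assumed datacenter power usage effectiveness
grid_kg_per_kwh = 0.4   # assumed grid carbon intensity

energy_kwh = gpu_hours * gpu_power_kw * pue   # ~6545 kWh
emissions_kg = energy_kwh * grid_kg_per_kwh   # ~2618 kg CO2e
print(f"~{energy_kwh:.0f} kWh, ~{emissions_kg:.0f} kg CO2e")
```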

## Technical Specifications

### Model Architecture and Objective

Llama-based decoder-only transformer, trained with a causal language-modeling objective.

### Compute Infrastructure

16× H100 GPUs.

#### Hardware

H100 GPUs.

#### Software

PyTorch, transformers, PyTorch Lightning.

## Citation

If you use this model, please cite the blog post: https://blog.premai.io/p/e4168cd0-36f2-4a7f-b810-50393dd65601/

## Model Card Authors

[goku](https://huggingface.co/goku), [nsosio](https://huggingface.co/nsosio), [ucalyptus](https://huggingface.co/ucalyptus), [filopedraz](https://huggingface.co/filopedraz)

## Model Card Contact

[goku](https://huggingface.co/goku), [nsosio](https://huggingface.co/nsosio), [ucalyptus](https://huggingface.co/ucalyptus), [filopedraz](https://huggingface.co/filopedraz)