---
license: mit
datasets:
- pints-ai/Expository-Prose-V1
- HuggingFaceH4/ultrachat_200k
- Open-Orca/SlimOrca-Dedup
- meta-math/MetaMathQA
- HuggingFaceH4/deita-10k-v0-sft
- WizardLM/WizardLM_evol_instruct_V2_196k
- togethercomputer/llama-instruct
- LDJnr/Capybara
- HuggingFaceH4/ultrafeedback_binarized
language:
- en
model-index:
- name: 1.5-Pints
  results:
  - task:
      type: text-generation
    dataset:
      name: MTBench
      type: ai2_arc
    metrics:
    - name: MTBench
      type: LLM-as-a-Judge
      value: 3.4
    source:
      name: MTBench
      url: https://huggingface.co/spaces/lmsys/mt-bench
pipeline_tag: text-generation
extra_gated_prompt: >-
  Though best efforts have been made to ensure, as much as possible, that all
  texts in the training corpora are royalty-free, this does not constitute a
  legal guarantee that such is the case. **By using any of the models, corpora,
  or parts thereof, the user agrees to bear full responsibility for doing the
  necessary due diligence to ensure that they are in compliance with their local
  copyright laws.** Additionally, the user agrees to bear any damages arising as
  a direct consequence (or otherwise) of using any artifacts released by the
  Pints research team, as well as full responsibility for the consequences of
  their usage (or implementation) of any such released artifacts. The user also
  indemnifies the Pints Research Team (and any of its members or agents) against
  any damage, related or unrelated, to the release or subsequent usage of any
  findings, artifacts or code by the team. For the avoidance of doubt, any
  artifacts released by the Pints Research team are released in accordance with
  the 'fair use' clause of Copyright Law, in the hope that this will aid the
  research community in bringing LLMs to the next frontier.
extra_gated_fields:
  Company: text
  Country: country
  Specific date: date_picker
  I want to use this model for:
    type: select
    options:
    - Research
    - Education
    - label: Other
      value: other
  I agree to use this model in accordance with the afore-mentioned Terms of Use: checkbox
---

# 1.5-Pints -- A model pretrained in 9 days using high-quality data

Join us on Discord: https://discord.gg/eGTRzDdH

## How to use

**Install dependencies**
```bash
pip install transformers
# Omit `flash-attn` if not supported by your hardware
pip install flash-attn --no-build-isolation
```

**Usage**
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto

# INITIALIZE the model and the tokenizer
model = AutoModelForCausalLM.from_pretrained(
    "pints-ai/1.5-Pints-16K-v0.1",
    device_map=device,
    attn_implementation="flash_attention_2" # can be omitted if not supported
)
tokenizer = AutoTokenizer.from_pretrained("pints-ai/1.5-Pints-16K-v0.1")

# PREPARE and tokenize the prompt
prompt = "Predict what life will be like 100 years from now."
messages = [
    {"role": "system", "content": "You are an AI assistant that follows instruction extremely well. Help as much as you can."},
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

model_inputs = tokenizer([text], return_tensors="pt").to(device)

# GENERATE the response
generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)

# DECODE the response
input_length = len(model_inputs.input_ids[0])
# Remove the input tokens and decode only the newly generated ones,
# dropping special tokens such as <|im_end|>
response = tokenizer.decode(generated_ids[0][input_length:], skip_special_tokens=True)

print(response)
```

**Compute Infrastructure**<br>
This model can be served on a GPU with at least 8 GB of VRAM.
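The parameter count listed under Technical Specifications (about 1.57B parameters × 4 bytes) puts the Float32 weights at roughly 6.3 GB, which is why 8 GB suffices. If memory is tight, the weights can also be loaded in half precision; a minimal sketch, assuming your GPU supports bfloat16 (`torch_dtype` is a standard `transformers` argument, and the checkpoint id matches the usage example above):
```python
import torch
from transformers import AutoModelForCausalLM

# Loading in bfloat16 roughly halves the memory footprint of the Float32 weights.
# Fall back to the default (float32) if your hardware lacks bfloat16 support.
model = AutoModelForCausalLM.from_pretrained(
    "pints-ai/1.5-Pints-16K-v0.1",
    torch_dtype=torch.bfloat16,
    device_map="cuda",
)
```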
<br><br>

## Description
1.5-Pints is a Large Language Model that significantly advances the efficiency of LLM training by emphasizing data quality over quantity. Our [pre-training corpus](https://huggingface.co/datasets/pints-ai/Expository-Prose-V1) is a meticulously curated dataset of 57 billion tokens, making pre-training more accessible and environmentally friendly.
<br><br>

## Results
**MTBench**<br>
[MTBench](https://huggingface.co/spaces/lmsys/mt-bench) is a popular evaluation harness that uses strong LLMs like GPT-4 to act as judges and assess the quality of the models' responses.
| Model | Score | Parameter Size | Pretrain Tokens |
|:-:|:-:|:-:|:-:|
| meta-llama/Llama-2-7b-chat-hf | 6.27 | 7B | 2T |
| microsoft/phi-2 | 5.83 | 2.7B | 1.4T |
| google/gemma-2b-it | 5.44 | 2B | 3T |
| stabilityai/stablelm-2-1_6b-chat | 4.7 | 1.6B | 2T |
| **1.5-Pints-2K** | **3.73** | **1.57B** | **0.115T** |
| TinyLlama/TinyLlama-1.1B-Chat-v1.0 | 3.72 | 1.1B | 3T |
| **1.5-Pints-16K** | **3.40** | **1.57B** | **0.115T** |
| apple/OpenELM-1_1B-Instruct | 3.34 | 1B | 1.8T |
| microsoft/phi-1_5 | 3.33 | 1.3B | 0.15T |
| databricks/dolly-v2-3b | 2.33 | 3B | 0.3T |
| EleutherAI/pythia-2.8b | 1.81 | 2.8B | 0.3T |
| tiiuae/falcon-rw-1b | 1.18 | 1B | 0.35T |

The 2K context window version of 1.5-Pints can be found [here](https://huggingface.co/pints-ai/1.5-Pints-2K-v0.1).

## Technical Specifications
**Architecture**<br>
Llama 2 autoregressive architecture with a **16K context window** and the Mistral tokenizer. The model uses Float32 precision.

| Parameters | Vocab Size | Embedding Size | Context Length | Layers | Heads | Query Groups | Intermediate Hidden Size |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| 1,565,886,464 | 32,064 | 2,048 | 16,384 | 24 | 32 | 4 | 8,192 |
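These values can be cross-checked against the checkpoint's configuration; a quick sketch, assuming the repository exposes a standard `transformers` Llama config (field names follow that convention):
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("pints-ai/1.5-Pints-16K-v0.1")

# Each field should mirror the table above.
print(config.vocab_size)               # 32064
print(config.hidden_size)              # 2048  (embedding size)
print(config.max_position_embeddings)  # 16384 (context length)
print(config.num_hidden_layers)        # 24
print(config.num_attention_heads)      # 32
print(config.num_key_value_heads)      # 4     (query groups)
print(config.intermediate_size)        # 8192  (intermediate hidden size)
```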

**Context Lengths**<br>
1.5-Pints comes in two context lengths: 16K (16,384 tokens) and 2K (2,048 tokens).

**Prompt template**<br>
This model has been finetuned and preference-optimized using the ChatML template.
```
<|im_start|>system 
{SYSTEM_PROMPT}<|im_end|> 
<|im_start|>user 
{PROMPT}<|im_end|> 
<|im_start|>assistant 
```
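`tokenizer.apply_chat_template` produces this format automatically. A quick way to confirm the rendered prompt (reusing the checkpoint id from the usage example above):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("pints-ai/1.5-Pints-16K-v0.1")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# Prints the rendered ChatML string, ending with the assistant header
# so the model generates the assistant turn next.
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```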
<br><br>

## Uses
**Direct Use**<br>
This model is meant to be an efficient, fine-tunable, helpful assistant. It is designed to excel at user assistance and reasoning while relying less on internal knowledge and factual recall. For knowledge-retrieval purposes, it should therefore be paired with Retrieval-Augmented Generation (RAG), as sketched below.
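A minimal sketch of that pattern; the retrieved passages would come from your own retriever or document store (hypothetical here), and only the prompt construction is shown:
```python
def build_rag_messages(question: str, retrieved_passages: list[str]) -> list[dict]:
    """Hypothetical helper: ground the model in retrieved context rather than
    its internal knowledge; feed the result to tokenizer.apply_chat_template."""
    context = "\n\n".join(retrieved_passages)
    return [
        {"role": "system", "content": "Answer the question using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]
```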

**Downstream Use**<br>
Given its small size, it is possible to launch multiple instances of this model for agentic use cases without breaking the compute bank.

**Recommendations**<br>
- It is recommended to finetune this model for domain adaptation and to use it for specialized tasks.
- For full performance, use a repetition penalty of 1.3 rather than the default of 1.0 (see the snippet below).
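Continuing from the usage example above, the repetition penalty is passed directly to `generate`:
```python
# Same model and tokenized inputs as in the usage example above.
generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512,
    repetition_penalty=1.3,  # recommended over the default of 1.0
)
```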
<br><br>

## Training Data
**Pre-Train Data**<br>
Dataset: [pints-ai/Expository-Prose-V1](https://huggingface.co/datasets/pints-ai/Expository-Prose-V1)


**Fine-Tune Data**<br>
Corpora:
- [HuggingFaceH4/ultrachat_200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k)
- [Open-Orca/SlimOrca-Dedup](https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup)
- [meta-math/MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA)
- [HuggingFaceH4/deita-10k-v0-sft](https://huggingface.co/datasets/HuggingFaceH4/deita-10k-v0-sft)
- [WizardLM/WizardLM_evol_instruct_V2_196k](https://huggingface.co/datasets/WizardLMTeam/WizardLM_evol_instruct_V2_196k)
- [togethercomputer/llama-instruct](https://huggingface.co/datasets/togethercomputer/llama-instruct)
- [LDJnr/Capybara](https://huggingface.co/datasets/LDJnr/Capybara)

**DPO Data**<br>
Dataset: [HuggingFaceH4/ultrafeedback_binarized](https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized)
<br><br>

## Training Procedure
Both pre-training and finetuning used [our fork](https://github.com/Pints-AI/1.5-Pints) of the [LitGPT framework](https://github.com/Lightning-AI/litgpt). For DPO, we used the methods set out in [The Alignment Handbook](https://github.com/huggingface/alignment-handbook/blob/main/scripts/run_dpo.py). More details can be found in our [paper](https://arxiv.org/abs/2408.03506).

## Training Hyperparameters
**Pre-Train**<br>
| Hyperparameter | Value |
|:-:|:-:|
| Optimizer | AdamW (β<sub>1</sub>=0.9, β<sub>2</sub>=0.95) |
| Learning Rate Scheduler | Cosine |
| Max Learning Rate | 4.0 × 10<sup>-4</sup> |
| Min Learning Rate | 4.0 × 10<sup>-5</sup> |
| Warmup Steps | 2,000 |
| Batch Size (tokens) | 2,097,152 |
| Weight Decay | 0.1 |
| Gradient Clipping Threshold | 1.0 |
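For reference, a minimal PyTorch sketch of this optimizer and learning-rate schedule (illustrative only; the actual run used the LitGPT fork referenced under Training Procedure, and `max_steps` below is a placeholder, not a value from the run):
```python
import math

import torch

model = torch.nn.Linear(8, 8)  # stand-in for the actual LLM

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=4.0e-4,            # max learning rate
    betas=(0.9, 0.95),
    weight_decay=0.1,
)

def lr_at(step: int, warmup: int = 2_000, max_steps: int = 100_000,
          max_lr: float = 4.0e-4, min_lr: float = 4.0e-5) -> float:
    """Linear warmup to max_lr, then cosine decay down to min_lr."""
    if step < warmup:
        return max_lr * step / warmup
    progress = (step - warmup) / max(1, max_steps - warmup)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))

# Gradients are clipped to a max norm of 1.0 before each optimizer step:
# torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
```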

**SFT**<br>
| Hyperparameter | Value |
|:-:|:-:|
| Optimizer | AdamW (β<sub>1</sub>=0.9, β<sub>2</sub>=0.95) |
| Warmup Steps | 1,126 (10%) |
| Peak Learning Rate | 2.0 × 10<sup>-5</sup> |
| Learning Rate Scheduler | Cosine |
| Weight Decay | 0.1 |

**DPO**<br>
DPO parameters used are the exact same as those specified in [The Alignment Handbook](https://github.com/huggingface/alignment-handbook).
<br><br>

## Citation
**Attribution**
- **Developed by:** [calvintwr](https://huggingface.co/calvintwr), [lemousehunter](https://huggingface.co/lemousehunter)
- **Funded by:** [PintsAI](https://pints.ai/)
- **Released by:** [PintsAI](https://pints.ai/)
- **Model type:** Large Language Model
- **Language(s) (NLP):** English
- **License:** [MIT License](https://opensource.org/license/mit)
<br><br>

**BibTeX:**
```latex
@misc{tan202415pintstechnicalreportpretraining,
      title={1.5-Pints Technical Report: Pretraining in Days, Not Months -- Your Language Model Thrives on Quality Data},
      author={Calvin Tan and Jerome Wang},
      year={2024},
      eprint={2408.03506},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2408.03506},
}
```

**APA**<br>
Tan, C., & Wang, J. (2024). 1.5-Pints Technical Report: Pretraining in days, not months -- Your language model thrives on quality data. arXiv. https://arxiv.org/abs/2408.03506
<br><br>

## Legal Warning
Though best efforts have been made to ensure, as much as possible, that all texts in the training corpora are royalty-free, this does not constitute a legal guarantee that such is the case. **By using any of the models, corpora, or parts thereof, the user agrees to bear full responsibility for doing the necessary due diligence to ensure that they are in compliance with their local copyright laws**.

Additionally, the **user agrees to bear any damages** arising as a direct consequence (or otherwise) of using any artifacts released by the Pints research team, as well as full responsibility for the consequences of their usage (or implementation) of any such released artifacts. The user also indemnifies the Pints Research Team (and any of its members or agents) against any damage, related or unrelated, to the release or subsequent usage of any findings, artifacts, or code by the team.

For the avoidance of doubt, **any artifacts released by the Pints Research team are released in accordance with the "fair use"** clause of Copyright Law, in the hope that this will aid the research community in bringing LLMs to the next frontier.