---
license: other
datasets:
- tiiuae/falcon-refinedweb
- bigcode/the-stack-github-issues
- bigcode/commitpackft
- bigcode/starcoderdata
- EleutherAI/proof-pile-2
- meta-math/MetaMathQA
language:
- en
tags:
- causal-lm
- code
metrics:
- code_eval
library_name: transformers
---
# `stable-code-completion-1.0-3b`

## Model Description

`stable-code-completion-1.0-3b` is a 2.7 billion parameter decoder-only language model pre-trained on 1.3 trillion tokens of diverse textual and code datasets. It is trained on nearly 20 programming languages (selected based on the 2023 StackOverflow Developer Survey) and demonstrates state-of-the-art performance, compared to models of similar size, on the MultiPL-E benchmark across multiple programming languages, evaluated using [BigCode's Evaluation Harness](https://github.com/bigcode-project/bigcode-evaluation-harness/tree/main).

**Key Features**
* Fill in the Middle (FIM) capability
* Long context support, trained with sequences up to 16,384 tokens

## Usage

Get started generating text with `stable-code-completion-1.0-3b` by using the following code snippet:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stable-code-completion-1.0-3b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stable-code-completion-1.0-3b",
    trust_remote_code=True,
    torch_dtype="auto",
)

# Run on GPU when one is available.
device = "cpu"
if torch.cuda.is_available():
    device = "cuda"
model.to(device)

inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to(device)
tokens = model.generate(
    **inputs,
    max_new_tokens=48,
    temperature=0.2,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

### Run with Fill in Middle (FIM) ⚡️

<details>
<summary> Click to expand </summary>

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stable-code-completion-1.0-3b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stable-code-completion-1.0-3b",
    trust_remote_code=True,
    torch_dtype="auto",
)

device = "cpu"
if torch.cuda.is_available():
    device = "cuda"
model.to(device)

# FIM prompts wrap the known prefix and suffix in special tokens; the model
# generates the missing middle after <fim_middle>.
inputs = tokenizer("<fim_prefix>def fib(n):<fim_suffix>    else:\n        return fib(n - 2) + fib(n - 1)<fim_middle>", return_tensors="pt").to(device)
tokens = model.generate(
    **inputs,
    max_new_tokens=48,
    temperature=0.2,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

</details>
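
The decoded output above contains the prompt (prefix and suffix text) followed by the generated middle. A minimal sketch of how the completed snippet can be reassembled, reusing the `tokenizer`, `model`, and `device` from the example above (the slicing below is illustrative, not part of the model's API):

```python
# Illustrative helper: stitch a FIM completion back together.
prefix = "def fib(n):"
suffix = "    else:\n        return fib(n - 2) + fib(n - 1)"

prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
tokens = model.generate(**inputs, max_new_tokens=48, temperature=0.2, do_sample=True)

# Keep only the newly generated tokens (the middle), dropping the prompt and
# any special tokens emitted when the model finishes.
middle = tokenizer.decode(tokens[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(prefix + middle + suffix)
```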

### Run with Flash Attention 2 ⚡️

<details>
<summary> Click to expand </summary>

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stable-code-completion-1.0-3b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stable-code-completion-1.0-3b",
    trust_remote_code=True,
    torch_dtype="auto",
    attn_implementation="flash_attention_2",
)
# Flash Attention 2 requires a CUDA device.
model.to("cuda")

inputs = tokenizer("import torch\nimport torch.nn as nn", return_tensors="pt").to("cuda")
tokens = model.generate(
    **inputs,
    max_new_tokens=48,
    temperature=0.2,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

</details>

## Model Details

* **Developed by**: [Stability AI](https://stability.ai/)
* **Model type**: `stable-code-completion-1.0-3b` is an auto-regressive language model based on the transformer decoder architecture.
* **Language(s)**: English, Code
* **Library**: [GPT-NeoX](https://github.com/EleutherAI/gpt-neox)
* **License**: Other
* **Contact**: For questions and comments about the model, please email `[email protected]`

### Model Architecture

The model is a decoder-only transformer similar to the LLaMA ([Touvron et al., 2023](https://arxiv.org/abs/2307.09288)) architecture with the following modifications:

| Parameters    | Hidden Size | Layers | Heads | Sequence Length |
|---------------|-------------|--------|-------|-----------------|
| 2,796,431,360 | 2560        | 32     | 32    | 16384           |

* **Position Embeddings**: Rotary Position Embeddings ([Su et al., 2021](https://arxiv.org/abs/2104.09864)) applied to the first 25% of head embedding dimensions for improved throughput, following [Black et al. (2022)](https://arxiv.org/pdf/2204.06745.pdf). A schematic sketch of this partial-rotary scheme follows this list.
* **Tokenizer**: We use a modified version of the GPT-NeoX tokenizer ([NeoX](https://github.com/EleutherAI/gpt-neox)), extended with special tokens such as `<fim_prefix>` and `<fim_suffix>`, along with other special tokens, to train Fill in the Middle (FIM) capabilities.
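
To make the partial-rotary detail concrete, here is a minimal, self-contained sketch of applying rotary embeddings to only the first 25% of each head's dimensions. It is illustrative only; the function names, shapes, and base frequency are assumptions and do not mirror the model's internal implementation.

```python
import torch

def rotate_half(x: torch.Tensor) -> torch.Tensor:
    # (x1, x2) -> (-x2, x1) over the last dimension.
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_partial_rotary(x: torch.Tensor, positions: torch.Tensor,
                         rotary_pct: float = 0.25, base: float = 10000.0) -> torch.Tensor:
    """Rotate only the first `rotary_pct` of the head dimension (GPT-NeoX style)."""
    head_dim = x.shape[-1]
    rot_dim = int(head_dim * rotary_pct)               # e.g. 80 * 0.25 = 20 rotated dims
    x_rot, x_pass = x[..., :rot_dim], x[..., rot_dim:]

    # Standard RoPE frequencies, computed over the rotated dimensions only.
    inv_freq = 1.0 / (base ** (torch.arange(0, rot_dim, 2, dtype=torch.float32) / rot_dim))
    angles = positions.float()[:, None] * inv_freq[None, :]   # (seq_len, rot_dim // 2)
    angles = torch.cat((angles, angles), dim=-1)               # (seq_len, rot_dim)
    cos, sin = angles.cos(), angles.sin()

    x_rot = x_rot * cos + rotate_half(x_rot) * sin
    return torch.cat((x_rot, x_pass), dim=-1)

# Example shapes: batch 1, 32 heads, 8 positions, head_dim 80 (= 2560 hidden / 32 heads).
q = torch.randn(1, 32, 8, 80)
print(apply_partial_rotary(q, torch.arange(8)).shape)  # torch.Size([1, 32, 8, 80])
```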

## Training

### Training Dataset

The dataset comprises a filtered mixture of open-source large-scale datasets available on the [HuggingFace Hub](https://huggingface.co/datasets): Falcon RefinedWeb extract ([Penedo et al., 2023](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)), along with [CommitPackFT](https://huggingface.co/datasets/bigcode/commitpackft) and [GitHub Issues](https://huggingface.co/datasets/bigcode/the-stack-github-issues) (BigCode, 2023), and StarCoder ([Li et al., 2023](https://arxiv.org/abs/2305.06161)). We further supplement our training with data from mathematical domains ([Azerbayev, Zhangir, et al., 2023](https://arxiv.org/abs/2310.10631) and [Yu, Longhui, et al., 2023](https://arxiv.org/abs/2309.12284)).
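
These components can be inspected directly from the Hub with the `datasets` library; for example (shown for MetaMathQA only, assuming its default configuration and `train` split):

```python
from datasets import load_dataset

# Load one of the pre-training components listed above (MetaMathQA shown here).
ds = load_dataset("meta-math/MetaMathQA", split="train")
print(ds[0])
```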

### Training Procedure

The model is pre-trained on the aforementioned datasets in `bfloat16` precision, optimized with AdamW.

### Training Infrastructure

* **Hardware**: `stable-code-completion-1.0-3b` was trained on the Stability AI cluster across 256 NVIDIA A100 40GB GPUs (AWS P4d instances).
* **Software**: We use a fork of `gpt-neox` ([EleutherAI, 2021](https://github.com/EleutherAI/gpt-neox)), train with 2D parallelism (Data and Tensor Parallel) and ZeRO-1 ([Rajbhandari et al., 2019](https://arxiv.org/abs/1910.02054v3)), and rely on flash-attention as well as SwiGLU and Rotary Embedding kernels from FlashAttention-2 ([Dao et al., 2023](https://tridao.me/publications/flash2/flash2.pdf)).

## Use and Limitations

### Intended Use

The model is intended to be used as a foundational base model for application-specific fine-tuning. Developers must evaluate and fine-tune the model for safe performance in downstream applications.
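
As a rough, schematic illustration of such fine-tuning, the sketch below runs a single training step with `bfloat16` weights and AdamW, in the same spirit as the pre-training recipe. It is a sketch, not a recipe: the learning rate and the single toy batch are placeholders, it assumes the model's `forward` accepts `labels` as Hugging Face causal-LM heads typically do, and a real run would need a dataset, batching, scheduling, sharding or parameter-efficient methods, and evaluation.

```python
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stable-code-completion-1.0-3b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stable-code-completion-1.0-3b",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
).to("cuda")  # full fine-tuning of a 2.7B model needs a large GPU
model.train()

optimizer = AdamW(model.parameters(), lr=1e-5)  # learning rate is a placeholder

# Toy batch standing in for real fine-tuning data.
batch = tokenizer("def add(a, b):\n    return a + b", return_tensors="pt").to("cuda")
outputs = model(**batch, labels=batch["input_ids"])  # causal-LM loss over the batch
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```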

### Limitations and Bias

As a base model, this model may exhibit unreliable, unsafe, or other undesirable behaviors that must be corrected through evaluation and fine-tuning prior to deployment. The pre-training dataset may have contained offensive or inappropriate content, even after applying data cleansing filters, which can be reflected in the model-generated text. We recommend that users exercise caution when using these models in production systems. Do not use the models if they are unsuitable for your application, or for any applications that may cause deliberate or unintentional harm to others.

## How to Cite

```bibtex
@misc{stable-code-completion-1.0-3b,
  url={https://huggingface.co/stabilityai/stablecode-3b},
  title={Stable Code 3B},
  author={Pinnaparaju, Nikhil and Adithyan, Reshinth and Phung, Duy and Tow, Jonathan and Baicoianu, James and Cooper, Nathan}
}
```