piratos committed on
Commit
7c8e94e
1 Parent(s): 7f37dae

Add model files

README.md ADDED
@@ -0,0 +1,221 @@
+ ---
+ pipeline_tag: text-generation
+ inference: true
+ widget:
+ - text: 'def print_hello_world():'
+   example_title: Hello world
+   group: Python
+ - text: 'Gradient descent is'
+   example_title: Machine Learning
+   group: English
+ license: bigcode-openrail-m
+ datasets:
+ - bigcode/the-stack-dedup
+ - tiiuae/falcon-refinedweb
+ metrics:
+ - code_eval
+ - mmlu
+ - arc
+ - hellaswag
+ - truthfulqa
+ library_name: transformers
+ tags:
+ - code
+ model-index:
+ - name: StarCoderPlus
+   results:
+   - task:
+       type: text-generation
+     dataset:
+       type: openai_humaneval
+       name: HumanEval (Prompted)
+     metrics:
+     - name: pass@1
+       type: pass@1
+       value: 26.7
+       verified: false
+   - task:
+       type: text-generation
+     dataset:
+       type: MMLU (5-shot)
+       name: MMLU
+     metrics:
+     - name: Accuracy
+       type: Accuracy
+       value: 45.1
+       verified: false
+   - task:
+       type: text-generation
+     dataset:
+       type: HellaSwag (10-shot)
+       name: HellaSwag
+     metrics:
+     - name: Accuracy
+       type: Accuracy
+       value: 77.3
+       verified: false
+   - task:
+       type: text-generation
+     dataset:
+       type: ARC (25-shot)
+       name: ARC
+     metrics:
+     - name: Accuracy
+       type: Accuracy
+       value: 48.9
+       verified: false
+   - task:
+       type: text-generation
+     dataset:
+       type: TruthfulQA (0-shot)
+       name: TruthfulQA
+     metrics:
+     - name: Accuracy
+       type: Accuracy
+       value: 37.9
+       verified: false
+ extra_gated_prompt: >-
+   ## Model License Agreement
+
+   Please read the BigCode [OpenRAIL-M
+   license](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement)
+   agreement before accepting it.
+
+ extra_gated_fields:
+   I accept the above license agreement, and will use the Model complying with the set of use restrictions and sharing requirements: checkbox
+ ---
+
+
+ # Fast Inference with CTranslate2
+ Speed up inference and cut memory use by 2x-4x with int8 inference in C++ on CPU or GPU.
+
+ A quantized version of [bigcode/starcoderplus](https://huggingface.co/bigcode/starcoderplus).
+ ```bash
+ pip install "hf-hub-ctranslate2>=2.0.10" "ctranslate2>=3.16.0"
+ ```
+ Converted on 2023-06-18 using
+ ```bash
+ ct2-transformers-converter --model bigcode/starcoderplus --output_dir ./ct2fast-starcoder --force --copy_files merges.txt tokenizer.json README.md tokenizer_config.json vocab.json generation_config.json special_tokens_map.json .gitattributes --quantization int8_float16 --trust_remote_code
+ ```
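+
+ The converted checkpoint can also be driven by CTranslate2 directly, without the Hub wrapper. A minimal sketch, assuming the `./ct2fast-starcoder` output directory produced by the command above:
+ ```python
+ import ctranslate2
+ from transformers import AutoTokenizer
+
+ # Load the locally converted model; compute_type matches the quantization used above.
+ generator = ctranslate2.Generator("./ct2fast-starcoder", device="cuda", compute_type="int8_float16")
+ tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoderplus")
+
+ # CTranslate2 consumes token strings rather than token ids.
+ tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode("def print_hello_world():"))
+ results = generator.generate_batch([tokens], max_length=64)
+ print(tokenizer.decode(results[0].sequences_ids[0]))
+ ```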
+
+ Checkpoint compatible with [ctranslate2>=3.16.0](https://github.com/OpenNMT/CTranslate2)
+ and [hf-hub-ctranslate2>=2.0.10](https://github.com/michaelfeil/hf-hub-ctranslate2):
+ - `compute_type=int8_float16` for `device="cuda"`
+ - `compute_type=int8` for `device="cpu"`
+
+ ```python
+ from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
+ from transformers import AutoTokenizer
+
+ model_name = "piratos/ct2fast-starcoderplus"
+ # Use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on the model.
+ model = GeneratorCT2fromHfHub(
+     # load in int8 on CUDA
+     model_name_or_path=model_name,
+     device="cuda",
+     compute_type="int8_float16",
+     # tokenizer=AutoTokenizer.from_pretrained("bigcode/starcoderplus")
+ )
+ outputs = model.generate(
+     text=["def fibonacci(", "User: How are you doing? Bot:"],
+     max_length=64,
+     include_prompt_in_result=False,
+ )
+ print(outputs)
+ ```
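+
+ On a machine without a GPU, the same wrapper runs with plain int8 weights, per the compatibility notes above; a minimal sketch:
+ ```python
+ model = GeneratorCT2fromHfHub(
+     model_name_or_path="piratos/ct2fast-starcoderplus",
+     device="cpu",
+     compute_type="int8",  # int8 weights for CPU inference
+ )
+ ```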
+
+ # License and other remarks
+ This is just a quantized version; the license conditions are intended to be identical to those of the original Hugging Face repo.
+
+ # Original description
+
+ # StarCoderPlus
+
+ Play with the instruction-tuned StarCoderPlus at [StarChat-Beta](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground).
+
+ ## Table of Contents
+
+ 1. [Model Summary](#model-summary)
+ 2. [Use](#use)
+ 3. [Limitations](#limitations)
+ 4. [Training](#training)
+ 5. [License](#license)
+ 6. [Citation](#citation)
+
+ ## Model Summary
+
+ StarCoderPlus is a fine-tuned version of [StarCoderBase](https://huggingface.co/bigcode/starcoderbase) trained on 600B tokens from the English web dataset [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
+ combined with [StarCoderData](https://huggingface.co/datasets/bigcode/starcoderdata) from [The Stack (v1.2)](https://huggingface.co/datasets/bigcode/the-stack) and a Wikipedia dataset.
+ It's a 15.5B parameter language model trained on English and 80+ programming languages. The model uses [Multi Query Attention](https://arxiv.org/abs/1911.02150),
+ [a context window of 8192 tokens](https://arxiv.org/abs/2205.14135), and was trained using the [Fill-in-the-Middle objective](https://arxiv.org/abs/2207.14255) on 1.6 trillion tokens.
+
+ - **Repository:** [bigcode/Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
+ - **Project Website:** [bigcode-project.org](https://www.bigcode-project.org)
+ - **Point of Contact:** [[email protected]](mailto:[email protected])
+ - **Languages:** English & 80+ programming languages
+
+
+ ## Use
+
+ ### Intended use
+
+ The model was trained on English text and GitHub code. As such it is _not_ an instruction model, and commands like "Write a function that computes the square root." do not work well. However, the instruction-tuned version in [StarChat](https://huggingface.co/spaces/HuggingFaceH4/starchat-playground) makes a capable assistant.
+
+ **Feel free to share your generations in the Community tab!**
+
+ ### Generation
+ ```python
+ # pip install -q transformers
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ checkpoint = "bigcode/starcoderplus"
+ device = "cuda"  # for GPU usage, or "cpu" for CPU usage
+
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint)
+ model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)
+
+ inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
+ outputs = model.generate(inputs)
+ print(tokenizer.decode(outputs[0]))
+ ```
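+
+ By default `generate` decodes greedily with a short length limit; a sketch of common sampling knobs (the values are illustrative, not tuned for this model):
+ ```python
+ outputs = model.generate(
+     inputs,
+     max_new_tokens=128,  # token budget for the completion itself
+     do_sample=True,      # sample instead of greedy decoding
+     temperature=0.2,     # low temperatures tend to suit code generation
+     top_p=0.95,
+     pad_token_id=tokenizer.eos_token_id,
+ )
+ print(tokenizer.decode(outputs[0]))
+ ```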
+
+ ### Fill-in-the-middle
+ Fill-in-the-middle uses special tokens to identify the prefix/middle/suffix parts of the input and output:
+
+ ```python
+ input_text = "<fim_prefix>def print_hello_world():\n    <fim_suffix>\n    print('Hello world!')<fim_middle>"
+ inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
+ outputs = model.generate(inputs)
+ print(tokenizer.decode(outputs[0]))
+ ```
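+
+ The model emits the missing middle after `<fim_middle>`; a minimal sketch of stitching the pieces back together (it assumes generation stops at `<|endoftext|>`):
+ ```python
+ decoded = tokenizer.decode(outputs[0])
+ # Everything after <fim_middle> is the generated middle part.
+ middle = decoded.split("<fim_middle>")[-1].replace("<|endoftext|>", "")
+ prefix = "def print_hello_world():\n    "
+ suffix = "\n    print('Hello world!')"
+ print(prefix + middle + suffix)
+ ```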
+
+ ### Attribution & Other Requirements
+
+ The code dataset used for training was filtered for permissive licenses only. Nevertheless, the model can generate source code verbatim from the dataset, and that code's license might require attribution and/or compliance with other specific requirements. We provide a [search index](https://huggingface.co/spaces/bigcode/starcoder-search) that lets you search the pretraining data to identify where generated code came from and apply the proper attribution.
+
+ # Limitations
+
+ The model has been trained on a mixture of English text from the web and GitHub code, so it may struggle with non-English text and can carry the stereotypes and biases commonly encountered online.
+ Additionally, generated code should be used with caution as it may contain errors, inefficiencies, or potential vulnerabilities. For a more comprehensive understanding of the base model's code limitations, please refer to the [StarCoder paper](https://arxiv.org/abs/2305.06161).
+
+ # Training
+ StarCoderPlus is StarCoderBase fine-tuned on 600B English and code tokens; StarCoderBase itself was pre-trained on 1T code tokens. Below are the fine-tuning details:
+
+ ## Model
+ - **Architecture:** GPT-2 model with multi-query attention and Fill-in-the-Middle objective
+ - **Finetuning steps:** 150k
+ - **Finetuning tokens:** 600B
+ - **Precision:** bfloat16
+
+ ## Hardware
+
+ - **GPUs:** 512 Tesla A100
+ - **Training time:** 14 days
+
+ ## Software
+
+ - **Orchestration:** [Megatron-LM](https://github.com/bigcode-project/Megatron-LM)
+ - **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
+ - **BF16 if applicable:** [apex](https://github.com/NVIDIA/apex)
+
+ # License
+ The model is licensed under the BigCode OpenRAIL-M v1 license agreement. You can find the full agreement [here](https://huggingface.co/spaces/bigcode/bigcode-model-license-agreement).
config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "layer_norm_epsilon": null,
+   "unk_token": "<|endoftext|>",
+   "model_type": "gpt_bigcode_ct2fast"
+ }
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 0,
+   "eos_token_id": 0,
+   "transformers_version": "4.27.0.dev0"
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1891f60f4714b1e91a9407cadc81560d9ec3db9700afa551f1eb948a9acb8eed
+ size 15577671723
special_tokens_map.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "additional_special_tokens": [
+     "<|endoftext|>",
+     "<fim_prefix>",
+     "<fim_middle>",
+     "<fim_suffix>",
+     "<fim_pad>",
+     "<filename>",
+     "<gh_stars>",
+     "<issue_start>",
+     "<issue_comment>",
+     "<issue_closed>",
+     "<jupyter_start>",
+     "<jupyter_text>",
+     "<jupyter_code>",
+     "<jupyter_output>",
+     "<empty_output>",
+     "<commit_before>",
+     "<commit_msg>",
+     "<commit_after>",
+     "<reponame>"
+   ],
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "unk_token": "<|endoftext|>"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "add_prefix_space": false,
+   "additional_special_tokens": [
+     "<|endoftext|>",
+     "<fim_prefix>",
+     "<fim_middle>",
+     "<fim_suffix>",
+     "<fim_pad>",
+     "<filename>",
+     "<gh_stars>",
+     "<issue_start>",
+     "<issue_comment>",
+     "<issue_closed>",
+     "<jupyter_start>",
+     "<jupyter_text>",
+     "<jupyter_code>",
+     "<jupyter_output>",
+     "<empty_output>",
+     "<commit_before>",
+     "<commit_msg>",
+     "<commit_after>",
+     "<reponame>"
+   ],
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "model_max_length": 1000000000000000019884624838656,
+   "tokenizer_class": "GPT2Tokenizer",
+   "unk_token": "<|endoftext|>",
+   "vocab_size": 49152
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
vocabulary.json ADDED
The diff for this file is too large to render. See raw diff