---
tags:
- code
- starcoder2
library_name: transformers
pipeline_tag: text-generation
license: bigcode-openrail-m
---
[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)
# QuantFactory/starcoder2-3b-instruct-GGUF
This is a quantized version of [TechxGenus/starcoder2-3b-instruct](https://huggingface.co/TechxGenus/starcoder2-3b-instruct) created using llama.cpp.
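Since this repo provides GGUF files, the model can also be run directly with llama.cpp bindings instead of transformers. Below is a minimal sketch using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python); the `.gguf` filename is a placeholder, so substitute the quantization file you actually download from this repo:

```python
# Minimal sketch with llama-cpp-python (pip install llama-cpp-python).
# The model_path below is a placeholder; point it at the GGUF file you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="starcoder2-3b-instruct.Q4_K_M.gguf", n_ctx=4096)

# Same Alpaca-style template (without system prompt) as the original model card.
prompt = "### Instruction\nWrite a Python function that reverses a string.\n### Response\n"
output = llm(prompt, max_tokens=512, stop=["### Instruction"])
print(output["choices"][0]["text"])
```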
# Original Model Card
<p align="center">
<img width="300px" alt="starcoder2-instruct" src="https://huggingface.co/TechxGenus/starcoder2-3b-instruct/resolve/main/starcoder2-instruct.jpg">
</p>
### starcoder2-instruct
We've fine-tuned starcoder2-3b with an additional 0.7 billion high-quality, code-related tokens for 3 epochs. We used DeepSpeed ZeRO 3 and Flash Attention 2 to accelerate the training process. It achieves **65.9 pass@1** on HumanEval-Python. This model operates using the Alpaca instruction format (excluding the system prompt).
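Concretely, a prompt rendered with this template looks like the following (the same template appears in the usage examples below):

```
### Instruction
Write a Python function that checks whether a number is prime.
### Response
```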
### Usage
Here are some examples of how to use our model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

PROMPT = """### Instruction
{instruction}
### Response
"""
instruction = "<Your code instruction here>"  # replace with your actual instruction string
prompt = PROMPT.format(instruction=instruction)
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/starcoder2-3b-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "TechxGenus/starcoder2-3b-instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=2048)
print(tokenizer.decode(outputs[0]))
```
With the text-generation pipeline:
```python
from transformers import pipeline
import torch

PROMPT = """### Instruction
{instruction}
### Response
"""
instruction = "<Your code instruction here>"  # replace with your actual instruction string
prompt = PROMPT.format(instruction=instruction)
generator = pipeline(
    model="TechxGenus/starcoder2-3b-instruct",
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
result = generator(prompt, max_length=2048)
print(result[0]["generated_text"])
```
### Note
The model may sometimes make errors, produce misleading content, or struggle with tasks unrelated to coding. It has undergone very limited testing; additional safety testing should be performed before any real-world deployment.