QuantFactory/starcoder2-7b-instruct-GGUF

This is a quantized version of TechxGenus/starcoder2-7b-instruct, created with llama.cpp.
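
These GGUF files can be run with llama.cpp or any of its bindings. Below is a minimal sketch using llama-cpp-python; the filename and context size are assumptions, so substitute whichever quantization you downloaded:

from llama_cpp import Llama

# Load a local GGUF file (filename is an assumption; use the one you downloaded)
llm = Llama(
    model_path="starcoder2-7b-instruct.Q4_K_M.gguf",
    n_ctx=4096,  # context window; adjust to your needs
)

# Alpaca-style prompt, matching the format described in the original card below
prompt = """### Instruction
Write a Python function that checks whether a number is prime.
### Response
"""

output = llm(prompt, max_tokens=512)
print(output["choices"][0]["text"])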

Original Model Card

starcoder2-instruct

We've fine-tuned starcoder2-7b with an additional 0.7 billion high-quality, code-related tokens for 3 epochs, using DeepSpeed ZeRO-3 and Flash Attention 2 to accelerate training. The model achieves 73.2 pass@1 on HumanEval-Python. It uses the Alpaca instruction format, without a system prompt.
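
Concretely, a formatted prompt sent to the model looks like this (the instruction text is only an illustration):

### Instruction
Write a function that reverses a string.
### Response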

Usage

Here are some examples of how to use our model:

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Alpaca-style prompt template (no system prompt)
PROMPT = """### Instruction
{instruction}
### Response
"""

instruction = "<Your code instruction here>"  # replace with your instruction
prompt = PROMPT.format(instruction=instruction)

# Load the tokenizer and model in bfloat16, spreading layers across available devices
tokenizer = AutoTokenizer.from_pretrained("TechxGenus/starcoder2-7b-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "TechxGenus/starcoder2-7b-instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Tokenize the prompt and generate up to 2048 new tokens
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=2048)
print(tokenizer.decode(outputs[0]))
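
The decoded output above includes the prompt itself. If you only want the model's response, one option is to slice off the prompt tokens before decoding (a small variant, not part of the original example):

# Decode only the newly generated tokens, dropping the echoed prompt
response = tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True)
print(response)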

With the text-generation pipeline:

from transformers import pipeline
import torch

# Alpaca-style prompt template (no system prompt)
PROMPT = """### Instruction
{instruction}
### Response
"""

instruction = "<Your code instruction here>"  # replace with your instruction
prompt = PROMPT.format(instruction=instruction)

# Build a text-generation pipeline in bfloat16 across available devices
generator = pipeline(
    model="TechxGenus/starcoder2-7b-instruct",
    task="text-generation",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
result = generator(prompt, max_length=2048)
print(result[0]["generated_text"])
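
Note that max_length counts the prompt tokens as well as the generated ones. To bound only the newly generated tokens, max_new_tokens can be passed instead (a minor variation on the example above):

result = generator(prompt, max_new_tokens=2048)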

Note

The model may sometimes make errors, produce misleading content, or struggle with tasks that are not related to coding. It has undergone very limited testing. Additional safety testing should be performed before any real-world deployment.

Model details

Format: GGUF
Model size: 7.17B params
Architecture: starcoder2
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
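
To fetch a single quantization programmatically rather than cloning the whole repository, a minimal sketch with huggingface_hub (the filename is an assumption; check the repository's file list for the exact names):

from huggingface_hub import hf_hub_download

# Download one GGUF file from the repo (filename is an assumption)
model_path = hf_hub_download(
    repo_id="QuantFactory/starcoder2-7b-instruct-GGUF",
    filename="starcoder2-7b-instruct.Q4_K_M.gguf",
)
print(model_path)  # local path to the downloaded file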
