Introduction

This model specializes in the Text-to-SQL task. It is finetuned from the 4-bit quantized version of Qwen2.5-Coder-7B-Instruct. Its 32B sibling scores an EX of 63.33 and an R-VES of 60.02 on the BIRD leaderboard.

Quick start

To use this model, start your prompt with the database schema as CREATE TABLE statements, followed by your natural-language question on a line beginning with --. Make sure the prompt ends with SELECT so the model completes the query for you.

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel

model_name = "unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit"
adapter_name = "onekq-ai/OneSQL-v0.1-Qwen-7B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.padding_side = "left"
# device_map="auto" already places the model; a trailing .to("cuda") is redundant
# and can conflict with the automatic placement.
base_model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_name)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer, return_full_text=False)

prompt = """
CREATE TABLE students (
    id INTEGER PRIMARY KEY,
    name TEXT,
    age INTEGER,
    grade TEXT
);

-- Find the three youngest students
SELECT """

system_prompt = "You are a SQL expert. Return code only."
chat_prompt = f"<|im_start|>system\n{system_prompt}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n"
result = generator(chat_prompt)[0]
print(result["generated_text"])
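Because the prompt ends with SELECT, the model returns only the remainder of the query; prepend "SELECT " before executing it. The sketch below shows this against an in-memory SQLite copy of the example schema. The string assigned to generated is a hypothetical model output for the prompt above, not a guaranteed completion.

```python
import sqlite3

# Hypothetical completion for "-- Find the three youngest students";
# in practice use result["generated_text"] from the pipeline.
generated = "id, name FROM students ORDER BY age ASC LIMIT 3"
query = "SELECT " + generated  # restore the SELECT the prompt already contained

# Recreate the example schema with some sample rows.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, age INTEGER, grade TEXT)"
)
conn.executemany(
    "INSERT INTO students VALUES (?, ?, ?, ?)",
    [(1, "Ada", 19, "A"), (2, "Ben", 17, "B"), (3, "Cy", 21, "A"), (4, "Dee", 16, "C")],
)

rows = conn.execute(query).fetchall()
print(rows)  # the three youngest students, youngest first
```

Running the generated SQL in a sandboxed or read-only connection like this is also a cheap sanity check before pointing the model at a real database.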