Disclaimer

Your email will be used for an anonymous survey. It will NOT be shared with anyone.

Introduction

This model is the full-weight version of the adapter model OneSQL-v0.1-Qwen-3B.
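For reference, a full-weight checkpoint like this one can typically be produced by merging a LoRA adapter into its base model with peft's merge_and_unload. The snippet below is a minimal sketch, assuming the adapter is onekq-ai/OneSQL-v0.1-Qwen-3B and the base is Qwen/Qwen2.5-3B; the output directory name is hypothetical and this is not necessarily the exact script used to build this model.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and the LoRA adapter, then merge the adapter
# weights into the base to obtain a standalone full-weight model.
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-3B", device_map="auto")
adapter = PeftModel.from_pretrained(base, "onekq-ai/OneSQL-v0.1-Qwen-3B")
merged = adapter.merge_and_unload()

# Save the merged weights and tokenizer as a self-contained checkpoint
# ("OneSQL-merged" is an illustrative path).
merged.save_pretrained("OneSQL-merged")
AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B").save_pretrained("OneSQL-merged")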

Quick start

To use this model, craft your prompt so that it starts with your database schema as CREATE TABLE statements, followed by your natural-language query preceded by --. Make sure the prompt ends with SELECT so that the model can finish the query for you.

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name = "onekq-ai/OneSQL-v0.2-Qwen-3B"

# Load the full-weight model and its tokenizer.
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.padding_side = "left"  # decoder-only models need left padding for batched generation

generator = pipeline("text-generation", model=model, tokenizer=tokenizer, return_full_text=False)

prompt = """
CREATE TABLE students (
    id INTEGER PRIMARY KEY,
    name TEXT,
    age INTEGER,
    grade TEXT
);

-- Find the three youngest students
SELECT """

# Wrap the prompt in the Qwen chat template and generate the completion.
result = generator(f"<|im_start|>system\nYou are a SQL expert. Return code only.<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n")[0]
print(result["generated_text"])

The model response is the rest of the SQL query, without the leading SELECT:

* FROM students ORDER BY age ASC LIMIT 3
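To turn the response into a runnable statement, prepend SELECT and execute it against your database. The sketch below is a minimal example using Python's built-in sqlite3 module with an in-memory database, the students table from the prompt, and a few made-up rows for illustration; adapt it to your own connection and data.

import sqlite3

# Reassemble the full statement from the model's completion.
completion = "* FROM students ORDER BY age ASC LIMIT 3"
query = "SELECT " + completion.strip()

# Run it against an in-memory SQLite database with the example schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, age INTEGER, grade TEXT)")
conn.executemany("INSERT INTO students (name, age, grade) VALUES (?, ?, ?)",
                 [("Ada", 16, "A"), ("Ben", 15, "B"), ("Cal", 17, "A"), ("Dee", 14, "C")])
print(conn.execute(query).fetchall())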
Model size: 3.09B parameters (Safetensors, BF16)

Model tree for onekq-ai/OneSQL-v0.2-Qwen-3B

Base model: Qwen/Qwen2.5-3B (this model is a fine-tune of the base)