# Model Card for NL2SQL-StarCoder-15B
## Model Introduction

NL2SQL-StarCoder-15B is a natural-language-to-SQL (NL2SQL) model fine-tuned with QLoRA on top of the StarCoder 15B code LLM.
## Requirements
- python>=3.8
- pytorch>=2.0.0
- transformers==4.32.0
- CUDA 11.4
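A quick runtime check against these pins (a minimal sketch; `packaging` ships as a transformers dependency):

```python
import sys

import torch
import transformers
from packaging import version

assert sys.version_info >= (3, 8), "python>=3.8 required"
assert version.parse(torch.__version__.split("+")[0]) >= version.parse("2.0.0"), "pytorch>=2.0.0 required"
assert transformers.__version__ == "4.32.0", "transformers==4.32.0 required"
print("CUDA build:", torch.version.cuda, "| CUDA available:", torch.cuda.is_available())
```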
## Data Format

Training examples are spliced into a single string in the format below; the input prompt at inference time must be spliced the same way:
"""
<|user|>
/* Given the following database schema: */
CREATE TABLE "table_name" (
"col1" int,
...
...
)
/* Write a sql to answer the following question: {Question} */
<|assistant|>
```sql
{Output SQL}
```<|end|>
"""
However, based on our tests, we recommend using the prompt that sqlcoder was given:
"""
### Instructions:
Your task is to convert a question into a SQL query, given a Postgres database schema.
Adhere to these rules:
- **Deliberately go through the question and database schema word by word** to appropriately answer the question
- **Use Table Aliases** to prevent ambiguity. For example, `SELECT table1.col1, table2.col1 FROM table1 JOIN table2 ON table1.id = table2.id`.
- When creating a ratio, always cast the numerator as float
### Input:
Generate a SQL query that answers the question `{question}`.
This query will run on a database whose schema is represented in this string:
CREATE TABLE "table_name" (
"col1" int,
...
...
)
### Response:
Based on your instructions, here is the SQL query I have generated to answer the question `{question}`:
```sql
"""
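Filling the recommended prompt is plain string templating. A minimal sketch, where the `SQLCODER_TEMPLATE` name and example values are ours:

```python
# Hypothetical template holding the sqlcoder-style prompt above,
# with {question} and {schema} as placeholders.
SQLCODER_TEMPLATE = (
    "### Instructions:\n"
    "Your task is to convert a question into a SQL query, given a Postgres database schema.\n"
    "Adhere to these rules:\n"
    "- **Deliberately go through the question and database schema word by word** "
    "to appropriately answer the question\n"
    "- **Use Table Aliases** to prevent ambiguity. For example, "
    "`SELECT table1.col1, table2.col1 FROM table1 JOIN table2 ON table1.id = table2.id`.\n"
    "- When creating a ratio, always cast the numerator as float\n\n"
    "### Input:\n"
    "Generate a SQL query that answers the question `{question}`.\n"
    "This query will run on a database whose schema is represented in this string:\n"
    "{schema}\n\n"
    "### Response:\n"
    "Based on your instructions, here is the SQL query I have generated "
    "to answer the question `{question}`:\n"
    "```sql\n"
)

prompt = SQLCODER_TEMPLATE.format(
    question="Show countries where a singer above age 40 and a singer below 30 are from.",
    schema='CREATE TABLE "singer" ("Singer_ID" int, "Name" text, "Age" int)',
)
```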
## Quick Start
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "gabrielpondc/NL2SQL-StarCoder-15B"

# Tokenizer setup: left padding plus StarCoder's pad/eos tokens.
tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=True)
tokenizer.padding_side = "left"
tokenizer.pad_token = "<fim_pad>"
tokenizer.eos_token = "<|endoftext|>"
tokenizer.pad_token_id = tokenizer.convert_tokens_to_ids("<fim_pad>")
tokenizer.eos_token_id = tokenizer.convert_tokens_to_ids("<|endoftext|>")

model = AutoModelForCausalLM.from_pretrained(
    model_dir, device_map="auto", trust_remote_code=True, torch_dtype=torch.float16
)
model.eval()

text = '<|user|>\n/* Given the following database schema: */\nCREATE TABLE "singer" (\n"Singer_ID" int,\n"Name" text,\n"Country" text,\n"Song_Name" text,\n"Song_release_year" text,\n"Age" int,\n"Is_male" bool,\nPRIMARY KEY ("Singer_ID")\n)\n\n/* Write a sql to answer the following question: Show countries where a singer above age 40 and a singer below 30 are from. */<|end|>\n'
inputs = tokenizer(text, return_tensors="pt", padding=True, add_special_tokens=False).to("cuda")

outputs = model.generate(
    inputs=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_new_tokens=512,
    top_p=0.95,
    temperature=0.1,
    do_sample=False,  # greedy decoding; top_p/temperature only take effect with do_sample=True
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.pad_token_id,
)

# Decode only the newly generated tokens, skipping the prompt.
gen_text = tokenizer.batch_decode(outputs[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(gen_text)
```
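Since the generation follows the training format, the SQL usually arrives inside a ```sql fence. A minimal post-processing sketch (the `extract_sql` helper is our own, not part of the release):

```python
import re

def extract_sql(generated: str) -> str:
    """Return the SQL inside a ```sql ... ``` block, or the raw text if no fence is found."""
    match = re.search(r"```sql\s*(.*?)\s*```", generated, flags=re.DOTALL)
    return match.group(1) if match else generated.strip()

print(extract_sql(gen_text[0]))
```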