
This is a fine-tuned version of LLaMA-2 7B, trained for text-to-SQL generation on the Spider and sql-create-context datasets.

To initialize the model with 4-bit quantization:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "..."  # Hub id or local path of this model

# Example 4-bit quantization settings; adjust to your hardware
use_4bit = True
bnb_4bit_quant_type = "nf4"
compute_dtype = torch.float16
use_nested_quant = False
device_map = "auto"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=use_4bit,
    bnb_4bit_quant_type=bnb_4bit_quant_type,
    bnb_4bit_compute_dtype=compute_dtype,
    bnb_4bit_use_double_quant=use_nested_quant,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map=device_map,
    trust_remote_code=True,
)
```
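
As an optional sanity check that the 4-bit weights actually loaded, you can print the model's memory footprint (a standard `transformers` helper; the ~4 GB figure for a 4-bit 7B model is an approximation):

```python
# Roughly ~4 GB for a 4-bit 7B model
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```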

To load and configure the tokenizer:

```python
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
# Use the EOS token for padding and pad on the right
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
```
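
The mapping step below expects a Hugging Face `datasets` object with `question`, `context`, and `answer` columns. A minimal loading sketch, assuming the `b-mc2/sql-create-context` Hub id (any dataset with those columns works):

```python
from datasets import load_dataset

# Assumed dataset id; provides question / context / answer columns
dataset = load_dataset("b-mc2/sql-create-context")
```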

To build the prompt for each example:

```python
dataset = dataset.map(
    lambda example: {
        "input": (
            "### Instruction: \nYou are a powerful text-to-SQL model. "
            "Your job is to answer questions about a database. You are given "
            "a question and context regarding one or more tables. \n\nYou must "
            "output the SQL query that answers the question. \n\n"
            "### Dialect:\n\nsqlite\n\n"
            "### question:\n\n" + example["question"] +
            "\n\n### Context:\n\n" + example["context"]
        ),
        "answer": example["answer"],
    }
)
```
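
To confirm the formatting, you can inspect one mapped example (the `"train"` split name is an assumption carried over from the loading sketch above):

```python
# Print one formatted prompt and its reference SQL answer
print(dataset["train"][0]["input"])
print(dataset["train"][0]["answer"])
```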

To generate text using the model:

```python
# e.g. the first formatted example from the mapped dataset
inputs = tokenizer(dataset["train"][0]["input"], return_tensors="pt").to(model.device)
output = model.generate(**inputs)
```
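
To turn the generated ids back into SQL text, a decoding step along these lines works (slicing off the prompt tokens is an assumed convenience, not spelled out in the original card):

```python
# Drop the prompt tokens from the output, then decode the generated SQL
generated_ids = output[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(generated_ids, skip_special_tokens=True))
```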