Mistral-7B-text-to-sql
This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.1 on the b-mc2/sql-create-context dataset: https://huggingface.co/datasets/b-mc2/sql-create-context
USE CASE
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer, pipeline

peft_model_id = "frankmorales2020/Mistral-7B-text-to-sql"

# Load the model with its PEFT adapter
model = AutoPeftModelForCausalLM.from_pretrained(
    peft_model_id,
    device_map="auto",
    torch_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)

# Load the model and tokenizer into a text-generation pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
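The pipeline can then be prompted with a table schema and a question. The sketch below is an assumption about the prompt format (the tokenizer's chat template with the schema inlined); the sample schema, question, and output are illustrative:

# Minimal usage sketch; the prompt format here is an assumption, not
# necessarily the exact template the adapter was trained with.
prompt = tokenizer.apply_chat_template(
    [{"role": "user",
      "content": "Given the schema: CREATE TABLE head (age INTEGER). "
                 "How many heads of the departments are older than 56?"}],
    tokenize=False,
    add_generation_prompt=True,
)
outputs = pipe(prompt, max_new_tokens=128, do_sample=False, return_full_text=False)
print(outputs[0]["generated_text"])
# Illustrative output: SELECT COUNT(*) FROM head WHERE age > 56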
DATASET
https://huggingface.co/datasets/b-mc2/sql-create-context
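Each record in the dataset pairs a natural-language question with the CREATE TABLE statement(s) it refers to and the target SQL query. A minimal sketch for inspecting it:

from datasets import load_dataset

dataset = load_dataset("b-mc2/sql-create-context", split="train")
print(dataset.column_names)    # question, context, and answer fields
print(dataset[0]["question"])  # natural-language question
print(dataset[0]["context"])   # CREATE TABLE ... schema
print(dataset[0]["answer"])    # target SQL query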
ARTICLE
https://medium.com/@frankmorales_91352/text-to-sql-generation-a-comprehensive-overview-6feb24f69f3c
Model description
A PEFT adapter for mistralai/Mistral-7B-Instruct-v0.1, fine-tuned to generate SQL queries from natural-language questions given the relevant CREATE TABLE statements as context.
Intended uses & limitations
The model is intended for text-to-SQL generation: producing a SQL query from a natural-language question and its table schema. Generated queries are not guaranteed to be valid or correct and should be checked before being executed against a database.
Training and evaluation data
The model was trained and evaluated on the b-mc2/sql-create-context dataset linked above, which pairs natural-language questions with CREATE TABLE statements and target SQL queries.
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto TrainingArguments follows the list):
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
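As a sketch, these settings map onto transformers.TrainingArguments roughly as follows; output_dir is a placeholder, and the Adam betas and epsilon listed above are the library defaults:

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Mistral-7B-text-to-sql",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 3 * 2 = 6
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=3,
)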
Training results
When evaluated on 1,000 samples from the evaluation dataset, the model achieved an accuracy of 76.30%. There is still room for improvement: performance could be enhanced with techniques such as few-shot prompting, retrieval-augmented generation (RAG), and self-healing (regenerating queries that fail to parse or execute).
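An accuracy figure like this can be computed with a simple exact-match loop. The sketch below reuses the pipe defined above; the 1,000-sample selection, the prompt format, and exact string match as the metric are all assumptions:

from datasets import load_dataset

eval_ds = load_dataset("b-mc2/sql-create-context", split="train").select(range(1000))

correct = 0
for sample in eval_ds:
    prompt = tokenizer.apply_chat_template(
        [{"role": "user",
          "content": f"Given the schema: {sample['context']} {sample['question']}"}],
        tokenize=False,
        add_generation_prompt=True,
    )
    prediction = pipe(prompt, max_new_tokens=128, do_sample=False,
                      return_full_text=False)[0]["generated_text"]
    # Exact string match is a strict proxy: semantically equivalent queries
    # that differ only in formatting count as wrong.
    if prediction.strip() == sample["answer"].strip():
        correct += 1

print(f"Exact-match accuracy: {correct / len(eval_ds):.2%}")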
Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2