bugdaryan committed
Commit f1eba2b
1 Parent(s): cd1e73a

Create README.md

Files changed (1):
  1. README.md +90 -0

---
license: apache-2.0
datasets:
- bugdaryan/sql-create-context-instruction
language:
- en
library_name: peft
tags:
- text2sql
---
# Model Card for MistralSQL-7B

## Model Information
- **Model Name:** MistralSQL-7B
- **Base Model Name:** mistralai/Mistral-7B-Instruct-v0.1
- **Base Model URL:** [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1)
- **Dataset Name:** bugdaryan/sql-create-context-instruction
- **Dataset URL:** [SQL Create Context Dataset](https://huggingface.co/datasets/bugdaryan/sql-create-context-instruction)
- **Dataset Description:** This dataset builds on SQL Create Context, which is itself sourced from WikiSQL and Spider. It provides 78,577 examples pairing natural language questions with SQL `CREATE TABLE` statements and the SQL queries that answer those questions using the `CREATE` statements as context.

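For a quick look at the training data, the dataset can be loaded directly from the Hub with the `datasets` library. This is a minimal sketch, not part of the training code; it assumes the dataset's default `train` split.

```python
# Minimal sketch: load and inspect the fine-tuning dataset
# (assumes the default "train" split on the Hub)
from datasets import load_dataset

dataset = load_dataset("bugdaryan/sql-create-context-instruction", split="train")
print(dataset)      # number of rows and column names
print(dataset[0])   # first example: question, CREATE TABLE context, and target SQL (per the dataset description)
```
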
## Model Parameters
- **LoRA Attention Dimension:** 64
- **LoRA Alpha Parameter:** 16
- **LoRA Dropout Probability:** 0.1
- **Bitsandbytes Parameters:**
  - Activate 4-bit precision base model loading: True
  - Compute dtype for 4-bit base models: float16
  - Quantization type (fp4 or nf4): nf4
  - Activate nested quantization for 4-bit base models: False
- **TrainingArguments Parameters:**
  - Output directory: "./results"
  - Number of training epochs: 1
  - Enable fp16/bf16 training: False/True
  - Batch size per GPU for training: 80
  - Batch size per GPU for evaluation: 4
  - Gradient accumulation steps: 1
  - Enable gradient checkpointing: True
  - Maximum gradient norm (gradient clipping): 0.3
  - Initial learning rate (AdamW optimizer): 2e-4
  - Weight decay: 0.001
  - Optimizer: paged_adamw_32bit
  - Learning rate schedule: cosine
  - Number of training steps (overrides num_train_epochs): -1
  - Ratio of steps for a linear warmup: 0.03
  - Group sequences into batches with the same length: True
  - Save checkpoint every X update steps: 0
  - Log every X update steps: 10
- **SFT Parameters:**
  - Maximum sequence length: 500
  - Packing: False

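To see how these values fit together, the sketch below maps them onto the usual `peft`/`bitsandbytes`/`transformers` configuration objects. It is a hypothetical reconstruction, not the exact training script; anything not listed above (for example LoRA target modules or the SFT dataset field names) is omitted or noted as an assumption.

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# 4-bit quantization settings from the "Bitsandbytes Parameters" list above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
)

# LoRA settings from the card (target modules are not specified there)
peft_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
)

# Training arguments mirroring the "TrainingArguments Parameters" list above
training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=1,
    fp16=False,
    bf16=True,
    per_device_train_batch_size=80,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=1,
    gradient_checkpointing=True,
    max_grad_norm=0.3,
    learning_rate=2e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    max_steps=-1,
    warmup_ratio=0.03,
    group_by_length=True,
    save_steps=0,      # as listed in the card; some transformers versions require a positive value
    logging_steps=10,
)

# The SFT parameters (max_seq_length=500, packing=False) would be passed to
# trl's SFTTrainer together with the configs above.
```
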
## Inference Parameters
- **Temperature:** 0.7

## Hardware and Software
- **Training Hardware:** 2x NVIDIA RTX A6000 48 GB GPUs

## License
- Apache-2.0

## Instruction Format
To leverage instruction fine-tuning, prompts should be wrapped in `[INST]` and `[/INST]` tokens. The very first instruction should begin with a beginning-of-sentence (BOS) token id; subsequent instructions should not. The assistant's generation is terminated by the end-of-sentence (EOS) token id.

For example:
```python
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    pipeline
)
import torch

model_name = 'bugdaryan/MistralSQL-7b'

# Load the fine-tuned model and its tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained(model_name, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained(model_name)

pipe = pipeline('text-generation', model=model, tokenizer=tokenizer)

# Database schema passed to the model as context
table = "CREATE TABLE sales ( sale_id number PRIMARY KEY, product_id number, customer_id number, salesperson_id number, sale_date DATE, quantity number, FOREIGN KEY (product_id) REFERENCES products(product_id), FOREIGN KEY (customer_id) REFERENCES customers(customer_id), FOREIGN KEY (salesperson_id) REFERENCES salespeople(salesperson_id)); CREATE TABLE product_suppliers ( supplier_id number PRIMARY KEY, product_id number, supply_price number, FOREIGN KEY (product_id) REFERENCES products(product_id)); CREATE TABLE customers ( customer_id number PRIMARY KEY, name text, address text ); CREATE TABLE salespeople ( salesperson_id number PRIMARY KEY, name text, region text );"

question = 'Find the salesperson who made the most sales.'

# Wrap the schema and question in the [INST] ... [/INST] instruction format
prompt = f"[INST] Write SQLite query to answer the following question given the database schema. Please wrap your code answer using ```: Schema: {table} Question: {question} [/INST] Here is the SQLite query to answer to the question: {question}: ``` "

ans = pipe(prompt, max_new_tokens=100)
# The prompt already opens a ``` fence, so the generated SQL sits between the
# second and third ``` markers of the full generated text
print(ans[0]['generated_text'].split('```')[2])
```
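
To apply the temperature listed under Inference Parameters, enable sampling in the same pipeline call. This is a small illustrative variant of the last step above (reusing the `pipe` and `prompt` objects from that example), not a prescribed decoding setup:

```python
# Sampling variant of the call above, using the card's temperature of 0.7
ans = pipe(prompt, do_sample=True, temperature=0.7, max_new_tokens=100)
print(ans[0]['generated_text'].split('```')[2])
```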