---
license: llama2
---
Resharded version of https://huggingface.co/NumbersStation/nsql-llama-2-7B for low-RAM environments (e.g. Colab, Kaggle), in safetensors format.
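
Since this reshard targets low-RAM environments, a minimal loading sketch follows (our own illustration, not part of the upstream card). It assumes `accelerate` is installed so `device_map="auto"` can be used, and the repo id shown is the upstream model's; substitute this repository's id to load the resharded safetensors.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Upstream repo id shown for illustration; replace with this resharded repository's id.
repo_id = "NumbersStation/nsql-llama-2-7B"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# low_cpu_mem_usage streams shards instead of materializing a full copy in host RAM;
# device_map="auto" (requires `accelerate`) places weights on the available GPU/CPU.
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    device_map="auto",
)
```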

---

# NSQL-Llama-2-7B

## Model Description

NSQL is a family of autoregressive open-source large foundation models (FMs) designed specifically for SQL generation tasks.

In this repository we introduce a new member of the NSQL family, NSQL-Llama-2-7B. It is based on Meta's original [Llama-2 7B model](https://huggingface.co/meta-llama/Llama-2-7b), further pre-trained on a dataset of general SQL queries, and then fine-tuned on a dataset of text-to-SQL pairs.

## Training Data

The general SQL queries are the SQL subset of [The Stack](https://huggingface.co/datasets/bigcode/the-stack), containing 1M training samples. The labeled text-to-SQL pairs come from more than 20 public sources across the web, drawn from standard datasets. We hold out the Spider and GeoQuery datasets for use in evaluation.

## Evaluation Data

We evaluate our models on two text-to-SQL benchmarks: Spider and GeoQuery.

## Training Procedure

NSQL was trained using cross-entropy loss to maximize the likelihood of sequential inputs. For fine-tuning on text-to-SQL pairs, we compute the loss only over the SQL portion of each pair. The model was trained on 80GB A100 GPUs, leveraging data and model parallelism. We pre-trained for 3 epochs and fine-tuned for 10 epochs.
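
The SQL-only loss can be illustrated with a short label-masking sketch (an illustration of the idea, not the released training code): prompt tokens get the label `-100`, which the Hugging Face causal-LM loss ignores, so the cross-entropy is computed over the SQL tokens only.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("NumbersStation/nsql-llama-2-7B")
model = AutoModelForCausalLM.from_pretrained("NumbersStation/nsql-llama-2-7B", torch_dtype=torch.bfloat16)

# Toy text-to-SQL pair for illustration only.
prompt = "CREATE TABLE t (\n    id number\n)\n-- how many rows are in t?\nSELECT"
sql = " COUNT(*) FROM t;"

prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
full_ids = tokenizer(prompt + sql, return_tensors="pt").input_ids

# Mask the prompt positions with -100 so only the SQL tokens contribute to the loss.
# (Boundary tokenization can shift by a token; real training code aligns offsets carefully.)
labels = full_ids.clone()
labels[:, : prompt_ids.shape[1]] = -100

loss = model(input_ids=full_ids, labels=labels).loss
print(loss)
```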

## Intended Use and Limitations

The model was designed for text-to-SQL generation tasks from a given table schema and a natural language prompt. It works best with the prompt format defined below and when generating `SELECT` queries.
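
For convenience, that prompt format can be wrapped in a small helper; the function below is our own illustration of the layout the examples in the next section follow, not an official API.

```python
def build_prompt(schema: str, question: str) -> str:
    """Assemble the prompt: CREATE TABLE statements, the SQLite instruction, the question, and a SELECT stub."""
    return (
        f"{schema.rstrip()}\n"
        "-- Using valid SQLite, answer the following questions for the tables provided above.\n"
        f"-- {question}\n"
        "SELECT"
    )
```

Passing the `CREATE TABLE` text and question from Example 2 below reproduces the structure of that example's prompt.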

## How to Use

Example 1:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("NumbersStation/nsql-llama-2-7B")
model = AutoModelForCausalLM.from_pretrained("NumbersStation/nsql-llama-2-7B", torch_dtype=torch.bfloat16)

# Prompt: table schemas, the SQLite instruction, the question, and a trailing SELECT for the model to complete.
text = """CREATE TABLE stadium (
    stadium_id number,
    location text,
    name text,
    capacity number,
    highest number,
    lowest number,
    average number
)
CREATE TABLE singer (
    singer_id number,
    name text,
    country text,
    song_name text,
    song_release_year text,
    age number,
    is_male others
)
CREATE TABLE concert (
    concert_id number,
    concert_name text,
    theme text,
    stadium_id text,
    year text
)
CREATE TABLE singer_in_concert (
    concert_id number,
    singer_id text
)
-- Using valid SQLite, answer the following questions for the tables provided above.
-- What is the maximum, the average, and the minimum capacity of stadiums ?
SELECT"""

input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=500)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```

Example 2:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("NumbersStation/nsql-llama-2-7B")
model = AutoModelForCausalLM.from_pretrained("NumbersStation/nsql-llama-2-7B", torch_dtype=torch.bfloat16)

text = """CREATE TABLE stadium (
    stadium_id number,
    location text,
    name text,
    capacity number
)
-- Using valid SQLite, answer the following questions for the tables provided above.
-- how many stadiums in total?
SELECT"""

input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=500)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```

Example 3:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("NumbersStation/nsql-llama-2-7B")
model = AutoModelForCausalLM.from_pretrained("NumbersStation/nsql-llama-2-7B", torch_dtype=torch.bfloat16)

text = """CREATE TABLE work_orders (
    ID NUMBER,
    CREATED_AT TEXT,
    COST FLOAT,
    INVOICE_AMOUNT FLOAT,
    IS_DUE BOOLEAN,
    IS_OPEN BOOLEAN,
    IS_OVERDUE BOOLEAN,
    COUNTRY_NAME TEXT
)
-- Using valid SQLite, answer the following questions for the tables provided above.
-- how many work orders are open?
SELECT"""

input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=500)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```

For more information (e.g., how to run against your local database), please find examples in [this repository](https://github.com/NumbersStationAI/NSQL).
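
As a rough sketch of that local-database workflow (our own illustration; the linked repository contains the supported tooling), one can read the `CREATE TABLE` statements out of a SQLite file, build the prompt shown above, and execute the generated query:

```python
import sqlite3

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("NumbersStation/nsql-llama-2-7B")
model = AutoModelForCausalLM.from_pretrained("NumbersStation/nsql-llama-2-7B", torch_dtype=torch.bfloat16)

# "my_database.db" and the question are placeholders; point them at your own data.
conn = sqlite3.connect("my_database.db")
schemas = [
    row[0]
    for row in conn.execute("SELECT sql FROM sqlite_master WHERE type = 'table' AND sql IS NOT NULL")
]
question = "how many rows are in the largest table?"

text = (
    "\n".join(schemas)
    + "\n-- Using valid SQLite, answer the following questions for the tables provided above."
    + f"\n-- {question}"
    + "\nSELECT"
)

input_ids = tokenizer(text, return_tensors="pt").input_ids
generated_ids = model.generate(input_ids, max_length=500)

# Decode only the newly generated tokens and keep the first statement;
# the model may keep generating past the query until max_length is reached.
new_tokens = generated_ids[0][input_ids.shape[1]:]
query = ("SELECT" + tokenizer.decode(new_tokens, skip_special_tokens=True)).split(";")[0]

print(query)
print(conn.execute(query).fetchall())
```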