rishdotblog committed • Commit 1872c78 • Parent: c4a442d

Update README.md

README.md CHANGED
@@ -1,69 +1,82 @@
- ---
- license:
- base_model: codellama/CodeLlama-7b-hf
- model-index:
- - name: sqlcoder_7b_fullft_ds7_linear
-   results: []
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- It achieves the following results on the evaluation set:
- - Loss: 0.3517
- - Sql Exact Match String: 0
- - Tokens Match Avg: 0.9014
- - First Index Mismatch Avg: 2.2356
- - Mean Mismatch I Diff Avg: 12.5313
- - Count Mismatch I Diff Avg: 6.2756
-
- ## Model description
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 5e-05
- - train_batch_size: 4
- - eval_batch_size: 16
- - seed: 42
- - gradient_accumulation_steps: 4
- - total_train_batch_size: 16
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - training_steps: 600
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss | Sql Exact Match String | Tokens Match Avg | First Index Mismatch Avg | Mean Mismatch I Diff Avg | Count Mismatch I Diff Avg |
- |:-------------:|:-----:|:----:|:---------------:|:----------------------:|:----------------:|:------------------------:|:------------------------:|:-------------------------:|
- | 0.14 | 0.1 | 100 | 0.3510 | 0 | 0.8940 | 2.0844 | 11.4371 | 6.88 |
- | 0.1083 | 0.2 | 200 | 0.3677 | 0 | 0.8930 | 2.1733 | 11.3445 | 6.6044 |
- | 0.0912 | 0.3 | 300 | 0.3710 | 0 | 0.8953 | 2.2444 | 12.0020 | 6.44 |
- | 0.0699 | 0.4 | 400 | 0.3598 | 0 | 0.8996 | 2.1778 | 12.3582 | 6.3289 |
- | 0.0619 | 0.5 | 500 | 0.3516 | 0 | 0.9010 | 2.2489 | 12.6065 | 6.2756 |
- | 0.0766 | 0.6 | 600 | 0.3517 | 0 | 0.9014 | 2.2356 | 12.5313 | 6.2756 |
---
license: cc-by-sa-4.0
library_name: transformers
pipeline_tag: text-generation
---

# Model Card for SQLCoder-7B-2

A capable large language model for natural language to SQL generation.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/603bbad3fd770a9997b57cb6/ixEoJO8QUS9j5G58WPHcw.png)
## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** [Defog, Inc](https://defog.ai)
- **Model type:** Text to SQL
- **License:** CC-BY-SA-4.0
- **Finetuned from model:** CodeLlama-7B
### Model Sources

- **HuggingFace:** https://huggingface.co/defog/sqlcoder-70b-alpha
- **GitHub:** https://github.com/defog-ai/sqlcoder
- **Demo:** https://defog.ai/sqlcoder-demo/
## Uses

This model is intended to be used by non-technical users to understand data inside their SQL databases. It is meant as an analytics tool, not as a database admin tool.

This model has not been trained to reject malicious requests from users with write access to databases, and should only be used by users with read-only access.
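Because the card recommends read-only access, a deployment can also refuse to execute anything the model emits that is not a single plain `SELECT` statement. A minimal sketch (the helper below is hypothetical and not part of Defog's tooling; database-level permissions remain the real safeguard):

```python
import re

def is_read_only(sql: str) -> bool:
    """Conservatively accept only a single SELECT/WITH statement.

    Hypothetical guard for illustration: string inspection alone is not a
    substitute for granting the database user read-only permissions.
    """
    # Strip line and block comments before inspecting the statement.
    sql = re.sub(r"--.*?$", "", sql, flags=re.MULTILINE)
    sql = re.sub(r"/\*.*?\*/", "", sql, flags=re.DOTALL)
    # Reject empty input and multi-statement payloads.
    statements = [s.strip() for s in sql.split(";") if s.strip()]
    if len(statements) != 1:
        return False
    return statements[0].upper().startswith(("SELECT", "WITH"))
```

A query like `SELECT * FROM users;` passes, while `DROP TABLE users;` or a piggybacked `SELECT 1; DELETE FROM users;` is rejected.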
## How to Get Started with the Model

Use the code [here](https://github.com/defog-ai/sqlcoder/blob/main/inference.py) to get started with the model.
## Prompt

Please use the following prompt for optimal results:

```
### Task
Generate a SQL query to answer [QUESTION]{user_question}[/QUESTION]

### Database Schema
The query will run on a database with the following schema:
{table_metadata_string_DDL_statements}

### Answer
Given the database schema, here is the SQL query that [QUESTION]{user_question}[/QUESTION]
[SQL]
```
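The template can be filled programmatically before tokenization; a minimal sketch (`build_prompt` and the example DDL are illustrative, not part of the sqlcoder repo):

```python
def build_prompt(user_question: str, table_metadata_string: str) -> str:
    # Fill the prompt template from this card with a question and schema DDL.
    return f"""### Task
Generate a SQL query to answer [QUESTION]{user_question}[/QUESTION]

### Database Schema
The query will run on a database with the following schema:
{table_metadata_string}

### Answer
Given the database schema, here is the SQL query that [QUESTION]{user_question}[/QUESTION]
[SQL]"""


# Hypothetical schema for illustration.
ddl = "CREATE TABLE users (id INT, name TEXT, created_at DATE);"
prompt = build_prompt("How many users were created in 2023?", ddl)
```

The resulting string is what gets tokenized and passed to the model's generate call in the inference script linked above.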
## Evaluation

This model was evaluated on [SQL-Eval](https://github.com/defog-ai/sql-eval), a PostgreSQL-based evaluation framework developed by Defog for testing and alignment of model capabilities.

You can read more about the methodology behind SQL-Eval [here](https://defog.ai/blog/open-sourcing-sqleval/).
### Results

We classified each generated question into one of 6 categories. The table displays the percentage of questions answered correctly by each model, broken down by category.

|                | date | group_by | order_by | ratio | join | where |
| -------------- | ---- | -------- | -------- | ----- | ---- | ----- |
| sqlcoder-70b   | 96   | 91.4     | 97.1     | 85.7  | 97.1 | 91.4  |
| sqlcoder-7b-2  | 96   | 85.7     | 97.1     | 85.7  | 82.8 | 77.1  |
| sqlcoder-34b   | 80   | 94.3     | 85.7     | 77.1  | 85.7 | 80    |
| gpt-4          | 72   | 94.3     | 97.1     | 80    | 91.4 | 80    |
| gpt-4-turbo    | 76   | 91.4     | 91.4     | 62.8  | 88.6 | 77.1  |
| natural-sql-7b | 56   | 88.6     | 85.7     | 60    | 88.6 | 80    |
| sqlcoder-7b    | 64   | 82.9     | 74.3     | 54.3  | 74.3 | 74.3  |
| gpt-3.5        | 72   | 77.1     | 82.8     | 34.3  | 65.7 | 71.4  |
| claude-2       | 52   | 71.4     | 74.3     | 57.1  | 65.7 | 62.9  |
## Model Card Contact

Contact us on X at [@defogdata](https://twitter.com/defogdata), or by email at [[email protected]](mailto:[email protected])