Improve model card: Add prominent GitHub link and sample usage
#1
by nielsr (HF Staff) - opened

README.md CHANGED
@@ -1,19 +1,18 @@
 ---
-pipeline_tag: text-generation
 library_name: transformers
 license: cc-by-nc-4.0
+pipeline_tag: text-generation
 tags:
 - text-to-sql
 - reinforcement-learning
 ---
 
-
 # SLM-SQL: An Exploration of Small Language Models for Text-to-SQL
 
 ### Important Links
 
-📖[Arxiv Paper](https://arxiv.org/abs/2507.22478) |
-🤗[HuggingFace](https://huggingface.co/collections/cycloneboy/slm-sql-688b02f99f958d7a417658dc) |
+📖[Arxiv Paper](https://arxiv.org/abs/2507.22478) | 💾[GitHub](https://github.com/CycloneBoy/slm_sql) |
+🤗[HuggingFace Collection](https://huggingface.co/collections/cycloneboy/slm-sql-688b02f99f958d7a417658dc) |
 🤖[ModelScope](https://modelscope.cn/collections/SLM-SQL-624bb6a60e9643) |
 
 ## News
@@ -55,6 +54,33 @@ Performance Comparison of different Text-to-SQL methods on BIRD dev and test dat
 
 <img src="https://raw.githubusercontent.com/CycloneBoy/slm_sql/main/data/image/slmsql_ablation_study.png" height="300" alt="slmsql_ablation_study">
 
+## Sample Usage
+
+You can use the model with the `transformers` library. Here's an example:
+
+```python
+from transformers import AutoTokenizer, AutoModelForCausalLM
+import torch
+
+# Load the tokenizer and model (e.g., SLM-SQL-1.5B)
+model_name = "cycloneboy/SLM-SQL-1.5B"  # Adjust this to the specific model you want to use
+tokenizer = AutoTokenizer.from_pretrained(model_name)
+model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
+
+# Define the input prompt (natural language question for SQL)
+prompt = "what are the names of all employees?"
+
+# Prepare the input for the model
+input_ids = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
+
+# Generate the SQL query
+output_ids = model.generate(input_ids, max_new_tokens=100, num_beams=1, do_sample=False)
+generated_sql = tokenizer.decode(output_ids[0], skip_special_tokens=True)
+
+print("Generated SQL Query:")
+print(generated_sql)
+```
+
 ## Model
 
 | **Model** | Base Model | Train Method | Modelscope | HuggingFace |
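One note on the added Sample Usage: it feeds the bare natural-language question to the model, but text-to-SQL models generally need the database schema in the prompt to produce a correct query. Below is a minimal, hypothetical sketch of a schema-aware prompt builder — the exact prompt format SLM-SQL was trained with is defined in the GitHub repo, so `build_prompt` and its layout are illustrative assumptions, not the model's documented interface:

```python
# Hypothetical helper (not from the SLM-SQL repo): prepend the database
# schema to the question so the model can ground column and table names.
def build_prompt(schema: str, question: str) -> str:
    return (
        "Given the following database schema:\n"
        f"{schema}\n\n"
        f"Question: {question}\n"
        "Write a single SQLite query that answers the question."
    )

# Example schema and question (sample data, matching the snippet's question)
schema = "CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL);"
prompt = build_prompt(schema, "what are the names of all employees?")
print(prompt)
```

The resulting `prompt` string would replace the bare question in the snippet's `tokenizer.encode(...)` call; everything after that point stays the same.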