cycloneboy and nielsr (HF Staff) committed
Commit 0afd86f · verified · 1 Parent(s): c054647

Add comprehensive model card for CSC-SQL (#1)

- Add comprehensive model card for CSC-SQL (9a88264820c2fa7976b9a18e731eb22e51231dfe)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1): README.md (+123, -0)

README.md (ADDED):

---
license: cc-by-nc-4.0
library_name: transformers
pipeline_tag: text-generation
tags:
- text-to-sql
- qwen2
- reinforcement-learning
---

# CSC-SQL: Corrective Self-Consistency in Text-to-SQL via Reinforcement Learning

This repository contains models and related information for the paper [CSC-SQL: Corrective Self-Consistency in Text-to-SQL via Reinforcement Learning](https://huggingface.co/papers/2505.13271).

## Abstract
Large language models (LLMs) have demonstrated strong capabilities in translating natural language questions about relational databases into SQL queries. In particular, test-time scaling techniques such as Self-Consistency and Self-Correction can enhance SQL generation accuracy by increasing computational effort during inference. However, these methods have notable limitations: Self-Consistency may select suboptimal outputs despite majority votes, while Self-Correction typically addresses only syntactic errors. To leverage the strengths of both approaches, we propose CSC-SQL, a novel method that integrates Self-Consistency and Self-Correction. CSC-SQL selects the two most frequently occurring outputs from parallel sampling and feeds them into a merge revision model for correction. Additionally, we employ the Group Relative Policy Optimization (GRPO) algorithm to fine-tune both the SQL generation and revision models via reinforcement learning, significantly enhancing output quality. Experimental results confirm the effectiveness and generalizability of CSC-SQL. On the BIRD private test set, our 7B model achieves 71.72% execution accuracy, while the 32B model achieves 73.67%.

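To make the procedure above concrete, here is a minimal sketch of the CSC-SQL inference loop as the abstract describes it. The helpers `generate_sql`, `execute_sql`, and `revise_sql` are hypothetical placeholders, not part of the released code; see the GitHub repository for the actual implementation.

```python
# Minimal sketch of the CSC-SQL inference loop (hypothetical helpers):
# 1) sample N candidate queries in parallel, 2) vote by grouping candidates
# on their execution results, 3) feed the two most frequent outputs to the
# merge revision model for correction.
from collections import Counter

def csc_sql(question: str, schema: str, n_samples: int = 8) -> str:
    # generate_sql / execute_sql / revise_sql are assumed helpers
    candidates = [generate_sql(question, schema) for _ in range(n_samples)]

    votes = Counter()
    representative = {}  # one SQL string per distinct execution result
    for sql in candidates:
        result = execute_sql(sql)
        votes[result] += 1
        representative.setdefault(result, sql)

    # Two most frequently occurring outputs from the parallel samples
    top_two = [representative[r] for r, _ in votes.most_common(2)]
    if len(top_two) < 2:
        return top_two[0]  # unanimous vote: nothing to merge

    # The merge revision model corrects/selects between the two candidates
    return revise_sql(question, schema, top_two[0], top_two[1])
```
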
## Code & Resources
- **GitHub Repository**: [https://github.com/CycloneBoy/csc_sql](https://github.com/CycloneBoy/csc_sql)
- **Hugging Face Collection**: [https://huggingface.co/collections/cycloneboy/csc-sql-6835c4a52da10c54bbe14f8e](https://huggingface.co/collections/cycloneboy/csc-sql-6835c4a52da10c54bbe14f8e)
- **ModelScope Collection**: [https://modelscope.cn/collections/CSC-SQL-8542177708b643](https://modelscope.cn/collections/CSC-SQL-8542177708b643)

## Framework Overview
![CSC-SQL Framework](https://huggingface.co/datasets/cycloneboy/csc-sql/resolve/main/data/image/csc_sql_framework.png)

## Main Results
Performance comparison of different Text-to-SQL methods on the BIRD dev and test sets:
![CSC-SQL Results](https://huggingface.co/datasets/cycloneboy/csc-sql/resolve/main/data/image/csc_sql_result_main.png)

## Models and Datasets on Hugging Face
The following models and datasets related to CSC-SQL are available on Hugging Face:

| **Model and Dataset** | HuggingFace Link |
|-----------------------|------------------|
| bird train and dev dataset | [🤗 HuggingFace](https://huggingface.co/datasets/cycloneboy/bird_train) |
| CscSQL-Merge-Qwen2.5-Coder-3B-Instruct | [🤗 HuggingFace](https://huggingface.co/cycloneboy/CscSQL-Merge-Qwen2.5-Coder-3B-Instruct) |
| CscSQL-Merge-Qwen2.5-Coder-7B-Instruct | [🤗 HuggingFace](https://huggingface.co/cycloneboy/CscSQL-Merge-Qwen2.5-Coder-7B-Instruct) |
| CscSQL-Grpo-Qwen2.5-Coder-3B-Instruct | [🤗 HuggingFace](https://huggingface.co/cycloneboy/CscSQL-Grpo-Qwen2.5-Coder-3B-Instruct) |
| CscSQL-Grpo-XiYanSQL-QwenCoder-3B-2502 | [🤗 HuggingFace](https://huggingface.co/cycloneboy/CscSQL-Grpo-XiYanSQL-QwenCoder-3B-2502) |
| CscSQL-Grpo-Qwen2.5-Coder-7B-Instruct | [🤗 HuggingFace](https://huggingface.co/cycloneboy/CscSQL-Grpo-Qwen2.5-Coder-7B-Instruct) |
| CscSQL-Grpo-XiYanSQL-QwenCoder-7B-2502 | [🤗 HuggingFace](https://huggingface.co/cycloneboy/CscSQL-Grpo-XiYanSQL-QwenCoder-7B-2502) |

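The `Grpo-*` checkpoints are the reinforcement-learning fine-tuned SQL generation models, while the `Merge-*` checkpoints are the merge revision models that correct the two top-voted candidates. A minimal sketch of loading one of each, assuming the standard `transformers` text-generation pipeline (the exact revision prompt template is defined in the GitHub repository, not here):

```python
# Illustrative only: load a generation checkpoint and a merge revision
# checkpoint from the table above. Both use the Qwen2.5-Coder chat format,
# so the Usage example below applies to either.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="cycloneboy/CscSQL-Grpo-Qwen2.5-Coder-7B-Instruct",
    device_map="auto",
)
reviser = pipeline(
    "text-generation",
    model="cycloneboy/CscSQL-Merge-Qwen2.5-Coder-7B-Instruct",
    device_map="auto",
)
```
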
## Usage
This model can be loaded using the `transformers` library. Below is an example of how to use the model for text-to-SQL generation. For more detailed instructions on training and evaluation, please refer to the [official GitHub repository](https://github.com/CycloneBoy/csc_sql).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the model and tokenizer
model_id = "cycloneboy/CscSQL-Grpo-Qwen2.5-Coder-7B-Instruct"  # Example 7B model from the project
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto",  # or torch.bfloat16 if supported
).eval()

# Prepare your input: natural language question and database schema
question = "What is the average age of students?"
schema_info = """
CREATE TABLE students (
    student_id INT PRIMARY KEY,
    name TEXT,
    age INT,
    major TEXT
);
"""  # Replace with the actual schema from your database

# Construct the prompt: the schema and question, followed by "SQL:"
formatted_prompt = f"""Given the following database schema:
{schema_info}
Generate a SQL query for the following natural language question:
{question}
SQL:"""

messages = [
    {"role": "user", "content": formatted_prompt}
]

# Apply the Qwen2 chat template and tokenize
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # appends the assistant turn header so the model starts its reply
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)

# Generate the SQL query (greedy decoding for reproducibility)
generated_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=False,
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.pad_token_id,
)

# Decode only the newly generated tokens, skipping the prompt
output_ids = generated_ids[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```

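The example above uses greedy decoding for a single deterministic query. CSC-SQL's self-consistency stage instead draws multiple candidates in parallel; here is a sketch reusing `model`, `tokenizer`, and `inputs` from above (the sampling hyperparameters are illustrative, not the paper's settings):

```python
# Draw several candidate queries for the self-consistency vote.
sampled_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,          # enable sampling for diverse candidates
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=8,  # number of parallel samples
    eos_token_id=tokenizer.eos_token_id,
    pad_token_id=tokenizer.pad_token_id,
)
prompt_len = inputs["input_ids"].shape[1]
candidates = [
    tokenizer.decode(ids[prompt_len:], skip_special_tokens=True)
    for ids in sampled_ids
]
```
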
## Citation
If you find our work helpful or inspiring, please feel free to cite it:
```bibtex
@misc{sheng2025cscsqlcorrectiveselfconsistencytexttosql,
      title={CSC-SQL: Corrective Self-Consistency in Text-to-SQL via Reinforcement Learning},
      author={Lei Sheng and Shuai-Shuai Xu},
      year={2025},
      eprint={2505.13271},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.13271},
}
```