onekq committed · Commit 5bdb43f · verified · 1 Parent(s): 1a5e25a

Update README.md

Files changed (1)
  1. README.md +33 -44
README.md CHANGED
@@ -1,59 +1,48 @@
  ---
  base_model: unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit
  library_name: transformers
- model_name: onesql-completions1
  tags:
  - generated_from_trainer
  - unsloth
  - trl
  - sft
- licence: license
  ---

- # Model Card for onesql-completions1

- This model is a fine-tuned version of [unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit](https://huggingface.co/unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit).
- It has been trained using [TRL](https://github.com/huggingface/trl).

- ## Quick start

- ```python
- from transformers import pipeline
-
- question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
- generator = pipeline("text-generation", model="onekq-ai/onesql-completions1", device="cuda")
- output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
- print(output["generated_text"])
- ```
-
- ## Training procedure
-
-
-
-
- This model was trained with SFT.

- ### Framework versions
-
- - TRL: 0.13.0
- - Transformers: 4.47.1
- - Pytorch: 2.5.1
- - Datasets: 3.2.0
- - Tokenizers: 0.21.0
-
- ## Citations
-
-
-
- Cite TRL as:
-
- ```bibtex
- @misc{vonwerra2022trl,
- title = {{TRL: Transformer Reinforcement Learning}},
- author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
- year = 2020,
- journal = {GitHub repository},
- publisher = {GitHub},
- howpublished = {\url{https://github.com/huggingface/trl}}
- }
  ```
  ---
  base_model: unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit
  library_name: transformers
+ model_name: onekq-ai/OneSQL-v0.1-Qwen-7B
  tags:
  - generated_from_trainer
  - unsloth
  - trl
  - sft
+ license: apache-2.0
  ---

+ # Introduction

+ This model specializes in the Text-to-SQL task. It is fine-tuned from a 4-bit quantized version of [Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct).
+ Its sibling [32B model](https://huggingface.co/onekq-ai/OneSQL-v0.1-Qwen-32B) has an EX score of **63.33** and an R-VES score of **60.02** on the [BIRD leaderboard](https://bird-bench.github.io/).

+ # Quick start

+ To use this model, craft your prompt to start with your database schema in the form of **CREATE TABLE**, followed by your natural language query preceded by **--**.
+ Make sure your prompt ends with **SELECT** in order for the model to finish the query for you.

+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
+ from peft import PeftModel
+
+ # Load the 4-bit quantized base model and attach the OneSQL PEFT adapter.
+ model_name = "unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit"
+ adapter_name = "onekq-ai/OneSQL-v0.1-Qwen-7B"
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ tokenizer.padding_side = "left"
+ base_model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
+ model = PeftModel.from_pretrained(base_model, adapter_name)  # device_map="auto" already places the weights, so no .to("cuda") is needed
+
+ generator = pipeline("text-generation", model=model, tokenizer=tokenizer, return_full_text=False)
+
+ # Prompt format: schema as CREATE TABLE, the question as a -- comment, ending with SELECT.
+ prompt = """
+ CREATE TABLE students (
+   id INTEGER PRIMARY KEY,
+   name TEXT,
+   age INTEGER,
+   grade TEXT
+ );
+
+ -- Find the three youngest students
+ SELECT """
+
+ result = generator(f"<|im_start|>system\nYou are a SQL expert. Return code only.<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant\n")[0]
+ print(result["generated_text"])
  ```
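+
+ Because the prompt ends with **SELECT**, the completion typically contains only the rest of the statement. Below is a minimal post-processing sketch (reusing the `result` variable from the example above) that prepends the keyword when the model omits it:
+
+ ```python
+ # Minimal sketch: restore the leading SELECT so the output is a runnable query.
+ completion = result["generated_text"].strip()
+ sql = completion if completion.upper().startswith("SELECT") else f"SELECT {completion}"
+ print(sql)
+ ```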