Update README.md
README.md
```diff
@@ -15,10 +15,9 @@ tags:
 
 ## Introduction
 
-Nxcode-CQ-7B-orpo is
+Nxcode-CQ-7B-orpo is an ORPO fine-tune of Qwen/CodeQwen1.5-7B-Chat on 100k samples from our datasets.
 
 * Strong code generation capabilities and competitive performance across a series of benchmarks;
-* Supporting long context understanding and generation with the context length of 64K tokens;
 * Supporting 92 coding languages
 * Excellent performance in text-to-SQL, bug fix, etc.
 
@@ -31,7 +30,7 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 device = "cuda" # the device to load the model onto
 
 model = AutoModelForCausalLM.from_pretrained(
-    "
+    "NTQAI/Nxcode-CQ-7B-orpo",
     torch_dtype="auto",
     device_map="auto"
 )
```
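The `from_pretrained` call in the hunk above only loads the weights; generating a completion also requires the tokenizer and its chat template, which is normally applied with `tokenizer.apply_chat_template(messages, add_generation_prompt=True)`. Qwen-family chat models use a ChatML-style layout; the sketch below reproduces that layout in plain Python so the prompt shape is visible. The exact template is an assumption based on the Qwen family, not something stated in this README — in real use, always rely on `apply_chat_template` rather than hand-built strings.

```python
# Hypothetical sketch (assumption, not from this README) of the ChatML-style
# prompt that tokenizer.apply_chat_template(..., add_generation_prompt=True)
# produces for Qwen-family chat models such as CodeQwen1.5-7B-Chat.

def build_chatml_prompt(messages):
    """Render a list of {"role", "content"} dicts as a ChatML prompt string."""
    parts = []
    for m in messages:
        # Each turn is wrapped in <|im_start|>role ... <|im_end|> markers.
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    # A trailing assistant header tells the model to begin its reply here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]
print(build_chatml_prompt(messages))
```

In the real quickstart, the string returned by `apply_chat_template` is tokenized, moved to `device`, and passed to `model.generate`; the sketch is only meant to show what the template puts on the wire.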