Update README.md
README.md CHANGED
```diff
@@ -22,6 +22,8 @@ Nxcode-CQ-7B-orpo is an ORPO fine-tune of Qwen/CodeQwen1.5-7B-Chat on 100k samples
 | --- | --- |
 | HumanEval | 86.0 |
 | HumanEval+ | 81.1 |
+| MBPP(v0.2.0) | 82.5 |
+| MBPP+(v0.2.0) | 70.4 |
 
 We use a simple template to generate the solution for evalplus:
 
@@ -44,6 +46,7 @@ We use a simple template to generate the solution for evalplus:
 | GPT-3.5-Turbo (Nov 2023) | 76.8 | 70.7 |
 | Llama3-70B-instruct | 76.2 | 70.7 |
 
+
 ## Quickstart
 
 Below is a code snippet that uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content. Use transformers version 4.39 if you receive an error when loading the tokenizer.
```
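The Quickstart flow described in the diff's context lines can be sketched roughly as follows. This is a minimal illustration, not the README's own snippet: the Hub repo id `NTQAI/Nxcode-CQ-7B-orpo` and the sample prompt are assumptions made for this sketch.

```python
def build_messages(prompt: str) -> list:
    """Wrap a user prompt in the message format expected by apply_chat_template."""
    return [{"role": "user", "content": prompt}]


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Load the model and tokenizer, then generate a completion for `prompt`.

    The repo id below is an assumption for illustration; transformers is
    imported lazily so the helper above stays usable without the model
    weights downloaded.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "NTQAI/Nxcode-CQ-7B-orpo"  # assumed Hugging Face repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    # apply_chat_template formats the conversation with the model's chat
    # template and (with add_generation_prompt=True) appends the assistant turn.
    input_ids = tokenizer.apply_chat_template(
        build_messages(prompt),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)

    output_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated continuation, skipping the prompt tokens.
    return tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True)


if __name__ == "__main__":
    print(generate("Write a Python function that reverses a string."))
```

If loading the tokenizer raises an error, pinning `transformers==4.39` as the diff advises is the suggested workaround.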