---
license: mit
tags:
- code
pipeline_tag: text-generation
---

# Nxcode-CQ-7B-orpo


## Introduction

Nxcode-CQ-7B-orpo is an ORPO fine-tune of Qwen/CodeQwen1.5-7B-Chat on 100k samples from our own datasets.

* Strong code generation capabilities and competitive performance across a series of benchmarks
* Support for 92 coding languages
* Excellent performance in text-to-SQL, bug fixing, and related tasks

## [Evalplus](https://github.com/evalplus/evalplus)

| EvalPlus | pass@1 |
| --- | --- |
| HumanEval | 86.0 |
| HumanEval+ | 81.1 |
| MBPP(v0.2.0) | 82.5 |
| MBPP+(v0.2.0) | 70.4 |
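
The pass@1 scores above follow the standard unbiased pass@k estimator from the HumanEval evaluation protocol; with a single sample per problem it reduces to the fraction of problems solved. A minimal sketch (the function name and `comb`-based form are ours, not from this repository):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k),
    where n samples were drawn per problem and c of them passed."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k=1 this is simply c/n, the per-problem success rate.
print(pass_at_k(10, 3, 1))  # 0.3
```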

We use a simple template to generate the solution for evalplus:

```python
"Complete the following Python function:\n{prompt}"
```
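
Filling the template is plain string formatting: the benchmark problem (typically a function signature plus docstring) is substituted for `{prompt}`. A small illustration with a hypothetical problem string (the `template`/`problem` names are ours):

```python
# Hypothetical example of filling the EvalPlus template above.
template = "Complete the following Python function:\n{prompt}"
problem = "def add(a: int, b: int) -> int:\n    ..."

filled = template.format(prompt=problem)
print(filled)
```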

[Evalplus Leaderboard](https://evalplus.github.io/leaderboard.html)

| Models | HumanEval | HumanEval+ |
| ------ | ------ | ------ |
| GPT-4-Turbo (April 2024)|  90.2| 86.6|
| GPT-4 (May 2023)|  88.4| 81.17|
| GPT-4-Turbo (Nov 2023)|  85.4| 79.3| 
| CodeQwen1.5-7B-Chat|  83.5| 78.7| 
| claude-3-opus (Mar 2024)|  82.9| 76.8|
| DeepSeek-Coder-33B-instruct|  81.1| 75.0|
| WizardCoder-33B-V1.1|  79.9| 73.2|
| OpenCodeInterpreter-DS-33B|  79.3| 73.8|
| speechless-codellama-34B-v2.0|  77.4| 72|
| GPT-3.5-Turbo (Nov 2023)| 76.8| 70.7|
| Llama3-70B-instruct| 76.2| 70.7|


## Quickstart

The following code snippet shows how to load the tokenizer and model and generate content with `apply_chat_template`. Upgrade `transformers` if you receive an error when loading the tokenizer.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "NTQAI/Nxcode-CQ-7B-orpo",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("NTQAI/Nxcode-CQ-7B-orpo")

# Use single-quoted triple quotes so the docstring's """ does not end the literal.
prompt = '''Complete the following Python function:
from typing import List


def has_close_elements(numbers: List[float], threshold: float) -> bool:
    """ Check if in given list of numbers, are any two numbers closer to each other than
    given threshold.
    >>> has_close_elements([1.0, 2.0, 3.0], 0.5)
    False
    >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)
    True
    """
'''
messages = [
    {"role": "user", "content": prompt}
]

inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
# top_k/top_p are ignored when do_sample=False (greedy decoding), so they are omitted.
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, eos_token_id=tokenizer.eos_token_id)
res = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
print(res)

```
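
Chat-tuned code models often wrap the completed function in a markdown code fence, so a small post-processing step is useful before executing or evaluating the output. A sketch with a hypothetical helper (`extract_code` is our name, not part of this repository or `transformers`):

```python
import re

def extract_code(response: str) -> str:
    """Pull the first fenced code block out of a chat response;
    fall back to the raw text when no fence is present."""
    m = re.search(r"```(?:python)?\n(.*?)```", response, re.DOTALL)
    return m.group(1) if m else response

sample = "Here is the solution:\n```python\nprint('hi')\n```"
print(extract_code(sample))
```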

### Contact information
For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).