---
license: other
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/CodeQwen1.5-7B-Chat/blob/main/LICENSE
language:
  - en
pipeline_tag: text-generation
tags:
  - chat
---

# Nxcode-CQ-7B-orpo

## Introduction

Nxcode-CQ-7B-orpo is an ORPO (Odds Ratio Preference Optimization) fine-tune of Qwen/CodeQwen1.5-7B-Chat on 100k samples from our datasets (a minimal illustration of this style of fine-tuning follows the feature list below). Its key strengths:

- Strong code generation capabilities and competitive performance across a series of benchmarks;
- Support for 92 coding languages;
- Excellent performance in text-to-SQL, bug fixing, etc.
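
For context, the sketch below shows what ORPO preference fine-tuning looks like with the TRL library's `ORPOTrainer`. It is a minimal illustration, not our actual training code: the dataset file, hyperparameters, and output directory are placeholders.

```python
# Minimal ORPO fine-tuning sketch with TRL (illustrative only; not the
# actual training setup behind Nxcode-CQ-7B-orpo).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base = "Qwen/CodeQwen1.5-7B-Chat"
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(base)

# ORPO trains on preference pairs: each row needs "prompt", "chosen",
# and "rejected" fields. The file name here is a placeholder.
dataset = load_dataset("json", data_files="preference_pairs.jsonl", split="train")

config = ORPOConfig(
    output_dir="nxcode-orpo",    # placeholder
    beta=0.1,                    # weight of the odds-ratio preference term
    per_device_train_batch_size=2,
    num_train_epochs=1,
)
trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,         # `processing_class` in newer TRL versions
)
trainer.train()
```

Unlike DPO, ORPO needs no separate reference model: it adds an odds-ratio penalty on rejected completions directly to the supervised fine-tuning loss.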

## EvalPlus

| EvalPlus   | pass@1 |
|------------|--------|
| HumanEval  | 86.0   |
| HumanEval+ | 81.1   |
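
Here pass@1 is the fraction of problems solved with a single generated sample per problem. For reference, the general unbiased pass@k estimator from the Codex paper (Chen et al., 2021) can be computed as follows; this is the standard formula, not anything specific to this model:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples per problem, c of them correct."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct sample
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))
```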

We use this simple template to generate the solutions for EvalPlus:

"Complete the following Python function:\n{prompt}"

## EvalPlus Leaderboard

| Models                        | HumanEval | HumanEval+ |
|-------------------------------|-----------|------------|
| GPT-4-Turbo (April 2024)      | 90.2      | 86.6       |
| GPT-4 (May 2023)              | 88.4      | 81.17      |
| GPT-4-Turbo (Nov 2023)        | 85.4      | 79.3       |
| CodeQwen1.5-7B-Chat           | 83.5      | 78.7       |
| claude-3-opus (Mar 2024)      | 82.9      | 76.8       |
| DeepSeek-Coder-33B-instruct   | 81.1      | 75.0       |
| WizardCoder-33B-V1.1          | 79.9      | 73.2       |
| OpenCodeInterpreter-DS-33B    | 79.3      | 73.8       |
| speechless-codellama-34B-v2.0 | 77.4      | 72.0       |
| GPT-3.5-Turbo (Nov 2023)      | 76.8      | 70.7       |
| Llama3-70B-instruct           | 76.2      | 70.7       |

## Quickstart

Here is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model, and how to generate a response.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "NTQAI/Nxcode-CQ-7B-orpo",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("NTQAI/Nxcode-CQ-7B-orpo")

prompt = "Write a quicksort algorithm in python"
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
# Render the chat messages into the model's expected prompt format.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512
)
# Strip the prompt tokens so only the newly generated tokens remain.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
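
`response` now contains only the newly generated text, which you can print directly. Sampling behaviour can be adjusted via the standard `generate` arguments; the values below are illustrative, not recommended settings:

```python
print(response)

# Optional: sample instead of greedy decoding (illustrative values).
generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
```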

## Contact information

For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).