Quantization made by Richard Erkhov.

# Nxcode-CQ-7B-orpo - GGUF

- Model creator: https://huggingface.co/NTQAI/
- Original model: https://huggingface.co/NTQAI/Nxcode-CQ-7B-orpo/
| Name | Quant method | Size |
|---|---|---|
| Nxcode-CQ-7B-orpo.Q2_K.gguf | Q2_K | 2.84GB |
| Nxcode-CQ-7B-orpo.IQ3_XS.gguf | IQ3_XS | 3.13GB |
| Nxcode-CQ-7B-orpo.IQ3_S.gguf | IQ3_S | 3.27GB |
| Nxcode-CQ-7B-orpo.Q3_K_S.gguf | Q3_K_S | 3.26GB |
| Nxcode-CQ-7B-orpo.IQ3_M.gguf | IQ3_M | 3.36GB |
| Nxcode-CQ-7B-orpo.Q3_K.gguf | Q3_K | 3.55GB |
| Nxcode-CQ-7B-orpo.Q3_K_M.gguf | Q3_K_M | 3.55GB |
| Nxcode-CQ-7B-orpo.Q3_K_L.gguf | Q3_K_L | 3.71GB |
| Nxcode-CQ-7B-orpo.IQ4_XS.gguf | IQ4_XS | 3.79GB |
| Nxcode-CQ-7B-orpo.Q4_0.gguf | Q4_0 | 3.89GB |
| Nxcode-CQ-7B-orpo.IQ4_NL.gguf | IQ4_NL | 3.94GB |
| Nxcode-CQ-7B-orpo.Q4_K_S.gguf | Q4_K_S | 4.11GB |
| Nxcode-CQ-7B-orpo.Q4_K.gguf | Q4_K | 4.41GB |
| Nxcode-CQ-7B-orpo.Q4_K_M.gguf | Q4_K_M | 4.41GB |
| Nxcode-CQ-7B-orpo.Q4_1.gguf | Q4_1 | 4.29GB |
| Nxcode-CQ-7B-orpo.Q5_0.gguf | Q5_0 | 4.69GB |
| Nxcode-CQ-7B-orpo.Q5_K_S.gguf | Q5_K_S | 4.79GB |
| Nxcode-CQ-7B-orpo.Q5_K.gguf | Q5_K | 5.06GB |
| Nxcode-CQ-7B-orpo.Q5_K_M.gguf | Q5_K_M | 5.06GB |
| Nxcode-CQ-7B-orpo.Q5_1.gguf | Q5_1 | 5.09GB |
| Nxcode-CQ-7B-orpo.Q6_K.gguf | Q6_K | 5.94GB |
| Nxcode-CQ-7B-orpo.Q8_0.gguf | Q8_0 | 7.18GB |
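These files can be run with any GGUF-compatible runtime. As a minimal sketch, assuming the llama-cpp-python bindings are installed (`pip install llama-cpp-python`) and one of the files above has been downloaded locally (the file path and generation settings here are illustrative assumptions):

```python
# Minimal sketch: chat completion over a local GGUF quant via llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Nxcode-CQ-7B-orpo.Q4_K_M.gguf",  # any quant from the table above
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU when available; 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": "Complete the following Python function:\ndef add(a: int, b: int) -> int:",
    }],
    max_tokens=256,
    temperature=0.0,  # greedy decoding, matching the Quickstart below
)
print(out["choices"][0]["message"]["content"])
```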
## Original model description

```yaml
license_name: tongyi-qianwen-research
license_link: https://huggingface.co/Qwen/CodeQwen1.5-7B/blob/main/LICENSE
tags:
- code
pipeline_tag: text-generation
license: other
```
### Introduction

Nxcode-CQ-7B-orpo is a Monolithic Preference Optimization without Reference Model (ORPO) fine-tune of Qwen/CodeQwen1.5-7B on 100k samples of high-quality ranking data.
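For orientation, ORPO optimizes preference pairs directly, with no frozen reference model of the kind DPO requires. Below is a minimal sketch of such a fine-tune, assuming TRL's `ORPOTrainer`; the dataset name and hyperparameters are illustrative assumptions, not the settings used to train this model.

```python
# Sketch of an ORPO fine-tune with TRL (`pip install trl`). Dataset and
# hyperparameters are assumptions for illustration only.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model = AutoModelForCausalLM.from_pretrained("Qwen/CodeQwen1.5-7B")
tokenizer = AutoTokenizer.from_pretrained("Qwen/CodeQwen1.5-7B")

# ORPO expects preference pairs: each row has "prompt", "chosen", "rejected".
dataset = load_dataset("my-org/code-ranking-pairs", split="train")  # hypothetical dataset

config = ORPOConfig(
    output_dir="nxcode-orpo",
    beta=0.1,                       # weight of the odds-ratio penalty (assumed value)
    max_length=2048,
    per_device_train_batch_size=1,
    num_train_epochs=1,
)

trainer = ORPOTrainer(model=model, args=config, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()
```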
### Evalplus

| EvalPlus | pass@1 |
|---|---|
| HumanEval | 86.6 |
| HumanEval+ | 83.5 |
| MBPP (v0.2.0) | 82.3 |
| MBPP+ (v0.2.0) | 70.4 |
We use a simple template to generate solutions for EvalPlus:

`"Complete the following Python function:\n{prompt}"`
| Models | HumanEval | HumanEval+ |
|---|---|---|
| GPT-4-Turbo (April 2024) | 90.2 | 86.6 |
| GPT-4 (May 2023) | 88.4 | 81.17 |
| GPT-4-Turbo (Nov 2023) | 85.4 | 79.3 |
| CodeQwen1.5-7B-Chat | 83.5 | 78.7 |
| claude-3-opus (Mar 2024) | 82.9 | 76.8 |
| DeepSeek-Coder-33B-instruct | 81.1 | 75.0 |
| WizardCoder-33B-V1.1 | 79.9 | 73.2 |
| OpenCodeInterpreter-DS-33B | 79.3 | 73.8 |
| speechless-codellama-34B-v2.0 | 77.4 | 72 |
| GPT-3.5-Turbo (Nov 2023) | 76.8 | 70.7 |
| Llama3-70B-instruct | 76.2 | 70.7 |
### Bigcode Leaderboard

As of 09/05/2024:

- Top 1 average score.
- Top 2 win rate.
### Quickstart

The code snippet below uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content. Upgrade `transformers` if you receive an error when loading the tokenizer.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "NTQAI/Nxcode-CQ-7B-orpo",
    torch_dtype="auto",
    device_map="auto",  # place the model on the available GPU(s) automatically
)
tokenizer = AutoTokenizer.from_pretrained("NTQAI/Nxcode-CQ-7B-orpo")

# Single-quoted delimiters let the docstring's triple double quotes nest safely.
prompt = '''Complete the following Python function:

from typing import List


def has_close_elements(numbers: List[float], threshold: float) -> bool:
    """ Check if in given list of numbers, are any two numbers closer to each other than
    given threshold.
    >>> has_close_elements([1.0, 2.0, 3.0], 0.5)
    False
    >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)
    True
    """
'''

messages = [{"role": "user", "content": prompt}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=512,
    do_sample=False,  # greedy decoding: top_k/top_p below have no effect
    top_k=50,
    top_p=0.95,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens, skipping the prompt.
res = tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True)
print(res)
```
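Note that the decode step slices off the prompt tokens (`outputs[0][len(inputs[0]):]`), so `res` contains only the model's completion rather than an echo of the prompt.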
### Contact information

For personal communication related to this project, please contact Nha Nguyen Van ([email protected]).