Opencsg-starcoder2-15b-v0.1

OpenCSG

[OpenCSG Community] [github] [wechat] [Twitter]

OpenCSG stands for Converged resources, Software refinement, and Generative LM. The 'C' stands for Converged resources, the integration and full utilization of hybrid computing resources. The 'S' stands for Software refinement, software that is refined and reshaped by large models. The 'G' stands for Generative LM, widely accessible, inclusive, and democratized generative large models.

The vision of OpenCSG is to empower every industry, every company, and every individual to own their own models. We adhere to the principles of openness and open source, making the large model software stack of OpenCSG available to the community. We welcome everyone to use it, share feedback, and contribute.

Model Description

The StarCoder models are 15.5B parameter models trained on 80+ programming languages from The Stack (v1.2), with opt-out requests excluded. opencsg-starcoder2-15b-v0.1 was fine-tuned from StarCoder2 by the OpenCSG LLM Research Team using full-parameter fine-tuning.

Model Eval

HumanEval is the most common benchmark for evaluating code generation, especially completion of code exercises. Model evaluation is, to some extent, an inexact science: different models have different sensitivities to decoding methods, parameters, and instructions. It is impractical for us to hand-tune a specific configuration for each fine-tuned model, because a capable LLM should retain its general abilities regardless of how users set these parameters.

Therefore, OpenCSG devised a relatively fair method to compare the fine-tuned models on the HumanEval benchmark. To simplify the comparison, we chose the Pass@1 metric for Python, although our fine-tuning dataset includes samples in multiple languages.

For fairness, we evaluated both the original and the fine-tuned StarCoder models using only the prompts from the original HumanEval problems, without adding any other instructions.

In addition, we use greedy decoding for every model during evaluation.
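
The exact evaluation harness is not reproduced in this card. Purely as a rough sketch, the snippet below shows how a greedy Pass@1 run could be set up with OpenAI's human-eval package (assumed to be installed from github.com/openai/human-eval); it relies on a `pipeline` object built as in the Model Usage section below, and stop-sequence truncation of completions is omitted for brevity.

# Sketch only: greedy HumanEval generation, assuming `pip install human-eval`
# and a text-generation `pipeline` like the one defined in Model Usage below.
from human_eval.data import read_problems, write_jsonl

problems = read_problems()  # task_id -> {"prompt": ..., "test": ..., ...}
samples = []
for task_id, problem in problems.items():
    # Prompt only, no extra instructions; do_sample=False gives greedy decoding.
    completion = pipeline(
        problem["prompt"],
        do_sample=False,
        max_new_tokens=512,
        return_full_text=False,
    )[0]["generated_text"]
    samples.append({"task_id": task_id, "completion": completion})

write_jsonl("samples.jsonl", samples)
# Pass@1 is then computed with the package's CLI:
#   evaluate_functional_correctness samples.jsonl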

Model                          HumanEval Python Pass@1
starcoder                      35.98%
opencsg-starcoder-v0.1         42.68%
starcoder2-3b                  32.93%
opencsg-starcoder2-3b-v0.1     45.12%
starcoder2-7b                  35.37%
opencsg-starcoder2-7b-v0.1     51.22%
starcoder2-15b                 45.12%
opencsg-starcoder2-15b-v0.1    59.15%

TODO

  • We will provide more benchmark scores for fine-tuned models in the future.
  • We will provide different practical problems to evaluate the performance of fine-tuned models in the field of software engineering.

Model Usage

from transformers import AutoTokenizer
import transformers
import torch

model = "opencsg/opencsg-starcoder2-15b-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model, trust_remote_code=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,    # half precision to reduce GPU memory
    device_map="auto",            # spread layers across available GPUs
)
input_text = """#Generate one test case for the following code.
def quick_sort(arr):
    if len(arr) < 2:
        return arr
    else:
        pivot = arr[0]
        less = [i for i in arr[1:] if i <= pivot]
        greater = [i for i in arr[1:] if i > pivot]
        return quick_sort(less) + [pivot] + quick_sort(greater)
"""
sequences = pipeline(
    input_text,
    do_sample=False,              # greedy decoding, matching the evaluation setup
    top_k=10,                     # ignored when do_sample=False
    temperature=0.1,              # ignored when do_sample=False
    top_p=0.95,                   # ignored when do_sample=False
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=256,               # total prompt + completion length in tokens
)
for seq in sequences:
    print(seq['generated_text'][len(input_text):])

Generated output:

# Test case
arr = [5, 2, 9, 1, 7]
print(quick_sort(arr))
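
The float16 weights of the 15B model need roughly 30 GB of GPU memory. For smaller GPUs, one possible alternative, not an official recommendation, is to load the model in 4-bit with bitsandbytes; the sketch below assumes the bitsandbytes and accelerate packages are installed.

# Sketch only: 4-bit quantized loading via bitsandbytes (assumes
# `pip install bitsandbytes accelerate`).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "opencsg/opencsg-starcoder2-15b-v0.1"
# NF4 quantization for the weights; compute in float16 during generation.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))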

Training

Hardware

  • GPUs: 8 Tesla A800
  • Training time: 7 hours

Software
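
The exact software stack and training script are not listed in this card. Purely as an illustration, the sketch below shows what a full-parameter fine-tuning run with the Hugging Face Trainer could look like; the dataset file, sequence length, and hyperparameters are placeholders rather than the settings actually used, and a real 15B full-parameter run would additionally need DeepSpeed ZeRO or FSDP to fit on 8 GPUs.

# Illustrative full-parameter fine-tuning sketch (hypothetical dataset and
# hyperparameters; not the actual OpenCSG training recipe).
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "bigcode/starcoder2-15b"
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# Placeholder dataset: a JSONL file with a "text" column of code samples.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

dataset = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="opencsg-starcoder2-15b-sft",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    learning_rate=1e-5,
    bf16=True,
    logging_steps=10,
    save_strategy="epoch",
)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()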

