---
tags:
- code
- evaluation
- code llm
size_categories:
- n<1K
---

<h1 style="text-align: center;">Abstract</h1>
<p>
    Driven by the surge in code generation using large language models (LLMs), numerous benchmarks have emerged to evaluate these LLMs' capabilities. We conducted a large-scale human evaluation of HumanEval and MBPP, two popular benchmarks for Python code generation, analyzing their diversity and difficulty.
    Our findings unveil a critical bias towards a limited set of programming concepts, neglecting most other concepts entirely. Furthermore, we uncover a worrying prevalence of easy tasks that can inflate estimates of model performance. To address these limitations, we propose a novel benchmark, PythonSaga, featuring 185 hand-crafted prompts with a balanced representation of 38 programming concepts across diverse difficulty levels.
    The robustness of our benchmark is demonstrated by the poor performance of existing Code-LLMs. The code and dataset are openly available to the NLP community at 
    <a href="https://github.com/PythonSaga/PythonSaga" target="_blank">https://github.com/PythonSaga/PythonSaga</a>.
</p>
<br>

<h1 style="text-align: center;">PythonSaga</h1>
This dataset follows the rules and template diversity proposed in the paper "PythonSaga: Redefining the Benchmark to Evaluate Code Generating LLM". The goal is to make benchmarks better at assessing code-generating large language models (LLMs).
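
A minimal loading sketch with the Hugging Face `datasets` library follows. The repository id used here is a placeholder rather than a confirmed id for this card, and the split and field names may differ.

```python
# Sketch: load PythonSaga and inspect one prompt.
# "PythonSaga/PythonSaga" is a placeholder repo id; replace it with the
# actual Hugging Face dataset id. Split and field names may also differ.
from datasets import load_dataset

dataset = load_dataset("PythonSaga/PythonSaga")  # placeholder id
print(dataset)  # shows the available splits and features

first_split = next(iter(dataset.values()))
print(first_split[0])  # one hand-crafted prompt and its metadata
```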

| **Model**                     | **Size** | **Pass@1** | **Pass@10** |
|-------------------------------|----------|------------|-------------|
| StarCoderBase                 | 7B       | 0.0029     | 0.0149      |
| StarCoder2                    | 7B       | 0.0024     | 0.0217      |
| Code Llama                    | 7B       | 0.0067     | 0.0472      |
| CodeQwen1.5-Chat              | 7B       | 0.0059     | 0.0497      |
| Nxcode-CQ-orpo                | 7B       | 0.0058     | 0.0523      |
| Mistral-Instruct-v0.1         | 7B       | 0.0140     | 0.0552      |
| Code Llama Instruct           | 7B       | 0.0178     | 0.0744      |
| Deepseek Coder Instruct       | 6.7B     | 0.0137     | 0.0889      |
| Code Llama Python             | 7B       | 0.0240     | 0.0979      |
| Llama 3                       | 8B       | 0.0370     | 0.1125      |
| Phi-2                         | 2.7B     | 0.0302     | 0.1187      |
| OpenCodeInterpreter-DS        | 6.7B     | 0.0259     | 0.1206      |
| Deepseek Coder                | 6.7B     | 0.0343     | 0.1415      |
| Code Llama Python             | 13B      | 0.0405     | 0.1514      |
| GPT-3.5                       | NA       | 0.0724     | 0.2384      |
| GPT-4                         | NA       | 0.1243     | 0.3311      |
                              

*Comparison of open- and closed-source models on PythonSaga. We use n = 20 samples for both open- and closed-source models.*
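
The Pass@1 and Pass@10 columns follow the standard unbiased pass@k estimator, 1 - C(n - c, k) / C(n, k), introduced alongside HumanEval. The sketch below assumes that convention together with the n = 20 samples noted above; it is not taken from the PythonSaga codebase.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n - c, k) / C(n, k),
    where n is the number of samples generated per task and
    c is the number of samples that pass the unit tests."""
    if n - c < k:
        return 1.0  # every size-k subset contains a correct sample
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example with n = 20 samples per task, 2 of which pass:
print(round(pass_at_k(n=20, c=2, k=1), 4))   # 0.1
print(round(pass_at_k(n=20, c=2, k=10), 4))  # 0.7632
```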