Datasets: jie-jw-wu/HumanEvalComm
Tasks: Text2Text Generation
Modalities: Text
Formats: csv
Languages: English
Size: < 1K
ArXiv: 2406.00215
Tags: code generation

Commit: Update README.md

README.md CHANGED
@@ -6,6 +6,8 @@ language:
 - en
 size_categories:
 - n<1K
+tags:
+- code generation
 ---
 
 
@@ -32,6 +34,7 @@ HumanEvalComm is a benchmark dataset for evaluating the communication skills of
 | 2cp |  | ✔️ | ✔️ | 34 |
 | 2ap | ✔️ |  | ✔️ | 74 |
 | **Total** | -- | -- | -- | 762 |
+
 <sub>
 *Note*: The smaller size for 2ac (and likewise for 2cp and 2ap) is because we strictly combine the two clarification types from 1a and 1c, and we create a new modified problem 2ac only if the combination yields a problem description that differs from both 1a and 1c. 2cp and 2ap have smaller counts because, for a large number of problems, the ambiguous (a) or inconsistent (c) parts are removed in (p).
 </sub>
@@ -75,4 +78,4 @@ humanevalcomm = load_dataset("jie-jw-wu/HumanEvalComm", split="test")
   journal={arXiv preprint arXiv:2406.00215},
   year={2024}
 }
-```
+```
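The header of the last hunk shows the README's loading snippet. For context, here is a minimal, self-contained sketch of that usage, assuming the Hugging Face `datasets` library is installed and using the `test` split named in the README; the inspection lines at the end are illustrative only.

```python
# Minimal sketch: load HumanEvalComm with the Hugging Face `datasets` library.
# Assumes the library is installed and the "test" split shown in the README exists.
from datasets import load_dataset

humanevalcomm = load_dataset("jie-jw-wu/HumanEvalComm", split="test")

# Quick inspection; the columns are whatever the underlying CSV provides.
print(len(humanevalcomm))
print(humanevalcomm.column_names)
```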