Tasks: Text2Text Generation
Modalities: Text
Formats: csv
Languages: English
Size: < 1K
Tags: code generation
Update README.md
README.md CHANGED
@@ -18,6 +18,14 @@ tags:
 <a href="https://github.com/jie-jw-wu/human-eval-comm">💻 GitHub Repository </a> •
 <a href="https://huggingface.co/datasets/jie-jw-wu/HumanEvalComm/viewer">🤗 Dataset Viewer</a>
 </p>
+
+<div>
+
+<a href='https://huggingface.co/datasets/jie-jw-wu/HumanEvalComm'>
+<img src="https://github.com/user-attachments/assets/3f62b151-d08f-4641-8d10-cc53024ec2c4" alt="HumanEvalComm" width=300></img>
+</a>
+</div>
+
 ## Dataset Description
 
 HumanEvalComm is a benchmark dataset for evaluating the communication skills of Large Language Models (LLMs) in code generation tasks. It is built upon the widely used [HumanEval benchmark](https://github.com/openai/human-eval). HumanEvalComm contains 762 modified problem descriptions based on the 164 problems in the HumanEval dataset. The modifications are created by applying one or a combination of the aforementioned clarification types. Each modified problem description is manually verified to ensure it triggers clarifying questions. The goal of HumanEvalComm is to evaluate the ability of LLMs to ask clarifying questions when faced with incomplete, inconsistent, or ambiguous requirements in coding problems:
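For reference, a minimal sketch of loading this dataset with the Hugging Face `datasets` library, using the repository id from the links above. The `"train"` split name is an assumption (typical for a single-csv dataset), and the column names are not shown in this diff, so the snippet only inspects whatever schema the csv provides:

```python
# Minimal sketch: load HumanEvalComm from the Hub.
# Assumptions: the default csv configuration exposes a "train" split;
# the actual column names are not part of this diff, so we print them.
from datasets import load_dataset

ds = load_dataset("jie-jw-wu/HumanEvalComm", split="train")

print(ds.column_names)  # discover the fields of the csv
print(ds[0])            # one of the 762 modified problem descriptions
```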