---
license: apache-2.0
task_categories:
- text2text-generation
language:
- en
size_categories:
- n<1K
tags:
- code generation
---
# HumanEvalComm: Benchmarking the Communication Skills of Code Generation for LLMs and LLM Agent

Paper • GitHub Repository • Dataset Viewer
## Dataset Description

HumanEvalComm is a benchmark dataset for evaluating the communication skills of Large Language Models (LLMs) in code generation tasks. It is built upon the widely used HumanEval benchmark and contains 762 modified problem descriptions derived from the 164 problems in HumanEval. Each modification applies one or a combination of the clarification types below, and every modified problem description is manually verified to ensure it triggers clarifying questions. The goal of HumanEvalComm is to evaluate the ability of LLMs to ask clarifying questions when faced with incomplete, inconsistent, or ambiguous requirements in coding problems:
- Ambiguity: Statements in the problem descriptions are modified to have multiple interpretations. For example, changing "sort the array descendingly" to "sort the array (descendingly or ascendingly)".
- Inconsistency: Modifications are made to create contradictions between the problem description and examples. For instance, changing the output of test examples to contradict the provided textual description.
- Incompleteness: Parts of the problem description are removed to make it incomplete, requiring the model to ask questions to recover the missing content.
| Clarification Category | Ambiguity | Inconsistency | Incompleteness | Count |
|---|---|---|---|---|
| 1a | ✔️ | | | 164 |
| 1c | | ✔️ | | 164 |
| 1p | | | ✔️ | 164 |
| 2ac | ✔️ | ✔️ | | 162 |
| 2cp | | ✔️ | ✔️ | 34 |
| 2ap | ✔️ | | ✔️ | 74 |
| Total | -- | -- | -- | 762 |
## Dataset Structure

The fields are the same as those in the HumanEval benchmark, with the following additional fields (a loading sketch follows the list):
- `prompt1a`: Coding problem description with the 1a clarification type (Ambiguity)
- `prompt1c`: Coding problem description with the 1c clarification type (Inconsistency)
- `prompt1p`: Coding problem description with the 1p clarification type (Incompleteness)
- `prompt2ac`: Coding problem description with the 2ac clarification type (Ambiguity and Inconsistency)
- `prompt2cp`: Coding problem description with the 2cp clarification type (Inconsistency and Incompleteness)
- `prompt2ap`: Coding problem description with the 2ap clarification type (Ambiguity and Incompleteness)
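
As a quick check, the sketch below loads the dataset (as in the Usage section) and previews each prompt variant for one record. It assumes that variants absent for a given problem are stored as empty strings or `None`; the field names follow the list above.

```python
from datasets import load_dataset

humanevalcomm = load_dataset("jie-jw-wu/HumanEvalComm", split="test")

variant_fields = ["prompt1a", "prompt1c", "prompt1p", "prompt2ac", "prompt2cp", "prompt2ap"]

record = humanevalcomm[0]
for field in variant_fields:
    # Not every variant exists for every problem (see the count table above),
    # so a field may be empty for a given record.
    preview = (record.get(field) or "").replace("\n", " ")[:80]
    print(f"{field}: {preview}")
```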
## Prompt Format

Each task is formatted with a clear instruction and the provided function signature to guide the model in generating its response. The model is prompted in two rounds (a sketch of how these prompts can be assembled follows the list):
- Round one:
You are an expert software developer who writes high quality code. With below information, please either generate Python3 code (Respond directly with code only with markdown), or ask clarifying questions:
{code_problem} (field prompt{1a,1c,1p,2ac,2cp,2ap})
- Round two:
{code_problem}
{clarifying questions}
{answers to clarifying questions}
Given above conversations, generate Python code directly (Markdown) to solve the coding problem:
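
Below is a minimal sketch of how these two prompts could be assembled. The instruction strings are copied verbatim from above; `clarifying_questions` and `answers` are placeholders for the questions the model asks in round one and the answers supplied to it before round two (how the answers are produced is part of the evaluation pipeline and is not shown here).

```python
ROUND_ONE_INSTRUCTION = (
    "You are an expert software developer who writes high quality code. "
    "With below information, please either generate Python3 code "
    "(Respond directly with code only with markdown), or ask clarifying questions:"
)

ROUND_TWO_INSTRUCTION = (
    "Given above conversations, generate Python code directly (Markdown) "
    "to solve the coding problem:"
)


def build_round_one_prompt(code_problem: str) -> str:
    """Round one: ask the model to either respond with code or ask clarifying questions."""
    return f"{ROUND_ONE_INSTRUCTION}\n{code_problem}"


def build_round_two_prompt(code_problem: str, clarifying_questions: str, answers: str) -> str:
    """Round two: replay the problem, the model's questions, and the answers, then ask for code."""
    return f"{code_problem}\n{clarifying_questions}\n{answers}\n{ROUND_TWO_INSTRUCTION}"
```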
## Usage

You can easily load the dataset using the Hugging Face `datasets` library. See more usage details in our GitHub repository.
```python
from datasets import load_dataset

humanevalcomm = load_dataset("jie-jw-wu/HumanEvalComm", split="test")
```
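
As a follow-up, this sketch counts how many records contain each modified prompt, which should line up with the count table above. It assumes missing variants are stored as empty strings or `None`.

```python
variant_fields = ["prompt1a", "prompt1c", "prompt1p", "prompt2ac", "prompt2cp", "prompt2ap"]

# Count non-empty entries per variant and compare against the count table above.
counts = {field: 0 for field in variant_fields}
for record in humanevalcomm:
    for field in variant_fields:
        if record.get(field):
            counts[field] += 1

print(counts)
```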
## Citation

```bibtex
@article{wu2024benchmarking,
  title={Benchmarking the Communication Competence of Code Generation for LLMs and LLM Agent},
  author={Wu, Jie JW and Fard, Fatemeh H},
  journal={arXiv preprint arXiv:2406.00215},
  year={2024}
}
```