---
license: mit
configs:
- config_name: default
  data_files:
  - split: main
    path: "instances/*.json"
---
# DEVAI dataset
<p align="center" width="100%">
<img src="dataset_stats.png" align="center" width="84%"/>
</p>
**DEVAI** is a benchmark of 55 realistic AI development tasks. It comes with extensive manual annotations, including a total of 365 hierarchical user requirements.
This dataset enables rich reinforcement signals for better automated AI software development.
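The instances can be loaded directly with the Hugging Face `datasets` library; the snippet below is a minimal sketch that assumes the default config and `main` split declared in the YAML header above.

```python
from datasets import load_dataset

# Load the DEVAI task instances (default config, "main" split as declared above).
devai = load_dataset("DEVAI-benchmark/DEVAI", split="main")

print(len(devai))  # number of tasks
print(devai[0])    # fields of the first task record
```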
Here is an example of our tasks.
<p align="center" width="100%">
<img src="task51.png" align="center" width="90%"/>
</p>
We apply three state-of-the-art automatic software development systems to DEVAI, namely MetaGPT, GPT-Pilot, and OpenHands.
We suggest expanding the task queries with the constraints defined in [constraints.json](https://huggingface.co/datasets/DEVAI-benchmark/DEVAI/blob/main/constraints.json) to guide development systems' behavior and to provide auxiliary information when needed.
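As a sketch of what that expansion might look like, the helper below appends a task's constraints to its query string. The structure assumed here (a mapping from task ID to a list of constraint strings) and the helper name `expand_query` are illustrative assumptions, not the documented schema of `constraints.json`.

```python
import json

# Illustrative only: the layout of constraints.json (task ID -> list of
# constraint strings) is an assumption, not a documented schema.
with open("constraints.json") as f:
    constraints = json.load(f)

def expand_query(task_id: str, query: str) -> str:
    """Append a task's constraints to its query to guide the development system."""
    extra = constraints.get(task_id, [])
    if not extra:
        return query
    bullets = "\n".join(f"- {c}" for c in extra)
    return f"{query}\n\nConstraints:\n{bullets}"
```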
The table below shows preliminary statistics.
<p align="center" width="100%">
<img src="developer_stats.png" align="center" width="79%"/>
</p>
We perform a manual evaluation to judge whether each requirement is satisfied by the solutions produced by the aforementioned systems.
<p align="center" width="100%">
<img src="human_evaluation.png" align="center" width="80%"/>
</p>
An automated evaluation program that can potentially replace manual evaluation is available in our [GitHub release](https://github.com/metauto-ai/agent-as-a-judge).
Find more details in our [paper](https://arxiv.org/pdf/2410.10934).
If you use DEVAI to test your development system, we suggest providing the system with API keys for [Kaggle](https://www.kaggle.com/) and [Hugging Face](https://huggingface.co), as some DEVAI tasks require access to these platforms.
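For example, the keys can be supplied through the environment variables read by the official clients of the two platforms; the placeholders below are yours to fill in.

```python
import os

# Credentials for platforms some DEVAI tasks depend on. KAGGLE_USERNAME /
# KAGGLE_KEY are read by the official `kaggle` client, HF_TOKEN by
# `huggingface_hub`. Replace the placeholders with your own keys.
os.environ["KAGGLE_USERNAME"] = "<your-kaggle-username>"
os.environ["KAGGLE_KEY"] = "<your-kaggle-api-key>"
os.environ["HF_TOKEN"] = "<your-hugging-face-token>"
```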