---
license: mit
configs:
- config_name: default
  data_files:
  - split: main
    path: "instances/*.json"
---

# DEVAI dataset
**DEVAI** is a benchmark of 55 realistic AI development tasks. It comes with extensive manual annotations, including a total of 365 hierarchical user requirements, and these annotations provide rich reinforcement signals for better automated AI software development.
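The task instances can be loaded with the 🤗 `datasets` library. A minimal sketch, assuming the repository id from the links below and the `main` split defined in the configuration above:

```python
from datasets import load_dataset

# Load the task instances (the "main" split defined in the dataset config above).
tasks = load_dataset("DEVAI-benchmark/DEVAI", split="main")

print(len(tasks))  # number of task instances
print(tasks[0])    # inspect one task and its annotated requirements
```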
We apply three state-of-the-art automated software development systems to DEVAI, namely MetaGPT, GPT-Pilot, and OpenHands. We suggest expanding the task queries with the constraints defined in [constraints.json](https://huggingface.co/datasets/DEVAI-benchmark/DEVAI/blob/main/constraints.json) to guide a development system's behavior and to provide auxiliary information when needed.
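As a rough illustration of this query expansion, here is a sketch that appends a task's constraints to its query before handing it to a development system. The schema assumed for `constraints.json` (a mapping from task name to a list of constraint strings) is an assumption for illustration only; adjust the lookup to the actual file layout:

```python
import json

# Hypothetical sketch: the assumed schema of constraints.json
# (task name -> list of constraint strings) is illustrative only.
with open("constraints.json") as f:
    constraints = json.load(f)

def expand_query(task_name: str, query: str) -> str:
    """Append the task's constraints to its original query, if any exist."""
    task_constraints = constraints.get(task_name, [])
    if not task_constraints:
        return query
    bullet_list = "\n".join(f"- {c}" for c in task_constraints)
    return f"{query}\n\nConstraints:\n{bullet_list}"
```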
We perform a manual evaluation to judge whether each requirement is satisfied by the solutions produced by the aforementioned systems.
An automated evaluation program that can potentially replace this manual evaluation is available in our [GitHub release](https://github.com/metauto-ai/agent-as-a-judge). Find more details in our [paper](https://arxiv.org/pdf/2410.10934). If you use DEVAI to test your development system, we suggest providing the system with API keys for [Kaggle](https://www.kaggle.com/) and [Hugging Face](https://huggingface.co), as some DEVAI tasks require access to these platforms.
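One common way to supply these credentials is through the standard environment variables read by the official Kaggle API client and by `huggingface_hub`. A minimal sketch, with placeholder values to be replaced by your own keys:

```python
import os

# Credentials read by the official Kaggle API client.
os.environ["KAGGLE_USERNAME"] = "your-kaggle-username"  # placeholder
os.environ["KAGGLE_KEY"] = "your-kaggle-api-key"        # placeholder

# Access token read by huggingface_hub for authenticated requests.
os.environ["HF_TOKEN"] = "your-hf-access-token"         # placeholder
```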