---
license: mit
configs:
- config_name: default
  data_files:
  - split: main
    path: instances/*.json
---
GitHub: https://github.com/metauto-ai/agent-as-a-judge
Current evaluation techniques are often inadequate for advanced agentic systems because they focus on final outcomes and rely on labor-intensive manual review. To overcome this limitation, we introduce the Agent-as-a-Judge framework.
As a proof-of-concept, we applied Agent-as-a-Judge to code generation tasks using DevAI, a benchmark consisting of 55 realistic AI development tasks with 365 hierarchical user requirements. The results demonstrate that Agent-as-a-Judge significantly outperforms traditional evaluation methods, delivering reliable reward signals for scalable self-improvement in agentic systems.
Check out the dataset on Hugging Face 🤗. See how to use this dataset in the guidelines.
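For a quick start, the snippet below is a minimal sketch of loading the task instances with the 🤗 `datasets` library. The repository id `DEVAI-benchmark/DEVAI` is a placeholder assumption; substitute the id of this dataset page if it differs.

```python
from datasets import load_dataset

# Load the DEVAI task instances. The default config reads instances/*.json
# into a single "main" split (see the YAML config above).
# NOTE: "DEVAI-benchmark/DEVAI" is a placeholder repo id; replace it with this dataset's id.
devai = load_dataset("DEVAI-benchmark/DEVAI", split="main")

print(devai)       # dataset summary (features and number of rows)
print(len(devai))  # should report 55 tasks
```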
DEVAI dataset
DEVAI is a benchmark of 55 realistic AI development tasks. It provides extensive manual annotations, including a total of 365 hierarchical user requirements, which enable rich reinforcement signals for better automated AI software development.
Here is an example of our tasks.
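If you prefer to explore a task programmatically rather than reading the raw JSON, the sketch below prints the fields of a single instance. It reuses the placeholder repo id from the snippet above and deliberately makes no assumption about the exact field names in `instances/*.json`.

```python
import json
from datasets import load_dataset

# Inspect one DEVAI task. The field names are whatever instances/*.json defines;
# this sketch only prints them rather than assuming a particular schema.
devai = load_dataset("DEVAI-benchmark/DEVAI", split="main")  # placeholder repo id
task = devai[0]
print(json.dumps(task, indent=2, default=str))
```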
We apply three state-of-the-art automatic software development systems to DEVAI, namely MetaGPT, GPT-Pilot, and OpenHands. We suggest expanding the task queries with the constraints defined in constraints.json to guide the development systems' behavior and provide auxiliary information if needed (see the sketch below). The table below shows preliminary statistics.
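As an illustration of how the constraints could be folded into a task query before it is handed to a development system, here is a rough sketch. The structure assumed for constraints.json (a flat list of constraint strings) and the helper name `expand_query` are hypothetical; adapt the parsing to the actual file.

```python
import json

def expand_query(task_query: str, constraints_path: str = "constraints.json") -> str:
    """Append the constraints for a task to its natural-language query.

    Assumes constraints.json holds a list of constraint strings; adjust the
    parsing if the real file is keyed by task or structured differently.
    """
    with open(constraints_path) as f:
        constraints = json.load(f)
    bullet_list = "\n".join(f"- {c}" for c in constraints)
    return f"{task_query}\n\nPlease respect the following constraints:\n{bullet_list}"

# Example usage with a made-up query string:
# full_prompt = expand_query("Build an image classifier for the CIFAR-10 dataset.")
```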
We perform a manual evaluation to judge whether each requirement is satisfied by the solutions produced by the aforementioned systems.
An automated evaluation program that can potentially replace manual evaluation is available in our GitHub release. Find more details in our paper.
If you use DEVAI to test your development system, we suggest providing the system with Kaggle and Hugging Face API keys, as some DEVAI tasks require access to these platforms.
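One simple way to pass those credentials is through the environment variables read by the official Kaggle and Hugging Face clients. The snippet below is a sketch of that setup, with placeholder values you would replace with your own keys.

```python
import os

# Credentials for platforms that some DEVAI tasks depend on.
# KAGGLE_USERNAME / KAGGLE_KEY are read by the Kaggle API client,
# and HF_TOKEN is picked up by the Hugging Face Hub client.
os.environ["KAGGLE_USERNAME"] = "<your-kaggle-username>"
os.environ["KAGGLE_KEY"] = "<your-kaggle-api-key>"
os.environ["HF_TOKEN"] = "<your-huggingface-access-token>"
```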