---
license: mit
configs:
- config_name: default
  data_files:
  - split: main
    path: "instances/*.json"
---

**GITHUB:** https://github.com/metauto-ai/agent-as-a-judge

> [!NOTE]
> Current evaluation techniques are often inadequate for advanced **agentic systems** because they focus on final outcomes and rely on labor-intensive manual review. To overcome this limitation, we introduce the **Agent-as-a-Judge** framework.


> [!IMPORTANT]
> As a **proof-of-concept**, we applied **Agent-as-a-Judge** to code generation tasks using **DevAI**, a benchmark consisting of 55 realistic AI development tasks with 365 hierarchical user requirements. The results demonstrate that **Agent-as-a-Judge** significantly outperforms traditional evaluation methods, delivering reliable reward signals for scalable self-improvement in agentic systems.
> 
> Check out the dataset on [Hugging Face 🤗](https://huggingface.co/DEVAI-benchmark).
> See how to use this dataset in the [guidelines](benchmark/devai/README.md).

# DEVAI dataset
<p align="center" width="100%">
<img src="dataset_stats.png" align="center" width="84%"/>
</p>

**DEVAI** is a benchmark of 55 realistic AI development tasks. It comes with rich manual annotations, including a total of 365 hierarchical user requirements.
This dataset enables rich reward signals for better automated AI software development.
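
The dataset is hosted on the Hugging Face Hub as `DEVAI-benchmark/DEVAI`, with task instances stored as `instances/*.json` under the `main` split (see the config header above). Below is a minimal loading sketch using the `datasets` library; the per-instance fields are not assumed here, so the example only inspects whatever keys are present.

```python
from datasets import load_dataset

# Load the 55 DEVAI task instances (split "main", files under instances/*.json).
devai = load_dataset("DEVAI-benchmark/DEVAI", split="main")

print(len(devai))        # number of task instances
print(devai[0].keys())   # fields available on a single task instance
```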

Here is an example of one of our tasks.
<p align="center" width="100%">
<img src="task51.png" align="center" width="90%"/>
</p>

We apply three state-of-the-art automated software development systems to DEVAI, namely MetaGPT, GPT-Pilot, and OpenHands. 
We suggest expanding the task queries with the constraints defined in [constraints.json](https://huggingface.co/datasets/DEVAI-benchmark/DEVAI/blob/main/constraints.json) to guide the development systems' behavior and to provide auxiliary information when needed (see the sketch below).
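
A minimal sketch of such query expansion, assuming `constraints.json` maps each task name to a piece of constraint text (the actual schema may differ):

```python
import json

# Hypothetical sketch: append a task's constraints to its query before handing it
# to a development system. The schema assumed here (task name -> constraint text)
# is an illustration, not the documented format of constraints.json.
with open("constraints.json") as f:
    constraints = json.load(f)

def expand_query(task_name: str, query: str) -> str:
    extra = constraints.get(task_name)
    return f"{query}\n\nConstraints:\n{extra}" if extra else query
```
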
The table below shows preliminary statistics.
<p align="center" width="100%">
<img src="developer_stats.png" align="center" width="79%"/>
</p>

We perform a manual evaluation to judge whether each requirement is satisfied by the solutions produced by the aforementioned systems.
<p align="center" width="100%">
<img src="human_evaluation.png" align="center" width="80%"/>
</p>

An automated evaluation program that can potentially replace manual evaluation is available in our [GitHub release](https://github.com/metauto-ai/agent-as-a-judge).
Find more details in our [paper](https://arxiv.org/pdf/2410.10934).

If you use DEVAI to test your development system, we suggest providing the system with API keys for [Kaggle](https://www.kaggle.com/) and [Hugging Face](https://huggingface.co), as some DEVAI tasks require access to these platforms (see the sketch below).
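
One way to make these credentials available is through environment variables: `HF_TOKEN` is read by `huggingface_hub`, and the Kaggle API client reads `KAGGLE_USERNAME` / `KAGGLE_KEY` (or `~/.kaggle/kaggle.json`). A minimal sketch, with placeholder values you would replace:

```python
import os

# Sketch: expose Kaggle and Hugging Face credentials to the development system
# under test. Replace the placeholders with real keys; in practice, prefer a
# local config file or secrets manager over hard-coding them.
os.environ.setdefault("HF_TOKEN", "<your-hugging-face-token>")
os.environ.setdefault("KAGGLE_USERNAME", "<your-kaggle-username>")
os.environ.setdefault("KAGGLE_KEY", "<your-kaggle-api-key>")
```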