DEVAI-benchmark committed
Commit 1898121
Parent(s): 224081e
Update README.md
README.md CHANGED

```diff
@@ -31,7 +31,7 @@ We perform a manual evaluation to judge if each requirement is satisfied by the
 <img src="human_evaluation.png" align="center" width="80%"/>
 </p>
 
-An automated evaluation program that could possibly replace manual evaluation can be found at our [Github realse](https://github.com/metauto-ai/
+An automated evaluation program that could possibly replace manual evaluation can be found at our [Github realse](https://github.com/metauto-ai/agent-as-a-judge).
 Find more details in our [paper]().
 
 If you use DEVAI to test your development system, we suggest providing the system API keys of [Kaggle](https://www.kaggle.com/) and [Hugging Face](https://huggingface.co), as some DEVAI tasks require access to these platforms.
```
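The README's final line suggests supplying Kaggle and Hugging Face API keys to the system under test. One common way to do this (an assumption on our part, not something the commit specifies) is via environment variables: the official Kaggle client reads `KAGGLE_USERNAME` and `KAGGLE_KEY`, and the `huggingface_hub` library reads `HF_TOKEN`. A minimal sketch checking that both sets of credentials are present before running DEVAI tasks:

```python
import os

# Hypothetical placeholder credentials for illustration only.
os.environ["KAGGLE_USERNAME"] = "your-kaggle-username"
os.environ["KAGGLE_KEY"] = "your-kaggle-api-key"
os.environ["HF_TOKEN"] = "your-hf-access-token"

def credentials_present() -> bool:
    """Return True if both Kaggle and Hugging Face credentials are set."""
    kaggle_ok = all(os.environ.get(k) for k in ("KAGGLE_USERNAME", "KAGGLE_KEY"))
    hf_ok = bool(os.environ.get("HF_TOKEN"))
    return kaggle_ok and hf_ok

print(credentials_present())  # → True
```

Exact variable names may differ depending on how the development system ingests credentials; consult its own configuration docs.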