DEVAI-benchmark committed
Commit 15168a1 • Parent: 1898121
Update README.md
README.md CHANGED

@@ -32,6 +32,6 @@ We perform a manual evaluation to judge if each requirement is satisfied by the
 </p>
 
 An automated evaluation program that could possibly replace manual evaluation can be found at our [Github realse](https://github.com/metauto-ai/agent-as-a-judge).
-Find more details in our [paper]().
+Find more details in our [paper](https://arxiv.org/pdf/2410.10934).
 
 If you use DEVAI to test your development system, we suggest providing the system API keys of [Kaggle](https://www.kaggle.com/) and [Hugging Face](https://huggingface.co), as some DEVAI tasks require access to these platforms.
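The README's suggestion to provide Kaggle and Hugging Face API keys typically amounts to exporting the credentials that those platforms' official clients read from the environment. Below is a minimal sketch of such a preflight check, assuming the documented variable names (`KAGGLE_USERNAME`/`KAGGLE_KEY` for the `kaggle` client, `HF_TOKEN` for `huggingface_hub`); the helper function itself is illustrative and not part of this commit or the DEVAI benchmark.

```python
import os

def check_platform_credentials() -> None:
    """Warn if credentials that some DEVAI tasks rely on are missing.

    The official `kaggle` client reads KAGGLE_USERNAME and KAGGLE_KEY from the
    environment (or from ~/.kaggle/kaggle.json); `huggingface_hub` reads HF_TOKEN.
    """
    kaggle_ok = bool(os.environ.get("KAGGLE_USERNAME") and os.environ.get("KAGGLE_KEY"))
    hf_ok = bool(os.environ.get("HF_TOKEN"))

    if not kaggle_ok:
        print("Warning: Kaggle credentials not found; set KAGGLE_USERNAME and KAGGLE_KEY.")
    if not hf_ok:
        print("Warning: Hugging Face token not found; set HF_TOKEN.")

if __name__ == "__main__":
    check_platform_credentials()
```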