Update README.md

path: py/train-*
---
# Template Generation Dataset for AI Agents Evaluation

This is the benchmark for the Project Template Generation task, which requires creating a project template (content and structure) from a short textual description.

The dataset provides all the components required to evaluate project template generation approaches on real project templates collected from GitHub, including:
* Repository description;
* Repository root README.md file content;
* Repository link by which the "golden" template can be accessed;
* GitHub repository telemetry, including additional data and metrics that can be useful in developing new approaches.

All the repositories are published under permissive licenses (MIT, Apache-2.0, BSD-3-Clause, and BSD-2-Clause). The datapoints can be removed upon request.

The collected dataset was carefully filtered, enhanced with useful metrics, and, moreover, manually labeled, which ensures data quality and provides a golden subset of good examples for evaluation.\
The dataset was split into several categories, namely:

| **Category** | **Description** | **Number of data points** |
|:------------------:|:----------------------------------------:|:----------------------------------------:|
| `py` | Repositories with Python main language | 565 |
| `java` | Repositories with Java main language | 81 |
| `kt` | Repositories with Kotlin main language | 19 |
| `android` | Repositories with Kotlin/Java main language based on the Android SDK | 17 |

...and splits, namely:

| **Split** | **Description** |
|:------------------:|:----------------------------------------:|
| `dev` | All collected datapoints |
| `test` | Manually verified datapoints |
| `train` | The rest of the datapoints from `dev`, without `test` |

The following sections describe the utilities around the dataset, as well as the dataset content.

## Dataset Collection

This dataset contains information about repos (initially gathered from https://seart-ghs.si.usi.ch) matching the following criteria:
* `Python`, `Java`, `Kotlin` programming languages
* 10+ stars
* 10-1000 code lines
* updated after 2023-01-01 00:00
* filtered by `is_template=True` or the presence of template-related keywords in the description (`template`, `boilerplate`, `starter`, `skeleton`, `blueprint`, `scaffold`, `pattern`, `seed`, `example`, `demo`, `sample`, `showcase`, `illustration`, `exemplar`, `use case`, `prototype`)
* android repositories are moved to a separate category (by the `android` keyword in the description or the repo `full_name`)
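
For illustration, the last two filtering steps above could be sketched as follows. This is a hypothetical sketch, not the authors' actual collection script; the `repo` dict keys `is_template`, `description`, and `full_name` are assumed to follow the seart-ghs export naming:

```python
# Hypothetical sketch of the keyword-based template filter and the
# android-category check described in the criteria above.
TEMPLATE_KEYWORDS = (
    "template", "boilerplate", "starter", "skeleton", "blueprint",
    "scaffold", "pattern", "seed", "example", "demo", "sample",
    "showcase", "illustration", "exemplar", "use case", "prototype",
)

def is_template_repo(repo: dict) -> bool:
    # Keep the repo if GitHub marks it as a template, or if its
    # description mentions any template-related keyword.
    if repo.get("is_template"):
        return True
    description = (repo.get("description") or "").lower()
    return any(keyword in description for keyword in TEMPLATE_KEYWORDS)

def is_android_repo(repo: dict) -> bool:
    # Android repos go to a separate category, detected by the
    # `android` keyword in the description or the repo full_name.
    text = (repo.get("description") or "") + " " + (repo.get("full_name") or "")
    return "android" in text.lower()
```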

You can find all scripts to reproduce dataset collection in our [GitHub](https://github.com/JetBrains-Research/agents-eval) repository.

## Dataset Description

| **Field** | **Description** |
|:------------------:|:----------------------------------------:|
| `description_tokens_count`* | Number of tokens in the repository description. |
| `description_words_count` | Number of words in the repository description. |
| `description_lines_count` | Number of lines in the repository description. |
| `readme` | Root README.md repository content. |
| `readme_symbols_count` | Number of symbols in the repository `readme`. |
| `readme_header_tokens_count`* | Number of tokens in the repository `readme` header. |
| `readme_header_words_count` | Number of words in the repository `readme` header. |
| `readme_header_lines_count` | Number of lines in the repository `readme` header. |

\* Tokens calculated via the GPT-4 tokenizer.
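
As a rough illustration, the word, line, and symbol counts above can be reproduced with plain Python; the token counts additionally require the GPT-4 tokenizer (e.g. via the `tiktoken` package), which is only referenced in a comment here:

```python
# Minimal sketch of how the count metrics above can be computed for a text
# field. Token counts in the dataset use the GPT-4 tokenizer (e.g. via
# tiktoken's encoding_for_model("gpt-4")); only tokenizer-free counts
# are shown here.
def text_metrics(text: str) -> dict:
    return {
        "symbols_count": len(text),          # number of characters
        "words_count": len(text.split()),    # whitespace-separated words
        "lines_count": len(text.splitlines()),
    }

metrics = text_metrics("A minimal project template.\nWith two lines.")
```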

## Dataset analysis
* Topics analysis [notebook](https://github.com/JetBrains-Research/agents-eval/blob/main/src/template_generation/notebooks/topics_analysis.ipynb)
* Large-scale analysis [notebook](https://github.com/JetBrains-Research/agents-eval/blob/main/src/template_generation/notebooks/templates_analysis.ipynb)
481 |
|
|
|
482 |
|
483 |
+
## Dataset usage
|
484 |
* Load the data via [load_dataset](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.load_dataset):
|
485 |
```python
|
486 |
|