---
language_creators:
- found
language:
- code
license:
- cc-by-nc-nd-4.0
multilinguality:
- multilingual
pretty_name: RepoBench-Retrieval
source_datasets:
- original
task_categories:
- text-retrieval
task_ids:
- document-retrieval
---
# Dataset Card for RepoBench-R

## Dataset Description

- **Homepage:** https://github.com/Leolty/repobench
- **Paper:** https://arxiv.org/abs/2306.03091

## Dataset Summary
**RepoBench-R (Retrieval)** is a subtask of [RepoBench](https://github.com/Leolty/repobench) that targets the retrieval component of a repository-level auto-completion system: retrieving the most relevant code snippet from the project repository to support next-line code prediction.
## Settings

- `cff`: short for cross_file_first, indicating that the cross-file module used in the next line appears for the first time in the current file.
- `cfr`: short for cross_file_random, indicating that the cross-file module used in the next line has already appeared earlier in the current file (see the illustration after this list).
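The snippet below is a hypothetical illustration of the two settings; the module and variable names are made up for clarity, and `json` simply stands in for a module defined in another file of the same repository.

```python
import json  # pretend this is a cross-file module defined elsewhere in the repository

# cross_file_first (cff): the next line contains the FIRST in-file use of the module.
config_text = '{"k": 5}'
config = json.loads(config_text)    # <- next line to predict; `json` has not been used above

# cross_file_random (cfr): the module was already used earlier in the file,
# so its use in the next line is not the first one.
serialized = json.dumps(config)     # earlier use of the cross-file module
roundtrip = json.loads(serialized)  # <- next line to predict; `json` appeared before
```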
## Supported Tasks

The dataset has 4 subsets:

- `python_cff`: python dataset with `cff` setting.
- `python_cfr`: python dataset with `cfr` setting.
- `java_cff`: java dataset with `cff` setting.
- `java_cfr`: java dataset with `cfr` setting.

Each subset has 4 splits:

- `train_easy`: training set with easy difficulty, where the number of code snippets in the context \\(k\\) satisfies \\( 5 \leq k < 10 \\).
- `train_hard`: training set with hard difficulty, where the number of code snippets in the context \\(k\\) satisfies \\( k \geq 10 \\).
- `test_easy`: testing set with easy difficulty.
- `test_hard`: testing set with hard difficulty.
## Loading Data

For example, to load the python `cross_file_first` test set with `easy` difficulty, you can use the following code:

```python
from datasets import load_dataset

dataset = load_dataset("tianyang/repobench-r", "python_cff", split="test_easy")
```

> Note: The `split` argument is optional. If it is not provided, the entire subset (train and test data at both easy and hard difficulty levels) will be loaded.
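For instance, loading a subset without the `split` argument returns a `DatasetDict` keyed by the four splits; a minimal sketch:

```python
from datasets import load_dataset

# Without `split`, all four splits of the chosen subset are loaded at once.
dataset = load_dataset("tianyang/repobench-r", "python_cff")

# DatasetDict with keys train_easy, train_hard, test_easy, test_hard.
for split_name, split in dataset.items():
    print(split_name, len(split))
```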
## Dataset Structure

```json
{
    "repo_name": "repository name of the data point",
    "file_path": "path/to/file",
    "context": [
        "snippet 1",
        "snippet 2",
        // ...
        "snippet k"
    ],
    "import_statement": "all import statements in the file",
    "gold_snippet_index": 2, // index of the gold snippet in the context list (0 to k-1)
    "code": "the code for next-line prediction",
    "next_line": "the next line of the code"
}
```
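As a quick sanity check, a data point can be inspected like any other Hugging Face Datasets example. The sketch below assumes the field names shown in the structure above:

```python
from datasets import load_dataset

dataset = load_dataset("tianyang/repobench-r", "python_cff", split="test_easy")
example = dataset[0]

print(example["repo_name"], example["file_path"])
print("candidate snippets:", len(example["context"]))

# The gold snippet is the context entry that the next line actually depends on.
gold = example["context"][example["gold_snippet_index"]]
print(gold[:200])
print("next line to predict:", example["next_line"])
```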
## Licensing Information

CC BY-NC-ND 4.0

## Citation Information

```bibtex
@misc{liu2023repobench,
      title={RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems},
      author={Tianyang Liu and Canwen Xu and Julian McAuley},
      year={2023},
      eprint={2306.03091},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Contributions
Thanks to [@Leolty](https://github.com/Leolty) for adding this dataset. |