Commit 42a2906 (parent 2dcb996) by shrivasd: Update README.md

Files changed (1): README.md (+66, -29)

README.md CHANGED
@@ -2,15 +2,67 @@
 license: other
 ---
 
- This version of the dataset is strictly permitted for use exclusively in conjunction with the review process for the paper with Submission Number 13449. Upon completion of the review process, a de-anonymized version of the dataset will be released under a license similar to that of The Stack, which can be found at https://huggingface.co/datasets/bigcode/the-stack.
-
- ## Dataset Format
- The dataset contains 4 different subdataset or configurations in HuggingFace Datasets terminology. Those are `bm25_contexts` `PP_contexts` `randomNN_contexts` and `sources`.
-
- First 3 are data used to train and test Repo fusion and the last one is actual java sourcode files the date was taken from.
-
- The format of the data for firt 3 dataset is as follows:
 ```
 features = datasets.Features({
     'id': datasets.Value('string'),
@@ -30,31 +82,16 @@ features = datasets.Features({
 })
 ```
 
- The format of the `sources` is either as follows if accessed through Datasets.load_dataset:
- ```
- features = datasets.Features({
-     'file': datasets.Value('string'),
-     'content': datasets.Value('string')
- })
- ```
- Or, it can be accessed via file system directly. The format is like this `[<data_set_root>/data/<split_name>/<github_user>/<repo_name>/<path/to/every/java/file/in/the/repo>.java]`
-
- Therea are 3 splits for each configuration `train`, `test`, `validation`
-
- ## Dataset usage
- First, please, clone the dataset locally
- ```
- git clone https://huggingface.co/datasets/RepoFusion/Stack-Repo <local/path/to/manual/data>
- ```
-
- Second, please, load the dataset desired configuration and split:
- ```
- ds = datasets.load_dataset(
-     "RepoFusion/Stack-Repo",
-     name="<configuration_name>",
-     split="<split_name>"
-     data_dir="<local/path/to/manual/data>"
- )
- ```
-
- NOTE: `bm25_contexts` `PP_contexts` `randomNN_contexts` configrations can be loaded directly from the hub without cloning the repo locally. For the `sources` if not clonned beforehand or `data_dir` not specified, `ManualDownloadError` will be raised.
+ # Summary of the Dataset
+
+ ## Description
+
+ Stack-Repo is a dataset of 200 Java repositories from GitHub with permissive licenses and near-deduplicated files that are augmented with three types of repository contexts.
+ - Prompt Proposal (PP) Contexts: These contexts are based on the prompt proposals from the paper [Repository-Level Prompt Generation for Large Language Models of Code](https://arxiv.org/abs/2206.12839).
+ - BM25 Contexts: These contexts are obtained based on BM25 similarity scores.
+ - RandomNN Contexts: These contexts are obtained using the nearest neighbors in the representation space of an embedding model.
+
+ For more details, please check our paper [RepoFusion: Training Code Models to Understand Your Repository]().
+
+ The original Java source files are obtained using a [modified version](https://huggingface.co/datasets/bigcode/the-stack-dedup) of [The Stack](https://huggingface.co/datasets/bigcode/the-stack).
+
+
+ ## Data Splits
+ The dataset consists of three splits: `train`, `validation`, and `test`, comprising 100, 50, and 50 repositories, respectively.
+
+ ## Data Organization
+ Each split contains a separate folder for each repository, where each repository contains all `.java` source code files in the repository in their original directory structure, along with three `.json` files corresponding to the PP, BM25, and RandomNN repo contexts. In HuggingFace Datasets terminology, there are four subdatasets or configurations.
+ - `PP_contexts`: Prompt Proposal repo contexts.
+ - `bm25_contexts`: BM25 repo contexts.
+ - `randomNN_contexts`: RandomNN repo contexts.
+ - `sources`: the actual Java (`.java`) source code files
+
+ # Dataset Usage
+ To clone the dataset locally:
+ ```
+ git clone https://huggingface.co/datasets/RepoFusion/Stack-Repo <local_path>
+ ```
+
+ To load the desired dataset configuration and split:
+ ```
+ ds = datasets.load_dataset(
+     "RepoFusion/Stack-Repo",
+     name="<configuration_name>",
+     split="<split_name>",
+     data_dir="<local_path>"
+ )
+ ```
+
+ NOTE: The repo-context configurations `bm25_contexts`, `PP_contexts`, and `randomNN_contexts` can be loaded directly by specifying the corresponding `<configuration_name>` in the `load_dataset` command listed above, without cloning the repo locally. For `sources`, if the repo was not cloned beforehand or `data_dir` is not specified, a `ManualDownloadError` will be raised.
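The hub-vs-manual-download rule above can be sketched as a small helper. This is illustrative only and not part of the dataset loader; `needs_manual_download` is a hypothetical function name.

```python
# Hypothetical helper: decide whether a Stack-Repo configuration can be
# streamed from the Hub or requires a local clone passed via `data_dir`.
KNOWN_CONFIGS = {"bm25_contexts", "PP_contexts", "randomNN_contexts", "sources"}
HUB_LOADABLE = KNOWN_CONFIGS - {"sources"}  # only `sources` needs a manual clone

def needs_manual_download(configuration_name: str) -> bool:
    """Return True if `data_dir` (a local clone) is required for this config."""
    if configuration_name not in KNOWN_CONFIGS:
        raise ValueError(f"Unknown configuration: {configuration_name}")
    return configuration_name not in HUB_LOADABLE
```

For example, `needs_manual_download("sources")` returns `True`, while the three repo-context configurations return `False`.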
+
+
+ ## Data Format
+ The expected data format of the `.json` files is a list of target holes and corresponding repo contexts. Each entry in a `.json` file corresponds to a target hole and consists of the location of the target hole, the target hole as a string, the surrounding context as a string, and a list of repo contexts as strings. Specifically, each row is a dictionary containing
+ - `id`: hole_id (location of the target hole)
+ - `question`: surrounding context
+ - `target`: target hole
+ - `ctxs`: a list of repo contexts where each item is a dictionary containing
+     - `title`: name of the repo context
+     - `text`: content of the repo context
+
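As a minimal sketch, an entry with the field layout described above might look as follows when parsed with the standard `json` module. The hole location, code strings, and context names here are made up for illustration, not taken from the dataset.

```python
import json

# A made-up example entry following the field layout described above;
# the hole id, code snippets, and context titles are illustrative only.
entry_json = '''
{
  "id": "<github_user>/<repo_name>/src/Main.java_10_4",
  "question": "public int add(int a, int b) {\\n    return ",
  "target": "a + b;",
  "ctxs": [
    {"title": "context_one.java", "text": "import java.util.List;"},
    {"title": "context_two.java", "text": "public static int sub(int a, int b) { return a - b; }"}
  ]
}
'''

entry = json.loads(entry_json)
print(entry["target"])     # the string to be predicted for the hole
print(len(entry["ctxs"]))  # number of repo contexts attached to this hole
```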
+ The actual Java sources can be accessed directly via the file system. The path format is `<data_set_root>/data/<split_name>/<github_user>/<repo_name>/<path/to/every/java/file/in/the/repo>.java`. When accessed through `datasets.load_dataset`, the data fields for `sources` can be specified as below.
+ ```
+ features = datasets.Features({
+     'file': datasets.Value('string'),
+     'content': datasets.Value('string')
+ })
+ ```
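The path layout above can be sketched with `pathlib`. This is illustrative only; `source_path` is a hypothetical helper, and the user/repo names are placeholders.

```python
from pathlib import Path

# Illustrative only: build the on-disk path of one Java file in a local
# clone, following <data_set_root>/data/<split_name>/<github_user>/<repo_name>/...
def source_path(data_set_root, split_name, github_user, repo_name, rel_file):
    return Path(data_set_root) / "data" / split_name / github_user / repo_name / rel_file

p = source_path("Stack-Repo", "train", "some_user", "some_repo", "src/Main.java")
print(p.as_posix())  # Stack-Repo/data/train/some_user/some_repo/src/Main.java
```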
+
+ When accessed through `datasets.load_dataset`, the data fields for the repo contexts can be specified as below.
 ```
 features = datasets.Features({
     'id': datasets.Value('string'),

 })
 ```
 
+ # Additional Information
+
+ ## Dataset Curators
+ - Disha Shrivastava, [email protected]
+ - Denis Kocetkov, [email protected]
+
+ ## Licensing Information
+ Stack-Repo is derived from a modified version of The Stack. The Stack is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.
+ The list of [SPDX license identifiers](https://spdx.org/licenses/) included in the dataset can be found [here](https://huggingface.co/datasets/bigcode/the-stack-dedup/blob/main/licenses.json).
+
+ ## Citation Information