mvasiliniuc committed
Commit 88a1963 · 1 Parent(s): 51070a2

Update README.md

Files changed (1): README.md +177 -0
README.md CHANGED

---
annotations_creators:
- crowdsourced
license: other
language_creators:
- crowdsourced
language:
- code
task_categories:
- text-generation
tags:
- code
- kotlin
- native Android development
- curated
size_categories:
- 100K<n<1M
source_datasets: []
pretty_name: iva-kotlin-codeint-clean
task_ids:
- language-modeling
---

# IVA Kotlin GitHub Code Dataset

## Dataset Description

This is the curated IVA Kotlin dataset extracted from GitHub.
It contains curated Kotlin files gathered for the purpose of training a code generation model.

The initial, uncurated extraction from GitHub yielded 383,380 Kotlin code files totaling ~542 MB of data; after curation, this dataset contains 201,843 files totaling ~261 MB (see the statistics below).
The [uncurated](https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint) dataset was created from the public GitHub dataset on Google BigQuery.

### How to use it

To download the full dataset:

```python
from datasets import load_dataset
dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean', split='train')
```
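
For quick exploration without downloading everything, the standard `datasets` streaming mode should also work (a minimal sketch, assuming the same repository and split name as above):

```python
from datasets import load_dataset

# Stream samples lazily instead of downloading the whole dataset first.
streamed = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean', split='train', streaming=True)
for i, sample in enumerate(streamed):
    print(sample['repo_name'], sample['path'])
    if i == 4:  # look at the first five files only
        break
```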

Inspecting a single sample shows the fields available for each file:

```python
from datasets import load_dataset
dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean', split='train')
print(dataset[723])

# OUTPUT:
{
  "repo_name":"oboenikui/UnivCoopFeliCaReader",
  "path":"app/src/main/java/com/oboenikui/campusfelica/ScannerActivity.kt",
  "copies":"1",
  "size":"5635",
  "content":"....public override fun onPause() {\n if (this.isFinishing) {\n adapter.disableForegroundDispatch(this)\n }\n super.onPause()\n }\n\n override ...}\n",
  "license":"apache-2.0",
  "hash":"e88cfd99346cbef640fc540aac3bf20b",
  "line_mean":37.8620689655,
  "line_max":199,
  "alpha_frac":0.5724933452,
  "ratio":5.0222816399,
  "autogenerated":false,
  "config_or_test":false,
  "has_no_keywords":false,
  "has_few_assignments":false
}
```

## Data Structure

### Data Fields

|Field|Type|Description|
|---|---|---|
|repo_name|string|Name of the GitHub repository.|
|path|string|Path of the file within the GitHub repository.|
|copies|string|Number of occurrences of the file in the dataset.|
|content|string|Content of the source file.|
|size|string|Size of the source file in bytes.|
|license|string|License of the GitHub repository.|
|hash|string|Hash of the content field.|
|line_mean|number|Mean line length of the content.|
|line_max|number|Maximum line length of the content.|
|alpha_frac|number|Fraction of alphanumeric characters in the content.|
|ratio|number|Character/token ratio of the file after tokenization.|
|autogenerated|boolean|True if the content is autogenerated, based on keywords in the first few lines of the file.|
|config_or_test|boolean|True if the content is a configuration file or a unit test.|
|has_no_keywords|boolean|True if the file contains none of the Kotlin keywords used for filtering.|
|has_few_assignments|boolean|True if the file uses the symbol '=' fewer than a minimum number of times (3, per the curation process below).|

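These metadata fields can be used to slice the data before training. The sketch below filters for permissively licensed, moderately sized files; the license list and size threshold are illustrative choices, not recommendations:

```python
from datasets import load_dataset

dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean', split='train')

PERMISSIVE_LICENSES = {'mit', 'apache-2.0', 'bsd-2-clause', 'bsd-3-clause', 'isc', 'unlicense'}

# `size` is stored as a string, so convert it before comparing.
subset = dataset.filter(
    lambda sample: sample['license'] in PERMISSIVE_LICENSES and int(sample['size']) < 50_000
)
print(len(subset))
```
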
### Instance

```json
{
  "repo_name":"oboenikui/UnivCoopFeliCaReader",
  "path":"app/src/main/java/com/oboenikui/campusfelica/ScannerActivity.kt",
  "copies":"1",
  "size":"5635",
  "content":"....",
  "license":"apache-2.0",
  "hash":"e88cfd99346cbef640fc540aac3bf20b",
  "line_mean":37.8620689655,
  "line_max":199,
  "alpha_frac":0.5724933452,
  "ratio":5.0222816399,
  "autogenerated":false,
  "config_or_test":false,
  "has_no_keywords":false,
  "has_few_assignments":false
}
```

## Languages

The dataset contains only Kotlin files.

```json
{
  "Kotlin": [".kt"]
}
```

## Licenses

Each entry in the dataset records the license of its source repository. The licenses involved and their number of occurrences are listed below.

```json
{
  "agpl-3.0":4052,
  "apache-2.0":114641,
  "artistic-2.0":159,
  "bsd-2-clause":474,
  "bsd-3-clause":4571,
  "cc0-1.0":198,
  "epl-1.0":991,
  "gpl-2.0":5625,
  "gpl-3.0":25102,
  "isc":436,
  "lgpl-2.1":146,
  "lgpl-3.0":3406,
  "mit":39399,
  "mpl-2.0":1819,
  "unlicense":824
}
```

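These counts can be reproduced directly from the `license` column (a quick sketch using the standard library):

```python
from collections import Counter
from datasets import load_dataset

dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean', split='train')

# Tally how many files carry each license.
license_counts = Counter(dataset['license'])
print(dict(sorted(license_counts.items())))
```
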
## Dataset Statistics

```json
{
  "Total size": "~261 MB",
  "Number of files": 201843,
  "Number of files under 500 bytes": 3697,
  "Average file size in bytes": 5205
}
```

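Similar figures can be recomputed from the `size` field; note that sizes are stored as strings and need to be converted first (a brief sketch):

```python
from datasets import load_dataset

dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean', split='train')

# `size` is a string field, so cast to int before aggregating.
sizes = [int(s) for s in dataset['size']]
print('Number of files:', len(sizes))
print('Number of files under 500 bytes:', sum(s < 500 for s in sizes))
print('Average file size in bytes:', sum(sizes) // len(sizes))
```
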
## Curation Process

* Removal of duplicate files based on file hash.
* Removal of file templates, i.e. files containing any of: `${PACKAGE_NAME}`, `${NAME}`, `${VIEWHOLDER_CLASS}`, `${ITEM_CLASS}`.
* Removal of files containing any of the following words in the first 10 lines: `generated`, `auto-generated`, `autogenerated`, `automatically generated`.
* Removal, with a probability of 0.7, of files containing any of the following words in the first 10 lines: `test`, `unit test`, `config`, `XCTest`, `JUnit`.
* Removal of files whose fraction of alphanumeric characters is below 0.3.
* Removal of near-duplicates based on MinHash and Jaccard similarity.
* Removal of files with a mean line length above 100.
* Removal, with a probability of 0.7, of files that mention none of the following keywords: `"fun "`, `"val "`, `"var "`, `"if "`, `"else "`, `"while "`, `"for "`, `"return "`, `"class "`, `"data "`, `"struct "`, `"interface "`, `"when "`, `"catch "`.
* Removal of files that use the assignment operator `=` fewer than 3 times.
* Removal of files whose ratio between the number of characters and the number of tokens after tokenization is lower than 1.5.

The curation process is derived from the one used in the CodeParrot project: https://huggingface.co/codeparrot. Two of the simpler heuristics are sketched below.

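The exact curation scripts are not included in this card, but the keyword and assignment heuristics listed above can be illustrated roughly as follows (a sketch; the function names are hypothetical, while the keyword list and threshold come from the list above):

```python
KOTLIN_KEYWORDS = [
    "fun ", "val ", "var ", "if ", "else ", "while ", "for ",
    "return ", "class ", "data ", "struct ", "interface ", "when ", "catch ",
]

def has_no_keywords(content: str) -> bool:
    """True if the file mentions none of the keywords used for filtering."""
    return not any(keyword in content for keyword in KOTLIN_KEYWORDS)

def has_few_assignments(content: str, minimum: int = 3) -> bool:
    """True if the file uses the '=' symbol fewer than `minimum` times."""
    return content.count("=") < minimum
```
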
## Data Splits

The dataset contains only a train split, which is further separated into train and valid splits available here:

* Clean Version Train: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean-train
* Clean Version Valid: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean-valid

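Assuming the two repositories follow the same layout as the main dataset, they can be loaded in the usual way (a minimal sketch; print the returned objects to see which split names each repository exposes):

```python
from datasets import load_dataset

# Load the pre-split train and valid versions from their own repositories.
train_data = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean-train')
valid_data = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean-valid')
print(train_data)
print(valid_data)
```
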
# Considerations for Using the Data

The dataset consists of source code from a wide range of repositories.
As such, it can potentially include harmful or biased code as well as sensitive information such as passwords or usernames.