Madjakul committed f3247f5 (parent: d3b9f73): Upload README.md

Files changed (1): README.md (+16 −171)
README.md CHANGED
@@ -1,128 +1,34 @@
  ---
- pretty_name: HALvest

  configs:
- - config_name: bg
-   data_files: "bg/*.gz"
- - config_name: br
-   data_files: "br/*.gz"
- - config_name: ca
-   data_files: "ca/*.gz"
- - config_name: cs
-   data_files: "cs/*.gz"
- - config_name: da
-   data_files: "da/*.gz"
- - config_name: de
-   data_files: "de/*.gz"
- - config_name: el
-   data_files: "el/*.gz"
  - config_name: en
    data_files: "en/*.gz"
- - config_name: eo
-   data_files: "eo/*.gz"
- - config_name: es
-   data_files: "es/*.gz"
- - config_name: et
-   data_files: "et/*.gz"
- - config_name: eu
-   data_files: "eu/*.gz"
- - config_name: fa
-   data_files: "fa/*.gz"
- - config_name: fi
-   data_files: "fi/*.gz"
  - config_name: fr
    data_files: "fr/*.gz"
- - config_name: gl
-   data_files: "gl/*.gz"
- - config_name: he
-   data_files: "he/*.gz"
- - config_name: hr
-   data_files: "hr/*.gz"
- - config_name: hu
-   data_files: "hu/*.gz"
- - config_name: hy
-   data_files: "hy/*.gz"
- - config_name: id
-   data_files: "id/*.gz"
- - config_name: it
-   data_files: "it/*.gz"
- - config_name: ko
-   data_files: "ko/*.gz"
- - config_name: "no"
-   data_files: "no/*.gz"
- - config_name: pl
-   data_files: "pl/*.gz"
- - config_name: pt
-   data_files: "pt/*.gz"
- - config_name: ro
-   data_files: "ro/*.gz"
- - config_name: ru
-   data_files: "ru/*.gz"
- - config_name: sk
-   data_files: "sk/*.gz"
- - config_name: sl
-   data_files: "sl/*.gz"
- - config_name: sv
-   data_files: "sv/*.gz"
- - config_name: sw
-   data_files: "sw/*.gz"
- - config_name: th
-   data_files: "th/*.gz"
- - config_name: tr
-   data_files: "tr/*.gz"

  language:
- - bg
- - br
- - ca
- - cs
- - da
- - de
- - el
  - en
- - eo
- - es
- - et
- - eu
- - fa
- - fi
  - fr
- - gl
- - he
- - hr
- - hu
- - hy
- - id
- - it
- - ko
- - "no"
- - pl
- - pt
- - ro
- - ru
- - sk
- - sl
- - sv
- - sw
- - th
- - tr

  size_categories:
- - n<1K
- - 1K<n<10K
- - 10K<n<100K
  - 100K<n<1M

  task_categories:
  - text-generation
  - fill-mask

  task_ids:
  - language-modeling
  - masked-language-modeling

  tags:
  - academia
  - research

  annotations_creators:
  - no-annotation
@@ -131,13 +37,13 @@ multilinguality:
  - multilingual

  source_datasets:
- - HALvest-R
  ---


  <div align="center">
- <h1> HALvest </h1>
- <h3> Open Scientific Papers Harvested from HAL </h3>
  </div>

  ---
@@ -158,18 +64,13 @@ You can download the dataset using Hugging Face datasets:
  ```py
  from datasets import load_dataset

- ds = load_dataset("Madjakul/HALvest", "en")
  ```


  ### Details

- Building the dataset is a four-step process: fetching data from HAL, merging it, enriching it, and filtering it.
-
- 1. We first query [HAL's API](https://api.archives-ouvertes.fr/docs) to gather open research papers and parse the responses, effectively sorting papers by language. We then download the PDFs of the fetched papers.
- 2. Using [GROBID](https://github.com/kermitt2/grobid), we convert each PDF to the `xml-tei` format in order to obtain structured data. We then convert each `xml-tei` file to a `txt` format before concatenating it with the paper's metadata.
- 3. We compute statistics about each document.
- 4. We filter the data based on simple ratios to purge badly encoded documents.
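A minimal sketch of the step-4 filtering, assuming character-level ratios. The actual ratios and thresholds HALvest uses are not stated here, so `alpha_ratio`, `replacement_char_ratio`, and the `0.7`/`0.01` cutoffs below are purely illustrative:

```python
# Illustrative sketch only: filter badly encoded documents with simple
# character ratios. The concrete ratios/thresholds in HALvest may differ.

def alpha_ratio(text: str) -> float:
    """Fraction of characters that are alphabetic."""
    if not text:
        return 0.0
    return sum(c.isalpha() for c in text) / len(text)

def replacement_char_ratio(text: str) -> float:
    """Fraction of U+FFFD replacement characters, a telltale sign of
    a PDF-to-text conversion gone wrong."""
    if not text:
        return 0.0
    return text.count("\ufffd") / len(text)

def keep_document(text: str,
                  min_alpha: float = 0.7,
                  max_replacement: float = 0.01) -> bool:
    """Keep a document only if its character ratios look sane."""
    return (alpha_ratio(text) >= min_alpha
            and replacement_char_ratio(text) <= max_replacement)

docs = [
    "A normal research abstract about physics.",
    "\ufffd\ufffd\ufffd 0x00 \ufffd garbled \ufffd\ufffd",
]
kept = [d for d in docs if keep_document(d)]  # only the first doc survives
```

The same pattern extends to any cheap per-document statistic computed in step 3 (digit ratio, mean word length, and so on).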


  ### Languages
@@ -178,61 +79,12 @@ ISO-639|Language|# Documents|# mT5 Tokens
  -------|--------|-----------|--------
  en|English|442,892|7,606,895,258
  fr|French|193,437|8,728,722,255
- es|Spanish|2,930|68,076,878
- it|Italian|1,172|48,747,986
- pt|Portuguese|934|32,918,832
- de|German|646|11,699,417
- ru|Russian|245|5,763,532
- eu|Basque|112|2,297,460
- pl|Polish|43|987,878
- el|Greek|42|1,680,696
- ro|Romanian|39|1,298,901
- ca|Catalan|28|975,078
- da|Danish|26|961,895
- br|Breton|24|998,088
- ko|Korean|17|226,268
- tr|Turkish|17|149,718
- hu|Hungarian|14|577,568
- eo|Esperanto|14|105,286
- fa|Persian|10|190,929
- hy|Armenian|10|127,988
- cs|Czech|9|712,263
- id|Indonesian|9|53,075
- bg|Bulgarian|8|180,146
- he|Hebrew|8|61,283
- hr|Croatian|8|40,621
- et|Estonian|7|20,405
- sv|Swedish|6|270,642
- no|Norwegian|6|62,767
- fi|Finnish|3|17,583
- sw|Swahili|2|73,921
- gl|Galician|2|29,688
- th|Thai|1|70,909
- sl|Slovenian|1|22,844
- sk|Slovak|1|12,997
-
-
- ### Domains
-
- Domain|Code|# Documents|# mT5 Tokens
- ------|----|-----------|------------
- Humanities and Social Sciences|shs|152,818|5,487,738,344
- Computer Science|info|143,229|2,436,890,715
- Life Sciences|sdv|111,038|3,008,633,879
- Engineering Sciences|spi|99,393|2,155,602,249
- Physics|phys|63,557|1,435,905,328
- Mathematics|math|54,393|1,359,277,656
- Chemical Science|chim|38,500|857,617,219
- Environmental Science|sde|30,827|566,560,266
- Sciences of the Universe|sdu|22,917|654,909,131
- Statistics|stat|20,571|1,449,842,318
- Cognitive Science|scco|11,584|222,832,732
- Quantitative Finance|qfin|3,290|64,970,285
- Nonlinear Sciences|nlin|1,908|29,296,684
-
- You can browse every domain and sub-domain here: https://hal.science/browse/domain.

  ## Considerations for Using the Data

  The corpus is extracted from [HAL's open archive](https://hal.science/), which distributes scientific publications following open access principles. The corpus is made up of both Creative Commons-licensed and copyrighted documents (whose distribution on HAL is authorized by the publisher). This must be considered prior to using this dataset for any purpose other than training deep learning models, data mining, etc. We do not own any of the text from which this data has been extracted.
@@ -241,14 +93,7 @@ The corpus is extracted from the [HAL's open archive](https://hal.science/) whic
  ## Citation

  ```bib
- @software{almanach_halvest_2024,
-   author = {Kulumba, Francis and Antoun, Wissam and Vimont, Guillaume and Romary, Laurent},
-   title = {HALvest: Open Scientific Papers Harvested from HAL},
-   month = apr,
-   year = 2024,
-   organization = {Almanach},
-   url = {https://github.com/Madjakul/HALvesting}
- }
  ```

 
  ---
+ pretty_name: HALvest-Geometric
+
+ license: cc-by-4.0

  configs:
  - config_name: en
    data_files: "en/*.gz"
  - config_name: fr
    data_files: "fr/*.gz"

  language:
  - en
  - fr

  size_categories:
  - 100K<n<1M

  task_categories:
  - text-generation
  - fill-mask
+
  task_ids:
  - language-modeling
  - masked-language-modeling
+ - graph-representation-learning

  tags:
  - academia
  - research
+ - graph

  annotations_creators:
  - no-annotation

  multilinguality:
  - multilingual

  source_datasets:
+ - HALvest
  ---


  <div align="center">
+ <h1> HALvest-Geometric </h1>
+ <h3> Citation Network of Open Scientific Papers Harvested from HAL </h3>
  </div>

  ---
64
  ```py
65
  from datasets import load_dataset
66
 
67
+ ds = load_dataset("Madjakul/HALvest-Geometric", "en")
68
  ```
69
 
70
 
71
  ### Details
72
 
73
+ TODO
 
 
 
 
 
74
 
75
 
76
  ### Languages
 
  ISO-639|Language|# Documents|# mT5 Tokens
  -------|--------|-----------|--------
  en|English|442,892|7,606,895,258
  fr|French|193,437|8,728,722,255


+ ### Graph
+
+ TODO
+
  ## Considerations for Using the Data

  The corpus is extracted from [HAL's open archive](https://hal.science/), which distributes scientific publications following open access principles. The corpus is made up of both Creative Commons-licensed and copyrighted documents (whose distribution on HAL is authorized by the publisher). This must be considered prior to using this dataset for any purpose other than training deep learning models, data mining, etc. We do not own any of the text from which this data has been extracted.
 
  ## Citation

  ```bib
+ TODO
  ```
