---
pretty_name: HALvest
configs:
- config_name: bg
data_files: "bg/*.gz"
- config_name: br
data_files: "br/*.gz"
- config_name: ca
data_files: "ca/*.gz"
- config_name: cs
data_files: "cs/*.gz"
- config_name: da
data_files: "da/*.gz"
- config_name: de
data_files: "de/*.gz"
- config_name: el
data_files: "el/*.gz"
- config_name: en
data_files: "en/*.gz"
- config_name: eo
data_files: "eo/*.gz"
- config_name: es
data_files: "es/*.gz"
- config_name: et
data_files: "et/*.gz"
- config_name: eu
data_files: "eu/*.gz"
- config_name: fa
data_files: "fa/*.gz"
- config_name: fi
data_files: "fi/*.gz"
- config_name: fr
data_files: "fr/*.gz"
- config_name: gl
data_files: "gl/*.gz"
- config_name: he
data_files: "he/*.gz"
- config_name: hr
data_files: "hr/*.gz"
- config_name: hu
data_files: "hu/*.gz"
- config_name: hy
data_files: "hy/*.gz"
- config_name: id
data_files: "id/*.gz"
- config_name: it
data_files: "it/*.gz"
- config_name: ko
data_files: "ko/*.gz"
- config_name: "no"
data_files: "no/*.gz"
- config_name: pl
data_files: "pl/*.gz"
- config_name: pt
data_files: "pt/*.gz"
- config_name: ro
data_files: "ro/*.gz"
- config_name: ru
data_files: "ru/*.gz"
- config_name: sk
data_files: "sk/*.gz"
- config_name: sl
data_files: "sl/*.gz"
- config_name: sv
data_files: "sv/*.gz"
- config_name: sw
data_files: "sw/*.gz"
- config_name: th
data_files: "th/*.gz"
- config_name: tr
data_files: "tr/*.gz"
language:
- bg
- br
- ca
- cs
- da
- de
- el
- en
- eo
- es
- et
- eu
- fa
- fi
- fr
- gl
- he
- hr
- hu
- hy
- id
- it
- ko
- "no"
- pl
- pt
- ro
- ru
- sk
- sl
- sv
- sw
- th
- tr
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
tags:
- academia
- research
annotations_creators:
- no-annotation
multilinguality:
- multilingual
source_datasets:
- HALvest-R
---
<div align="center">
<h1> HALvest </h1>
<h3> Open Scientific Papers Harvested from HAL </h3>
</div>
---
## Dataset Description
- **Repository:** [GitHub](https://github.com/Madjakul/HALvesting/tree/main)
## Dataset Summary
### Overview
This dataset comprises the full text of open papers found on [Hyper Articles en Ligne (HAL)](https://hal.science/). Our dump is mostly English/French but gathers papers written in 34 languages across 13 domains.
You can download the dataset using the Hugging Face `datasets` library:
```py
from datasets import load_dataset
ds = load_dataset("Madjakul/HALvest", "en")
```
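The larger configurations such as `en` and `fr` are sizable, so you may prefer not to download a full split up front. A minimal sketch using the standard `datasets` streaming mode:

```py
from datasets import load_dataset

# Stream the English configuration instead of downloading it in full.
ds = load_dataset("Madjakul/HALvest", "en", streaming=True)

# Inspect the first document lazily.
first_doc = next(iter(ds["train"]))
print(first_doc.keys())
```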
### Details
Building the dataset is a four-step process: data fetching from HAL, data merging, data enriching, and data filtering.
1. We first query [HAL's API](https://api.archives-ouvertes.fr/docs) to gather open research papers and parse the responses, effectively sorting papers by language. Then, we download the PDFs of the fetched papers.
2. Using [GROBID](https://github.com/kermitt2/grobid), we convert each PDF to an `xml-tei` format in order to have structured data. We convert each `xml-tei` file to a `txt` format before concatenating it with the paper's metadata.
3. We compute some statistics about each document.
4. We filter the data based on simple ratios to purge badly encoded documents; an illustrative sketch follows this list.
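The exact statistics and thresholds are not spelled out here, so the following is only an illustrative sketch of step 4: a character-level ratio filter of the kind described. The `alpha_ratio` helper, the `0.7` cutoff, and the gzipped JSON Lines layout with a `"text"` field are assumptions for illustration, not the authors' actual heuristics.

```py
import gzip
import json

def alpha_ratio(text: str) -> float:
    """Fraction of alphabetic characters; low values hint at bad encoding."""
    if not text:
        return 0.0
    return sum(c.isalpha() for c in text) / len(text)

def filter_documents(path: str, threshold: float = 0.7):
    """Yield documents passing the ratio filter (threshold is hypothetical)."""
    # Assumes shards are gzipped JSON Lines with a "text" field.
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            doc = json.loads(line)
            if alpha_ratio(doc.get("text", "")) >= threshold:
                yield doc
```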
### Languages
ISO-639|Language|# Documents|# mT5 Tokens
-------|--------|-----------|--------
en|English|442,892|7,606,895,258
fr|French|193,437|8,728,722,255
es|Spanish|2,930|68,076,878
it|Italian|1,172|48,747,986
pt|Portuguese|934|32,918,832
de|German|646|11,699,417
ru|Russian|245|5,763,532
eu|Basque|112|2,297,460
pl|Polish|43|987,878
el|Greek|42|1,680,696
ro|Romanian|39|1,298,901
ca|Catalan|28|975,078
da|Danish|26|961,895
br|Breton|24|998,088
ko|Korean|17|226,268
tr|Turkish|17|149,718
hu|Hungarian|14|577,568
eo|Esperanto|14|105,286
fa|Persian|10|190,929
hy|Armenian|10|127,988
cs|Czech|9|712,263
bg|Bulgarian|8|180,146
id|Indonesian|9|53,075
he|Hebrew|8|61,283
hr|Croatian|8|40,621
et|Estonian|7|20,405
sv|Swedish|6|270,642
no|Norwegian|6|62,767
fi|Finnish|3|17,583
sw|Swahili|2|73,921
gl|Galician|2|29,688
th|Thai|1|70,909
sl|Slovenian|1|22,844
sk|Slovak|1|12,997
### Domains
Domain|Code|# Documents|# mT5 Tokens
------|----|-----------|------------
Humanities and Social Sciences|shs|152,818|5,487,738,344
Computer Science|info|143,229|2,436,890,715
Life Sciences|sdv|111,038|3,008,633,879
Engineering Sciences|spi|99,393|2,155,602,249
Physics|phys|63,557|1,435,905,328
Mathematics|math|54,393|1,359,277,656
Chemical Science|chim|38,500|857,617,219
Environmental Science|sde|30,827|566,560,266
Sciences of the Universe|sdu|22,917|654,909,131
Statistics|stat|20,571|1,449,842,318
Cognitive science|scco|11,584|222,832,732
Quantitative Finance|qfin|3,290|64,970,285
Nonlinear Sciences|nlin|1,908|29,296,684
You can browse through all domains and sub-domains here: https://hal.science/browse/domain.
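If you want to restrict a language configuration to a single domain, you can filter on the document metadata. This is a minimal sketch; it assumes each record carries a `domain` field holding codes such as `math` (the exact field name and format are not documented here, so check `ds["train"].features` first):

```py
from datasets import load_dataset

ds = load_dataset("Madjakul/HALvest", "en")

# Keep only mathematics papers; the "domain" field name is an assumption.
math_ds = ds["train"].filter(lambda doc: doc["domain"] == "math")
print(len(math_ds))
```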
## Considerations for Using the Data
The corpus is extracted from [HAL's open archive](https://hal.science/), which distributes scientific publications following open access principles. The corpus is made up of both Creative Commons licensed documents and copyrighted documents (whose distribution on HAL was authorized by the publisher). This must be considered prior to using this dataset for any purpose other than training deep learning models, data mining, etc. We do not own any of the text from which this data has been extracted.
## Citation
```bib
@software{almanach_halvest_2024,
  author  = {Kulumba, Francis and Antoun, Wissam and Vimont, Guillaume and Romary, Laurent},
  title   = {HALvest: Open Scientific Papers Harvested from HAL.},
  month   = {April},
  year    = {2024},
  company = {Almanach},
  url     = {https://github.com/Madjakul/HALvesting}
}
```
## Dataset Copyright
The license terms for HALvest strictly follow those of HAL. Please refer to the license below when using this dataset.
- [HAL license](https://doc.archives-ouvertes.fr/en/legal-aspects/)