---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
pretty_name: Wikipedia
paperswithcode_id: null
license:
- cc-by-sa-3.0
- gfdl
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
source_datasets:
- original
multilinguality:
- multilingual
size_categories:
- n<1K
- 1K<n<10K
- 10K<n<100K
- 100K<n<1M
- 1M<n<10M
language:
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hr
- hu
- it
- lt
- lv
- mt
- nl
- pl
- pt
- ro
- sk
- sl
- sv
---

# Dataset Card for Wikipedia

This repository is a thin wrapper around [olm/wikipedia](https://huggingface.co/datasets/olm/wikipedia) that simply concatenates the data from the EU languages into a single dataset.
Please refer to that dataset for the complete data card.

The EU languages we include are:
   - bg
   - cs
   - da
   - de
   - el
   - en
   - es
   - et
   - fi
   - fr
   - ga
   - hr
   - hu
   - it
   - lt
   - lv
   - mt
   - nl
   - pl
   - pt
   - ro
   - sk
   - sl
   - sv
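
For reference, the concatenation over these languages can be reproduced directly from `olm/wikipedia` along the following lines. This is a rough sketch rather than the exact build script: it assumes `olm/wikipedia` accepts `language` and `date` arguments with a single `train` split, and it requires the dependencies listed below to be installed first.

```python
from datasets import load_dataset, concatenate_datasets

EU_LANGUAGES = [
    "bg", "cs", "da", "de", "el", "en", "es", "et", "fi", "fr", "ga", "hr",
    "hu", "it", "lt", "lv", "mt", "nl", "pl", "pt", "ro", "sk", "sl", "sv",
]

# Load each per-language Wikipedia dump and concatenate them into one dataset.
per_language = [
    load_dataset("olm/wikipedia", language=lang, date="20221101", split="train")
    for lang in EU_LANGUAGES
]
eu_wikipedia = concatenate_datasets(per_language)
```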


As with `olm/wikipedia`, you will need to install a few dependencies:

```bash
pip install mwparserfromhell==0.6.4 multiprocess==0.70.13
```

```python
from datasets import load_dataset

# Loads (and processes, on first use) the concatenated EU-language Wikipedias
# from the 2022-11-01 dump.
ds = load_dataset("dlwh/eu_wikipedias", date="20221101")
```
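
Each returned example should follow the standard Wikipedia schema used by `olm/wikipedia` (fields such as `id`, `url`, `title`, and `text`); this is an assumption worth checking against the loaded features, for example:

```python
from datasets import load_dataset

# Assuming a single "train" split, as Wikipedia dumps usually have.
train = load_dataset("dlwh/eu_wikipedias", date="20221101", split="train")

print(train.features)          # expected fields: id, url, title, text
print(train[0]["title"])       # title of the first article
print(train[0]["text"][:200])  # first 200 characters of its body
```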

The text is licensed under CC-BY-SA 3.0 and the GFDL, as with the upstream Wikipedia dumps; please refer to the original [olm/wikipedia](https://huggingface.co/datasets/olm/wikipedia) for the complete data card, including collection and processing details.