Upload dataset

Files changed:
- README.md +3 -3
- data/test-00000-of-00001.parquet +3 -0

README.md CHANGED
@@ -16,7 +16,7 @@ dataset_info:
   - name: train
     num_bytes: 9744003868
     num_examples: 2119045
-  - name:
+  - name: test
     num_bytes: 3764150
     num_examples: 1780
   download_size: 5366910719
@@ -26,8 +26,8 @@ configs:
   data_files:
   - split: train
     path: data/train-*
-  - split:
-    path: data/
+  - split: test
+    path: data/test-*
 ---

 Ukrainian Wikipedia dump from June 20th, 2024, plus the equivalent articles from the English Wikipedia of the same date (which turned out to contain a similar number of tokens). The evaluation split is reduced to <0.1% of the dataset for faster evaluation and to avoid losing valuable training data.
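The "<0.1%" claim can be sanity-checked directly from the `num_examples` values declared in the metadata above; a minimal sketch:

```python
# Split sizes taken from the README metadata above.
train_examples = 2119045
test_examples = 1780

# Fraction of all examples held out for evaluation.
held_out = test_examples / (train_examples + test_examples)

# The description claims the held-out split is under 0.1% of the dataset.
assert held_out < 0.001
print(f"held-out fraction: {held_out:.5%}")
```

At roughly 0.08% of the examples, the test split is small enough that evaluation stays cheap while almost all data remains available for training.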
data/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2077bdb263da0761853a818fe1014b53c371809a24528356340bc50647ab8143
+size 2042566
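The new parquet file is committed as a Git LFS pointer rather than raw data: three `key value` lines giving the spec version, the SHA-256 of the real file, and its size in bytes. A small sketch that parses a pointer of this form (the pointer text is copied verbatim from the diff above):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a key -> value dict.

    Each non-empty line of a pointer has the form "key value";
    the version line names the spec that defines the format.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:2077bdb263da0761853a818fe1014b53c371809a24528356340bc50647ab8143
size 2042566
"""

info = parse_lfs_pointer(pointer)
print(info["oid"])        # hash of the actual parquet content
print(int(info["size"]))  # 2042566 (bytes)
```

After `git lfs pull` (or a download through the Hub), the file at this path would be the real ~2 MB parquet shard whose SHA-256 matches the `oid` field.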