muhammadravi251001
committed on
Upload dataset
- README.md +47 -0
- data/test-00000-of-00001.parquet +3 -0
- data/train-00000-of-00001.parquet +3 -0
- data/validation-00000-of-00001.parquet +3 -0
README.md
CHANGED
@@ -1,4 +1,51 @@
 ---
 license: unknown
+configs:
+- config_name: default
+  data_files:
+  - split: train
+    path: data/train-*
+  - split: validation
+    path: data/validation-*
+  - split: test
+    path: data/test-*
+dataset_info:
+  features:
+  - name: text_ace_Latn
+    dtype: string
+  - name: text_ban_Latn
+    dtype: string
+  - name: text_bbc_Latn
+    dtype: string
+  - name: text_bjn_Latn
+    dtype: string
+  - name: text_bug_Latn
+    dtype: string
+  - name: text_eng_Latn
+    dtype: string
+  - name: text_ind_Latn
+    dtype: string
+  - name: text_jav_Latn
+    dtype: string
+  - name: text_mad_Latn
+    dtype: string
+  - name: text_min_Latn
+    dtype: string
+  - name: text_nij_Latn
+    dtype: string
+  - name: text_sun_Latn
+    dtype: string
+  splits:
+  - name: train
+    num_bytes: 944296
+    num_examples: 500
+  - name: validation
+    num_bytes: 186281
+    num_examples: 100
+  - name: test
+    num_bytes: 758225
+    num_examples: 400
+  download_size: 1250295
+  dataset_size: 1888802
 ---
 I do not hold the copyright to this dataset; I merely restructured it to have the same structure as other datasets (that we are researching) to facilitate future coding and analysis. I refer to this [link](https://huggingface.co/datasets/indonlp/NusaX-MT) for the raw dataset.
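The metadata added to README.md above declares a default config whose train/validation/test splits point at the parquet files under data/. As a minimal sketch of how that config is consumed, the snippet below loads the dataset with the `datasets` library; the repo id is a placeholder, since the dataset's actual name is not shown in this commit.

```python
from datasets import load_dataset

# Placeholder repo id -- the real path under the muhammadravi251001
# namespace is not visible in this commit, so substitute it here.
ds = load_dataset("muhammadravi251001/<dataset-name>")

# The README metadata declares 500 train, 100 validation and 400 test rows,
# each with twelve parallel text_*_Latn columns.
print(ds)
print(ds["train"][0]["text_ind_Latn"])
```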
data/test-00000-of-00001.parquet
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6007327f258875a66a1ad4b741f3794feeea86d908d0262c8782df646bd8d310
+size 493491
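The three lines above are a Git LFS pointer: the parquet data itself lives in LFS storage, and the pointer only records the blob's SHA-256 and byte size. As a rough sketch (assuming the file has already been materialised locally, e.g. via `git lfs pull` or a Hub download), the local copy can be checked against those values:

```python
import hashlib
from pathlib import Path

path = Path("data/test-00000-of-00001.parquet")

# Values copied from the LFS pointer above.
assert path.stat().st_size == 493491
assert hashlib.sha256(path.read_bytes()).hexdigest() == (
    "6007327f258875a66a1ad4b741f3794feeea86d908d0262c8782df646bd8d310"
)
```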
data/train-00000-of-00001.parquet
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b1d9cd02e5c382a559a1160d1f1e5a6aabee0a57c763fabff1f6cca3cc17d935
+size 613969
data/validation-00000-of-00001.parquet
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fbf76dffa4c96113a7facad962a9dde14eb43370ccb3fe685c4ab2a21c148e80
+size 142835
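Because each split is a single parquet file, it can also be read directly instead of going through `load_dataset`. A minimal sketch with pandas (assuming `pyarrow` or `fastparquet` is installed and the LFS files have been pulled):

```python
import pandas as pd

df = pd.read_parquet("data/validation-00000-of-00001.parquet")

# Per the README metadata this split has 100 rows and twelve columns,
# text_ace_Latn through text_sun_Latn.
print(df.shape)
print(df.columns.tolist())
```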