Update README.md
README.md CHANGED
@@ -19,6 +19,14 @@ dataset_info:
     num_examples: 121165423
   download_size: 16613737659
   dataset_size: 22252479269
+language:
+- en
+- ko
+- fr
+- aa
+- hi
+size_categories:
+- 100M<n<1B
 ---
 This dataset is built from the open source data accompanying ["An Open Dataset and Model for Language Identification" (Burchell et al., 2023)](https://arxiv.org/abs/2305.13820)
 
@@ -31,4 +39,6 @@ However, individual datasets within it follow [each of their own licenses.](http
 
 Conversion to huggingface dataset and upload to hub done by [Chris Ha](https://github.com/chris-ha458)
 
+The original authors built the dataset to train LID models for 201 languages. I thought such a dataset could also be used to train a tokenizer for those 201 languages.
+
 This dataset was processed and uploaded using huggingface datasets.
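For reference, the added YAML block is Hub dataset-card metadata: the `language` tags feed the Hub's language filter, and `size_categories: 100M<n<1B` matches the 121,165,423 examples declared in `dataset_info`. A minimal loading sketch with `datasets` (the repo id below is a placeholder, not taken from this commit):

```python
from datasets import load_dataset

# Placeholder repo id; substitute the dataset's actual Hub path.
ds = load_dataset("user/open-lid-dataset", split="train")
print(ds.num_rows)  # expected: 121165423, per the dataset_info metadata
```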
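The conversion and upload credited in the card are not shown in this commit. A minimal sketch of how such a step might look with `datasets`, assuming the upstream OpenLID release is a tab-separated file of text/label pairs (the file name, column names, and repo id are all assumptions, not taken from the commit):

```python
from datasets import Dataset, Features, Value

# Assumed upstream layout: one line per sentence, with the text and an
# ISO 639-3 language label. File and column names are illustrative.
features = Features({"text": Value("string"), "language": Value("string")})
ds = Dataset.from_csv(
    "lid201-data.tsv",
    delimiter="\t",
    column_names=["text", "language"],
    features=features,
)

# Upload to the Hugging Face Hub (placeholder repo id).
ds.push_to_hub("user/open-lid-dataset")
```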
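The tokenizer idea in the added paragraph could be prototyped directly against the uploaded dataset. Below is a sketch with the `tokenizers` library, streaming so the ~22 GB corpus never has to fit in memory; the model choice, vocabulary size, and repo id are assumptions, and whitespace pre-tokenization is a simplification that will not suit all 201 languages:

```python
from datasets import load_dataset
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Stream examples instead of materializing the full dataset on disk.
corpus = load_dataset("user/open-lid-dataset", split="train", streaming=True)

# BPE is one reasonable choice; the vocabulary size is a guess for 201 languages.
tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()  # simplification: poor fit for unspaced scripts
trainer = trainers.BpeTrainer(vocab_size=64000, special_tokens=["[UNK]", "[PAD]"])

# Train from a generator over the assumed "text" column.
tokenizer.train_from_iterator((ex["text"] for ex in corpus), trainer=trainer)
tokenizer.save("lid201-bpe.json")
```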