update
Browse files

- README.md +6 -6
- data/id_panl_bppt.jsonl +3 -0
- data/menyo20k_mt.jsonl +3 -0
- dataset_details.md +101 -0
- docs/picture/id_panl_bppt_text_length.jpg +3 -0
- docs/picture/menyo20k_mt_text_length.jpg +3 -0
- examples/make_subset_details.py +1 -1
- examples/preprocess/preprocess_id_panl_bppt.py +85 -0
- examples/preprocess/preprocess_igbo.py +85 -0
- examples/preprocess/preprocess_menyo20k_mt.py +88 -0
- examples/preprocess/preprocess_para_pat.py +96 -0
- examples/preprocess/preprocess_pib.py +99 -0
- examples/preprocess/preprocess_poleval2019_mt.py +89 -0
- language_identification.py +6 -0
README.md
CHANGED
@@ -48,13 +48,11 @@ Tips:
| giga_fren | | | | [giga_fren](https://huggingface.co/datasets/giga_fren) |
| hind_encorp | [HindEnCorp](https://aclanthology.org/L14-1643/) | TRAIN: 445071 | HindEnCorp parallel texts (sentence-aligned) come from the following sources: Tides, which contains 50K sentence pairs taken mainly from news articles. The data was originally collected for the DARPA-TIDES surprise-language contest in 2002, later refined at IIIT Hyderabad, and provided for the NLP Tools Contest at ICON 2008 (Venkatapathy, 2008). | [hind_encorp](https://huggingface.co/datasets/hind_encorp) |
| hrenwac_para | | TRAIN: 191946 | The hrenWaC corpus version 2.0 consists of parallel Croatian-English texts crawled from the Croatian .hr top-level domain. | [hrenwac_para](https://huggingface.co/datasets/hrenwac_para) |
-| id_panl_bppt | |
-| igbo | [Igbo-English Machine Translation](https://arxiv.org/abs/2004.00648v1) |
-| menyo20k_mt | [menyo20k_mt](https://arxiv.org/abs/2103.08647v3) |
-| multi_para_crawl | [ParaCrawl](https://aclanthology.org/2020.acl-main.417/); [paracrawl.eu](http://paracrawl.eu); [MultiParaCrawl](https://opus.nlpl.eu/MultiParaCrawl/corpus/version/MultiParaCrawl) | We report on methods to create the largest publicly available parallel corpora by crawling the web, using open-source software. | | [multi_para_crawl](https://huggingface.co/datasets/multi_para_crawl) |
-| para_crawl | [ParaCrawl](https://aclanthology.org/2020.acl-main.417.pdf) | Sample count | Web-scale parallel corpora for the official European languages. | [para_crawl](https://huggingface.co/datasets/para_crawl) |
+| id_panl_bppt | | TRAIN: 47916 | A parallel text corpus for a multi-domain translation system, built by BPPT (the Indonesian Agency for the Assessment and Application of Technology) for the PAN Localization Project, a regional initiative to develop local-language computing capacity in Asia. The dataset contains roughly 24K sentences divided over 4 topics (economy, international affairs, science and technology, and sport). | [id_panl_bppt](https://huggingface.co/datasets/id_panl_bppt) |
+| igbo | [Igbo-English Machine Translation](https://arxiv.org/abs/2004.00648v1) | | In this work we discuss the effort to build a standard machine translation benchmark dataset for Igbo, one of the three major languages in Nigeria. | [igbo_english_machine_translation](https://huggingface.co/datasets/igbo_english_machine_translation) |
+| menyo20k_mt | [menyo20k_mt](https://arxiv.org/abs/2103.08647v3) | TRAIN: 19899, VALID: 6655, TEST: 13148 | MENYO-20k is a multi-domain parallel dataset whose texts come from news articles, TED talks, movie transcripts, radio transcripts, science and technology texts, and other short articles curated from the web and by professional translators. | [menyo20k_mt](https://huggingface.co/datasets/menyo20k_mt) |
| para_pat | [ParaPat](https://aclanthology.org/2020.lrec-1.465.pdf); [Homepage](https://figshare.com/articles/dataset/ParaPat_The_Multi-Million_Sentences_Parallel_Corpus_of_Patents_Abstracts/12627632) | Sample count | ParaPat: The Multi-Million Sentences Parallel Corpus of Patents Abstracts. | [para_pat](https://huggingface.co/datasets/para_pat) |
-| pib | [CVIT-PIB](https://arxiv.org/abs/2008.04860) |
+| pib | [CVIT-PIB](https://arxiv.org/abs/2008.04860) | | The dataset is a large-scale sentence-aligned corpus in 11 Indian languages; the CVIT-PIB corpus is the largest multilingual corpus available for Indian languages. | [pib](https://huggingface.co/datasets/pib) |
| poleval2019_mt | | Sample count | PolEval is a SemEval-inspired evaluation campaign for natural language processing tools for Polish. | [poleval2019_mt](https://huggingface.co/datasets/poleval2019_mt) |

@@ -68,7 +66,9 @@ https://opus.nlpl.eu/
| ecb | [ECB](https://opus.nlpl.eu/ECB/corpus/version/ECB); | Sample count | | [ecb](https://huggingface.co/datasets/ecb) |
| emea | [EMEA](https://opus.nlpl.eu/EMEA/corpus/version/EMEA); | Sample count | | [emea](https://huggingface.co/datasets/emea) |
| kde4 | [KDE4](https://opus.nlpl.eu/KDE4/corpus/version/KDE4); [apps.kde.org](https://apps.kde.org/zh-cn/); [opus.nlpl.eu](https://opus.nlpl.eu/) | Sample count | | [kde4](https://huggingface.co/datasets/kde4) |
+| multi_para_crawl | [ParaCrawl](https://aclanthology.org/2020.acl-main.417/); [paracrawl.eu](http://paracrawl.eu); [MultiParaCrawl](https://opus.nlpl.eu/MultiParaCrawl/corpus/version/MultiParaCrawl) | We report on methods to create the largest publicly available parallel corpora by crawling the web, using open-source software. | | [multi_para_crawl](https://huggingface.co/datasets/multi_para_crawl) |
| open_subtitles | [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles/corpus/version/OpenSubtitles); [L16-1147.pdf](https://aclanthology.org/L16-1147.pdf) | Sample count | We present a new major release of the OpenSubtitles collection of parallel corpora. It is compiled from a large database of movie and TV subtitles and comprises a total of 1689 bitexts covering 2.6 billion sentences across 60 languages. The release also incorporates a number of enhancements in subtitle preprocessing and alignment, such as automatic correction of OCR errors and the use of metadata to estimate the quality of each subtitle and score subtitle pairs. | [open_subtitles](https://huggingface.co/datasets/open_subtitles) |
+| para_crawl | [ParaCrawl](https://opus.nlpl.eu/ParaCrawl/corpus/version/ParaCrawl); [ParaCrawl](https://aclanthology.org/2020.acl-main.417.pdf) | Sample count | Web-scale parallel corpora for the official European languages. | [para_crawl](https://huggingface.co/datasets/para_crawl) |
| php | [PHP](https://opus.nlpl.eu/PHP/corpus/version/PHP) | Sample count | A parallel corpus originally extracted from http://se.php.net/download-docs.php. The corpus is rather noisy. | [php](https://huggingface.co/datasets/php) |

data/id_panl_bppt.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:507a14ffce6d0a9f8c5a8c3d6d3d6bd1828fe16177e1d4c7d157f7e3fbb1a6e0
+size 10734731
data/menyo20k_mt.jsonl
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:382a1e4d4c721cb225b595c5581a729cffee9d6146c9e3209289fe7c840a148b
+size 8350838
dataset_details.md
CHANGED
@@ -676,6 +676,56 @@ hr: 95844
![hrenwac_para_text_length.jpg](docs/picture/hrenwac_para_text_length.jpg)


+#### id_panl_bppt
+The following statistics are for the train split.
+
+```text
+Samples per language:
+en: 23976
+id: 23940
+```
+
+Sample examples:
+
+| Dataset | Language | Sample |
+| :---: | :---: | :---: |
+| id_panl_bppt | en | Minister of Finance Sri Mulyani Indrawati said that a sharp correction of the composite inde x by up to 4 pct in Wedenesday?s trading was a mere temporary effect of regional factors like decline in plantation commodity prices and the financial crisis in Thailand. |
+| id_panl_bppt | en | In a press briefing held at the ministry here on Wedenesday evening, Minister Sri Mulyani flanked by President Director of the Jakarta Stock Exchange JSX Erry Firmansyah said that some of the Indonesian economic factors had improved, instead the inflation factor of foodstuffs will soon dissappear which is confirmed by rice prices in all the provinces. |
+| id_panl_bppt | en | Sri Mulayani showed other factors, among others, the rupiah currency tended to strengthen, with a positive impact on the inflation. |
+| id_panl_bppt | id | Menteri Keuangan Sri Mulyani mengatakan koreksi tajam pada Indeks Harga Saham Gabungan IHSG hingga sekitar 4 persen dalam perdagangan Rabu 10/1 hanya efek sesaat dari faktor-faktor regional seperti penurunan harga komoditi perkebunan dan krisis finansial di Thailand. |
+| id_panl_bppt | id | Dalam jumpa pers bersama Dirut Bursa Efek Jakarta BEJ, Erry Firmansyah di gedung Depkeu Jakarta, Rabu malam, Menkeu menjelaskan beberapa faktor ekonomi Indonesia justru membaik. |
+| id_panl_bppt | id | Kita melihat faktor inflasi dari makanan akan segera hilang yang terkonfirmasi dari harga beras di semua propinsi, katanya. |
+
+<details>
+<summary>Text length</summary>
+<pre><code>10-20: 42
+20-30: 303
+30-40: 711
+40-50: 1137
+50-60: 1563
+60-70: 1973
+70-80: 2285
+80-90: 2478
+90-100: 2856
+100-110: 2847
+110-120: 2932
+120-130: 2777
+130-140: 2792
+140-150: 2689
+150-160: 2560
+160-170: 2492
+170-180: 2322
+180-190: 2198
+190-200: 2022
+200-210: 8937
+</code></pre>
+</details>
+
+Text length histogram:
+
+![id_panl_bppt_text_length.jpg](docs/picture/id_panl_bppt_text_length.jpg)
+
+
#### iwslt2017
The following statistics are for the train split.

@@ -759,6 +809,57 @@ de: 203597
![iwslt2017_text_length.jpg](docs/picture/iwslt2017_text_length.jpg)


+#### menyo20k_mt
+The following statistics are for the train split.
+
+```text
+Samples per language:
+yo: 9970
+en: 9929
+```
+
+Sample examples:
+
+| Dataset | Language | Sample |
+| :---: | :---: | :---: |
+| menyo20k_mt | en | Unit 1: What is Creative Commons? |
+| menyo20k_mt | en | This work is licensed under a Creative Commons Attribution 4.0 International License. |
+| menyo20k_mt | en | Creative Commons is a set of legal tools, a nonprofit organization, as well as a global network and a movement — all inspired by people’s willingness to share their creativity and knowledge, and enabled by a set of open copyright licenses. |
+| menyo20k_mt | yo | Ìdá 1: Kín ni Creative Commons? |
+| menyo20k_mt | yo | Iṣẹ́ yìí wà lábẹ́ àṣẹ Creative Commons Attribution 4.0 International License. |
+| menyo20k_mt | yo | Creative Commons jẹ́ àwọn ọ̀kan-ò-jọ̀kan ohun-èlò ajẹmófin, iléeṣẹ́ àìlérèlórí, àti àjọ àwọn ènìyàn eléròǹgbà kan náà kárí àgbáńlá ayé— tí í ṣe ìmísí àwọn ènìyànkan tí ó ní ìfẹ́ tinútinú láti pín àwọn iṣẹ́-àtinúdá àti ìmọ̀ wọn èyí tí ó ní àtìlẹ́yìn àwọn ọ̀kan-ò-jọ̀kan àṣẹ ìṣísílẹ̀-gbangba-wálíà fún àtúnlò. |
+
+<details>
+<summary>Text length</summary>
+<pre><code>0-10: 98
+10-20: 851
+20-30: 1289
+30-40: 1480
+40-50: 1506
+50-60: 1506
+60-70: 1386
+70-80: 1165
+80-90: 1142
+90-100: 1085
+100-110: 923
+110-120: 927
+120-130: 825
+130-140: 726
+140-150: 690
+150-160: 647
+160-170: 620
+170-180: 433
+180-190: 423
+190-200: 324
+200-210: 1853
+</code></pre>
+</details>
+
+Text length histogram:
+
+![menyo20k_mt_text_length.jpg](docs/picture/menyo20k_mt_text_length.jpg)
+
+
#### mike0307
The following statistics are for the train split.

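The `Samples per language` blocks and the `Text length` tables above summarize the JSONL files emitted by the preprocess scripts; the final `200-210` row is much larger than its neighbours, which suggests that all lengths of 200 or more are folded into that last bin. A minimal sketch of how such statistics could be recomputed, where the bucketing rule and file path are assumptions for illustration rather than part of the commit:

```python
import json
from collections import Counter


def subset_stats(jsonl_file: str, bucket_size: int = 10, max_length: int = 200):
    """Recompute per-language counts and 10-character length buckets from a data/*.jsonl file."""
    language_counts = Counter()
    length_buckets = Counter()
    with open(jsonl_file, "r", encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            language_counts[row["language"]] += 1
            # Assumption: lengths >= max_length are collapsed into the final bucket.
            length = min(len(row["text"]), max_length)
            lower = (length // bucket_size) * bucket_size
            length_buckets["{}-{}".format(lower, lower + bucket_size)] += 1
    return language_counts, length_buckets


# Example (path matches the file added in this commit):
# print(subset_stats("data/id_panl_bppt.jsonl"))
```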
docs/picture/id_panl_bppt_text_length.jpg
ADDED
Git LFS Details
docs/picture/menyo20k_mt_text_length.jpg
ADDED
Git LFS Details
examples/make_subset_details.py
CHANGED
@@ -12,7 +12,7 @@ from project_settings import project_path

def get_args():
    parser = argparse.ArgumentParser()
-    parser.add_argument("--dataset_name", default="
+    parser.add_argument("--dataset_name", default="menyo20k_mt", type=str)
    parser.add_argument(
        "--dataset_cache_dir",
        default=(project_path / "hub_datasets").as_posix(),
examples/preprocess/preprocess_id_panl_bppt.py
ADDED
@@ -0,0 +1,85 @@
+#!/usr/bin/python3
+# -*- coding: utf-8 -*-
+import argparse
+from collections import defaultdict
+import json
+import os
+import sys
+
+pwd = os.path.abspath(os.path.dirname(__file__))
+sys.path.append(os.path.join(pwd, "../../"))
+
+from datasets import load_dataset, DownloadMode
+from tqdm import tqdm
+
+from language_identification import LANGUAGE_MAP
+from project_settings import project_path
+
+
+def get_args():
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--dataset_path", default="id_panl_bppt", type=str)
+    parser.add_argument(
+        "--dataset_cache_dir",
+        default=(project_path / "hub_datasets").as_posix(),
+        type=str
+    )
+    parser.add_argument(
+        "--output_file",
+        default=(project_path / "data/id_panl_bppt.jsonl"),
+        type=str
+    )
+
+    args = parser.parse_args()
+    return args
+
+
+def main():
+    args = get_args()
+
+    dataset_dict = load_dataset(
+        path=args.dataset_path,
+        cache_dir=args.dataset_cache_dir,
+        # download_mode=DownloadMode.FORCE_REDOWNLOAD
+    )
+    print(dataset_dict)
+
+    text_set = set()
+    counter = defaultdict(int)
+    with open(args.output_file, "w", encoding="utf-8") as f:
+        for k, v in dataset_dict.items():
+            split = k
+            if split not in ("train", "validation", "test"):
+                print("skip split: {}".format(split))
+                continue
+
+            for sample in tqdm(v):
+
+                translation = sample["translation"]
+                for language, text in translation.items():
+                    text = text.strip()
+
+                    if text in text_set:
+                        continue
+                    text_set.add(text)
+
+                    if language not in LANGUAGE_MAP.keys():
+                        raise AssertionError("language: {}, text: {}".format(language, text))
+
+                    row = {
+                        "text": text,
+                        "language": language,
+                        "data_source": "id_panl_bppt",
+                        "split": split
+                    }
+                    row = json.dumps(row, ensure_ascii=False)
+                    f.write("{}\n".format(row))
+                    counter[split] += 1
+
+    print("counter: {}".format(counter))
+
+    return
+
+
+if __name__ == '__main__':
+    main()
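The script above deduplicates by exact text and writes one JSON object per line. A sketch of a single output row, with values taken for illustration from the sample table added to dataset_details.md in this commit:

```python
import json

# Illustrative row of data/id_panl_bppt.jsonl (the real rows come from the loaded dataset).
row = {
    "text": "Sri Mulayani showed other factors, among others, the rupiah currency tended to strengthen, with a positive impact on the inflation.",
    "language": "en",
    "data_source": "id_panl_bppt",
    "split": "train",
}
print(json.dumps(row, ensure_ascii=False))
```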
examples/preprocess/preprocess_igbo.py
ADDED
@@ -0,0 +1,85 @@
+#!/usr/bin/python3
+# -*- coding: utf-8 -*-
+import argparse
+from collections import defaultdict
+import json
+import os
+import sys
+
+pwd = os.path.abspath(os.path.dirname(__file__))
+sys.path.append(os.path.join(pwd, "../../"))
+
+from datasets import load_dataset, DownloadMode
+from tqdm import tqdm
+
+from language_identification import LANGUAGE_MAP
+from project_settings import project_path
+
+
+def get_args():
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--dataset_path", default="igbo_english_machine_translation", type=str)
+    parser.add_argument(
+        "--dataset_cache_dir",
+        default=(project_path / "hub_datasets").as_posix(),
+        type=str
+    )
+    parser.add_argument(
+        "--output_file",
+        default=(project_path / "data/igbo.jsonl"),
+        type=str
+    )
+
+    args = parser.parse_args()
+    return args
+
+
+def main():
+    args = get_args()
+
+    dataset_dict = load_dataset(
+        path=args.dataset_path,
+        cache_dir=args.dataset_cache_dir,
+        # download_mode=DownloadMode.FORCE_REDOWNLOAD
+    )
+    print(dataset_dict)
+
+    text_set = set()
+    counter = defaultdict(int)
+    with open(args.output_file, "w", encoding="utf-8") as f:
+        for k, v in dataset_dict.items():
+            split = k
+            if split not in ("train", "validation", "test"):
+                print("skip split: {}".format(split))
+                continue
+
+            for sample in tqdm(v):
+
+                translation = sample["translation"]
+                for language, text in translation.items():
+                    text = text.strip()
+
+                    if text in text_set:
+                        continue
+                    text_set.add(text)
+
+                    if language not in LANGUAGE_MAP.keys():
+                        raise AssertionError("language: {}, text: {}".format(language, text))
+
+                    row = {
+                        "text": text,
+                        "language": language,
+                        "data_source": "igbo",
+                        "split": split
+                    }
+                    row = json.dumps(row, ensure_ascii=False)
+                    f.write("{}\n".format(row))
+                    counter[split] += 1
+
+    print("counter: {}".format(counter))
+
+    return
+
+
+if __name__ == '__main__':
+    main()
examples/preprocess/preprocess_menyo20k_mt.py
ADDED
@@ -0,0 +1,88 @@
+#!/usr/bin/python3
+# -*- coding: utf-8 -*-
+import argparse
+from collections import defaultdict
+import json
+import os
+import sys
+
+pwd = os.path.abspath(os.path.dirname(__file__))
+sys.path.append(os.path.join(pwd, "../../"))
+
+from datasets import load_dataset, DownloadMode
+from tqdm import tqdm
+
+from language_identification import LANGUAGE_MAP
+from project_settings import project_path
+
+
+def get_args():
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--dataset_path", default="menyo20k_mt", type=str)
+    parser.add_argument(
+        "--dataset_cache_dir",
+        default=(project_path / "hub_datasets").as_posix(),
+        type=str
+    )
+    parser.add_argument(
+        "--output_file",
+        default=(project_path / "data/menyo20k_mt.jsonl"),
+        type=str
+    )
+
+    args = parser.parse_args()
+    return args
+
+
+def main():
+    args = get_args()
+
+    dataset_dict = load_dataset(
+        path=args.dataset_path,
+        cache_dir=args.dataset_cache_dir,
+        # download_mode=DownloadMode.FORCE_REDOWNLOAD
+    )
+    print(dataset_dict)
+
+    text_set = set()
+    counter = defaultdict(int)
+    with open(args.output_file, "w", encoding="utf-8") as f:
+        for k, v in dataset_dict.items():
+            split = k
+            if split not in ("train", "validation", "test"):
+                print("skip split: {}".format(split))
+                continue
+
+            for sample in tqdm(v):
+
+                translation = sample["translation"]
+                for language, text in translation.items():
+                    text = text.strip()
+                    text = text.replace("", "")
+                    text = text.replace(" ", " ")
+                    text = text.replace("", "-")
+
+                    if text in text_set:
+                        continue
+                    text_set.add(text)
+
+                    if language not in LANGUAGE_MAP.keys():
+                        raise AssertionError("language: {}, text: {}".format(language, text))
+
+                    row = {
+                        "text": text,
+                        "language": language,
+                        "data_source": "menyo20k_mt",
+                        "split": split
+                    }
+                    row = json.dumps(row, ensure_ascii=False)
+                    f.write("{}\n".format(row))
+                    counter[split] += 1
+
+    print("counter: {}".format(counter))
+
+    return
+
+
+if __name__ == '__main__':
+    main()
examples/preprocess/preprocess_para_pat.py
ADDED
@@ -0,0 +1,96 @@
+#!/usr/bin/python3
+# -*- coding: utf-8 -*-
+import argparse
+from collections import defaultdict
+import json
+import os
+import sys
+
+pwd = os.path.abspath(os.path.dirname(__file__))
+sys.path.append(os.path.join(pwd, "../../"))
+
+from datasets import load_dataset, DownloadMode
+from tqdm import tqdm
+
+from language_identification import LANGUAGE_MAP
+from project_settings import project_path
+
+
+def get_args():
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--dataset_path", default="para_pat", type=str)
+    parser.add_argument(
+        "--dataset_cache_dir",
+        default=(project_path / "hub_datasets").as_posix(),
+        type=str
+    )
+    parser.add_argument(
+        "--output_file",
+        default=(project_path / "data/para_pat.jsonl"),
+        type=str
+    )
+
+    args = parser.parse_args()
+    return args
+
+
+def main():
+    args = get_args()
+
+    name_list = [
+        "cs-en", "de-en", "de-fr", "el-en", "en-es", "en-fr", "en-hu", "en-ja",
+        "en-ko", "en-pt", "en-ro", "en-ru", "en-sk", "en-uk", "en-zh", "es-fr",
+        "fr-ja", "fr-ko", "fr-ru"
+    ]
+
+    text_set = set()
+    counter = defaultdict(int)
+    with open(args.output_file, "w", encoding="utf-8") as f:
+        for name in name_list:
+            try:
+                dataset_dict = load_dataset(
+                    path=args.dataset_path,
+                    name=name,
+                    cache_dir=args.dataset_cache_dir,
+                    # download_mode=DownloadMode.FORCE_REDOWNLOAD
+                )
+            except Exception:
+                print("skip subset: {}".format(name))
+                continue
+            for k, v in dataset_dict.items():
+                split = k
+                if split not in ("train", "validation", "test"):
+                    print("skip split: {}".format(split))
+                    continue
+
+                for sample in tqdm(v):
+                    translation = sample["translation"]
+                    for language, text in translation.items():
+                        text = text.strip()
+                        text = text.replace(" ", " ")
+                        text = text.replace("", "-")
+
+                        if text in text_set:
+                            continue
+                        text_set.add(text)
+
+                        if language not in LANGUAGE_MAP.keys():
+                            raise AssertionError(language)
+
+                        row = {
+                            "text": text,
+                            "language": language,
+                            "data_source": "para_pat",
+                            "split": split
+                        }
+                        row = json.dumps(row, ensure_ascii=False)
+                        f.write("{}\n".format(row))
+                        counter[split] += 1
+
+    print("counter: {}".format(counter))
+
+    return
+
+
+if __name__ == "__main__":
+    main()
examples/preprocess/preprocess_pib.py
ADDED
@@ -0,0 +1,99 @@
+#!/usr/bin/python3
+# -*- coding: utf-8 -*-
+import argparse
+from collections import defaultdict
+import json
+import os
+import sys
+
+pwd = os.path.abspath(os.path.dirname(__file__))
+sys.path.append(os.path.join(pwd, "../../"))
+
+from datasets import load_dataset, DownloadMode
+from tqdm import tqdm
+
+from language_identification import LANGUAGE_MAP
+from project_settings import project_path
+
+
+def get_args():
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--dataset_path", default="pib", type=str)
+    parser.add_argument(
+        "--dataset_cache_dir",
+        default=(project_path / "hub_datasets").as_posix(),
+        type=str
+    )
+    parser.add_argument(
+        "--output_file",
+        default=(project_path / "data/pib.jsonl"),
+        type=str
+    )
+
+    args = parser.parse_args()
+    return args
+
+
+def main():
+    args = get_args()
+
+    name_list = [
+        "or-ur", "ml-or", "bn-ta", "gu-mr", "hi-or",
+        "en-or", "mr-ur", "en-ta", "hi-ta", "bn-en",
+        "bn-or", "ml-ta", "gu-ur", "bn-ml", "ml-pa",
+        "en-pa", "bn-hi", "hi-pa", "gu-te", "pa-ta",
+        "hi-ml", "or-te", "en-ml", "en-hi", "bn-pa",
+        "mr-te", "mr-pa", "bn-te", "gu-hi", "ta-ur",
+        "te-ur", "or-pa", "gu-ml", "gu-pa", "hi-te",
+        "en-te", "ml-te", "pa-ur", "hi-ur", "mr-or",
+        "en-ur", "ml-ur", "bn-mr", "gu-ta", "pa-te",
+        "bn-gu", "bn-ur", "ml-mr", "or-ta", "ta-te",
+        "gu-or", "en-gu", "hi-mr", "mr-ta", "en-mr"
+    ]
+
+    text_set = set()
+    counter = defaultdict(int)
+    with open(args.output_file, "w", encoding="utf-8") as f:
+        for name in name_list:
+            dataset_dict = load_dataset(
+                path=args.dataset_path,
+                name=name,
+                cache_dir=args.dataset_cache_dir,
+                # download_mode=DownloadMode.FORCE_REDOWNLOAD
+            )
+            for k, v in dataset_dict.items():
+                split = k
+                if split not in ("train", "validation", "test"):
+                    print("skip split: {}".format(split))
+                    continue
+
+                for sample in tqdm(v):
+
+                    translation = sample["translation"]
+                    for language, text in translation.items():
+                        text = text.strip()
+
+                        if text in text_set:
+                            continue
+                        text_set.add(text)
+
+                        if language not in LANGUAGE_MAP.keys():
+                            raise AssertionError("language: {}, text: {}".format(language, text))
+
+                        row = {
+                            "text": text,
+                            "language": language,
+                            "data_source": "pib",
+                            "split": split
+                        }
+                        row = json.dumps(row, ensure_ascii=False)
+                        f.write("{}\n".format(row))
+                        counter[split] += 1
+
+    print("counter: {}".format(counter))
+
+    return
+
+
+if __name__ == "__main__":
+    main()
examples/preprocess/preprocess_poleval2019_mt.py
ADDED
@@ -0,0 +1,89 @@
+#!/usr/bin/python3
+# -*- coding: utf-8 -*-
+import argparse
+from collections import defaultdict
+import json
+import os
+import sys
+
+pwd = os.path.abspath(os.path.dirname(__file__))
+sys.path.append(os.path.join(pwd, "../../"))
+
+from datasets import load_dataset, DownloadMode
+from tqdm import tqdm
+
+from language_identification import LANGUAGE_MAP
+from project_settings import project_path
+
+
+def get_args():
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--dataset_path", default="poleval2019_mt", type=str)
+    parser.add_argument(
+        "--dataset_cache_dir",
+        default=(project_path / "hub_datasets").as_posix(),
+        type=str
+    )
+    parser.add_argument(
+        "--output_file",
+        default=(project_path / "data/poleval2019_mt.jsonl"),
+        type=str
+    )
+
+    args = parser.parse_args()
+    return args
+
+
+def main():
+    args = get_args()
+
+    name_list = [
+        "en-pl", "pl-en", "pl-ru", "ru-pl"
+    ]
+
+    text_set = set()
+    counter = defaultdict(int)
+    with open(args.output_file, "w", encoding="utf-8") as f:
+        for name in name_list:
+            dataset_dict = load_dataset(
+                path=args.dataset_path,
+                name=name,
+                cache_dir=args.dataset_cache_dir,
+                # download_mode=DownloadMode.FORCE_REDOWNLOAD
+            )
+            for k, v in dataset_dict.items():
+                split = k
+                if split not in ("train", "validation", "test"):
+                    print("skip split: {}".format(split))
+                    continue
+
+                for sample in tqdm(v):
+
+                    translation = sample["translation"]
+                    for language, text in translation.items():
+                        text = text.strip()
+
+                        if text in text_set:
+                            continue
+                        text_set.add(text)
+
+                        if language not in LANGUAGE_MAP.keys():
+                            raise AssertionError("language: {}, text: {}".format(language, text))
+
+                        row = {
+                            "text": text,
+                            "language": language,
+                            "data_source": "poleval2019_mt",
+                            "split": split
+                        }
+                        row = json.dumps(row, ensure_ascii=False)
+                        f.write("{}\n".format(row))
+                        counter[split] += 1
+
+    print("counter: {}".format(counter))
+
+    return
+
+
+if __name__ == "__main__":
+    main()
language_identification.py
CHANGED
@@ -18,7 +18,9 @@ _URLS = {
    "europa_ecdc_tm": "data/europa_ecdc_tm.jsonl",
    "hind_encorp": "data/hind_encorp.jsonl",
    "hrenwac_para": "data/hrenwac_para.jsonl",
+    "id_panl_bppt": "data/id_panl_bppt.jsonl",
    "iwslt2017": "data/iwslt2017.jsonl",
+    "menyo20k_mt": "data/menyo20k_mt.jsonl",
    "mike0307": "data/mike0307.jsonl",
    "nbnn": "data/nbnn.jsonl",
    "nordic_langid": "data/nordic_langid.jsonl",
@@ -62,6 +64,7 @@ LANGUAGE_MAP = {
    "hi_en": "hindi english",
    "hr": "croatian",
    "hu": "hungarian",
+    "id": "indonesian",
    "is": "icelandic",
    "it": "italian",
    "ja": "japanese",
@@ -88,6 +91,7 @@ LANGUAGE_MAP = {
    "ts": "dzonga",
    "ur": "urdu",
    "vi": "vietnamese",
+    "yo": "yoruba",
    "zh": "chinese",
    "zh-cn": "simplified chinese",
    "zh-tw": "traditional chinese",
@@ -108,7 +112,9 @@ class LanguageIdentification(datasets.GeneratorBasedBuilder):
        datasets.BuilderConfig(name="europa_ecdc_tm", version=VERSION, description="europa_ecdc_tm"),
        datasets.BuilderConfig(name="hind_encorp", version=VERSION, description="hind_encorp"),
        datasets.BuilderConfig(name="hrenwac_para", version=VERSION, description="hrenwac_para"),
+        datasets.BuilderConfig(name="id_panl_bppt", version=VERSION, description="id_panl_bppt"),
        datasets.BuilderConfig(name="iwslt2017", version=VERSION, description="iwslt2017"),
+        datasets.BuilderConfig(name="menyo20k_mt", version=VERSION, description="menyo20k_mt"),
        datasets.BuilderConfig(name="mike0307", version=VERSION, description="mike0307"),
        datasets.BuilderConfig(name="nbnn", version=VERSION, description="nbnn"),
        datasets.BuilderConfig(name="nordic_langid", version=VERSION, description="nordic_langid"),
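The changes above register the two new subsets in `_URLS`, `LANGUAGE_MAP`, and the builder configs. A minimal usage sketch, assuming the loading script is invoked by its local path from the repository root (the published repo id is not shown in this diff, and recent `datasets` versions may additionally require `trust_remote_code=True`):

```python
from datasets import load_dataset

# Load one of the newly registered subsets through the local loading script.
dataset_dict = load_dataset(
    path="language_identification.py",
    name="menyo20k_mt",
)
print(dataset_dict)

# Each example is expected to carry the fields written by the preprocess scripts:
# "text", "language", "data_source" and "split".
```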