Column schema of this dump (field name, value type, and length range):

| Field | Type | Min length | Max length |
|---|---|---|---|
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | sequence | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | sequence | 0 | 25 |
| languages | sequence | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | sequence | 0 | 352 |
| processed_texts | sequence | 1 | 353 |
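A minimal sketch of how these length statistics could be recomputed from a local copy of the dump; the parquet file name is illustrative, and pandas is assumed to be available:

```python
import pandas as pd

# Hypothetical local shard of this dump; the real file name may differ.
df = pd.read_parquet("train-00000-of-00001.parquet")

# Recompute the min/max lengths summarized in the schema table:
# string columns count characters, list-valued columns count elements.
for col in df.columns:
    lengths = df[col].dropna().map(len)
    print(f"{col}: min={lengths.min()}, max={lengths.max()}")
```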
4dd08ebac7cf221d87d3175fca5d5562d3923c34
A smaller version (100 samples) of https://huggingface.co/datasets/bs-modeling-metadata/website_metadata_c4
shanya/website_metadata_c4_toy
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-10-04T15:55:11+00:00
[]
[]
TAGS #region-us
A smaller version (100 samples) of URL
[]
[ "TAGS\n#region-us \n" ]
8051c9bb36a5460415cb4f94b156eeb653c3385e
# BiPaR > General Description This repository contains datasets for EMNLP-IJCNLP 2019 paper "BiPaR: A Bilingual Parallel Dataset for Multilingual and Cross-lingual Reading Comprehension on Novels" (Yimin Jing, Deyi Xiong and Yan Zhen). BiPaR is an extractive and manually annotated bilingual parallel novel-style machine reading comprehension (MRC) dataset, developed to support monolingual, multilingual and cross-lingual reading comprehension. The data format of BiPaR is the same as SQuAD, so you can process BiPaR like SQuAD. Paper link: <https://arxiv.org/abs/1910.05040> Download link: <https://github.com/sharejing/BiPaR> (Including testing set) ## Monolingual MRC (P<sub>en</sub>, Q<sub>en</sub>, A<sub>en</sub>) or (P<sub>zh</sub>, Q<sub>zh</sub>, A<sub>zh</sub>). With these two monolingual MRC forms, we can investigate the performance variation of the same MRC model trained on two different languages with equivalent training instances. ```sh "context": "“Do you know what the Lingjiu Place in the Piaomiao Peak is and why the Shennong should be at its command?”“Never have I heard it before until you told me. And indeed, I didn’t know the Shennong troubling us is obeying its order ” replied the master, who thought that even as the Shennong should be at its command, then the Lingjiu Palace in the Piaomiao Peak must be very formidable. But the Piaomiao Peak never had he heard before in the numerous mountains of Yunnan. The thought loaded an even heavier rock on his troubled heart, whose eyebrows were knitted. “Then another said, ‘As maybe the Waternuts in the Wuliang Hill could rid our master of the disease, we should get them anyway, even risking our necks,’” the girl said, after eating two more seeds. “Then the first one sighed, ‘None but Madam Tianshantonglao can break the spell of Life and Death in my body. And when the spell attacks, though the herb is efficacious, merely it can relieve the intense agony, which leaves you between death and life……’ This is what they said as walking away. Have I made it clear?”", "id": "TRAIN_tian_long_ba_bu_34_QUERY_3_EN" "question": "What happens when the Life and Death break out?" "answers": [{"answer_start": 977, "text": "leaves you between death and life"}] ``` ```sh "context": "那少女道:“缥缈峰灵鹫宫是甚么玩意儿?为甚么神农帮要奉他的号令?”左子穆道:“缥缈峰灵鹫宫甚么的,还是此刻第一遭从姑娘嘴里听到。我实不知神农帮原来还是奉了别人的号令,才来跟我们为难。”想到神农帮既须奉令行事,则那缥缈峰甚么的自然厉害之极,云岭之南千山万峰,可从来没听说有一座缥缈峰,忧心更增,不由得皱起了眉头。那少女吃了两粒瓜子,说道:“那时又听得另一人说道:‘帮主身上这病根子,既然无量山中的通天草或能解得,众兄弟拚着身受千刀万剑,也要去采这通天草到手。’先一人叹了口气,说道:‘我身上这“生死符”,除了天山童姥她老人家本人,谁也无法解得。通天草虽然药性灵异,也只是在“生死符”发作之时,稍稍减轻些求生不得、求死不能的苦楚而已……’他们几个人一面说,一面走远。我说得够清楚了吗?”", "id": "TRAIN_tian_long_ba_bu_34_QUERY_3_ZH" "question": "“生死符”发作会怎样?" "answers": [{"answer_start": 300, "text": "求生不得、求死不能"}] ``` ## Multilingual MRC (P<sub>en</sub>, Q<sub>en</sub>, A<sub>en</sub>, P<sub>zh</sub>, Q<sub>zh</sub>, A<sub>zh</sub>). We can build a single MRC model to handle MRC of multiple languages on BiPaR and explore whether the alignment feature can significantly improve performance of the two languages. ```sh "context": '"Why? Do you wish to encourage the people to face the possible destruction of Trisolaran civilization with equanimity?""No. It's to encourage them to face the destruction of Earth civilization with equanimity. You know very well that after we publicized our policy toward the Earth civilization, there was a wave of extremely dangerous pacifism. We have only now discovered that there are many like the listener of Post 1379. 
We must control and eliminate these weak sentiments.""Princeps, this is mainly the result of recent messages received from the Earth. Your prediction has come true: The alienated forces on Earth really are growing. They have built a new transmission site completely under their control, and have begun to send us large amounts of information about Earth civilization.I must admit that their civilization has great appeal on Trisolaris. For our people, it sounds like sacred music from Heaven. The humanism of Earth will lead many Trisolarans onto the wrong path, just as Trisolaran civilization has already become a religion on Earth, Earth civilization has this potential on Trisolaris."', "id": "TRAIN_san_ti_374_QUERY_1_EN" "question": "What is the impact of earthman's humanistic thought on the trisolaran people?" "answers": [{"answer_start": 919, "text": "The humanism of Earth will lead many Trisolarans onto the wrong path, just as Trisolaran civilization has already become a religion on Earth, Earth civilization has this potential on Trisolaris."}] "context": "“这有什么意义呢?是让人民能够坦然面对三体文明可能的毁灭吗?”“不,是让他们坦然面对地球文明的毁灭。你也知道,在我们对地球文明的基本政策公布后,激发起一些极其危险的和平主义情绪。我们现在才发现,三体世界中像1379号监听员这样的人其实是很多的,必须控制和消除这种脆弱的情绪。”“元首,这种情绪主要是由最近来自地球的新信息引起的。您的预测实现了,地球上的异己力量果然在发展,他们建立了一个完全由自己控制的发射基地开始源源不断地向我们发送大量地球文明的信息。我得承认,地球文明在三体世界是很有杀伤力的,对我们的人民来说,那是来自天堂的圣乐。地铁人的人文思想会使很多三体人走上精神歧途,三体文明在地球已经成为一种宗教,而地球文明在三体世界也有这个可能。”", "id": "TRAIN_san_ti_374_QUERY_1_ZH" "question": "地球人的人文思想会对三体人造成什么影响?" "answers": [{"answer_start": 268, "text": "地铁人的人文思想会使很多三体人走上精神歧途,三体文明在地球已经成为一种宗教,而地球文明在三体世界也有这个可能。"}] ``` ## Cross-lingual MRC The first two forms of cross-lingual MRC are (P<sub>en</sub>, Q<sub>zh</sub>, A<sub>en</sub>) or (P<sub>zh</sub>, Q<sub>en</sub>, A<sub>zh</sub>), in which we use questions in one language to extract answers from passages written in another language. This form is in essence similar to the early cross-lingual question answering (CLQA). We can use a translator to translate questions into the language of passages, and then treat them as a monolingual MRC. ```sh "context": '"Come on, fish," he said. But the fish did not come. Instead he lay there wallowing now in the seas and the old man pulled the skiff up-onto him. When he was even with him and had the fish's head against the bow he could not believe his size. But he untied the harpoon rope from the bitt, passed it through the fish's gills and out his jaws, made a turn around his sword then passed the rope through the other gill, made another turn around the bill and knotted the double rope and made it fast to the bitt in the bow. He cut the rope then and went astern to noose the tail. The fist had turned silver from his original purple and silver, and the strips showed the same pale violet color as his tail. They were wider than a man's hand with his fingers spread and the fish's eye looked as detached as the mirrors in a periscope or as a saint in a procession. "It was the only way to kill him," the old man said. He was feeling better since the water and he knew he would not go away and his head was clear.', "id": "TRAIN_lao_ren_yu_hai_64_QUERY_2_CROSS_QZH" "question": "鱼已经由原来的银紫色变成什么颜色?" 
"answers": [{"answer_start": 595, "text": "silver"}] ``` ```sh "context": "射击的武器五花八门,有陈旧的美式卡宾枪、捷克式机枪和三八大盖,也有崭新的制式步枪和冲锋枪——后者是在“八月社论”发表之后从军队中偷抢来的——连同那些梭标和大刀等冷兵器,构成了一部浓缩的近现代史……“四.二八”的人在前面多次玩过这个游戏,在楼顶上站出来的人,除了挥舞旗帜外,有时还用喇叭筒喊口号或向下撒传单,每次他们都能在弹雨中全身而退,为自己挣到了崇高的荣誉这次出来的女孩儿显然也相信自己还有那样的幸运她挥舞着战旗,挥动着自己燃烧的青春,敌人将在这火焰中化为灰烬,理想世界明天就会在她那沸腾的热血中诞生……她陶醉在这鲜红灿烂的梦幻中,直到被一颗步枪子弹洞穿了胸膛", "id": "TRAIN_san_ti_1_QUERY_0_CROSS_QEN" "question": "What were the old weapons?" "answers": [{"answer_start": 14, "text": "美式卡宾枪、捷克式机枪和三八大盖"}] ``` The other two forms are (P<sub>en</sub>, P<sub>zh</sub>, Q<sub>zh</sub>, A<sub>zh</sub>, A<sub>en</sub>) or (P<sub>zh</sub>, P<sub>en</sub>, Q<sub>en</sub>, A<sub>en</sub>, A<sub>zh</sub>). The bilinguality of BiPaR provides a potential opportunity for building cross-lingual MRC that does not rely machine translation. Such as (P<sub>zh</sub>, P<sub>en</sub>, Q<sub>en</sub>, A<sub>en</sub>, A<sub>zh</sub>), we first obtain A<sub>en</sub> through a English monolingual MRC model, then use a word alignment tool to obtain the aligned A<sub>zh</sub> from P<sub>zh</sub>. ## Notes and Acknowledgments Chinese evaluation script is from <https://github.com/ymcui/cmrc2018> ## Data License <a rel="license" href="http://creativecommons.org/licenses/by-nc/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by-nc/4.0/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc/4.0/">Creative Commons Attribution-NonCommercial 4.0 International License</a>
sharejing/BiPaR
[ "arxiv:1910.05040", "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-03-31T02:33:00+00:00
[ "1910.05040" ]
[]
TAGS #arxiv-1910.05040 #region-us
# BiPaR > General Description This repository contains datasets for EMNLP-IJCNLP 2019 paper "BiPaR: A Bilingual Parallel Dataset for Multilingual and Cross-lingual Reading Comprehension on Novels" (Yimin Jing, Deyi Xiong and Yan Zhen). BiPaR is an extractive and manually annotated bilingual parallel novel-style machine reading comprehension (MRC) dataset, developed to support monolingual, multilingual and cross-lingual reading comprehension. The data format of BiPaR is the same as SQuAD, so you can process BiPaR like SQuAD. Paper link: <URL Download link: <URL (Including testing set) ## Monolingual MRC (P<sub>en</sub>, Q<sub>en</sub>, A<sub>en</sub>) or (P<sub>zh</sub>, Q<sub>zh</sub>, A<sub>zh</sub>). With these two monolingual MRC forms, we can investigate the performance variation of the same MRC model trained on two different languages with equivalent training instances. ## Multilingual MRC (P<sub>en</sub>, Q<sub>en</sub>, A<sub>en</sub>, P<sub>zh</sub>, Q<sub>zh</sub>, A<sub>zh</sub>). We can build a single MRC model to handle MRC of multiple languages on BiPaR and explore whether the alignment feature can significantly improve performance of the two languages. ## Cross-lingual MRC The first two forms of cross-lingual MRC are (P<sub>en</sub>, Q<sub>zh</sub>, A<sub>en</sub>) or (P<sub>zh</sub>, Q<sub>en</sub>, A<sub>zh</sub>), in which we use questions in one language to extract answers from passages written in another language. This form is in essence similar to the early cross-lingual question answering (CLQA). We can use a translator to translate questions into the language of passages, and then treat them as a monolingual MRC. The other two forms are (P<sub>en</sub>, P<sub>zh</sub>, Q<sub>zh</sub>, A<sub>zh</sub>, A<sub>en</sub>) or (P<sub>zh</sub>, P<sub>en</sub>, Q<sub>en</sub>, A<sub>en</sub>, A<sub>zh</sub>). The bilinguality of BiPaR provides a potential opportunity for building cross-lingual MRC that does not rely machine translation. Such as (P<sub>zh</sub>, P<sub>en</sub>, Q<sub>en</sub>, A<sub>en</sub>, A<sub>zh</sub>), we first obtain A<sub>en</sub> through a English monolingual MRC model, then use a word alignment tool to obtain the aligned A<sub>zh</sub> from P<sub>zh</sub>. ## Notes and Acknowledgments Chinese evaluation script is from <URL ## Data License <a rel="license" href="URL alt="Creative Commons License" style="border-width:0" src="https://i.URL /></a><br />This work is licensed under a <a rel="license" href="URL Commons Attribution-NonCommercial 4.0 International License</a>
[ "# BiPaR\n\n> General Description\nThis repository contains datasets for EMNLP-IJCNLP 2019 paper \"BiPaR: A Bilingual Parallel Dataset for Multilingual and Cross-lingual Reading Comprehension on Novels\" (Yimin Jing, Deyi Xiong and Yan Zhen). BiPaR is an extractive and manually annotated bilingual parallel novel-style machine reading comprehension (MRC) dataset, developed to support monolingual, multilingual and cross-lingual reading comprehension. \n\nThe data format of BiPaR is the same as SQuAD, so you can process BiPaR like SQuAD.\n\nPaper link: <URL\n\nDownload link: <URL (Including testing set)", "## Monolingual MRC\n\n(P<sub>en</sub>, Q<sub>en</sub>, A<sub>en</sub>) or (P<sub>zh</sub>, Q<sub>zh</sub>, A<sub>zh</sub>). With these two monolingual MRC forms, we can investigate the performance variation of the same MRC model trained on two different languages with equivalent training instances.", "## Multilingual MRC\n\n(P<sub>en</sub>, Q<sub>en</sub>, A<sub>en</sub>, P<sub>zh</sub>, Q<sub>zh</sub>, A<sub>zh</sub>). We can build a single MRC model to handle MRC of multiple languages on BiPaR and explore whether the alignment feature can significantly improve performance of the two languages.", "## Cross-lingual MRC\n\nThe first two forms of cross-lingual MRC are (P<sub>en</sub>, Q<sub>zh</sub>, A<sub>en</sub>) or (P<sub>zh</sub>, Q<sub>en</sub>, A<sub>zh</sub>), in which we use questions in one language to extract answers from passages written in another language. This form is in essence similar to the early cross-lingual question answering (CLQA). We can use a translator to translate questions into the language of passages, and then treat them as a monolingual MRC.\n\n\n\n\n\nThe other two forms are (P<sub>en</sub>, P<sub>zh</sub>, Q<sub>zh</sub>, A<sub>zh</sub>, A<sub>en</sub>) or (P<sub>zh</sub>, P<sub>en</sub>, Q<sub>en</sub>, A<sub>en</sub>, A<sub>zh</sub>). The bilinguality of BiPaR provides a potential opportunity for building cross-lingual MRC that does not rely machine translation. Such as (P<sub>zh</sub>, P<sub>en</sub>, Q<sub>en</sub>, A<sub>en</sub>, A<sub>zh</sub>), we first obtain A<sub>en</sub> through a English monolingual MRC model, then use a word alignment tool to obtain the aligned A<sub>zh</sub> from P<sub>zh</sub>.", "## Notes and Acknowledgments\nChinese evaluation script is from <URL", "## Data License\n<a rel=\"license\" href=\"URL alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.URL /></a><br />This work is licensed under a <a rel=\"license\" href=\"URL Commons Attribution-NonCommercial 4.0 International License</a>" ]
[ "TAGS\n#arxiv-1910.05040 #region-us \n", "# BiPaR\n\n> General Description\nThis repository contains datasets for EMNLP-IJCNLP 2019 paper \"BiPaR: A Bilingual Parallel Dataset for Multilingual and Cross-lingual Reading Comprehension on Novels\" (Yimin Jing, Deyi Xiong and Yan Zhen). BiPaR is an extractive and manually annotated bilingual parallel novel-style machine reading comprehension (MRC) dataset, developed to support monolingual, multilingual and cross-lingual reading comprehension. \n\nThe data format of BiPaR is the same as SQuAD, so you can process BiPaR like SQuAD.\n\nPaper link: <URL\n\nDownload link: <URL (Including testing set)", "## Monolingual MRC\n\n(P<sub>en</sub>, Q<sub>en</sub>, A<sub>en</sub>) or (P<sub>zh</sub>, Q<sub>zh</sub>, A<sub>zh</sub>). With these two monolingual MRC forms, we can investigate the performance variation of the same MRC model trained on two different languages with equivalent training instances.", "## Multilingual MRC\n\n(P<sub>en</sub>, Q<sub>en</sub>, A<sub>en</sub>, P<sub>zh</sub>, Q<sub>zh</sub>, A<sub>zh</sub>). We can build a single MRC model to handle MRC of multiple languages on BiPaR and explore whether the alignment feature can significantly improve performance of the two languages.", "## Cross-lingual MRC\n\nThe first two forms of cross-lingual MRC are (P<sub>en</sub>, Q<sub>zh</sub>, A<sub>en</sub>) or (P<sub>zh</sub>, Q<sub>en</sub>, A<sub>zh</sub>), in which we use questions in one language to extract answers from passages written in another language. This form is in essence similar to the early cross-lingual question answering (CLQA). We can use a translator to translate questions into the language of passages, and then treat them as a monolingual MRC.\n\n\n\n\n\nThe other two forms are (P<sub>en</sub>, P<sub>zh</sub>, Q<sub>zh</sub>, A<sub>zh</sub>, A<sub>en</sub>) or (P<sub>zh</sub>, P<sub>en</sub>, Q<sub>en</sub>, A<sub>en</sub>, A<sub>zh</sub>). The bilinguality of BiPaR provides a potential opportunity for building cross-lingual MRC that does not rely machine translation. Such as (P<sub>zh</sub>, P<sub>en</sub>, Q<sub>en</sub>, A<sub>en</sub>, A<sub>zh</sub>), we first obtain A<sub>en</sub> through a English monolingual MRC model, then use a word alignment tool to obtain the aligned A<sub>zh</sub> from P<sub>zh</sub>.", "## Notes and Acknowledgments\nChinese evaluation script is from <URL", "## Data License\n<a rel=\"license\" href=\"URL alt=\"Creative Commons License\" style=\"border-width:0\" src=\"https://i.URL /></a><br />This work is licensed under a <a rel=\"license\" href=\"URL Commons Attribution-NonCommercial 4.0 International License</a>" ]
79fc59f431716692cd8ae024f1b97fe7457a7ca8
# Dataset Card for NLI_zh ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [Chinese NLI dataset](https://github.com/shibing624/text2vec) - **Leaderboard:** [NLI_zh leaderboard](https://github.com/shibing624/text2vec) (located on the homepage) - **Size of downloaded dataset files:** 16 MB - **Total amount of disk used:** 42 MB ### Dataset Summary 常见中文语义匹配数据集,包含[ATEC](https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC)、[BQ](http://icrc.hitsz.edu.cn/info/1037/1162.htm)、[LCQMC](http://icrc.hitsz.edu.cn/Article/show/171.html)、[PAWSX](https://arxiv.org/abs/1908.11828)、[STS-B](https://github.com/pluto-junzeng/CNSD)共5个任务。 数据源: - ATEC: https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC - BQ: http://icrc.hitsz.edu.cn/info/1037/1162.htm - LCQMC: http://icrc.hitsz.edu.cn/Article/show/171.html - PAWSX: https://arxiv.org/abs/1908.11828 - STS-B: https://github.com/pluto-junzeng/CNSD ### Supported Tasks and Leaderboards Supported Tasks: 支持中文文本匹配任务,文本相似度计算等相关任务。 中文匹配任务的结果目前在顶会paper上出现较少,我罗列一个我自己训练的结果: **Leaderboard:** [NLI_zh leaderboard](https://github.com/shibing624/text2vec) ### Languages 数据集均是简体中文文本。 ## Dataset Structure ### Data Instances An example of 'train' looks as follows. ``` { "sentence1": "刘诗诗杨幂谁漂亮", "sentence2": "刘诗诗和杨幂谁漂亮", "label": 1, } { "sentence1": "汇理财怎么样", "sentence2": "怎么样去理财", "label": 0, } ``` ### Data Fields The data fields are the same among all splits. - `sentence1`: a `string` feature. - `sentence2`: a `string` feature. - `label`: a classification label, with possible values including `similarity` (1), `dissimilarity` (0). ### Data Splits #### ATEC ```shell $ wc -l ATEC/* 20000 ATEC/ATEC.test.data 62477 ATEC/ATEC.train.data 20000 ATEC/ATEC.valid.data 102477 total ``` #### BQ ```shell $ wc -l BQ/* 10000 BQ/BQ.test.data 100000 BQ/BQ.train.data 10000 BQ/BQ.valid.data 120000 total ``` #### LCQMC ```shell $ wc -l LCQMC/* 12500 LCQMC/LCQMC.test.data 238766 LCQMC/LCQMC.train.data 8802 LCQMC/LCQMC.valid.data 260068 total ``` #### PAWSX ```shell $ wc -l PAWSX/* 2000 PAWSX/PAWSX.test.data 49401 PAWSX/PAWSX.train.data 2000 PAWSX/PAWSX.valid.data 53401 total ``` #### STS-B ```shell $ wc -l STS-B/* 1361 STS-B/STS-B.test.data 5231 STS-B/STS-B.train.data 1458 STS-B/STS-B.valid.data 8050 total ``` ## Dataset Creation ### Curation Rationale 作为中文NLI(natural langauge inference)数据集,这里把这个数据集上传到huggingface的datasets,方便大家使用。 ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? 
数据集的版权归原作者所有,使用各数据集时请尊重原数据集的版权。 BQ: Jing Chen, Qingcai Chen, Xin Liu, Haijun Yang, Daohe Lu, Buzhou Tang, The BQ Corpus: A Large-scale Domain-specific Chinese Corpus For Sentence Semantic Equivalence Identification, EMNLP 2018. ### Annotations #### Annotation process #### Who are the annotators? 原作者。 ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset This dataset was developed as a benchmark for evaluating representational systems for text, especially including those induced by representation learning methods, in the task of predicting truth conditions in a given context. Systems that are successful at such a task may be more successful in modeling semantic representations. ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators - 苏剑林对文件名称有整理 - 我上传到huggingface的datasets ### Licensing Information 用于学术研究。 The BQ corpus is free to the public for academic research. ### Contributions Thanks to [@shibing624](https://github.com/shibing624) for adding this dataset.
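A minimal loading sketch with the Hugging Face `datasets` library, assuming each sub-corpus listed in the summary (ATEC, BQ, LCQMC, PAWSX, STS-B) is exposed as a configuration of this repository; adjust the config string if the hosted layout differs:

```python
from datasets import load_dataset

# "ATEC" is assumed to be one of the available configurations, matching the summary above.
dataset = load_dataset("shibing624/nli_zh", "ATEC")

print(dataset)              # expected splits: train / validation / test
print(dataset["train"][0])  # {'sentence1': ..., 'sentence2': ..., 'label': 0 or 1}
```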
shibing624/nli_zh
[ "task_categories:text-classification", "task_ids:natural-language-inference", "task_ids:semantic-similarity-scoring", "task_ids:text-scoring", "annotations_creators:shibing624", "language_creators:shibing624", "multilinguality:monolingual", "size_categories:100K<n<20M", "source_datasets:https://github.com/shibing624/text2vec", "source_datasets:https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC", "source_datasets:http://icrc.hitsz.edu.cn/info/1037/1162.htm", "source_datasets:http://icrc.hitsz.edu.cn/Article/show/171.html", "source_datasets:https://arxiv.org/abs/1908.11828", "source_datasets:https://github.com/pluto-junzeng/CNSD", "language:zh", "license:cc-by-4.0", "arxiv:1908.11828", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["shibing624"], "language_creators": ["shibing624"], "language": ["zh"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<20M"], "source_datasets": ["https://github.com/shibing624/text2vec", "https://github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC", "http://icrc.hitsz.edu.cn/info/1037/1162.htm", "http://icrc.hitsz.edu.cn/Article/show/171.html", "https://arxiv.org/abs/1908.11828", "https://github.com/pluto-junzeng/CNSD"], "task_categories": ["text-classification"], "task_ids": ["natural-language-inference", "semantic-similarity-scoring", "text-scoring"], "paperswithcode_id": "snli", "pretty_name": "Stanford Natural Language Inference"}
2022-10-30T06:30:56+00:00
[ "1908.11828" ]
[ "zh" ]
TAGS #task_categories-text-classification #task_ids-natural-language-inference #task_ids-semantic-similarity-scoring #task_ids-text-scoring #annotations_creators-shibing624 #language_creators-shibing624 #multilinguality-monolingual #size_categories-100K<n<20M #source_datasets-https-//github.com/shibing624/text2vec #source_datasets-https-//github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC #source_datasets-http-//icrc.hitsz.edu.cn/info/1037/1162.htm #source_datasets-http-//icrc.hitsz.edu.cn/Article/show/171.html #source_datasets-https-//arxiv.org/abs/1908.11828 #source_datasets-https-//github.com/pluto-junzeng/CNSD #language-Chinese #license-cc-by-4.0 #arxiv-1908.11828 #region-us
# Dataset Card for NLI_zh ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Repository: Chinese NLI dataset - Leaderboard: NLI_zh leaderboard (located on the homepage) - Size of downloaded dataset files: 16 MB - Total amount of disk used: 42 MB ### Dataset Summary 常见中文语义匹配数据集,包含ATEC、BQ、LCQMC、PAWSX、STS-B共5个任务。 数据源: - ATEC: URL - BQ: URL - LCQMC: URL - PAWSX: URL - STS-B: URL ### Supported Tasks and Leaderboards Supported Tasks: 支持中文文本匹配任务,文本相似度计算等相关任务。 中文匹配任务的结果目前在顶会paper上出现较少,我罗列一个我自己训练的结果: Leaderboard: NLI_zh leaderboard ### Languages 数据集均是简体中文文本。 ## Dataset Structure ### Data Instances An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. - 'sentence1': a 'string' feature. - 'sentence2': a 'string' feature. - 'label': a classification label, with possible values including 'similarity' (1), 'dissimilarity' (0). ### Data Splits #### ATEC #### BQ #### LCQMC #### PAWSX #### STS-B ## Dataset Creation ### Curation Rationale 作为中文NLI(natural langauge inference)数据集,这里把这个数据集上传到huggingface的datasets,方便大家使用。 ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? 数据集的版权归原作者所有,使用各数据集时请尊重原数据集的版权。 BQ: Jing Chen, Qingcai Chen, Xin Liu, Haijun Yang, Daohe Lu, Buzhou Tang, The BQ Corpus: A Large-scale Domain-specific Chinese Corpus For Sentence Semantic Equivalence Identification EMNLP2018. ### Annotations #### Annotation process #### Who are the annotators? 原作者。 ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset This dataset was developed as a benchmark for evaluating representational systems for text, especially including those induced by representation learning methods, in the task of predicting truth conditions in a given context. Systems that are successful at such a task may be more successful in modeling semantic representations. ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators - 苏剑林对文件名称有整理 - 我上传到huggingface的datasets ### Licensing Information 用于学术研究。 The BQ corpus is free to the public for academic research. ### Contributions Thanks to @shibing624 add this dataset.
[ "# Dataset Card for NLI_zh", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Repository: Chinese NLI dataset\n- Leaderboard: NLI_zh leaderboard (located on the homepage)\n- Size of downloaded dataset files: 16 MB\n- Total amount of disk used: 42 MB", "### Dataset Summary\n\n常见中文语义匹配数据集,包含ATEC、BQ、LCQMC、PAWSX、STS-B共5个任务。\n\n数据源:\n\n- ATEC: URL\n- BQ: URL\n- LCQMC: URL\n- PAWSX: URL\n- STS-B: URL", "### Supported Tasks and Leaderboards\n\nSupported Tasks: 支持中文文本匹配任务,文本相似度计算等相关任务。\n\n中文匹配任务的结果目前在顶会paper上出现较少,我罗列一个我自己训练的结果:\n\nLeaderboard: NLI_zh leaderboard", "### Languages\n\n数据集均是简体中文文本。", "## Dataset Structure", "### Data Instances\nAn example of 'train' looks as follows.", "### Data Fields\nThe data fields are the same among all splits.\n\n- 'sentence1': a 'string' feature.\n- 'sentence2': a 'string' feature.\n- 'label': a classification label, with possible values including 'similarity' (1), 'dissimilarity' (0).", "### Data Splits", "#### ATEC", "#### BQ", "#### LCQMC", "#### PAWSX", "#### STS-B", "## Dataset Creation", "### Curation Rationale\n作为中文NLI(natural langauge inference)数据集,这里把这个数据集上传到huggingface的datasets,方便大家使用。", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n数据集的版权归原作者所有,使用各数据集时请尊重原数据集的版权。\n\nBQ: Jing Chen, Qingcai Chen, Xin Liu, Haijun Yang, Daohe Lu, Buzhou Tang, The BQ Corpus: A Large-scale Domain-specific Chinese Corpus For Sentence Semantic Equivalence Identification EMNLP2018.", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n原作者。", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset\nThis dataset was developed as a benchmark for evaluating representational systems for text, especially including those induced by representation learning methods, in the task of predicting truth conditions in a given context. \n\nSystems that are successful at such a task may be more successful in modeling semantic representations.", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\n- 苏剑林对文件名称有整理\n- 我上传到huggingface的datasets", "### Licensing Information\n\n用于学术研究。\n\nThe BQ corpus is free to the public for academic research.", "### Contributions\n\nThanks to @shibing624 add this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-natural-language-inference #task_ids-semantic-similarity-scoring #task_ids-text-scoring #annotations_creators-shibing624 #language_creators-shibing624 #multilinguality-monolingual #size_categories-100K<n<20M #source_datasets-https-//github.com/shibing624/text2vec #source_datasets-https-//github.com/IceFlameWorm/NLP_Datasets/tree/master/ATEC #source_datasets-http-//icrc.hitsz.edu.cn/info/1037/1162.htm #source_datasets-http-//icrc.hitsz.edu.cn/Article/show/171.html #source_datasets-https-//arxiv.org/abs/1908.11828 #source_datasets-https-//github.com/pluto-junzeng/CNSD #language-Chinese #license-cc-by-4.0 #arxiv-1908.11828 #region-us \n", "# Dataset Card for NLI_zh", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Repository: Chinese NLI dataset\n- Leaderboard: NLI_zh leaderboard (located on the homepage)\n- Size of downloaded dataset files: 16 MB\n- Total amount of disk used: 42 MB", "### Dataset Summary\n\n常见中文语义匹配数据集,包含ATEC、BQ、LCQMC、PAWSX、STS-B共5个任务。\n\n数据源:\n\n- ATEC: URL\n- BQ: URL\n- LCQMC: URL\n- PAWSX: URL\n- STS-B: URL", "### Supported Tasks and Leaderboards\n\nSupported Tasks: 支持中文文本匹配任务,文本相似度计算等相关任务。\n\n中文匹配任务的结果目前在顶会paper上出现较少,我罗列一个我自己训练的结果:\n\nLeaderboard: NLI_zh leaderboard", "### Languages\n\n数据集均是简体中文文本。", "## Dataset Structure", "### Data Instances\nAn example of 'train' looks as follows.", "### Data Fields\nThe data fields are the same among all splits.\n\n- 'sentence1': a 'string' feature.\n- 'sentence2': a 'string' feature.\n- 'label': a classification label, with possible values including 'similarity' (1), 'dissimilarity' (0).", "### Data Splits", "#### ATEC", "#### BQ", "#### LCQMC", "#### PAWSX", "#### STS-B", "## Dataset Creation", "### Curation Rationale\n作为中文NLI(natural langauge inference)数据集,这里把这个数据集上传到huggingface的datasets,方便大家使用。", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\n数据集的版权归原作者所有,使用各数据集时请尊重原数据集的版权。\n\nBQ: Jing Chen, Qingcai Chen, Xin Liu, Haijun Yang, Daohe Lu, Buzhou Tang, The BQ Corpus: A Large-scale Domain-specific Chinese Corpus For Sentence Semantic Equivalence Identification EMNLP2018.", "### Annotations", "#### Annotation process", "#### Who are the annotators?\n原作者。", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset\nThis dataset was developed as a benchmark for evaluating representational systems for text, especially including those induced by representation learning methods, in the task of predicting truth conditions in a given context. 
\n\nSystems that are successful at such a task may be more successful in modeling semantic representations.", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\n- 苏剑林对文件名称有整理\n- 我上传到huggingface的datasets", "### Licensing Information\n\n用于学术研究。\n\nThe BQ corpus is free to the public for academic research.", "### Contributions\n\nThanks to @shibing624 add this dataset." ]
9d62f0ab0d93b3f4b38284ac88b540772e7b61b8
# Dataset Card for "SourceCode" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [code-autocomplete](https://github.com/shibing624/code-autocomplete) - **Leaderboard:** [leaderboard](https://github.com/shibing624/code-autocomplete) (located on the homepage) - **Size of downloaded dataset files:** 105 MB - **Total amount of disk used:** 570 MB ### Dataset Summary Source code dataset is a collection of Github awesome repos, it contains Python, Java, C++, and other programming languages. This dataset can be used in different NLP tasks like language modeling and text generation tasks. data source: - PYTHON_CODE: https://github.com/bharathgs/Awesome-pytorch-list - JAVA_CODE: https://github.com/akullpp/awesome-java - CPP_CODE: https://github.com/fffaraz/awesome-cpp ### Supported Tasks and Leaderboards - language modeling - code generation tasks, **Leaderboard:** [code-autocomplete](https://github.com/shibing624/code-autocomplete) ### Languages - programming languages: Python, Java, C++ - natural language: English ## Dataset Structure ### Data Instances An example of 'train' looks as follows. ``` This example was too long and was cropped: { "text": """ import json import argparse def _parse_args(): parser = argparse.ArgumentParser( description=__doc__, formatter_class=argparse.RawTextHelpFormatter, ) parser.add_argument( '--model-file', required=True, help=( 'A pt file from ' 'https://github.com/pytorch/fairseq/tree/main/examples/hubert' ) ) return parser.parse_args() """ } ``` ### Data Fields The data fields are the same among all splits. - `text`: a `string` feature. ### Data Splits #### python ```shell $ wc -l python/* 10000 python/test.txt 5215412 python/train.txt 10000 python/valid.txt 5235412 total ``` #### java ```shell $ wc -l java/* 950083 java/test.txt 2802880 java/train.txt 940803 java/valid.txt 4693766 total ``` #### cpp ```shell $ wc -l cpp/* 1060014 cpp/test.txt 3119241 cpp/train.txt 1099124 cpp/valid.txt 5278379 total ``` ## Dataset Creation ### Curation Rationale As code generation dataset, I upload it to huggingface datasets. ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? Citation: APA: ```latex Xu, M. code-autocomplete: Code AutoComplete with GPT2 model (Version 0.0.4) [Computer software]. 
https://github.com/shibing624/code-autocomplete ``` BibTeX: ```latex @software{Xu_code-autocomplete_Code_AutoComplete, author = {Xu, Ming}, title = {code-autocomplete: Code AutoComplete with GPT2 model}, url = {https://github.com/shibing624/code-autocomplete}, version = {0.0.4} } ``` ### Annotations #### Annotation process #### Who are the annotators? nobody ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset This dataset was developed as a benchmark for evaluating code generation models. ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators GitHub awesome programming code repos. ### Licensing Information GNU Free Documentation License v1.3 or later. For research use only. ### Contributions Thanks to [@shibing624](https://github.com/shibing624) for adding this dataset.
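A minimal loading sketch with the Hugging Face `datasets` library, assuming the three programming languages are exposed as configurations (e.g. `python`, `java`, `cpp`); adjust if the hosted layout differs:

```python
from datasets import load_dataset

# "python" is assumed to be one of the available configurations (alongside "java" and "cpp").
dataset = load_dataset("shibing624/source_code", "python")

sample = dataset["train"][0]["text"]
print(sample[:200])  # raw source-code text, directly usable for language modeling
```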
shibing624/source_code
[ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100M<n<200M", "source_datasets:https://github.com/shibing624/code-autocomplete", "source_datasets:https://github.com/bharathgs/Awesome-pytorch-list", "source_datasets:https://github.com/akullpp/awesome-java", "source_datasets:https://github.com/fffaraz/awesome-cpp", "language:en", "license:cc-by-4.0", "license:gfdl", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0", "gfdl"], "multilinguality": ["monolingual"], "size_categories": ["100M<n<200M"], "source_datasets": ["https://github.com/shibing624/code-autocomplete", "https://github.com/bharathgs/Awesome-pytorch-list", "https://github.com/akullpp/awesome-java", "https://github.com/fffaraz/awesome-cpp"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"]}
2022-10-30T06:30:07+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100M<n<200M #source_datasets-https-//github.com/shibing624/code-autocomplete #source_datasets-https-//github.com/bharathgs/Awesome-pytorch-list #source_datasets-https-//github.com/akullpp/awesome-java #source_datasets-https-//github.com/fffaraz/awesome-cpp #language-English #license-cc-by-4.0 #license-gfdl #region-us
# Dataset Card for "SourceCode" ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Repository: code-autocomplete - Leaderboard: leaderboard (located on the homepage) - Size of downloaded dataset files: 105 MB - Total amount of disk used: 570 MB ### Dataset Summary Source code dataset is a collection of Github awesome repos, it contains Python, Java, C++, and other programming languages. This dataset can be used in different NLP tasks like language modeling and text generation tasks. data source: - PYTHON_CODE: URL - JAVA_CODE: URL - CPP_CODE: URL ### Supported Tasks and Leaderboards - language modeling - code generation tasks, Leaderboard: code-autocomplete ### Languages - programming languages: Python, Java, C++ - natural language: English ## Dataset Structure ### Data Instances An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. - 'text': a 'string' feature. ### Data Splits #### python #### java #### cpp ## Dataset Creation ### Curation Rationale As code generation dataset, I upload it to huggingface datasets. ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? Citation: APA: BibTeX: ### Annotations #### Annotation process #### Who are the annotators? nobody ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset This dataset was developed as a benchmark for evaluating code generation model. ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators Github awesome programing code repos. ### Licensing Information GNU Free Documentation License v1.3 or later. For research use only. ### Contributions Thanks to @shibing624 add this dataset.
[ "# Dataset Card for \"SourceCode\"", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Repository: code-autocomplete\n- Leaderboard: leaderboard (located on the homepage)\n- Size of downloaded dataset files: 105 MB\n- Total amount of disk used: 570 MB", "### Dataset Summary\n\nSource code dataset is a collection of Github awesome repos, it contains Python, Java, C++, and other programming languages.\nThis dataset can be used in different NLP tasks like language modeling and text generation tasks.\n\ndata source:\n\n- PYTHON_CODE: URL\n- JAVA_CODE: URL\n- CPP_CODE: URL", "### Supported Tasks and Leaderboards\n- language modeling \n- code generation tasks, Leaderboard: code-autocomplete", "### Languages\n\n- programming languages: Python, Java, C++\n- natural language: English", "## Dataset Structure", "### Data Instances\nAn example of 'train' looks as follows.", "### Data Fields\nThe data fields are the same among all splits.\n- 'text': a 'string' feature.", "### Data Splits", "#### python", "#### java", "#### cpp", "## Dataset Creation", "### Curation Rationale\nAs code generation dataset, I upload it to huggingface datasets.", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\nCitation:\n\nAPA:\n\n\nBibTeX:", "### Annotations", "#### Annotation process", "#### Who are the annotators?\nnobody", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset\nThis dataset was developed as a benchmark for evaluating code generation model.", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nGithub awesome programing code repos.", "### Licensing Information\n\nGNU Free Documentation License v1.3 or later.\n\nFor research use only.", "### Contributions\nThanks to @shibing624 add this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100M<n<200M #source_datasets-https-//github.com/shibing624/code-autocomplete #source_datasets-https-//github.com/bharathgs/Awesome-pytorch-list #source_datasets-https-//github.com/akullpp/awesome-java #source_datasets-https-//github.com/fffaraz/awesome-cpp #language-English #license-cc-by-4.0 #license-gfdl #region-us \n", "# Dataset Card for \"SourceCode\"", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Repository: code-autocomplete\n- Leaderboard: leaderboard (located on the homepage)\n- Size of downloaded dataset files: 105 MB\n- Total amount of disk used: 570 MB", "### Dataset Summary\n\nSource code dataset is a collection of Github awesome repos, it contains Python, Java, C++, and other programming languages.\nThis dataset can be used in different NLP tasks like language modeling and text generation tasks.\n\ndata source:\n\n- PYTHON_CODE: URL\n- JAVA_CODE: URL\n- CPP_CODE: URL", "### Supported Tasks and Leaderboards\n- language modeling \n- code generation tasks, Leaderboard: code-autocomplete", "### Languages\n\n- programming languages: Python, Java, C++\n- natural language: English", "## Dataset Structure", "### Data Instances\nAn example of 'train' looks as follows.", "### Data Fields\nThe data fields are the same among all splits.\n- 'text': a 'string' feature.", "### Data Splits", "#### python", "#### java", "#### cpp", "## Dataset Creation", "### Curation Rationale\nAs code generation dataset, I upload it to huggingface datasets.", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\nCitation:\n\nAPA:\n\n\nBibTeX:", "### Annotations", "#### Annotation process", "#### Who are the annotators?\nnobody", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset\nThis dataset was developed as a benchmark for evaluating code generation model.", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\nGithub awesome programing code repos.", "### Licensing Information\n\nGNU Free Documentation License v1.3 or later.\n\nFor research use only.", "### Contributions\nThanks to @shibing624 add this dataset." ]
5f3da553d930a07de2685c0f541636eaec270318
## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) <!-- - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) --> ## Dataset Description - **Homepage:** [SIL AI](https://ai.sil.org/) - **Point of Contact:** [SIL AI email](mailto:[email protected]) - **Source Data:** [Bloom Library](https://bloomlibrary.org/) ![logo for Bloom Library](https://bloom-vist.s3.amazonaws.com/bloom_logo.png) ![sil-ai logo](https://s3.amazonaws.com/moonup/production/uploads/1661440873726-6108057a823007eaf0c7bd10.png) ## Dataset Summary **Bloom** is free, open-source software and an associated website [Bloom Library](https://bloomlibrary.org/), app, and services developed by [SIL International](https://www.sil.org/). Bloom’s primary goal is to equip non-dominant language communities and their members to create the literature they want for their community and children. Bloom also serves organizations that help such communities develop literature and education or other aspects of community development. This version of the Bloom Library data is developed specifically for the language modeling task. It includes data from 364 languages across 31 language families. There is a mean of 32 stories and median of 2 stories per language. **Note**: If you speak one of these languages and can help provide feedback or corrections, please let us know! **Note**: Although this data was used in the training of the [BLOOM model](https://huggingface.co/bigscience/bloom), this dataset only represents a small portion of the data used to train that model. Data from "Bloom Library" was combined with a large number of other datasets to train that model. "Bloom Library" is a project that existed prior to the BLOOM model, and is something separate. All that to say... We were using the "Bloom" name before it was cool. 😉 ## Languages Of the 500+ languages listed at BloomLibrary.org, there are 363 languages available in this dataset. 
Here are the corresponding ISO 639-3 codes: aaa, abc, ada, adq, aeu, afr, agq, ags, ahk, aia, ajz, aka, ame, amh, amp, amu, ann, aph, awa, awb, azn, azo, bag, bam, baw, bax, bbk, bcc, bce, bec, bef, ben, bfd, bfm, bfn, bgf, bho, bhs, bis, bjn, bjr, bkc, bkh, bkm, bkx, bob, bod, boz, bqm, bra, brb, bri, brv, bss, bud, buo, bwt, bwx, bxa, bya, bze, bzi, cak, cbr, ceb, cgc, chd, chp, cim, clo, cmn, cmo, csw, cuh, cuv, dag, ddg, ded, deu, dig, dje, dmg, dnw, dtp, dtr, dty, dug, eee, ekm, enb, enc, eng, ewo, fas, fil, fli, fon, fra, fub, fuh, gal, gbj, gou, gsw, guc, guj, guz, gwc, hao, hat, hau, hbb, hig, hil, hin, hla, hna, hre, hro, idt, ilo, ind, ino, isu, ita, jgo, jmx, jpn, jra, kak, kam, kan, kau, kbq, kbx, kby, kek, ken, khb, khm, kik, kin, kir, kjb, kmg, kmr, kms, kmu, kor, kqr, krr, ksw, kur, kvt, kwd, kwu, kwx, kxp, kyq, laj, lan, lao, lbr, lfa, lgg, lgr, lhm, lhu, lkb, llg, lmp, lns, loh, lsi, lts, lug, luy, lwl, mai, mal, mam, mar, mdr, mfh, mfj, mgg, mgm, mgo, mgq, mhx, miy, mkz, mle, mlk, mlw, mmu, mne, mnf, mnw, mot, mqj, mrn, mry, msb, muv, mve, mxu, mya, myk, myx, mzm, nas, nco, nep, new, nge, ngn, nhx, njy, nla, nld, nlv, nod, nsk, nsn, nso, nst, nuj, nwe, nwi, nxa, nxl, nya, nyo, nyu, nza, odk, oji, oki, omw, ori, ozm, pae, pag, pan, pbt, pce, pcg, pdu, pea, pex, pis, pkb, pmf, pnz, por, psp, pwg, qub, quc, quf, quz, qve, qvh, qvm, qvo, qxh, rel, rnl, ron, roo, rue, rug, rus, san, saq, sat, sdk, sea, sgd, shn, sml, snk, snl, som, sot, sox, spa, sps, ssn, stk, swa, swh, sxb, syw, taj, tam, tbj, tdb, tdg, tdt, teo, tet, tgk, tha, the, thk, thl, thy, tio, tkd, tnl, tnn, tnp, tnt, tod, tom, tpi, tpl, tpu, tsb, tsn, tso, tuv, tuz, tvs, udg, unr, urd, uzb, ven, vie, vif, war, wbm, wbr, wms, wni, wnk, wtk, xho, xkg, xmd, xmg, xmm, xog, xty, yas, yav, ybb, ybh, ybi, ydd, yea, yet, yid, yin, ymp, zaw, zho, zlm, zuh, zul ## Dataset Statistics Some of the languages included in the dataset just include 1 or a couple of "stories." These are not split between training, validation, and test. 
For those with higher numbers of available stories we include the following numbers of stories in each split: | ISO 639-3 | Name | Train Stories | Validation Stories | Test Stories | |:------------|:------------------------------|----------------:|---------------------:|---------------:| | aeu | Akeu | 47 | 6 | 5 | | afr | Afrikaans | 19 | 2 | 2 | | ahk | Akha | 81 | 10 | 10 | | aph | Athpariya | 28 | 4 | 3 | | awa | Awadhi | 131 | 16 | 16 | | ben | Bengali | 201 | 25 | 25 | | bfn | Bunak | 11 | 1 | 1 | | bho | Bhojpuri | 139 | 17 | 17 | | bis | Bislama | 20 | 2 | 2 | | bkm | Kom (Cameroon) | 15 | 2 | 1 | | bkx | Baikeno | 8 | 1 | 1 | | brb | Brao | 18 | 2 | 2 | | bwx | Bu-Nao Bunu | 14 | 2 | 1 | | bzi | Bisu | 53 | 7 | 6 | | cak | Kaqchikel | 54 | 7 | 6 | | cbr | Cashibo-Cacataibo | 11 | 1 | 1 | | ceb | Cebuano | 335 | 42 | 41 | | cgc | Kagayanen | 158 | 20 | 19 | | cmo | Central Mnong | 16 | 2 | 2 | | ddg | Fataluku | 14 | 2 | 1 | | deu | German | 36 | 4 | 4 | | dtp | Kadazan Dusun | 13 | 2 | 1 | | dty | Dotyali | 138 | 17 | 17 | | eng | English | 2107 | 263 | 263 | | fas | Persian | 104 | 13 | 12 | | fil | Filipino | 55 | 7 | 6 | | fra | French | 323 | 40 | 40 | | gal | Galolen | 11 | 1 | 1 | | gwc | Gawri | 15 | 2 | 1 | | hat | Haitian | 208 | 26 | 26 | | hau | Hausa | 205 | 26 | 25 | | hbb | Huba | 22 | 3 | 2 | | hin | Hindi | 16 | 2 | 2 | | idt | Idaté | 8 | 1 | 1 | | ind | Indonesian | 208 | 26 | 25 | | jmx | Western Juxtlahuaca Mixtec | 19 | 2 | 2 | | jra | Jarai | 112 | 14 | 13 | | kak | Kalanguya | 156 | 20 | 19 | | kan | Kannada | 17 | 2 | 2 | | kau | Kanuri | 36 | 5 | 4 | | kek | Kekchí | 29 | 4 | 3 | | khb | Lü | 25 | 3 | 3 | | khm | Khmer | 28 | 4 | 3 | | kik | Kikuyu | 8 | 1 | 1 | | kir | Kirghiz | 306 | 38 | 38 | | kjb | Q'anjob'al | 82 | 10 | 10 | | kmg | Kâte | 16 | 2 | 1 | | kor | Korean | 106 | 13 | 13 | | krr | Krung | 24 | 3 | 3 | | kwd | Kwaio | 19 | 2 | 2 | | kwu | Kwakum | 16 | 2 | 2 | | lbr | Lohorung | 8 | 1 | 1 | | lhu | Lahu | 32 | 4 | 4 | | lsi | Lashi | 21 | 3 | 2 | | mai | Maithili | 144 | 18 | 18 | | mal | Malayalam | 12 | 1 | 1 | | mam | Mam | 108 | 13 | 13 | | mar | Marathi | 8 | 1 | 1 | | mgm | Mambae | 12 | 2 | 1 | | mhx | Maru | 79 | 10 | 9 | | mkz | Makasae | 16 | 2 | 2 | | mya | Burmese | 31 | 4 | 3 | | myk | Mamara Senoufo | 28 | 3 | 3 | | nep | Nepali (macrolanguage) | 160 | 20 | 20 | | new | Newari | 142 | 18 | 17 | | nlv | Orizaba Nahuatl | 8 | 1 | 1 | | nsn | Nehan | 9 | 1 | 1 | | nwi | Southwest Tanna | 9 | 1 | 1 | | nxa | Nauete | 12 | 1 | 1 | | omw | South Tairora | 10 | 1 | 1 | | pbt | Southern Pashto | 164 | 21 | 20 | | pce | Ruching Palaung | 30 | 4 | 3 | | pis | Pijin | 14 | 2 | 1 | | por | Portuguese | 131 | 16 | 16 | | quc | K'iche' | 80 | 10 | 9 | | rus | Russian | 283 | 35 | 35 | | sdk | Sos Kundi | 9 | 1 | 1 | | snk | Soninke | 28 | 4 | 3 | | spa | Spanish | 423 | 53 | 52 | | swh | Swahili (individual language) | 58 | 7 | 7 | | tam | Tamil | 13 | 2 | 1 | | tdg | Western Tamang | 26 | 3 | 3 | | tdt | Tetun Dili | 22 | 3 | 2 | | tet | Tetum | 8 | 1 | 1 | | tgk | Tajik | 24 | 3 | 2 | | tha | Thai | 228 | 29 | 28 | | the | Chitwania Tharu | 11 | 1 | 1 | | thl | Dangaura Tharu | 148 | 19 | 18 | | tnl | Lenakel | 10 | 1 | 1 | | tnn | North Tanna | 9 | 1 | 1 | | tpi | Tok Pisin | 161 | 20 | 20 | | tpu | Tampuan | 24 | 3 | 2 | | uzb | Uzbek | 24 | 3 | 2 | | war | Waray (Philippines) | 16 | 2 | 2 | | wbr | Wagdi | 10 | 1 | 1 | | wni | Ndzwani Comorian | 12 | 2 | 1 | | xkg | Kagoro | 16 | 2 | 1 | | ybh | Yakha | 16 | 2 | 1 | | zho | Chinese | 
34 | 4 | 4 | | zlm | Malay (individual language) | 8 | 1 | 1 | | zul | Zulu | 19 | 2 | 2 | ## Dataset Structure ### Data Instances The examples look like this for Hindi: ``` from datasets import load_dataset # Specify the language code. dataset = load_dataset("sil-ai/bloom-lm", 'hin') # A data point consists of stories in the specified language code. # To see a story: print(dataset['train']['text'][0]) ``` This would produce an output: ``` साबू ने एक कंकड़ को ठोकर मारी। कंकड़ लुढ़कता हुआ एक पेड़ के पास पहुँचा। पेड़ के तने पर मुलायम बाल थे। साबू ने छुए और ऊपर देखा, ऊपर, ऊपर और उससे भी ऊपर...दो आँखें नीचे देख रही थीं। “हेलो, तुम कौन हो?” साबू को बड़ा अचम्भा हुआ।“हेलो, मैं जिराफ़ हूँ। मेरा नाम है जोजो। मैं तुम्हारे साथ खेल सकता हूँ। मेरी पीठ पर चढ़ जाओ, मैं तुम्हें घुमा के लाता हूँ।” साबू जोजो की पीठ पर चढ़ गया और वे सड़क पर चल निकले। फिर पहाड़ी पर और शहर के बीचों बीच। साबू खुशी से चिल्लाया, “जोजो दाएँ मुड़ो, बाएँ मुड़ो और फिर दाएँ।” अब वे उसकी दोस्त मुन्नी के घर पहुँच गये। आज मुन्नी का जन्मदिन था। साबू को जोजो पर सवारी करते देख बच्चों ने ताली बजायी। जोजो ने गुब्बारे लटकाने में आन्टी की मदद करी क्योंकि वह इतना... लम्बा था। कितना आसान था! जोजो ने सब बच्चों को सवारी कराई। उनके साथ बॉल भी खेली। बड़े मज़े की पार्टी थी।सब ने गाया, “हैप्पी बर्थ डे टु यू ।” आन्टी ने मेज़ पर समोसे, गुलाब जामुन और आइसक्रीम सजाई। जोजो को आइसक्रीम बहुत पसन्द आई। अंकल उसके लिये एक बाल्टी भर के आइसक्रीम लाये। जोजो ने पूरी बाल्टी ख़त्म कर दी। अब घर जाने का समय हो गया। सब ने कहा, “बाय बाय जोजो, बाय बाय साबू।” साबू और जोजो घर लौटे। ``` Whereas if you wish to gather all the text for a language you may use this: ``` dataset['train']['text'] ``` ### Data Fields The metadata fields below are available and the full dataset will be updated with per story metadata soon (in August 2022). As of now a majority of stories have metadata, but some are missing certain fields. In terms of licenses, all stories included in the current release are released under a Creative Commons license (even if the individual story metadata fields are missing). - **text**: the text of the story/book, concatenated together from the different pages. - **id**: id of the sample - **title**: title of the book, e.g. "Going to Buy a Book". - **license**: specific license used, e.g. "cc-by-sa" for "Creative Commons, by attribution, share-alike". - **copyright**: copyright notice from the original book on bloomlibrary.org - **pageCount**: page count from the metadata on the original book on bloomlibrary.org. - **bookInstanceId**: unique ID for each book/translation assigned by Bloom. For example the Hindi version of 'Going to Buy a Book' is 'af86eefd-f69c-4e06-b8eb-e0451853aab9'. - **bookLineage**: Unique bookInstanceIDs of _other_ Bloom books that this book is in some way based on. For example, the Hindi version in the example above is based on '056B6F11-4A6C-4942-B2BC-8861E62B03B3'. It's quite possible for this to be either empty, or have multiple entries. For example, the book 'Saboo y Jojo' with ID '5b232a5f-561d-4514-afe7-d6ed2f6a940f' is based on two others, ['056B6F11-4A6C-4942-B2BC-8861E62B03B3', '10a6075b-3c4f-40e4-94f3-593497f2793a'] - (coming soon) **contentLanguages**: Other languages this book may be available in. "Going to Buy a Book" is available in ['eng', 'kan', 'mar', 'pan', 'ben', 'guj', 'hin'] for example. ### Data Splits All languages include a train, validation, and test split. However, for language having a small number of stories, certain of these splits maybe empty. 
In such cases, we recommend using any data for testing only or for zero-shot experiments (a split-checking sketch follows the changelog below). ## Changelog - **25 August 2022** - add the remaining metadata, change data type of `pageCount` to int32 - **24 August 2022** - majority of metadata added back into the filtered/clean data - **23 August 2022** - metadata temporarily removed to update to cleaner dataset
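The recommendation above is easy to act on programmatically. The sketch below is not part of the original card: it assumes you have already accepted the dataset's access terms on the Hugging Face Hub, and the language code `bfn` (Bunak) is only an illustrative low-resource choice.

```
# Check which splits are actually populated before deciding between
# fine-tuning and zero-shot evaluation for a low-resource language.
from datasets import load_dataset

dataset = load_dataset("sil-ai/bloom-lm", "bfn")

for split_name in ("train", "validation", "test"):
    n_stories = len(dataset[split_name]) if split_name in dataset else 0
    print(f"{split_name}: {n_stories} stories")

# If train holds only a handful of stories, keep the data for evaluation
# or zero-shot experiments rather than training.
```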
sil-ai/bloom-lm
[ "task_ids:language-modeling", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:multilingual", "size_categories:10K<n<100K", "source_datasets:original", "language:afr", "language:af", "language:aaa", "language:abc", "language:ada", "language:adq", "language:aeu", "language:agq", "language:ags", "language:ahk", "language:aia", "language:ajz", "language:aka", "language:ak", "language:ame", "language:amh", "language:am", "language:amp", "language:amu", "language:ann", "language:aph", "language:awa", "language:awb", "language:azn", "language:azo", "language:bag", "language:bam", "language:bm", "language:baw", "language:bax", "language:bbk", "language:bcc", "language:bce", "language:bec", "language:bef", "language:ben", "language:bn", "language:bfd", "language:bfm", "language:bfn", "language:bgf", "language:bho", "language:bhs", "language:bis", "language:bi", "language:bjn", "language:bjr", "language:bkc", "language:bkh", "language:bkm", "language:bkx", "language:bob", "language:bod", "language:bo", "language:boz", "language:bqm", "language:bra", "language:brb", "language:bri", "language:brv", "language:bss", "language:bud", "language:buo", "language:bwt", "language:bwx", "language:bxa", "language:bya", "language:bze", "language:bzi", "language:cak", "language:cbr", "language:ceb", "language:cgc", "language:chd", "language:chp", "language:cim", "language:clo", "language:cmn", "language:zh", "language:cmo", "language:csw", "language:cuh", "language:cuv", "language:dag", "language:ddg", "language:ded", "language:deu", "language:de", "language:dig", "language:dje", "language:dmg", "language:dnw", "language:dtp", "language:dtr", "language:dty", "language:dug", "language:eee", "language:ekm", "language:enb", "language:enc", "language:eng", "language:en", "language:ewo", "language:fas", "language:fa", "language:fil", "language:fli", "language:fon", "language:fra", "language:fr", "language:fub", "language:fuh", "language:gal", "language:gbj", "language:gou", "language:gsw", "language:guc", "language:guj", "language:gu", "language:guz", "language:gwc", "language:hao", "language:hat", "language:ht", "language:hau", "language:ha", "language:hbb", "language:hig", "language:hil", "language:hin", "language:hi", "language:hla", "language:hna", "language:hre", "language:hro", "language:idt", "language:ilo", "language:ind", "language:id", "language:ino", "language:isu", "language:ita", "language:it", "language:jgo", "language:jmx", "language:jpn", "language:ja", "language:jra", "language:kak", "language:kam", "language:kan", "language:kn", "language:kau", "language:kr", "language:kbq", "language:kbx", "language:kby", "language:kek", "language:ken", "language:khb", "language:khm", "language:km", "language:kik", "language:ki", "language:kin", "language:rw", "language:kir", "language:ky", "language:kjb", "language:kmg", "language:kmr", "language:ku", "language:kms", "language:kmu", "language:kor", "language:ko", "language:kqr", "language:krr", "language:ksw", "language:kur", "language:kvt", "language:kwd", "language:kwu", "language:kwx", "language:kxp", "language:kyq", "language:laj", "language:lan", "language:lao", "language:lo", "language:lbr", "language:lfa", "language:lgg", "language:lgr", "language:lhm", "language:lhu", "language:lkb", "language:llg", "language:lmp", "language:lns", "language:loh", "language:lsi", "language:lts", "language:lug", "language:lg", "language:luy", "language:lwl", "language:mai", "language:mal", "language:ml", "language:mam", 
"language:mar", "language:mr", "language:mdr", "language:mfh", "language:mfj", "language:mgg", "language:mgm", "language:mgo", "language:mgq", "language:mhx", "language:miy", "language:mkz", "language:mle", "language:mlk", "language:mlw", "language:mmu", "language:mne", "language:mnf", "language:mnw", "language:mot", "language:mqj", "language:mrn", "language:mry", "language:msb", "language:muv", "language:mve", "language:mxu", "language:mya", "language:my", "language:myk", "language:myx", "language:mzm", "language:nas", "language:nco", "language:nep", "language:ne", "language:new", "language:nge", "language:ngn", "language:nhx", "language:njy", "language:nla", "language:nld", "language:nl", "language:nlv", "language:nod", "language:nsk", "language:nsn", "language:nso", "language:nst", "language:nuj", "language:nwe", "language:nwi", "language:nxa", "language:nxl", "language:nya", "language:ny", "language:nyo", "language:nyu", "language:nza", "language:odk", "language:oji", "language:oj", "language:oki", "language:omw", "language:ori", "language:or", "language:ozm", "language:pae", "language:pag", "language:pan", "language:pa", "language:pbt", "language:pce", "language:pcg", "language:pdu", "language:pea", "language:pex", "language:pis", "language:pkb", "language:pmf", "language:pnz", "language:por", "language:pt", "language:psp", "language:pwg", "language:qaa", "language:qub", "language:quc", "language:quf", "language:quz", "language:qve", "language:qvh", "language:qvm", "language:qvo", "language:qxh", "language:rel", "language:rnl", "language:ron", "language:ro", "language:roo", "language:rue", "language:rug", "language:rus", "language:ru", "language:san", "language:sa", "language:saq", "language:sat", "language:sdk", "language:sea", "language:sgd", "language:shn", "language:sml", "language:snk", "language:snl", "language:som", "language:so", "language:sot", "language:st", "language:sox", "language:spa", "language:es", "language:sps", "language:ssn", "language:stk", "language:swa", "language:sw", "language:swh", "language:sxb", "language:syw", "language:taj", "language:tam", "language:ta", "language:tbj", "language:tdb", "language:tdg", "language:tdt", "language:teo", "language:tet", "language:tgk", "language:tg", "language:tha", "language:th", "language:the", "language:thk", "language:thl", "language:thy", "language:tio", "language:tkd", "language:tnl", "language:tnn", "language:tnp", "language:tnt", "language:tod", "language:tom", "language:tpi", "language:tpl", "language:tpu", "language:tsb", "language:tsn", "language:tn", "language:tso", "language:ts", "language:tuv", "language:tuz", "language:tvs", "language:udg", "language:unr", "language:urd", "language:ur", "language:uzb", "language:uz", "language:ven", "language:ve", "language:vie", "language:vi", "language:vif", "language:war", "language:wbm", "language:wbr", "language:wms", "language:wni", "language:wnk", "language:wtk", "language:xho", "language:xh", "language:xkg", "language:xmd", "language:xmg", "language:xmm", "language:xog", "language:xty", "language:yas", "language:yav", "language:ybb", "language:ybh", "language:ybi", "language:ydd", "language:yea", "language:yet", "language:yid", "language:yi", "language:yin", "language:ymp", "language:zaw", "language:zho", "language:zlm", "language:zuh", "language:zul", "language:zu", "license:cc-by-4.0", "license:cc-by-nc-4.0", "license:cc-by-nd-4.0", "license:cc-by-sa-4.0", "license:cc-by-nc-nd-4.0", "license:cc-by-nc-sa-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["afr", "af", "aaa", "abc", "ada", "adq", "aeu", "agq", "ags", "ahk", "aia", "ajz", "aka", "ak", "ame", "amh", "am", "amp", "amu", "ann", "aph", "awa", "awb", "azn", "azo", "bag", "bam", "bm", "baw", "bax", "bbk", "bcc", "bce", "bec", "bef", "ben", "bn", "bfd", "bfm", "bfn", "bgf", "bho", "bhs", "bis", "bi", "bjn", "bjr", "bkc", "bkh", "bkm", "bkx", "bob", "bod", "bo", "boz", "bqm", "bra", "brb", "bri", "brv", "bss", "bud", "buo", "bwt", "bwx", "bxa", "bya", "bze", "bzi", "cak", "cbr", "ceb", "cgc", "chd", "chp", "cim", "clo", "cmn", "zh", "cmo", "csw", "cuh", "cuv", "dag", "ddg", "ded", "deu", "de", "dig", "dje", "dmg", "dnw", "dtp", "dtr", "dty", "dug", "eee", "ekm", "enb", "enc", "eng", "en", "ewo", "fas", "fa", "fil", "fli", "fon", "fra", "fr", "fub", "fuh", "gal", "gbj", "gou", "gsw", "guc", "guj", "gu", "guz", "gwc", "hao", "hat", "ht", "hau", "ha", "hbb", "hig", "hil", "hin", "hi", "hla", "hna", "hre", "hro", "idt", "ilo", "ind", "id", "ino", "isu", "ita", "it", "jgo", "jmx", "jpn", "ja", "jra", "kak", "kam", "kan", "kn", "kau", "kr", "kbq", "kbx", "kby", "kek", "ken", "khb", "khm", "km", "kik", "ki", "kin", "rw", "kir", "ky", "kjb", "kmg", "kmr", "ku", "kms", "kmu", "kor", "ko", "kqr", "krr", "ksw", "kur", "ku", "kvt", "kwd", "kwu", "kwx", "kxp", "kyq", "laj", "lan", "lao", "lo", "lbr", "lfa", "lgg", "lgr", "lhm", "lhu", "lkb", "llg", "lmp", "lns", "loh", "lsi", "lts", "lug", "lg", "luy", "lwl", "mai", "mal", "ml", "mam", "mar", "mr", "mdr", "mfh", "mfj", "mgg", "mgm", "mgo", "mgq", "mhx", "miy", "mkz", "mle", "mlk", "mlw", "mmu", "mne", "mnf", "mnw", "mot", "mqj", "mrn", "mry", "msb", "muv", "mve", "mxu", "mya", "my", "myk", "myx", "mzm", "nas", "nco", "nep", "ne", "new", "nge", "ngn", "nhx", "njy", "nla", "nld", "nl", "nlv", "nod", "nsk", "nsn", "nso", "nst", "nuj", "nwe", "nwi", "nxa", "nxl", "nya", "ny", "nyo", "nyu", "nza", "odk", "oji", "oj", "oki", "omw", "ori", "or", "ozm", "pae", "pag", "pan", "pa", "pbt", "pce", "pcg", "pdu", "pea", "pex", "pis", "pkb", "pmf", "pnz", "por", "pt", "psp", "pwg", "qaa", "qub", "quc", "quf", "quz", "qve", "qvh", "qvm", "qvo", "qxh", "rel", "rnl", "ron", "ro", "roo", "rue", "rug", "rus", "ru", "san", "sa", "saq", "sat", "sdk", "sea", "sgd", "shn", "sml", "snk", "snl", "som", "so", "sot", "st", "sox", "spa", "es", "sps", "ssn", "stk", "swa", "sw", "swh", "sxb", "syw", "taj", "tam", "ta", "tbj", "tdb", "tdg", "tdt", "teo", "tet", "tgk", "tg", "tha", "th", "the", "thk", "thl", "thy", "tio", "tkd", "tnl", "tnn", "tnp", "tnt", "tod", "tom", "tpi", "tpl", "tpu", "tsb", "tsn", "tn", "tso", "ts", "tuv", "tuz", "tvs", "udg", "unr", "urd", "ur", "uzb", "uz", "ven", "ve", "vie", "vi", "vif", "war", "wbm", "wbr", "wms", "wni", "wnk", "wtk", "xho", "xh", "xkg", "xmd", "xmg", "xmm", "xog", "xty", "yas", "yav", "ybb", "ybh", "ybi", "ydd", "yea", "yet", "yid", "yi", "yin", "ymp", "zaw", "zho", "zh", "zlm", "zuh", "zul", "zu"], "license": ["cc-by-4.0", "cc-by-nc-4.0", "cc-by-nd-4.0", "cc-by-sa-4.0", "cc-by-nc-nd-4.0", "cc-by-nc-sa-4.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_ids": ["language-modeling"], "pretty_name": "BloomLM", "extra_gated_prompt": "One more step before getting this dataset. This dataset is open access and available only for non-commercial use (except for portions of the dataset labeled with a `cc-by-sa` license). 
A \"license\" field paired with each of the dataset entries/samples specifies the Creative Commons license for that entry/sample.\n\nThese [Creative Commons licenses](https://creativecommons.org/about/cclicenses/) specify that: \n\n1. You cannot use the dataset for or directed toward commercial advantage or monetary compensation (except for those portions of the dataset labeled specifically with a `cc-by-sa` license. If you would like to ask about commercial uses of this dataset, please [email us](mailto:[email protected]).\n2. Any public, non-commercial use of the data must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use. \n3. For those portions of the dataset marked with an ND license, you cannot remix, transform, or build upon the material, and you may not distribute modified material. \n\nIn addition to the above implied by Creative Commons and when clicking \"Access Repository\" below, you agree: \n\n1. Not to use the dataset for any use intended to or which has the effect of harming or enabling discrimination against individuals or groups based on legally protected characteristics or categories, including but not limited to discrimination against Indigenous People as outlined in Articles 2; 13-16; and 31 of the United Nations Declaration on the Rights of Indigenous People, 13 September 2007 and as subsequently amended and revised.\n2. That your *contact information* (email address and username) can be shared with the model authors as well.\n ", "extra_gated_fields": {"I have read the License and agree with its terms": "checkbox"}}
2022-10-21T11:13:50+00:00
[]
[ "afr", "af", "aaa", "abc", "ada", "adq", "aeu", "agq", "ags", "ahk", "aia", "ajz", "aka", "ak", "ame", "amh", "am", "amp", "amu", "ann", "aph", "awa", "awb", "azn", "azo", "bag", "bam", "bm", "baw", "bax", "bbk", "bcc", "bce", "bec", "bef", "ben", "bn", "bfd", "bfm", "bfn", "bgf", "bho", "bhs", "bis", "bi", "bjn", "bjr", "bkc", "bkh", "bkm", "bkx", "bob", "bod", "bo", "boz", "bqm", "bra", "brb", "bri", "brv", "bss", "bud", "buo", "bwt", "bwx", "bxa", "bya", "bze", "bzi", "cak", "cbr", "ceb", "cgc", "chd", "chp", "cim", "clo", "cmn", "zh", "cmo", "csw", "cuh", "cuv", "dag", "ddg", "ded", "deu", "de", "dig", "dje", "dmg", "dnw", "dtp", "dtr", "dty", "dug", "eee", "ekm", "enb", "enc", "eng", "en", "ewo", "fas", "fa", "fil", "fli", "fon", "fra", "fr", "fub", "fuh", "gal", "gbj", "gou", "gsw", "guc", "guj", "gu", "guz", "gwc", "hao", "hat", "ht", "hau", "ha", "hbb", "hig", "hil", "hin", "hi", "hla", "hna", "hre", "hro", "idt", "ilo", "ind", "id", "ino", "isu", "ita", "it", "jgo", "jmx", "jpn", "ja", "jra", "kak", "kam", "kan", "kn", "kau", "kr", "kbq", "kbx", "kby", "kek", "ken", "khb", "khm", "km", "kik", "ki", "kin", "rw", "kir", "ky", "kjb", "kmg", "kmr", "ku", "kms", "kmu", "kor", "ko", "kqr", "krr", "ksw", "kur", "kvt", "kwd", "kwu", "kwx", "kxp", "kyq", "laj", "lan", "lao", "lo", "lbr", "lfa", "lgg", "lgr", "lhm", "lhu", "lkb", "llg", "lmp", "lns", "loh", "lsi", "lts", "lug", "lg", "luy", "lwl", "mai", "mal", "ml", "mam", "mar", "mr", "mdr", "mfh", "mfj", "mgg", "mgm", "mgo", "mgq", "mhx", "miy", "mkz", "mle", "mlk", "mlw", "mmu", "mne", "mnf", "mnw", "mot", "mqj", "mrn", "mry", "msb", "muv", "mve", "mxu", "mya", "my", "myk", "myx", "mzm", "nas", "nco", "nep", "ne", "new", "nge", "ngn", "nhx", "njy", "nla", "nld", "nl", "nlv", "nod", "nsk", "nsn", "nso", "nst", "nuj", "nwe", "nwi", "nxa", "nxl", "nya", "ny", "nyo", "nyu", "nza", "odk", "oji", "oj", "oki", "omw", "ori", "or", "ozm", "pae", "pag", "pan", "pa", "pbt", "pce", "pcg", "pdu", "pea", "pex", "pis", "pkb", "pmf", "pnz", "por", "pt", "psp", "pwg", "qaa", "qub", "quc", "quf", "quz", "qve", "qvh", "qvm", "qvo", "qxh", "rel", "rnl", "ron", "ro", "roo", "rue", "rug", "rus", "ru", "san", "sa", "saq", "sat", "sdk", "sea", "sgd", "shn", "sml", "snk", "snl", "som", "so", "sot", "st", "sox", "spa", "es", "sps", "ssn", "stk", "swa", "sw", "swh", "sxb", "syw", "taj", "tam", "ta", "tbj", "tdb", "tdg", "tdt", "teo", "tet", "tgk", "tg", "tha", "th", "the", "thk", "thl", "thy", "tio", "tkd", "tnl", "tnn", "tnp", "tnt", "tod", "tom", "tpi", "tpl", "tpu", "tsb", "tsn", "tn", "tso", "ts", "tuv", "tuz", "tvs", "udg", "unr", "urd", "ur", "uzb", "uz", "ven", "ve", "vie", "vi", "vif", "war", "wbm", "wbr", "wms", "wni", "wnk", "wtk", "xho", "xh", "xkg", "xmd", "xmg", "xmm", "xog", "xty", "yas", "yav", "ybb", "ybh", "ybi", "ydd", "yea", "yet", "yid", "yi", "yin", "ymp", "zaw", "zho", "zlm", "zuh", "zul", "zu" ]
TAGS #task_ids-language-modeling #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-Afrikaans #language-Afrikaans #language-Ghotuo #language-Ambala Ayta #language-Adangme #language-Adangbe #language-Akeu #language-Aghem #language-Esimbi #language-Akha #language-Arosi #language-Amri Karbi #language-Akan #language-Akan #language-Yanesha' #language-Amharic #language-Amharic #language-Alamblak #language-Guerrero Amuzgo #language-Obolo #language-Athpariya #language-Awadhi #language-Awa (Papua New Guinea) #language-Western Durango Nahuatl #language-Awing #language-Tuki #language-Bambara #language-Bambara #language-Bambili-Bambui #language-Bamun #language-Babanki #language-Southern Balochi #language-Bamenyam #language-Iceve-Maci #language-Benabena #language-Bengali #language-Bengali #language-Bafut #language-Mmen #language-Bunak #language-Bangandu #language-Bhojpuri #language-Buwal #language-Bislama #language-Bislama #language-Banjar #language-Binumarien #language-Baka (Cameroon) #language-Bakoko #language-Kom (Cameroon) #language-Baikeno #language-Aweer #language-Tibetan #language-Tibetan #language-Tiéyaxo Bozo #language-Wumboko #language-Braj #language-Brao #language-Mokpwe #language-Western Bru #language-Akoose #language-Ntcham #language-Terei #language-Bafaw-Balong #language-Bu-Nao Bunu #language-Tairaha #language-Batak #language-Jenaama Bozo #language-Bisu #language-Kaqchikel #language-Cashibo-Cacataibo #language-Cebuano #language-Kagayanen #language-Highland Oaxaca Chontal #language-Chipewyan #language-Cimbrian #language-Lowland Oaxaca Chontal #language-Mandarin Chinese #language-Chinese #language-Central Mnong #language-Swampy Cree #language-Chuka #language-Cuvok #language-Dagbani #language-Fataluku #language-Dedua #language-German #language-German #language-Digo #language-Zarma #language-Upper Kinabatangan #language-Western Dani #language-Kadazan Dusun #language-Lotud #language-Dotyali #language-Duruma #language-E #language-Elip #language-Markweeta #language-En #language-English #language-English #language-Ewondo #language-Persian #language-Persian #language-Filipino #language-Fali #language-Fon #language-French #language-French #language-Adamawa Fulfulde #language-Western Niger Fulfulde #language-Galolen #language-Bodo Gadaba #language-Gavar #language-Swiss German #language-Wayuu #language-Gujarati #language-Gujarati #language-Gusii #language-Gawri #language-Hakö #language-Haitian #language-Haitian #language-Hausa #language-Hausa #language-Huba #language-Kamwe #language-Hiligaynon #language-Hindi #language-Hindi #language-Halia #language-Mina (Cameroon) #language-Hre #language-Haroi #language-Idaté #language-Iloko #language-Indonesian #language-Indonesian #language-Inoke-Yate #language-Isu (Menchum Division) #language-Italian #language-Italian #language-Ngomba #language-Western Juxtlahuaca Mixtec #language-Japanese #language-Japanese #language-Jarai #language-Kalanguya #language-Kamba (Kenya) #language-Kannada #language-Kannada #language-Kanuri #language-Kanuri #language-Kamano #language-Ap Ma #language-Manga Kanuri #language-Kekchí #language-Kenyang #language-Lü #language-Khmer #language-Khmer #language-Kikuyu #language-Kikuyu #language-Kinyarwanda #language-Kinyarwanda #language-Kirghiz #language-Kirghiz #language-Q'anjob'al #language-Kâte #language-Northern Kurdish #language-Kurdish #language-Kamasau #language-Kanite #language-Korean #language-Korean 
#language-Kimaragang #language-Krung #language-S'gaw Karen #language-Kurdish #language-Lahta Karen #language-Kwaio #language-Kwakum #language-Khirwar #language-Wadiyara Koli #language-Kenga #language-Lango (Uganda) #language-Laru #language-Lao #language-Lao #language-Lohorung #language-Lefa #language-Lugbara #language-Lengo #language-Lhomi #language-Lahu #language-Kabras #language-Lole #language-Limbum #language-Lamnso' #language-Laarim #language-Lashi #language-Tachoni #language-Ganda #language-Ganda #language-Luyia #language-Eastern Lawa #language-Maithili #language-Malayalam #language-Malayalam #language-Mam #language-Marathi #language-Marathi #language-Mandar #language-Matal #language-Mefele #language-Mpumpong #language-Mambae #language-Meta' #language-Malila #language-Maru #language-Ayutla Mixtec #language-Makasae #language-Manambu #language-Ilwana #language-Moloko #language-Mmaala #language-Naba #language-Mundani #language-Mon #language-Barí #language-Mamasa #language-Cheke Holo #language-Mandaya #language-Masbatenyo #language-Muthuvan #language-Marwari (Pakistan) #language-Mada (Cameroon) #language-Burmese #language-Burmese #language-Mamara Senoufo #language-Masaaba #language-Mumuye #language-Naasioi #language-Sibe #language-Nepali (macrolanguage) #language-Nepali (macrolanguage) #language-Newari #language-Ngemba #language-Ngwo #language-Isthmus-Mecayapan Nahuatl #language-Njyem #language-Ngombale #language-Dutch #language-Dutch #language-Orizaba Nahuatl #language-Northern Thai #language-Naskapi #language-Nehan #language-Pedi #language-Tase Naga #language-Nyole #language-Ngwe #language-Southwest Tanna #language-Nauete #language-South Nuaulu #language-Nyanja #language-Nyanja #language-Nyoro #language-Nyungwe #language-Tigon Mbembe #language-Od #language-Ojibwa #language-Ojibwa #language-Okiek #language-South Tairora #language-Oriya (macrolanguage) #language-Oriya (macrolanguage) #language-Koonzime #language-Pagibete #language-Pangasinan #language-Panjabi #language-Panjabi #language-Southern Pashto #language-Ruching Palaung #language-Paniya #language-Kayan #language-Peranakan Indonesian #language-Petats #language-Pijin #language-Pokomo #language-Pamona #language-Pana (Central African Republic) #language-Portuguese #language-Portuguese #language-Philippine Sign Language #language-Gapapaiwa #language-qaa #language-Huallaga Huánuco Quechua #language-K'iche' #language-Lambayeque Quechua #language-Cusco Quechua #language-Eastern Apurímac Quechua #language-Huamalíes-Dos de Mayo Huánuco Quechua #language-Margos-Yarowilca-Lauricocha Quechua #language-Napo Lowland Quechua #language-Panao Huánuco Quechua #language-Rendille #language-Ranglong #language-Romanian #language-Romanian #language-Rotokas #language-Rusyn #language-Roviana #language-Russian #language-Russian #language-Sanskrit #language-Sanskrit #language-Samburu #language-Santali #language-Sos Kundi #language-Semai #language-Surigaonon #language-Shan #language-Central Sama #language-Soninke #language-Sangil #language-Somali #language-Somali #language-Southern Sotho #language-Southern Sotho #language-Swo #language-Spanish #language-Spanish #language-Saposa #language-Waata #language-Arammba #language-Swahili (macrolanguage) #language-Swahili (macrolanguage) #language-Swahili (individual language) #language-Suba #language-Kagate #language-Eastern Tamang #language-Tamil #language-Tamil #language-Tiang #language-Panchpargania #language-Western Tamang #language-Tetun Dili #language-Teso #language-Tetum #language-Tajik #language-Tajik 
#language-Thai #language-Thai #language-Chitwania Tharu #language-Tharaka #language-Dangaura Tharu #language-Tha #language-Teop #language-Tukudede #language-Lenakel #language-North Tanna #language-Whitesands #language-Tontemboan #language-Toma #language-Tombulu #language-Tok Pisin #language-Tlacoapa Me'phaa #language-Tampuan #language-Tsamai #language-Tswana #language-Tswana #language-Tsonga #language-Tsonga #language-Turkana #language-Turka #language-Taveta #language-Muduga #language-Mundari #language-Urdu #language-Urdu #language-Uzbek #language-Uzbek #language-Venda #language-Venda #language-Vietnamese #language-Vietnamese #language-Vili #language-Waray (Philippines) #language-Wa #language-Wagdi #language-Wambon #language-Ndzwani Comorian #language-Wanukaka #language-Watakataui #language-Xhosa #language-Xhosa #language-Kagoro #language-Mbudum #language-Mengaka #language-Manado Malay #language-Soga #language-Yoloxochitl Mixtec #language-Nugunu (Cameroon) #language-Yangben #language-Yemba #language-Yakha #language-Yamphu #language-Eastern Yiddish #language-Ravula #language-Yetfa #language-Yiddish #language-Yiddish #language-Riang Lai #language-Yamap #language-Mitla Zapotec #language-Chinese #language-Malay (individual language) #language-Tokano #language-Zulu #language-Zulu #license-cc-by-4.0 #license-cc-by-nc-4.0 #license-cc-by-nd-4.0 #license-cc-by-sa-4.0 #license-cc-by-nc-nd-4.0 #license-cc-by-nc-sa-4.0 #region-us
Table of Contents ----------------- * Dataset Description + Dataset Summary + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits Dataset Description ------------------- * Homepage: SIL AI * Point of Contact: SIL AI email * Source Data: Bloom Library !logo for Bloom Library !sil-ai logo Dataset Summary --------------- Bloom is free, open-source software and an associated website Bloom Library, app, and services developed by SIL International. Bloom’s primary goal is to equip non-dominant language communities and their members to create the literature they want for their community and children. Bloom also serves organizations that help such communities develop literature and education or other aspects of community development. This version of the Bloom Library data is developed specifically for the language modeling task. It includes data from 364 languages across 31 language families. There is a mean of 32 stories and median of 2 stories per language. Note: If you speak one of these languages and can help provide feedback or corrections, please let us know! Note: Although this data was used in the training of the BLOOM model, this dataset only represents a small portion of the data used to train that model. Data from "Bloom Library" was combined with a large number of other datasets to train that model. "Bloom Library" is a project that existed prior to the BLOOM model, and is something separate. All that to say... We were using the "Bloom" name before it was cool. Languages --------- Of the 500+ languages listed at URL, there are 363 languages available in this dataset. Here are the corresponding ISO 639-3 codes: aaa, abc, ada, adq, aeu, afr, agq, ags, ahk, aia, ajz, aka, ame, amh, amp, amu, ann, aph, awa, awb, azn, azo, bag, bam, baw, bax, bbk, bcc, bce, bec, bef, ben, bfd, bfm, bfn, bgf, bho, bhs, bis, bjn, bjr, bkc, bkh, bkm, bkx, bob, bod, boz, bqm, bra, brb, bri, brv, bss, bud, buo, bwt, bwx, bxa, bya, bze, bzi, cak, cbr, ceb, cgc, chd, chp, cim, clo, cmn, cmo, csw, cuh, cuv, dag, ddg, ded, deu, dig, dje, dmg, dnw, dtp, dtr, dty, dug, eee, ekm, enb, enc, eng, ewo, fas, fil, fli, fon, fra, fub, fuh, gal, gbj, gou, gsw, guc, guj, guz, gwc, hao, hat, hau, hbb, hig, hil, hin, hla, hna, hre, hro, idt, ilo, ind, ino, isu, ita, jgo, jmx, jpn, jra, kak, kam, kan, kau, kbq, kbx, kby, kek, ken, khb, khm, kik, kin, kir, kjb, kmg, kmr, kms, kmu, kor, kqr, krr, ksw, kur, kvt, kwd, kwu, kwx, kxp, kyq, laj, lan, lao, lbr, lfa, lgg, lgr, lhm, lhu, lkb, llg, lmp, lns, loh, lsi, lts, lug, luy, lwl, mai, mal, mam, mar, mdr, mfh, mfj, mgg, mgm, mgo, mgq, mhx, miy, mkz, mle, mlk, mlw, mmu, mne, mnf, mnw, mot, mqj, mrn, mry, msb, muv, mve, mxu, mya, myk, myx, mzm, nas, nco, nep, new, nge, ngn, nhx, njy, nla, nld, nlv, nod, nsk, nsn, nso, nst, nuj, nwe, nwi, nxa, nxl, nya, nyo, nyu, nza, odk, oji, oki, omw, ori, ozm, pae, pag, pan, pbt, pce, pcg, pdu, pea, pex, pis, pkb, pmf, pnz, por, psp, pwg, qub, quc, quf, quz, qve, qvh, qvm, qvo, qxh, rel, rnl, ron, roo, rue, rug, rus, san, saq, sat, sdk, sea, sgd, shn, sml, snk, snl, som, sot, sox, spa, sps, ssn, stk, swa, swh, sxb, syw, taj, tam, tbj, tdb, tdg, tdt, teo, tet, tgk, tha, the, thk, thl, thy, tio, tkd, tnl, tnn, tnp, tnt, tod, tom, tpi, tpl, tpu, tsb, tsn, tso, tuv, tuz, tvs, udg, unr, urd, uzb, ven, vie, vif, war, wbm, wbr, wms, wni, wnk, wtk, xho, xkg, xmd, xmg, xmm, xog, xty, yas, yav, ybb, ybh, ybi, ydd, yea, yet, yid, yin, ymp, zaw, zho, zlm, zuh, zul Dataset Statistics ------------------ Some of the languages included in the 
dataset just include 1 or a couple of "stories." These are not split between training, validation, and test. For those with higher numbers of available stories we include the following numbers of stories in each split: Dataset Structure ----------------- ### Data Instances The examples look like this for Hindi: This would produce an output: Whereas if you wish to gather all the text for a language you may use this: ### Data Fields The metadata fields below are available and the full dataset will be updated with per story metadata soon (in August 2022). As of now a majority of stories have metadata, but some are missing certain fields. In terms of licenses, all stories included in the current release are released under a Creative Commons license (even if the individual story metadata fields are missing). * text: the text of the story/book, concatenated together from the different pages. * id: id of the sample * title: title of the book, e.g. "Going to Buy a Book". * license: specific license used, e.g. "cc-by-sa" for "Creative Commons, by attribution, share-alike". * copyright: copyright notice from the original book on URL * pageCount: page count from the metadata on the original book on URL. * bookInstanceId: unique ID for each book/translation assigned by Bloom. For example the Hindi version of 'Going to Buy a Book' is 'af86eefd-f69c-4e06-b8eb-e0451853aab9'. * bookLineage: Unique bookInstanceIDs of *other* Bloom books that this book is in some way based on. For example, the Hindi version in the example above is based on '056B6F11-4A6C-4942-B2BC-8861E62B03B3'. It's quite possible for this to be either empty, or have multiple entries. For example, the book 'Saboo y Jojo' with ID '5b232a5f-561d-4514-afe7-d6ed2f6a940f' is based on two others, ['056B6F11-4A6C-4942-B2BC-8861E62B03B3', '10a6075b-3c4f-40e4-94f3-593497f2793a'] * (coming soon) contentLanguages: Other languages this book may be available in. "Going to Buy a Book" is available in ['eng', 'kan', 'mar', 'pan', 'ben', 'guj', 'hin'] for example. ### Data Splits All languages include a train, validation, and test split. However, for language having a small number of stories, certain of these splits maybe empty. In such cases, we recommend using any data for testing only or for zero-shot experiments. Changelog --------- * 25 August 2022 - add the remaining metadata, change data type of 'pageCount' to int32 * 24 August 2022 - majority of metadata added back in to the filtered/ clean data * 23 August 2022 - metadata temporarily removed to update to cleaner dataset
[ "### Data Instances\n\n\nThe examples look like this for Hindi:\n\n\nThis would produce an output:\n\n\nWhereas if you wish to gather all the text for a language you may use this:", "### Data Fields\n\n\nThe metadata fields below are available and the full dataset will be updated with per story metadata soon (in August 2022). As of now a majority of stories have metadata, but some are missing certain fields. In terms of licenses, all stories included in the current release are released under a Creative Commons license (even if the individual story metadata fields are missing).\n\n\n* text: the text of the story/book, concatenated together from the different pages.\n* id: id of the sample\n* title: title of the book, e.g. \"Going to Buy a Book\".\n* license: specific license used, e.g. \"cc-by-sa\" for \"Creative Commons, by attribution, share-alike\".\n* copyright: copyright notice from the original book on URL\n* pageCount: page count from the metadata on the original book on URL.\n* bookInstanceId: unique ID for each book/translation assigned by Bloom. For example the Hindi version of 'Going to Buy a Book' is 'af86eefd-f69c-4e06-b8eb-e0451853aab9'.\n* bookLineage: Unique bookInstanceIDs of *other* Bloom books that this book is in some way based on. For example, the Hindi version in the example above is based on '056B6F11-4A6C-4942-B2BC-8861E62B03B3'. It's quite possible for this to be either empty, or have multiple entries. For example, the book 'Saboo y Jojo' with ID '5b232a5f-561d-4514-afe7-d6ed2f6a940f' is based on two others, ['056B6F11-4A6C-4942-B2BC-8861E62B03B3', '10a6075b-3c4f-40e4-94f3-593497f2793a']\n* (coming soon) contentLanguages: Other languages this book may be available in. \"Going to Buy a Book\" is available in ['eng', 'kan', 'mar', 'pan', 'ben', 'guj', 'hin'] for example.", "### Data Splits\n\n\nAll languages include a train, validation, and test split. However, for language having a small number of stories, certain of these splits maybe empty. In such cases, we recommend using any data for testing only or for zero-shot experiments.\n\n\nChangelog\n---------\n\n\n* 25 August 2022 - add the remaining metadata, change data type of 'pageCount' to int32\n* 24 August 2022 - majority of metadata added back in to the filtered/ clean data\n* 23 August 2022 - metadata temporarily removed to update to cleaner dataset" ]
[ "TAGS\n#task_ids-language-modeling #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-original #language-Afrikaans #language-Afrikaans #language-Ghotuo #language-Ambala Ayta #language-Adangme #language-Adangbe #language-Akeu #language-Aghem #language-Esimbi #language-Akha #language-Arosi #language-Amri Karbi #language-Akan #language-Akan #language-Yanesha' #language-Amharic #language-Amharic #language-Alamblak #language-Guerrero Amuzgo #language-Obolo #language-Athpariya #language-Awadhi #language-Awa (Papua New Guinea) #language-Western Durango Nahuatl #language-Awing #language-Tuki #language-Bambara #language-Bambara #language-Bambili-Bambui #language-Bamun #language-Babanki #language-Southern Balochi #language-Bamenyam #language-Iceve-Maci #language-Benabena #language-Bengali #language-Bengali #language-Bafut #language-Mmen #language-Bunak #language-Bangandu #language-Bhojpuri #language-Buwal #language-Bislama #language-Bislama #language-Banjar #language-Binumarien #language-Baka (Cameroon) #language-Bakoko #language-Kom (Cameroon) #language-Baikeno #language-Aweer #language-Tibetan #language-Tibetan #language-Tiéyaxo Bozo #language-Wumboko #language-Braj #language-Brao #language-Mokpwe #language-Western Bru #language-Akoose #language-Ntcham #language-Terei #language-Bafaw-Balong #language-Bu-Nao Bunu #language-Tairaha #language-Batak #language-Jenaama Bozo #language-Bisu #language-Kaqchikel #language-Cashibo-Cacataibo #language-Cebuano #language-Kagayanen #language-Highland Oaxaca Chontal #language-Chipewyan #language-Cimbrian #language-Lowland Oaxaca Chontal #language-Mandarin Chinese #language-Chinese #language-Central Mnong #language-Swampy Cree #language-Chuka #language-Cuvok #language-Dagbani #language-Fataluku #language-Dedua #language-German #language-German #language-Digo #language-Zarma #language-Upper Kinabatangan #language-Western Dani #language-Kadazan Dusun #language-Lotud #language-Dotyali #language-Duruma #language-E #language-Elip #language-Markweeta #language-En #language-English #language-English #language-Ewondo #language-Persian #language-Persian #language-Filipino #language-Fali #language-Fon #language-French #language-French #language-Adamawa Fulfulde #language-Western Niger Fulfulde #language-Galolen #language-Bodo Gadaba #language-Gavar #language-Swiss German #language-Wayuu #language-Gujarati #language-Gujarati #language-Gusii #language-Gawri #language-Hakö #language-Haitian #language-Haitian #language-Hausa #language-Hausa #language-Huba #language-Kamwe #language-Hiligaynon #language-Hindi #language-Hindi #language-Halia #language-Mina (Cameroon) #language-Hre #language-Haroi #language-Idaté #language-Iloko #language-Indonesian #language-Indonesian #language-Inoke-Yate #language-Isu (Menchum Division) #language-Italian #language-Italian #language-Ngomba #language-Western Juxtlahuaca Mixtec #language-Japanese #language-Japanese #language-Jarai #language-Kalanguya #language-Kamba (Kenya) #language-Kannada #language-Kannada #language-Kanuri #language-Kanuri #language-Kamano #language-Ap Ma #language-Manga Kanuri #language-Kekchí #language-Kenyang #language-Lü #language-Khmer #language-Khmer #language-Kikuyu #language-Kikuyu #language-Kinyarwanda #language-Kinyarwanda #language-Kirghiz #language-Kirghiz #language-Q'anjob'al #language-Kâte #language-Northern Kurdish #language-Kurdish #language-Kamasau #language-Kanite #language-Korean #language-Korean 
#language-Kimaragang #language-Krung #language-S'gaw Karen #language-Kurdish #language-Lahta Karen #language-Kwaio #language-Kwakum #language-Khirwar #language-Wadiyara Koli #language-Kenga #language-Lango (Uganda) #language-Laru #language-Lao #language-Lao #language-Lohorung #language-Lefa #language-Lugbara #language-Lengo #language-Lhomi #language-Lahu #language-Kabras #language-Lole #language-Limbum #language-Lamnso' #language-Laarim #language-Lashi #language-Tachoni #language-Ganda #language-Ganda #language-Luyia #language-Eastern Lawa #language-Maithili #language-Malayalam #language-Malayalam #language-Mam #language-Marathi #language-Marathi #language-Mandar #language-Matal #language-Mefele #language-Mpumpong #language-Mambae #language-Meta' #language-Malila #language-Maru #language-Ayutla Mixtec #language-Makasae #language-Manambu #language-Ilwana #language-Moloko #language-Mmaala #language-Naba #language-Mundani #language-Mon #language-Barí #language-Mamasa #language-Cheke Holo #language-Mandaya #language-Masbatenyo #language-Muthuvan #language-Marwari (Pakistan) #language-Mada (Cameroon) #language-Burmese #language-Burmese #language-Mamara Senoufo #language-Masaaba #language-Mumuye #language-Naasioi #language-Sibe #language-Nepali (macrolanguage) #language-Nepali (macrolanguage) #language-Newari #language-Ngemba #language-Ngwo #language-Isthmus-Mecayapan Nahuatl #language-Njyem #language-Ngombale #language-Dutch #language-Dutch #language-Orizaba Nahuatl #language-Northern Thai #language-Naskapi #language-Nehan #language-Pedi #language-Tase Naga #language-Nyole #language-Ngwe #language-Southwest Tanna #language-Nauete #language-South Nuaulu #language-Nyanja #language-Nyanja #language-Nyoro #language-Nyungwe #language-Tigon Mbembe #language-Od #language-Ojibwa #language-Ojibwa #language-Okiek #language-South Tairora #language-Oriya (macrolanguage) #language-Oriya (macrolanguage) #language-Koonzime #language-Pagibete #language-Pangasinan #language-Panjabi #language-Panjabi #language-Southern Pashto #language-Ruching Palaung #language-Paniya #language-Kayan #language-Peranakan Indonesian #language-Petats #language-Pijin #language-Pokomo #language-Pamona #language-Pana (Central African Republic) #language-Portuguese #language-Portuguese #language-Philippine Sign Language #language-Gapapaiwa #language-qaa #language-Huallaga Huánuco Quechua #language-K'iche' #language-Lambayeque Quechua #language-Cusco Quechua #language-Eastern Apurímac Quechua #language-Huamalíes-Dos de Mayo Huánuco Quechua #language-Margos-Yarowilca-Lauricocha Quechua #language-Napo Lowland Quechua #language-Panao Huánuco Quechua #language-Rendille #language-Ranglong #language-Romanian #language-Romanian #language-Rotokas #language-Rusyn #language-Roviana #language-Russian #language-Russian #language-Sanskrit #language-Sanskrit #language-Samburu #language-Santali #language-Sos Kundi #language-Semai #language-Surigaonon #language-Shan #language-Central Sama #language-Soninke #language-Sangil #language-Somali #language-Somali #language-Southern Sotho #language-Southern Sotho #language-Swo #language-Spanish #language-Spanish #language-Saposa #language-Waata #language-Arammba #language-Swahili (macrolanguage) #language-Swahili (macrolanguage) #language-Swahili (individual language) #language-Suba #language-Kagate #language-Eastern Tamang #language-Tamil #language-Tamil #language-Tiang #language-Panchpargania #language-Western Tamang #language-Tetun Dili #language-Teso #language-Tetum #language-Tajik #language-Tajik 
#language-Thai #language-Thai #language-Chitwania Tharu #language-Tharaka #language-Dangaura Tharu #language-Tha #language-Teop #language-Tukudede #language-Lenakel #language-North Tanna #language-Whitesands #language-Tontemboan #language-Toma #language-Tombulu #language-Tok Pisin #language-Tlacoapa Me'phaa #language-Tampuan #language-Tsamai #language-Tswana #language-Tswana #language-Tsonga #language-Tsonga #language-Turkana #language-Turka #language-Taveta #language-Muduga #language-Mundari #language-Urdu #language-Urdu #language-Uzbek #language-Uzbek #language-Venda #language-Venda #language-Vietnamese #language-Vietnamese #language-Vili #language-Waray (Philippines) #language-Wa #language-Wagdi #language-Wambon #language-Ndzwani Comorian #language-Wanukaka #language-Watakataui #language-Xhosa #language-Xhosa #language-Kagoro #language-Mbudum #language-Mengaka #language-Manado Malay #language-Soga #language-Yoloxochitl Mixtec #language-Nugunu (Cameroon) #language-Yangben #language-Yemba #language-Yakha #language-Yamphu #language-Eastern Yiddish #language-Ravula #language-Yetfa #language-Yiddish #language-Yiddish #language-Riang Lai #language-Yamap #language-Mitla Zapotec #language-Chinese #language-Malay (individual language) #language-Tokano #language-Zulu #language-Zulu #license-cc-by-4.0 #license-cc-by-nc-4.0 #license-cc-by-nd-4.0 #license-cc-by-sa-4.0 #license-cc-by-nc-nd-4.0 #license-cc-by-nc-sa-4.0 #region-us \n", "### Data Instances\n\n\nThe examples look like this for Hindi:\n\n\nThis would produce an output:\n\n\nWhereas if you wish to gather all the text for a language you may use this:", "### Data Fields\n\n\nThe metadata fields below are available and the full dataset will be updated with per story metadata soon (in August 2022). As of now a majority of stories have metadata, but some are missing certain fields. In terms of licenses, all stories included in the current release are released under a Creative Commons license (even if the individual story metadata fields are missing).\n\n\n* text: the text of the story/book, concatenated together from the different pages.\n* id: id of the sample\n* title: title of the book, e.g. \"Going to Buy a Book\".\n* license: specific license used, e.g. \"cc-by-sa\" for \"Creative Commons, by attribution, share-alike\".\n* copyright: copyright notice from the original book on URL\n* pageCount: page count from the metadata on the original book on URL.\n* bookInstanceId: unique ID for each book/translation assigned by Bloom. For example the Hindi version of 'Going to Buy a Book' is 'af86eefd-f69c-4e06-b8eb-e0451853aab9'.\n* bookLineage: Unique bookInstanceIDs of *other* Bloom books that this book is in some way based on. For example, the Hindi version in the example above is based on '056B6F11-4A6C-4942-B2BC-8861E62B03B3'. It's quite possible for this to be either empty, or have multiple entries. For example, the book 'Saboo y Jojo' with ID '5b232a5f-561d-4514-afe7-d6ed2f6a940f' is based on two others, ['056B6F11-4A6C-4942-B2BC-8861E62B03B3', '10a6075b-3c4f-40e4-94f3-593497f2793a']\n* (coming soon) contentLanguages: Other languages this book may be available in. \"Going to Buy a Book\" is available in ['eng', 'kan', 'mar', 'pan', 'ben', 'guj', 'hin'] for example.", "### Data Splits\n\n\nAll languages include a train, validation, and test split. However, for language having a small number of stories, certain of these splits maybe empty. 
In such cases, we recommend using any data for testing only or for zero-shot experiments.\n\n\nChangelog\n---------\n\n\n* 25 August 2022 - add the remaining metadata, change data type of 'pageCount' to int32\n* 24 August 2022 - majority of metadata added back in to the filtered/ clean data\n* 23 August 2022 - metadata temporarily removed to update to cleaner dataset" ]
169b139c0c38afc9942b94198c789b4b9ba8e2dc
# Dataset Card for Europarl-catalan ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.softcatala.org/ - **Repository:** https://github.com/Softcatala/Europarl-catalan - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset contains two dataset pairs corresponding to the Europarl corpus. Both the English and the German versions are aligned with the Catalan translation, which has been obtained using Apertium's RBMT system from the Spanish version of the Spanish-English alignment. Catalan-German alignment has been obtained using this [alignment finder](https://github.com/davidcanovas/alignment-finder-with-pivot-language) from de-en and ca-en. - Catalan-English: 1 965 735 segments. - Catalan-German: 1 734 644 segments. ### Supported Tasks and Leaderboards This dataset can be used to train NMT and SMT systems. It has been used as a training corpus for the [Softcatalà machine translation engine](https://www.softcatala.org/traductor/). ### Languages Catalan (`ca`). German (`de`). English (`en`). ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields Raw text. ### Data Splits One file per language (see the loading sketch at the end of this card). ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [@softcatala](https://github.com/Softcatala) [@jordimas](https://github.com/jordimas) [@davidcanovas](https://github.com/davidcanovas) ### Licensing Information [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
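Because the corpus is distributed as one raw-text file per language with line-aligned segments, pairing it for NMT training takes only a few lines. The sketch below is not part of the original card; the file names are an assumption and should be adjusted to whatever the downloaded release actually contains.

```
# Pair the line-aligned Catalan/English files into (ca, en) segments.
from pathlib import Path

ca_lines = Path("europarl.ca-en.ca").read_text(encoding="utf-8").splitlines()
en_lines = Path("europarl.ca-en.en").read_text(encoding="utf-8").splitlines()

# Aligned corpora keep one segment per line, so the files must match in length.
assert len(ca_lines) == len(en_lines)

pairs = [
    {"ca": ca.strip(), "en": en.strip()}
    for ca, en in zip(ca_lines, en_lines)
    if ca.strip() and en.strip()  # drop empty segments
]
print(f"{len(pairs)} aligned ca-en segments")
```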
softcatala/Europarl-catalan
[ "task_categories:translation", "annotations_creators:no-annotation", "language_creators:machine-generated", "multilinguality:translation", "size_categories:1M<n<10M", "source_datasets:extended|europarl_bilingual", "language:ca", "language:de", "language:en", "license:cc-by-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated"], "language": ["ca", "de", "en"], "license": ["cc-by-4.0"], "multilinguality": ["translation"], "size_categories": ["1M<n<10M"], "source_datasets": ["extended|europarl_bilingual"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "Catalan-English and Catalan-German aligned corpora to train NMT systems."}
2022-10-24T16:37:43+00:00
[]
[ "ca", "de", "en" ]
TAGS #task_categories-translation #annotations_creators-no-annotation #language_creators-machine-generated #multilinguality-translation #size_categories-1M<n<10M #source_datasets-extended|europarl_bilingual #language-Catalan #language-German #language-English #license-cc-by-4.0 #region-us
# Dataset Card for Tilde-MODEL-Catalan ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary This dataset contains two dataset pairs corresponding to the Europarl corpus. Both the English and the German version are aligned with the Catalan translation, which has been obtained using Apertium's RBMT system from the Spanish version of the Spanish-English alignment. Catalan-German alignment has been obtained using this alignment finder from de-en and ca-en. - Catalan-English: 1 965 735 segments. - Catalan-German: 1 734 644 segments. ### Supported Tasks and Leaderboards This dataset can be used to train NMT and SMT systems. It has been used as a training corpus for the Softcatalà machine translation engine. ### Languages Catalan ('ca'). German ('de'). English ('en'). ## Dataset Structure ### Data Instances ### Data Fields Raw text. ### Data Splits One file for language. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators @softcatala @jordimas @davidcanovas ### Licensing Information CC BY 4.0. ### Contributions
[ "# Dataset Card for Tilde-MODEL-Catalan", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThis dataset contains two dataset pairs corresponding to the Europarl corpus. Both the English and the German version are aligned with the Catalan translation, which has been obtained using Apertium's RBMT system from the Spanish version of the Spanish-English alignment. Catalan-German alignment has been obtained using this alignment finder from de-en and ca-en.\n- Catalan-English: 1 965 735 segments.\n- Catalan-German: 1 734 644 segments.", "### Supported Tasks and Leaderboards\n\nThis dataset can be used to train NMT and SMT systems.\nIt has been used as a training corpus for the Softcatalà machine translation engine.", "### Languages\n\nCatalan ('ca').\nGerman ('de').\nEnglish ('en').", "## Dataset Structure", "### Data Instances", "### Data Fields\n\nRaw text.", "### Data Splits\n\nOne file for language.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\n@softcatala\n@jordimas\n@davidcanovas", "### Licensing Information\n\nCC BY 4.0.", "### Contributions" ]
[ "TAGS\n#task_categories-translation #annotations_creators-no-annotation #language_creators-machine-generated #multilinguality-translation #size_categories-1M<n<10M #source_datasets-extended|europarl_bilingual #language-Catalan #language-German #language-English #license-cc-by-4.0 #region-us \n", "# Dataset Card for Tilde-MODEL-Catalan", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThis dataset contains two dataset pairs corresponding to the Europarl corpus. Both the English and the German version are aligned with the Catalan translation, which has been obtained using Apertium's RBMT system from the Spanish version of the Spanish-English alignment. Catalan-German alignment has been obtained using this alignment finder from de-en and ca-en.\n- Catalan-English: 1 965 735 segments.\n- Catalan-German: 1 734 644 segments.", "### Supported Tasks and Leaderboards\n\nThis dataset can be used to train NMT and SMT systems.\nIt has been used as a training corpus for the Softcatalà machine translation engine.", "### Languages\n\nCatalan ('ca').\nGerman ('de').\nEnglish ('en').", "## Dataset Structure", "### Data Instances", "### Data Fields\n\nRaw text.", "### Data Splits\n\nOne file for language.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\n@softcatala\n@jordimas\n@davidcanovas", "### Licensing Information\n\nCC BY 4.0.", "### Contributions" ]
4d0a4d825b82a7ba9b191e8c157edf98d21e7618
# Dataset Card for Softcatala-Web-Texts-Dataset ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://www.softcatala.org/ - **Repository:** https://github.com/Softcatala/softcatala-web-dataset - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This repository contains Softcatalà website content (articles and program descriptions). Dataset size: * articles.json contains 623 articles with 373233 words. * programes.json contains 330 program descriptions with 49868 words. The license of the data is Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) or Universal Public Domain Dedication (CC0 1.0). ### Supported Tasks and Leaderboards ### Languages Catalan (`ca`). ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields JSON (name/value pairs) format with the following fields: content, date, id and title. A short loading sketch follows at the end of this card. ### Data Splits One file for the descriptions text and one for the articles text. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? Softcatalà community. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [@softcatala](https://github.com/Softcatala) [@jordimas](https://github.com/jordimas) ### Licensing Information [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/). [CC0-1.0](https://creativecommons.org/share-your-work/public-domain/cc0/). ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
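Since each file is plain JSON with the four documented fields, loading it and sanity-checking the stated sizes takes only a few lines. The sketch below is not part of the original card; it assumes `articles.json` is a JSON array of objects with those fields, so adjust the path or the top-level structure if the actual release differs.

```
# Load articles.json and reproduce a rough word count over the "content" field.
import json

with open("articles.json", encoding="utf-8") as f:
    articles = json.load(f)

total_words = sum(len(article["content"].split()) for article in articles)
print(f"{len(articles)} articles, ~{total_words} words")

# Peek at the documented fields of the first record.
first = articles[0]
print(first["id"], first["date"], first["title"])
```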
softcatala/Softcatala-Web-Texts-Dataset
[ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:ca", "license:cc-by-sa-4.0", "license:cc0-1.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["expert-generated"], "language": ["ca"], "license": ["cc-by-sa-4.0", "cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Softcatal\u00e0 website content."}
2023-06-20T08:43:13+00:00
[]
[ "ca" ]
TAGS #task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Catalan #license-cc-by-sa-4.0 #license-cc0-1.0 #region-us
# Dataset Card for Softcatala-Web-Texts-Dataset ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary This repository contains Sofcatalà web site content (articles and programs descriptions). Dataset size: * URL contains 623 articles with 373233 words. * URL contains 330 program descriptions with 49868 words. The license of the data is Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) or Universal Public Domain Dedication (CC0 1.0) ### Supported Tasks and Leaderboards ### Languages Catalan ('ca'). ## Dataset Structure ### Data Instances ### Data Fields JSON (name/value pairs) format with the following fields: content, date, id and title. ### Data Splits One file for the descriptions text and one for the articles text. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? Softcatalà community. ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators @softcatala @jordimas ### Licensing Information CC BY-SA 4.0. CC0-1.0. ### Contributions
[ "# Dataset Card for Softcatala-Web-Texts-Dataset", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Homepage: URL\n- Repository: URL\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\nThis repository contains Sofcatalà web site content (articles and programs descriptions).\n\nDataset size:\n* URL contains 623 articles with 373233 words.\n* URL contains 330 program descriptions with 49868 words.\n\nThe license of the data is Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) or Universal Public Domain Dedication (CC0 1.0)", "### Supported Tasks and Leaderboards", "### Languages\nCatalan ('ca').", "## Dataset Structure", "### Data Instances", "### Data Fields\nJSON (name/value pairs) format with the following fields: content, date, id and title.", "### Data Splits\nOne file for the descriptions text and one for the articles text.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\nSoftcatalà community.", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n@softcatala\n@jordimas", "### Licensing Information\nCC BY-SA 4.0.\nCC0-1.0.", "### Contributions" ]
[ "TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-expert-generated #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-Catalan #license-cc-by-sa-4.0 #license-cc0-1.0 #region-us \n", "# Dataset Card for Softcatala-Web-Texts-Dataset", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n- Homepage: URL\n- Repository: URL\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\nThis repository contains Sofcatalà web site content (articles and programs descriptions).\n\nDataset size:\n* URL contains 623 articles with 373233 words.\n* URL contains 330 program descriptions with 49868 words.\n\nThe license of the data is Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) or Universal Public Domain Dedication (CC0 1.0)", "### Supported Tasks and Leaderboards", "### Languages\nCatalan ('ca').", "## Dataset Structure", "### Data Instances", "### Data Fields\nJSON (name/value pairs) format with the following fields: content, date, id and title.", "### Data Splits\nOne file for the descriptions text and one for the articles text.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?\nSoftcatalà community.", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n@softcatala\n@jordimas", "### Licensing Information\nCC BY-SA 4.0.\nCC0-1.0.", "### Contributions" ]
0f51602f5bdb3884bb1730b4f47f8b0f37eddf6a
# Dataset Card for Tilde-MODEL-Catalan

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://www.softcatala.org/
- **Repository:** https://github.com/Softcatala/Tilde-MODEL-catalan
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This dataset contains the German version of the Tilde-MODEL corpus aligned with a Catalan translation.
The Catalan text has been obtained using Apertium's RBMT system from the Spanish version. It contains 3.4M segments.

### Supported Tasks and Leaderboards

This dataset can be used to train NMT and SMT systems.
It has been used as a training corpus for the [Softcatalà machine translation engine](https://www.softcatala.org/traductor/).

### Languages

Catalan (`ca`).
German (`de`).

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

Raw text.

### Data Splits

One file per language.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[@softcatala](https://github.com/Softcatala)

[@jordimas](https://github.com/jordimas)

[@davidcanovas](https://github.com/davidcanovas)

### Licensing Information

[CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]
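As a rough usage sketch, the two plain-text files can be read in parallel to produce Catalan–German segment pairs. The file names below are hypothetical; the card only states that the corpus ships as one raw-text file per language with line-aligned segments.

```
import itertools

# Hypothetical file names -- one raw-text file per language, with line i
# of the Catalan file aligned to line i of the German file.
with open("tilde_model.de-ca.ca", encoding="utf-8") as f_ca, \
     open("tilde_model.de-ca.de", encoding="utf-8") as f_de:
    for ca, de in itertools.islice(zip(f_ca, f_de), 3):
        print(ca.strip(), "|||", de.strip())
```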
softcatala/Tilde-MODEL-Catalan
[ "task_categories:text2text-generation", "task_categories:translation", "language_creators:machine-generated", "multilinguality:translation", "size_categories:1M<n<10M", "source_datasets:extended|tilde_model", "language:ca", "language:de", "license:cc-by-4.0", "conditional-text-generation", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": [], "language_creators": ["machine-generated"], "language": ["ca", "de"], "license": ["cc-by-4.0"], "multilinguality": ["translation"], "size_categories": ["1M<n<10M"], "source_datasets": ["extended|tilde_model"], "task_categories": ["text2text-generation", "translation"], "task_ids": [], "pretty_name": "Catalan-German aligned corpora to train NMT systems.", "tags": ["conditional-text-generation"]}
2022-10-24T16:38:21+00:00
[]
[ "ca", "de" ]
TAGS #task_categories-text2text-generation #task_categories-translation #language_creators-machine-generated #multilinguality-translation #size_categories-1M<n<10M #source_datasets-extended|tilde_model #language-Catalan #language-German #license-cc-by-4.0 #conditional-text-generation #region-us
# Dataset Card for Tilde-MODEL-Catalan ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary This dataset contains the German version of the Tilde-MODEL corpus aligned with a Catalan translation. The catalan text has been obtained using Apertium's RBMT system from the Spanish version. It cotains 3.4M segments. ### Supported Tasks and Leaderboards This dataset can be used to train NMT and SMT systems. It has been used as a training corpus for the Softcatalà machine translation engine. ### Languages Catalan ('ca'). German ('de'). ## Dataset Structure ### Data Instances ### Data Fields Raw text. ### Data Splits One file for language. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators @softcatala @jordimas @davidcanovas ### Licensing Information CC BY 4.0. ### Contributions
[ "# Dataset Card for Tilde-MODEL-Catalan", "## Table of Contents\n\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThis dataset contains the German version of the Tilde-MODEL corpus aligned with a Catalan translation.\nThe catalan text has been obtained using Apertium's RBMT system from the Spanish version. It cotains 3.4M segments.", "### Supported Tasks and Leaderboards\n\nThis dataset can be used to train NMT and SMT systems.\nIt has been used as a training corpus for the Softcatalà machine translation engine.", "### Languages\n\nCatalan ('ca').\nGerman ('de').", "## Dataset Structure", "### Data Instances", "### Data Fields\n\nRaw text.", "### Data Splits\n\nOne file for language.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\n@softcatala\n@jordimas\n@davidcanovas", "### Licensing Information\n\nCC BY 4.0.", "### Contributions" ]
[ "TAGS\n#task_categories-text2text-generation #task_categories-translation #language_creators-machine-generated #multilinguality-translation #size_categories-1M<n<10M #source_datasets-extended|tilde_model #language-Catalan #language-German #license-cc-by-4.0 #conditional-text-generation #region-us \n", "# Dataset Card for Tilde-MODEL-Catalan", "## Table of Contents\n\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThis dataset contains the German version of the Tilde-MODEL corpus aligned with a Catalan translation.\nThe catalan text has been obtained using Apertium's RBMT system from the Spanish version. It cotains 3.4M segments.", "### Supported Tasks and Leaderboards\n\nThis dataset can be used to train NMT and SMT systems.\nIt has been used as a training corpus for the Softcatalà machine translation engine.", "### Languages\n\nCatalan ('ca').\nGerman ('de').", "## Dataset Structure", "### Data Instances", "### Data Fields\n\nRaw text.", "### Data Splits\n\nOne file for language.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators\n\n@softcatala\n@jordimas\n@davidcanovas", "### Licensing Information\n\nCC BY 4.0.", "### Contributions" ]
90ada72ef9df2252797ae8fd4a40e2c071412355
# Dataset Card for ca-text-corpus ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** https://github.com/Softcatala/ca-text-corpus - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Public domain corpus of Catalan text. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages Catalan (`ca`). ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [CC0 1.0 Universal](https://creativecommons.org/publicdomain/zero/1.0/). ### Citation Information [More Information Needed] ### Contributions Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
softcatala/ca_text_corpus
[ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:ca", "license:cc0-1.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["ca"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "ca-text-corpus"}
2022-10-24T16:38:51+00:00
[]
[ "ca" ]
TAGS #task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-Catalan #license-cc0-1.0 #region-us
# Dataset Card for ca-text-corpus ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary Public domain corpus of Catalan text. ### Supported Tasks and Leaderboards ### Languages Catalan ('ca'). ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information CC0 1.0 Universal. ### Contributions Thanks to @albertvillanova for adding this dataset.
[ "# Dataset Card for ca-text-corpus", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository: URL\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nPublic domain corpus of Catalan text.", "### Supported Tasks and Leaderboards", "### Languages\n\nCatalan ('ca').", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCC0 1.0 Universal.", "### Contributions\n\nThanks to @albertvillanova for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-Catalan #license-cc0-1.0 #region-us \n", "# Dataset Card for ca-text-corpus", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository: URL\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nPublic domain corpus of Catalan text.", "### Supported Tasks and Leaderboards", "### Languages\n\nCatalan ('ca').", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCC0 1.0 Universal.", "### Contributions\n\nThanks to @albertvillanova for adding this dataset." ]
585057c3fed0b67a04ee1d6eeae4b3344ef8b587
# Dataset Card for catalan-dictionary

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:** https://github.com/Softcatala/catalan-dict-tools
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

Catalan word lists with part-of-speech labeling curated by humans. Contains 1 180 773 forms including verbs, nouns, adjectives, names or toponyms. These word lists are used to build applications like Catalan spellcheckers or verb querying applications.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

Catalan (`ca`).

## Dataset Structure

The dataset contains 3 columns:
* Form (e.g. cantaré)
* Lemma (e.g. cantar)
* POS tag (e.g. VMIF1S00)

You can find the meaning of the POS tags here: https://freeling-user-manual.readthedocs.io/en/latest/tagsets/tagset-ca/#part-of-speech-verb

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[LGPL 2.1](https://www.gnu.org/licenses/old-licenses/lgpl-2.1.html).

[GPL 2.0](https://www.gnu.org/licenses/old-licenses/gpl-2.0.html).

### Citation Information

[More Information Needed]

### Contributions

Softcatalà

Jaume Ortolà

Joan Moratinos
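A small parsing sketch follows. The file name and the whitespace-separated three-column layout (Form, Lemma, POS tag) are assumptions based on the structure described above.

```
from collections import Counter

# "diccionari.txt" is a hypothetical file name; each line is assumed to
# hold the three columns described above: form, lemma and POS tag.
tag_counts = Counter()
with open("diccionari.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.split()
        if len(parts) != 3:
            continue  # skip blank or malformed lines
        form, lemma, tag = parts
        tag_counts[tag[0]] += 1  # first character of the tag encodes the word class

print(tag_counts.most_common(5))
```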
softcatala/catalan-dictionary
[ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:ca", "license:gpl-2.0", "license:lgpl-2.1", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["ca"], "license": ["gpl-2.0", "lgpl-2.1"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "catalan-dictionary"}
2022-10-24T16:38:30+00:00
[]
[ "ca" ]
TAGS #task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-Catalan #license-gpl-2.0 #license-lgpl-2.1 #region-us
# Dataset Card for ca-text-corpus ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary Catalan word lists with part of speech labeling curated by humans. Contains 1 180 773 forms including verbs, nouns, adjectives, names or toponyms. These word lists are used to build applications like Catalan spellcheckers or verb querying applications. ### Supported Tasks and Leaderboards ### Languages Catalan ('ca'). ## Dataset Structure The dataset contains 3 columns: * Form (e.g. cantaré) * Lemma (e.g. cantar) * POS tag (e.g. VMIF1S00) You can have the meaning of the POS tag here: URL ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information LGPL 2.1. GPL 2.0. ### Contributions Softcatalà Jaume Ortolà Joan Moratinos
[ "# Dataset Card for ca-text-corpus", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository: URL\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nCatalan word lists with part of speech labeling curated by humans. Contains 1 180 773 forms including verbs, nouns, adjectives, names or toponyms. These word lists are used to build applications like Catalan spellcheckers or verb querying applications.", "### Supported Tasks and Leaderboards", "### Languages\n\nCatalan ('ca').", "## Dataset Structure\n\nThe dataset contains 3 columns:\n* Form (e.g. cantaré)\n* Lemma (e.g. cantar)\n* POS tag (e.g. VMIF1S00)\n\nYou can have the meaning of the POS tag here: URL", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nLGPL 2.1.\n\nGPL 2.0.", "### Contributions\n\nSoftcatalà\n\nJaume Ortolà\n\nJoan Moratinos" ]
[ "TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-Catalan #license-gpl-2.0 #license-lgpl-2.1 #region-us \n", "# Dataset Card for ca-text-corpus", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository: URL\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nCatalan word lists with part of speech labeling curated by humans. Contains 1 180 773 forms including verbs, nouns, adjectives, names or toponyms. These word lists are used to build applications like Catalan spellcheckers or verb querying applications.", "### Supported Tasks and Leaderboards", "### Languages\n\nCatalan ('ca').", "## Dataset Structure\n\nThe dataset contains 3 columns:\n* Form (e.g. cantaré)\n* Lemma (e.g. cantar)\n* POS tag (e.g. VMIF1S00)\n\nYou can have the meaning of the POS tag here: URL", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nLGPL 2.1.\n\nGPL 2.0.", "### Contributions\n\nSoftcatalà\n\nJaume Ortolà\n\nJoan Moratinos" ]
612ada74b3354d520e4c42e35251bc4ed3686b33
# Dataset Card for open-source-english-catalan-corpus

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:** https://www.softcatala.org/recursos/memories/
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

Translation memory built from more than 180 open source projects. These include LibreOffice, Mozilla, KDE, GNOME, GIMP, Inkscape and many others. It can be used as a translation memory or as a training corpus for neural translators.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

Catalan (`ca`)
English (`en`)

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[GPL 3.0](https://www.gnu.org/licenses/gpl-3.0.html).

### Citation Information

[More Information Needed]

### Contributions

Softcatalà
softcatala/open-source-english-catalan-corpus
[ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:1M<n<10M", "source_datasets:original", "language:ca", "language:en", "license:gpl-3.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["ca", "en"], "license": ["gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "open-source-english-catalan-corpus"}
2022-10-24T16:38:59+00:00
[]
[ "ca", "en" ]
TAGS #task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-Catalan #language-English #license-gpl-3.0 #region-us
# Dataset Card for open-source-english-catalan-corpus ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: URL - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary Translation memory built from more than 180 open source projects. These include LibreOffice, Mozilla, KDE, GNOME, GIMP, Inkscape and many others. It can be used as translation memory or as training corpus for neural translators. ### Supported Tasks and Leaderboards ### Languages Catalan ('ca') English ('en') ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information GPL 3.0. ### Contributions Softcatalà
[ "# Dataset Card for open-source-english-catalan-corpus", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository: URL\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nTranslation memory built from more than 180 open source projects. These include LibreOffice, Mozilla, KDE, GNOME, GIMP, Inkscape and many others. It can be used as translation memory or as training corpus for neural translators.", "### Supported Tasks and Leaderboards", "### Languages\n\nCatalan ('ca')\nEnglish ('en')", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nGPL 3.0.", "### Contributions\n\nSoftcatalà" ]
[ "TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-Catalan #language-English #license-gpl-3.0 #region-us \n", "# Dataset Card for open-source-english-catalan-corpus", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository: URL\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nTranslation memory built from more than 180 open source projects. These include LibreOffice, Mozilla, KDE, GNOME, GIMP, Inkscape and many others. It can be used as translation memory or as training corpus for neural translators.", "### Supported Tasks and Leaderboards", "### Languages\n\nCatalan ('ca')\nEnglish ('en')", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nGPL 3.0.", "### Contributions\n\nSoftcatalà" ]
48c23354c088c4273b260a877dafa424e1c6cc95
# Reddit posts about mental health

## files

- adhd.csv from r/adhd
- aspergers.csv from r/aspergers
- depression.csv from r/depression
- ocd.csv from r/ocd
- ptsd.csv from r/ptsd

## fields

- author
- body
- created_utc
- id
- num_comments
- score
- subreddit
- title
- upvote_ratio
- url

For more details about these fields, see [Praw Submission](https://praw.readthedocs.io/en/latest/code_overview/models/submission.html).
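For example, one of the CSV files can be inspected with pandas (assuming each file has a header row with the column names listed above):

```
import pandas as pd

# Load one of the per-subreddit CSV files and inspect the listed fields.
df = pd.read_csv("depression.csv")
print(df.columns.tolist())
print(df[["author", "title", "score", "num_comments"]].head())

# Posts with the most comments, as a quick sanity check.
print(df.sort_values("num_comments", ascending=False)[["title", "num_comments"]].head())
```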
solomonk/reddit_mental_health_posts
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-01-11T15:40:01+00:00
[]
[]
TAGS #region-us
# Reddit posts about mental health ## files - URL from r/adhd - URL from r/aspergers - URL from r/depression - URL from r/ocd - URL from r/ptsd ## fields - author - body - created_utc - id - num_comments - score - subreddit - title - upvote_ratio - url for more details about theses fields Praw Submission.
[ "# Reddit posts about mental health", "## files\n\n- URL from r/adhd\n- URL from r/aspergers\n- URL from r/depression\n- URL from r/ocd\n- URL from r/ptsd", "## fields\n\n- author\n- body\n- created_utc\n- id\n- num_comments\n- score\n- subreddit\n- title\n- upvote_ratio\n- url\n\nfor more details about theses fields Praw Submission." ]
[ "TAGS\n#region-us \n", "# Reddit posts about mental health", "## files\n\n- URL from r/adhd\n- URL from r/aspergers\n- URL from r/depression\n- URL from r/ocd\n- URL from r/ptsd", "## fields\n\n- author\n- body\n- created_utc\n- id\n- num_comments\n- score\n- subreddit\n- title\n- upvote_ratio\n- url\n\nfor more details about theses fields Praw Submission." ]
0e4f8bf4a8a6fefe60c5aa90547b7ebec1652e43
# Dataset Summary
Starting with a paper released at NIPS 2016, MS MARCO is a collection of datasets focused on deep learning in search.

# Dataset Creation
## Source Data
More Information Needed
## Annotations
More Information Needed
## Personal and Sensitive Information
More Information Needed
# Considerations for Using the Data
## Social Impact of Dataset
More Information Needed
## Discussion of Biases
More Information Needed
## Other Known Limitations
More Information Needed
# Additional Information
## Dataset Curators
@spacemanidol
# Licensing Information
The MS MARCO datasets are intended for non-commercial research purposes only to promote advancement in the field of artificial intelligence and related areas, and are made available free of charge without extending any license or other intellectual property rights. The dataset is provided “as is” without warranty and usage of the data has risks since we may not own the underlying rights in the documents. We will not be liable for any damages related to use of the dataset. Feedback is voluntarily given and can be used as we see fit. Upon violation of any of these terms, your rights to use the dataset will end automatically.

Please contact us at [email protected] if you own any of the documents made available but do not want them in this dataset. We will remove the data accordingly. If you have questions about use of the dataset or any research outputs in your products or services, we encourage you to undertake your own independent legal review. For other questions, please feel free to contact us.

# Citation Information
```bibtex
@article{Campos2016MSMA,
  title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},
  author={Daniel Fernando Campos and T. Nguyen and M. Rosenberg and Xia Song and Jianfeng Gao and Saurabh Tiwary and Rangan Majumder and L. Deng and Bhaskar Mitra},
  journal={ArXiv},
  year={2016},
  volume={abs/1611.09268}
}
```

# Contributions
@spacemanidol
spacemanidol/msmarco_passage_ranking
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-04-09T18:33:13+00:00
[]
[]
TAGS #region-us
# Dataset Summary Starting with a paper released at NIPS 2016, MS MARCO is a collection of datasets focused on deep learning in search. # Dataset Creation ## Source Data ## Annotations ## Personal and Sensitive Information # Considerations for Using the Data ## Social Impact of Dataset ## Discussion of Biases ## Other Known Limitations # Additional Information ## Dataset Curators @spacemanidol # Licensing Information The MS MARCO datasets are intended for non-commercial research purposes only to promote advancement in the field of artificial intelligence and related areas, and is made available free of charge without extending any license or other intellectual property rights. The dataset is provided “as is” without warranty and usage of the data has risks since we may not own the underlying rights in the documents. We are not be liable for any damages related to use of the dataset. Feedback is voluntarily given and can be used as we see fit. Upon violation of any of these terms, your rights to use the dataset will end automatically. Please contact us at ms-marco@URL if you own any of the documents made available but do not want them in this dataset. We will remove the data accordingly. If you have questions about use of the dataset or any research outputs in your products or services, we encourage you to undertake your own independent legal review. For other questions, please feel free to contact us. @article{Campos2016MSMA, title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset}, author={Daniel Fernando Campos and T. Nguyen and M. Rosenberg and Xia Song and Jianfeng Gao and Saurabh Tiwary and Rangan Majumder and L. Deng and Bhaskar Mitra}, journal={ArXiv}, year={2016}, volume={abs/1611.09268} } #Contributions @spacemanidol
[ "# Dataset Summary\nStarting with a paper released at NIPS 2016, MS MARCO is a collection of datasets focused on deep learning in search.", "# Dataset Creation", "## Source Data", "## Annotations", "## Personal and Sensitive Information", "# Considerations for Using the Data", "## Social Impact of Dataset", "## Discussion of Biases", "## Other Known Limitations", "# Additional Information", "## Dataset Curators\n@spacemanidol", "# Licensing Information\nThe MS MARCO datasets are intended for non-commercial research purposes only to promote advancement in the field of artificial intelligence and related areas, and is made available free of charge without extending any license or other intellectual property rights. The dataset is provided “as is” without warranty and usage of the data has risks since we may not own the underlying rights in the documents. We are not be liable for any damages related to use of the dataset. Feedback is voluntarily given and can be used as we see fit. Upon violation of any of these terms, your rights to use the dataset will end automatically.\n\nPlease contact us at ms-marco@URL if you own any of the documents made available but do not want them in this dataset. We will remove the data accordingly. If you have questions about use of the dataset or any research outputs in your products or services, we encourage you to undertake your own independent legal review. For other questions, please feel free to contact us.\n\n\n@article{Campos2016MSMA,\n title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},\n author={Daniel Fernando Campos and T. Nguyen and M. Rosenberg and Xia Song and Jianfeng Gao and Saurabh Tiwary and Rangan Majumder and L. Deng and Bhaskar Mitra},\n journal={ArXiv},\n year={2016},\n volume={abs/1611.09268}\n}" ]
[ "TAGS\n#region-us \n", "# Dataset Summary\nStarting with a paper released at NIPS 2016, MS MARCO is a collection of datasets focused on deep learning in search.", "# Dataset Creation", "## Source Data", "## Annotations", "## Personal and Sensitive Information", "# Considerations for Using the Data", "## Social Impact of Dataset", "## Discussion of Biases", "## Other Known Limitations", "# Additional Information", "## Dataset Curators\n@spacemanidol", "# Licensing Information\nThe MS MARCO datasets are intended for non-commercial research purposes only to promote advancement in the field of artificial intelligence and related areas, and is made available free of charge without extending any license or other intellectual property rights. The dataset is provided “as is” without warranty and usage of the data has risks since we may not own the underlying rights in the documents. We are not be liable for any damages related to use of the dataset. Feedback is voluntarily given and can be used as we see fit. Upon violation of any of these terms, your rights to use the dataset will end automatically.\n\nPlease contact us at ms-marco@URL if you own any of the documents made available but do not want them in this dataset. We will remove the data accordingly. If you have questions about use of the dataset or any research outputs in your products or services, we encourage you to undertake your own independent legal review. For other questions, please feel free to contact us.\n\n\n@article{Campos2016MSMA,\n title={MS MARCO: A Human Generated MAchine Reading COmprehension Dataset},\n author={Daniel Fernando Campos and T. Nguyen and M. Rosenberg and Xia Song and Jianfeng Gao and Saurabh Tiwary and Rangan Majumder and L. Deng and Bhaskar Mitra},\n journal={ArXiv},\n year={2016},\n volume={abs/1611.09268}\n}" ]
c94d921b402c05dc4ab1cb2bdcfd3841902d2d97
## Extreme Summarization (XSum) Dataset. There are two features: - document: Input news article. - summary: One sentence summary of the article. ### Citation ```bibtex @article{Narayan2018DontGM, title={Don't Give Me the Details, Just the Summary! Topic-Aware Convolutional Neural Networks for Extreme Summarization}, author={Shashi Narayan and Shay B. Cohen and Mirella Lapata}, journal={ArXiv}, year={2018}, volume={abs/1808.08745} } ```
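A minimal loading sketch (the split name is an assumption; inspect the returned DatasetDict for the splits actually present):

```
from datasets import load_dataset

# Load the dataset and look at the two features described above.
ds = load_dataset("sshleifer/pseudo_bart_xsum")
print(ds)

example = ds["train"][0]  # "train" is assumed; check print(ds) above
print(example["document"][:200])
print(example["summary"])
```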
sshleifer/pseudo_bart_xsum
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-02-23T13:57:51+00:00
[]
[]
TAGS #region-us
## Extreme Summarization (XSum) Dataset. There are two features: - document: Input news article. - summary: One sentence summary of the article.
[ "## Extreme Summarization (XSum) Dataset.\n\nThere are two features:\n - document: Input news article.\n - summary: One sentence summary of the article." ]
[ "TAGS\n#region-us \n", "## Extreme Summarization (XSum) Dataset.\n\nThere are two features:\n - document: Input news article.\n - summary: One sentence summary of the article." ]
641dc93c008f3290112ae324a754aaf7e77dee15
# C4 EN 10K for testing This is a small subset representing the first 10K records of the original C4 dataset, "en" subset - created for testing. The records were extracted after having been shuffled. The full 1TB+ dataset is at https://huggingface.co/datasets/c4. ``` $ python -c "from datasets import load_dataset; ds=load_dataset('stas/c4-en-10k'); print(ds)" DatasetDict({ train: Dataset({ features: ['text'], num_rows: 10000 }) }) ``` * Records: 10,000 * compressed size: 6.4M * uncompressed size: 22M To convert to jsonlines: ``` from datasets import load_dataset dataset_name = "stas/c4-en-10k" name = dataset_name.split('/')[-1] ds = load_dataset(dataset_name, split='train') ds.to_json(f"{name}.jsonl", orient="records", lines=True) ``` To see how this subset was created, here is the [instructions file](https://huggingface.co/datasets/stas/c4-en-10k/blob/main/process.txt).
stas/c4-en-10k
[ "language:en", "license:apache-2.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "license": "apache-2.0"}
2022-10-19T20:40:11+00:00
[]
[ "en" ]
TAGS #language-English #license-apache-2.0 #region-us
# C4 EN 10K for testing This is a small subset representing the first 10K records of the original C4 dataset, "en" subset - created for testing. The records were extracted after having been shuffled. The full 1TB+ dataset is at URL * Records: 10,000 * compressed size: 6.4M * uncompressed size: 22M To convert to jsonlines: To see how this subset was created, here is the instructions file.
[ "# C4 EN 10K for testing\n\nThis is a small subset representing the first 10K records of the original C4 dataset, \"en\" subset - created for testing. The records were extracted after having been shuffled.\n\nThe full 1TB+ dataset is at URL\n\n\n\n* Records: 10,000\n* compressed size: 6.4M\n* uncompressed size: 22M\n\nTo convert to jsonlines:\n\n\n\nTo see how this subset was created, here is the instructions file." ]
[ "TAGS\n#language-English #license-apache-2.0 #region-us \n", "# C4 EN 10K for testing\n\nThis is a small subset representing the first 10K records of the original C4 dataset, \"en\" subset - created for testing. The records were extracted after having been shuffled.\n\nThe full 1TB+ dataset is at URL\n\n\n\n* Records: 10,000\n* compressed size: 6.4M\n* uncompressed size: 22M\n\nTo convert to jsonlines:\n\n\n\nTo see how this subset was created, here is the instructions file." ]
152771d7ae284673c3ad7ffdd9b3afc2741f1d00
10K slice of OpenWebText - An open-source replication of the WebText dataset from OpenAI. This is a small subset representing the first 10K records from the original dataset - created for testing. The full 8M-record dataset is [here](https://huggingface.co/datasets/openwebtext). ``` $ python -c "from datasets import load_dataset; ds=load_dataset('stas/openwebtext-10k'); print(ds)" DatasetDict({ train: Dataset({ features: ['text'], num_rows: 10000 }) }) ``` * Records: 10,000 * compressed size: ~15MB * uncompressed size: 50MB To convert to jsonlines: ``` from datasets import load_dataset dataset_name = "stas/openwebtext-10k" name = dataset_name.split('/')[-1] ds = load_dataset(dataset_name, split='train') ds.to_json(f"{name}.jsonl", orient="records", lines=True) ``` To see how this subset was created, here is the [instructions file](https://huggingface.co/datasets/stas/openwebtext-10k/blob/main/process.txt).
stas/openwebtext-10k
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-09-14T23:18:50+00:00
[]
[]
TAGS #region-us
10K slice of OpenWebText - An open-source replication of the WebText dataset from OpenAI. This is a small subset representing the first 10K records from the original dataset - created for testing. The full 8M-record dataset is here. * Records: 10,000 * compressed size: ~15MB * uncompressed size: 50MB To convert to jsonlines: To see how this subset was created, here is the instructions file.
[]
[ "TAGS\n#region-us \n" ]
07713bf01c6e590a5d80b2c246de207d47724482
# OSCAR EN 10K for testing This is a small subset representing the 10K records from the original OSCAR dataset, "unshuffled_deduplicated_en" subset - created for testing. The records were extracted after having been shuffled. The full 1TB+ dataset is at https://huggingface.co/datasets/oscar. ``` $ python -c "from datasets import load_dataset; ds=load_dataset('stas/oscar-en-10k'); print(ds)" DatasetDict({ train: Dataset({ features: ['text'], num_rows: 10000 }) }) ``` * Records: 10,000 * compressed size: ~37MB * uncompressed size: 131MB To convert to jsonlines: ``` from datasets import load_dataset dataset_name = "stas/oscar-en-10k" name = dataset_name.split('/')[-1] ds = load_dataset(dataset_name, split='train') ds.to_json(f"{name}.jsonl", orient="records", lines=True) ``` To see how this subset was created, here is the [instructions file](https://huggingface.co/datasets/stas/oscar-en-10k/blob/main/process.txt).
stas/oscar-en-10k
[ "language:en", "license:apache-2.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "license": "apache-2.0"}
2022-10-19T20:40:14+00:00
[]
[ "en" ]
TAGS #language-English #license-apache-2.0 #region-us
# OSCAR EN 10K for testing This is a small subset representing the 10K records from the original OSCAR dataset, "unshuffled_deduplicated_en" subset - created for testing. The records were extracted after having been shuffled. The full 1TB+ dataset is at URL * Records: 10,000 * compressed size: ~37MB * uncompressed size: 131MB To convert to jsonlines: To see how this subset was created, here is the instructions file.
[ "# OSCAR EN 10K for testing\n\nThis is a small subset representing the 10K records from the original OSCAR dataset, \"unshuffled_deduplicated_en\" subset - created for testing. The records were extracted after having been shuffled.\n\nThe full 1TB+ dataset is at URL\n\n\n\n* Records: 10,000\n* compressed size: ~37MB\n* uncompressed size: 131MB\n\nTo convert to jsonlines:\n\n\n\nTo see how this subset was created, here is the instructions file." ]
[ "TAGS\n#language-English #license-apache-2.0 #region-us \n", "# OSCAR EN 10K for testing\n\nThis is a small subset representing the 10K records from the original OSCAR dataset, \"unshuffled_deduplicated_en\" subset - created for testing. The records were extracted after having been shuffled.\n\nThe full 1TB+ dataset is at URL\n\n\n\n* Records: 10,000\n* compressed size: ~37MB\n* uncompressed size: 131MB\n\nTo convert to jsonlines:\n\n\n\nTo see how this subset was created, here is the instructions file." ]
7617090058992d01345cbead219b800acd77d3ac
# WMT14 English-German Translation Data w/ further preprocessing The original pre-processing script is [here](https://github.com/pytorch/fairseq/blob/master/examples/translation/prepare-wmt14en2de.sh). This pre-processed dataset was created by running: ``` git clone https://github.com/pytorch/fairseq cd fairseq cd examples/translation/ ./prepare-wmt14en2de.sh ``` It was originally used by `transformers` [`finetune_trainer.py`](https://github.com/huggingface/transformers/blob/641f418e102218c4bf16fcd3124bfebed6217ef6/examples/seq2seq/finetune_trainer.py) The data itself resides at https://cdn-datasets.huggingface.co/translation/wmt_en_de.tgz
stas/wmt14-en-de-pre-processed
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-02-16T04:41:04+00:00
[]
[]
TAGS #region-us
# WMT14 English-German Translation Data w/ further preprocessing The original pre-processing script is here. This pre-processed dataset was created by running: It was originally used by 'transformers' 'finetune_trainer.py' The data itself resides at URL
[ "# WMT14 English-German Translation Data w/ further preprocessing\n\nThe original pre-processing script is here.\n\nThis pre-processed dataset was created by running:\n\n\n\nIt was originally used by 'transformers' 'finetune_trainer.py'\n\nThe data itself resides at URL" ]
[ "TAGS\n#region-us \n", "# WMT14 English-German Translation Data w/ further preprocessing\n\nThe original pre-processing script is here.\n\nThis pre-processed dataset was created by running:\n\n\n\nIt was originally used by 'transformers' 'finetune_trainer.py'\n\nThe data itself resides at URL" ]
6dfdf691a1b18d8ebc206897b5cac2d7e4bcda3c
# WMT16 English-Romanian Translation Data w/ further preprocessing The original instructions are [here](https://github.com/rsennrich/wmt16-scripts/tree/master/sample). This pre-processed dataset was created by running: ``` git clone https://github.com/rsennrich/wmt16-scripts cd wmt16-scripts cd sample ./download_files.sh ./preprocess.sh ``` It was originally used by `transformers` [`finetune_trainer.py`](https://github.com/huggingface/transformers/blob/641f418e102218c4bf16fcd3124bfebed6217ef6/examples/seq2seq/finetune_trainer.py) The data itself resides at https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz If you would like to convert it to jsonlines I've included a small script `convert-to-jsonlines.py` that will do it for you. But if you're using the `datasets` API, it will be done on the fly.
stas/wmt16-en-ro-pre-processed
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-02-16T03:58:06+00:00
[]
[]
TAGS #region-us
# WMT16 English-Romanian Translation Data w/ further preprocessing The original instructions are here. This pre-processed dataset was created by running: It was originally used by 'transformers' 'finetune_trainer.py' The data itself resides at URL If you would like to convert it to jsonlines I've included a small script 'URL' that will do it for you. But if you're using the 'datasets' API, it will be done on the fly.
[ "# WMT16 English-Romanian Translation Data w/ further preprocessing\n\nThe original instructions are here.\n\nThis pre-processed dataset was created by running:\n\n\n\nIt was originally used by 'transformers' 'finetune_trainer.py'\n\nThe data itself resides at URL\n\nIf you would like to convert it to jsonlines I've included a small script 'URL' that will do it for you. But if you're using the 'datasets' API, it will be done on the fly." ]
[ "TAGS\n#region-us \n", "# WMT16 English-Romanian Translation Data w/ further preprocessing\n\nThe original instructions are here.\n\nThis pre-processed dataset was created by running:\n\n\n\nIt was originally used by 'transformers' 'finetune_trainer.py'\n\nThe data itself resides at URL\n\nIf you would like to convert it to jsonlines I've included a small script 'URL' that will do it for you. But if you're using the 'datasets' API, it will be done on the fly." ]
167bea6bb04170c41a7ab6b91e13fef446f94880
# Dataset Card for Demo ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This is a demo dataset with two files `train.csv` and `test.csv`. Load it by: ```python from datasets import load_dataset data_files = {"train": "train.csv", "test": "test.csv"} demo = load_dataset("stevhliu/demo", data_files=data_files) ``` ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
stevhliu/demo
[ "task_categories:summarization", "task_categories:text2text-generation", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:apache-2.0", "conditional-text-generation", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["summarization", "text2text-generation"], "task_ids": [], "tags": ["conditional-text-generation"]}
2022-10-24T17:02:42+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #task_categories-text2text-generation #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-apache-2.0 #conditional-text-generation #region-us
# Dataset Card for Demo ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary This is a demo dataset with two files 'URL' and 'URL'. Load it by: ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @github-username for adding this dataset.
[ "# Dataset Card for Demo", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThis is a demo dataset with two files 'URL' and 'URL'.\n\nLoad it by:", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @github-username for adding this dataset." ]
[ "TAGS\n#task_categories-summarization #task_categories-text2text-generation #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-apache-2.0 #conditional-text-generation #region-us \n", "# Dataset Card for Demo", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThis is a demo dataset with two files 'URL' and 'URL'.\n\nLoad it by:", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @github-username for adding this dataset." ]
d9709e0c5512c125ce34aea05b3de3f912092c1b
# Dataset Card for "squad_v2_sv" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits Sample Size](#data-splits-sample-size) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://github.com/susumu2357/SQuAD_v2_sv](https://github.com/susumu2357/SQuAD_v2_sv) - **Repository:** [https://github.com/susumu2357/SQuAD_v2_sv](https://github.com/susumu2357/SQuAD_v2_sv) - **Paper:** None - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 10.09 MB - **Size of the generated dataset:** 113.27 MB - **Total amount of disk used:** 123.36 MB ### Dataset Summary SQuAD_v2_sv is a Swedish version of SQuAD2.0. Translation was done automatically using the Google Translate API but it is not so straightforward for the following reasons. - The span that determines the start and end of the answer in the context may change after translation. - If the context and the answer are translated independently, the translated answer may not be included in the translated context. Details on how to handle these dificulties are described in the git hub repo. ### Supported Tasks [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages Swedish ## Dataset Structure ### Data Fields The data fields are the same among all splits. #### squad_v2 - `id`: a `string` feature. - `title`: a `string` feature. - `context`: a `string` feature. - `question`: a `string` feature. - `answers`: a dictionary feature containing: - `text`: a `string` feature. - `answer_start`: a `int32` feature. 
### Data Splits Sample Size | name |train |validation| |--------|-----:|---------:| |squad_v2_Sv|113898| 11156| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information ``` @misc{squad_v2_sv, author = {Susumu Okazawa}, title = {Swedish translation of SQuAD2.0}, year = {2021}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/susumu2357/SQuAD_v2_sv}} ```
susumu2357/squad_v2_sv
[ "task_categories:question-answering", "task_ids:extractive-qa", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:extended|wikipedia", "language:sv", "license:apache-2.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["sv"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["extended|wikipedia"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"]}
2022-07-01T17:31:20+00:00
[]
[ "sv" ]
TAGS #task_categories-question-answering #task_ids-extractive-qa #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|wikipedia #language-Swedish #license-apache-2.0 #region-us
Dataset Card for "squad\_v2\_sv" ================================ Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits Sample Size * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: None * Point of Contact: * Size of downloaded dataset files: 10.09 MB * Size of the generated dataset: 113.27 MB * Total amount of disk used: 123.36 MB ### Dataset Summary SQuAD\_v2\_sv is a Swedish version of SQuAD2.0. Translation was done automatically using the Google Translate API but it is not so straightforward for the following reasons. * The span that determines the start and end of the answer in the context may change after translation. * If the context and the answer are translated independently, the translated answer may not be included in the translated context. Details on how to handle these dificulties are described in the git hub repo. ### Supported Tasks ### Languages Swedish Dataset Structure ----------------- ### Data Fields The data fields are the same among all splits. #### squad\_v2 * 'id': a 'string' feature. * 'title': a 'string' feature. * 'context': a 'string' feature. * 'question': a 'string' feature. * 'answers': a dictionary feature containing: + 'text': a 'string' feature. + 'answer\_start': a 'int32' feature. ### Data Splits Sample Size Dataset Creation ---------------- ### Curation Rationale ### Source Data ### Annotations ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information
[ "### Dataset Summary\n\n\nSQuAD\\_v2\\_sv is a Swedish version of SQuAD2.0. Translation was done automatically using the Google Translate API but it is not so straightforward for the following reasons.\n\n\n* The span that determines the start and end of the answer in the context may change after translation.\n* If the context and the answer are translated independently, the translated answer may not be included in the translated context.\n\n\nDetails on how to handle these dificulties are described in the git hub repo.", "### Supported Tasks", "### Languages\n\n\nSwedish\n\n\nDataset Structure\n-----------------", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### squad\\_v2\n\n\n* 'id': a 'string' feature.\n* 'title': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.", "### Data Splits Sample Size\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "### Annotations", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information" ]
[ "TAGS\n#task_categories-question-answering #task_ids-extractive-qa #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-extended|wikipedia #language-Swedish #license-apache-2.0 #region-us \n", "### Dataset Summary\n\n\nSQuAD\\_v2\\_sv is a Swedish version of SQuAD2.0. Translation was done automatically using the Google Translate API but it is not so straightforward for the following reasons.\n\n\n* The span that determines the start and end of the answer in the context may change after translation.\n* If the context and the answer are translated independently, the translated answer may not be included in the translated context.\n\n\nDetails on how to handle these dificulties are described in the git hub repo.", "### Supported Tasks", "### Languages\n\n\nSwedish\n\n\nDataset Structure\n-----------------", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### squad\\_v2\n\n\n* 'id': a 'string' feature.\n* 'title': a 'string' feature.\n* 'context': a 'string' feature.\n* 'question': a 'string' feature.\n* 'answers': a dictionary feature containing:\n\t+ 'text': a 'string' feature.\n\t+ 'answer\\_start': a 'int32' feature.", "### Data Splits Sample Size\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "### Annotations", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information" ]
433e142816781b4cb97022bc2bd245e138a82140
# Dataset Card for QReCC: Question Rewriting in Conversational Context ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Source Data](#source-data) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - [**Repository:**](https://github.com/apple/ml-qrecc) - [**Paper:**](https://arxiv.org/pdf/2010.04898.pdf) - [**Leaderboard:**](https://www.tira.io/task/scai-qrecc/dataset/scai-qrecc21-test-dataset-2021-07-20) ### Dataset Summary QReCC (Question Rewriting in Conversational Context) is an end-to-end open-domain question answering dataset comprising of 14K conversations with 81K question-answer pairs. The goal of this dataset is to provide a challenging benchmark for end-to-end conversational question answering that includes the individual subtasks of question rewriting, passage retrieval and reading comprehension. The task in QReCC is to find answers to conversational questions within a collection of 10M web pages split into 54M passages. Answers to questions in the same conversation may be distributed across several web pages. The passage collection should be downloaded from [**Zenodo**](https://zenodo.org/record/5115890#.YaeD7C8RppR) (passages.zip) ### Supported Tasks and Leaderboards `question-answering` ### Languages English ## Dataset Structure ### Data Instances An example from the data set looks as follows: ``` { "Context": [ "What are the pros and cons of electric cars?", "Some pros are: They're easier on the environment. Electricity is cheaper than gasoline. Maintenance is less frequent and less expensive. They're very quiet. You'll get tax credits. They can shorten your commute time. Some cons are: Most EVs have pretty short ranges. Recharging can take a while." ], "Question": "Tell me more about Tesla", "Rewrite": "Tell me more about Tesla the car company.", "Answer": "Tesla Inc. is an American automotive and energy company based in Palo Alto, California. The company specializes in electric car manufacturing and, through its SolarCity subsidiary, solar panel manufacturing.", "Answer_URL": "https://en.wikipedia.org/wiki/Tesla,_Inc.", "Conversation_no": 74, "Turn_no": 2, "Conversation_source": "trec" } ``` ### Data Splits - train: 63501 - test: 16451 ## Dataset Creation ### Source Data - QuAC - TREC CAsT - Natural Questions ## Additional Information ### Licensing Information [CC BY-SA 3.0](http://creativecommons.org/licenses/by-sa/3.0/) ### Citation Information ``` @inproceedings{ qrecc, title={Open-Domain Question Answering Goes Conversational via Question Rewriting}, author={Anantha, Raviteja and Vakulenko, Svitlana and Tu, Zhucheng and Longpre, Shayne and Pulman, Stephen and Chappidi, Srinivas}, booktitle={ NAACL }, year={2021} } ```
svakulenk0/qrecc
[ "task_categories:question-answering", "task_ids:open-domain-qa", "language_creators:expert-generated", "language_creators:found", "multilinguality:monolingual", "source_datasets:extended|natural_questions", "source_datasets:extended|quac", "language:en", "license:cc-by-3.0", "arxiv:2010.04898", "region:us" ]
2022-03-02T23:29:22+00:00
{"language_creators": ["expert-generated", "found"], "language": ["en"], "license": ["cc-by-3.0"], "multilinguality": ["monolingual"], "source_datasets": ["extended|natural_questions", "extended|quac"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa"], "pretty_name": "QReCC"}
2022-07-02T16:35:21+00:00
[ "2010.04898" ]
[ "en" ]
TAGS #task_categories-question-answering #task_ids-open-domain-qa #language_creators-expert-generated #language_creators-found #multilinguality-monolingual #source_datasets-extended|natural_questions #source_datasets-extended|quac #language-English #license-cc-by-3.0 #arxiv-2010.04898 #region-us
# Dataset Card for QReCC: Question Rewriting in Conversational Context ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Splits - Dataset Creation - Source Data - Additional Information - Licensing Information - Citation Information ## Dataset Description - Repository: - Paper: - Leaderboard: ### Dataset Summary QReCC (Question Rewriting in Conversational Context) is an end-to-end open-domain question answering dataset comprising of 14K conversations with 81K question-answer pairs. The goal of this dataset is to provide a challenging benchmark for end-to-end conversational question answering that includes the individual subtasks of question rewriting, passage retrieval and reading comprehension. The task in QReCC is to find answers to conversational questions within a collection of 10M web pages split into 54M passages. Answers to questions in the same conversation may be distributed across several web pages. The passage collection should be downloaded from Zenodo (URL) ### Supported Tasks and Leaderboards 'question-answering' ### Languages English ## Dataset Structure ### Data Instances An example from the data set looks as follows: ### Data Splits - train: 63501 - test: 16451 ## Dataset Creation ### Source Data - QuAC - TREC CAsT - Natural Questions ## Additional Information ### Licensing Information CC BY-SA 3.0
[ "# Dataset Card for QReCC: Question Rewriting in Conversational Context", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Splits\n- Dataset Creation\n - Source Data\n- Additional Information\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Repository:\n- Paper:\n- Leaderboard:", "### Dataset Summary\n\nQReCC (Question Rewriting in Conversational Context) is an end-to-end open-domain question answering dataset comprising of 14K conversations with 81K question-answer pairs. The goal of this dataset is to provide a challenging benchmark for end-to-end conversational question answering that includes the individual subtasks of question rewriting, passage retrieval and reading comprehension.\n\nThe task in QReCC is to find answers to conversational questions within a collection of 10M web pages split into 54M passages. Answers to questions in the same conversation may be distributed across several web pages.\n\nThe passage collection should be downloaded from Zenodo (URL)", "### Supported Tasks and Leaderboards\n\n'question-answering'", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nAn example from the data set looks as follows:", "### Data Splits\n\n- train: 63501\n- test: 16451", "## Dataset Creation", "### Source Data\n\n- QuAC\n- TREC CAsT\n- Natural Questions", "## Additional Information", "### Licensing Information\n\nCC BY-SA 3.0" ]
[ "TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #language_creators-expert-generated #language_creators-found #multilinguality-monolingual #source_datasets-extended|natural_questions #source_datasets-extended|quac #language-English #license-cc-by-3.0 #arxiv-2010.04898 #region-us \n", "# Dataset Card for QReCC: Question Rewriting in Conversational Context", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Splits\n- Dataset Creation\n - Source Data\n- Additional Information\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Repository:\n- Paper:\n- Leaderboard:", "### Dataset Summary\n\nQReCC (Question Rewriting in Conversational Context) is an end-to-end open-domain question answering dataset comprising of 14K conversations with 81K question-answer pairs. The goal of this dataset is to provide a challenging benchmark for end-to-end conversational question answering that includes the individual subtasks of question rewriting, passage retrieval and reading comprehension.\n\nThe task in QReCC is to find answers to conversational questions within a collection of 10M web pages split into 54M passages. Answers to questions in the same conversation may be distributed across several web pages.\n\nThe passage collection should be downloaded from Zenodo (URL)", "### Supported Tasks and Leaderboards\n\n'question-answering'", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances\n\nAn example from the data set looks as follows:", "### Data Splits\n\n- train: 63501\n- test: 16451", "## Dataset Creation", "### Source Data\n\n- QuAC\n- TREC CAsT\n- Natural Questions", "## Additional Information", "### Licensing Information\n\nCC BY-SA 3.0" ]
98c2df63345816421f5571ce53c20a6336166768
# NER for Icelandic - MIM-GOLD-NER splits ## MIM-GOLD-NER The original MIM-GOLD-NER data is found at http://hdl.handle.net/20.500.12537/42 This repository packages the data for use with the Datasets library from Hugging Face. ## Old splits *This is no longer in use.* At the time of creation, the original data did not have train, dev and test splits. `create_splits.py` was used to create temporary splits.
svanhvit/icelandic-ner-MIM-GOLD-NER
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-10-08T10:39:45+00:00
[]
[]
TAGS #region-us
# NER for Icelandic - MIM-GOLD-NER splits ## MIM-GOLD-NER The original MIM-GOLD-NER data is found at URL This repository packages the data for use with the Datasets library from Hugging Face. ## Old splits *This is no longer in use.* At the time of creation, the original data did not have train, dev and test splits. 'create_splits.py' was used to create temporary splits.
[ "# NER for Icelandic - MIM-GOLD-NER splits", "## MIM-GOLD-NER\n\nThe original MIM-GOLD-NER data is found at URL \n\nThis repository packages the data for use with the Datasets library from hugginface.", "## Old splits\n\n*This is no longer in use.*\n\nAt the time of creation, the original data did not have train, dev and test splits. 'create_splits.py' was used to create temporary splits." ]
[ "TAGS\n#region-us \n", "# NER for Icelandic - MIM-GOLD-NER splits", "## MIM-GOLD-NER\n\nThe original MIM-GOLD-NER data is found at URL \n\nThis repository packages the data for use with the Datasets library from hugginface.", "## Old splits\n\n*This is no longer in use.*\n\nAt the time of creation, the original data did not have train, dev and test splits. 'create_splits.py' was used to create temporary splits." ]
16d7a159dc46c84ba94dfe523233cabfc39df5db
## Dataset Description - **Homepage:** [SCROLLS](https://www.scrolls-benchmark.com/) - **Repository:** [SCROLLS Github repository](https://github.com/tau-nlp/scrolls) - **Paper:** [SCROLLS: Standardized CompaRison Over Long Language Sequences ](https://arxiv.org/pdf/2201.03533.pdf) - **Leaderboard:** [Leaderboard](https://www.scrolls-benchmark.com/leaderboard) - **Point of Contact:** [[email protected]]([email protected]) # Dataset Card for SCROLLS ## Overview SCROLLS is a suite of datasets that require synthesizing information over long texts. The benchmark includes seven natural language tasks across multiple domains, including summarization, question answering, and natural language inference. ## Leaderboard The SCROLLS benchmark leaderboard can be found [here](https://www.scrolls-benchmark.com/leaderboard). ## Tasks SCROLLS comprises the following tasks: #### GovReport ([Huang et al., 2021](https://arxiv.org/pdf/2104.02112.pdf)) GovReport is a summarization dataset of reports addressing various national policy issues published by the Congressional Research Service and the U.S. Government Accountability Office, where each document is paired with a hand-written executive summary. The reports and their summaries are longer than their equivalents in other popular long-document summarization datasets; for example, GovReport's documents are approximately 1.5 and 2.5 times longer than the documents in Arxiv and PubMed, respectively. #### SummScreenFD ([Chen et al., 2021](https://arxiv.org/pdf/2104.07091.pdf)) SummScreenFD is a summarization dataset in the domain of TV shows (e.g. Friends, Game of Thrones). Given a transcript of a specific episode, the goal is to produce the episode's recap. The original dataset is divided into two complementary subsets, based on the source of its community contributed transcripts. For SCROLLS, we use the ForeverDreaming (FD) subset, as it incorporates 88 different shows, making it a more diverse alternative to the TV MegaSite (TMS) subset, which has only 10 shows. Community-authored recaps for the ForeverDreaming transcripts were collected from English Wikipedia and TVMaze. #### QMSum ([Zhong et al., 2021](https://arxiv.org/pdf/2104.05938.pdf)) QMSum is a query-based summarization dataset, consisting of 232 meetings transcripts from multiple domains. The corpus covers academic group meetings at the International Computer Science Institute and their summaries, industrial product meetings for designing a remote control, and committee meetings of the Welsh and Canadian Parliaments, dealing with a variety of public policy issues. Annotators were tasked with writing queries about the broad contents of the meetings, as well as specific questions about certain topics or decisions, while ensuring that the relevant text for answering each query spans at least 200 words or 10 turns. #### NarrativeQA ([Kočiský et al., 2018](https://arxiv.org/pdf/1712.07040.pdf)) NarrativeQA (Kočiský et al., 2021) is an established question answering dataset over entire books from Project Gutenberg and movie scripts from different websites. Annotators were given summaries of the books and scripts obtained from Wikipedia, and asked to generate question-answer pairs, resulting in about 30 questions and answers for each of the 1,567 books and scripts. They were encouraged to use their own words rather then copying, and avoid asking yes/no questions or ones about the cast. 
Each question was then answered by an additional annotator, providing each question with two reference answers (unless both answers are identical). #### Qasper ([Dasigi et al., 2021](https://arxiv.org/pdf/2105.03011.pdf)) Qasper is a question answering dataset over NLP papers filtered from the Semantic Scholar Open Research Corpus (S2ORC). Questions were written by NLP practitioners after reading only the title and abstract of the papers, while another set of NLP practitioners annotated the answers given the entire document. Qasper contains abstractive, extractive, and yes/no questions, as well as unanswerable ones. #### QuALITY ([Pang et al., 2021](https://arxiv.org/pdf/2112.08608.pdf)) QuALITY is a multiple-choice question answering dataset over articles and stories sourced from Project Gutenberg, the Open American National Corpus, and more. Experienced writers wrote questions and distractors, and were incentivized to write answerable, unambiguous questions such that in order to correctly answer them, human annotators must read large portions of the given document. Reference answers were then calculated using the majority vote between of the annotators and writer's answers. To measure the difficulty of their questions, Pang et al. conducted a speed validation process, where another set of annotators were asked to answer questions given only a short period of time to skim through the document. As a result, 50% of the questions in QuALITY are labeled as hard, i.e. the majority of the annotators in the speed validation setting chose the wrong answer. #### ContractNLI ([Koreeda and Manning, 2021](https://arxiv.org/pdf/2110.01799.pdf)) Contract NLI is a natural language inference dataset in the legal domain. Given a non-disclosure agreement (the premise), the task is to predict whether a particular legal statement (the hypothesis) is entailed, not entailed (neutral), or cannot be entailed (contradiction) from the contract. The NDAs were manually picked after simple filtering from the Electronic Data Gathering, Analysis, and Retrieval system (EDGAR) and Google. The dataset contains a total of 607 contracts and 17 unique hypotheses, which were combined to produce the dataset's 10,319 examples. ## Data Fields All the datasets in the benchmark are in the same input-output format - `input`: a `string` feature. The input document. - `output`: a `string` feature. The target. - `id`: a `string` feature. Unique per input. - `pid`: a `string` feature. Unique per input-output pair (can differ from 'id' in NarrativeQA and Qasper, where there is more then one valid target). ## Citation If you use the SCROLLS data, **please make sure to cite all of the original dataset papers.** [[bibtex](https://scrolls-tau.s3.us-east-2.amazonaws.com/scrolls_datasets.bib)] ``` @inproceedings{shaham-etal-2022-scrolls, title = "{SCROLLS}: Standardized {C}ompa{R}ison Over Long Language Sequences", author = "Shaham, Uri and Segal, Elad and Ivgi, Maor and Efrat, Avia and Yoran, Ori and Haviv, Adi and Gupta, Ankit and Xiong, Wenhan and Geva, Mor and Berant, Jonathan and Levy, Omer", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, United Arab Emirates", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.emnlp-main.823", pages = "12007--12021", } ```
tau/scrolls
[ "task_categories:question-answering", "task_categories:summarization", "task_categories:text-generation", "task_ids:multiple-choice-qa", "task_ids:natural-language-inference", "language:en", "query-based-summarization", "long-texts", "arxiv:2201.03533", "arxiv:2104.02112", "arxiv:2104.07091", "arxiv:2104.05938", "arxiv:1712.07040", "arxiv:2105.03011", "arxiv:2112.08608", "arxiv:2110.01799", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "task_categories": ["question-answering", "summarization", "text-generation"], "task_ids": ["multiple-choice-qa", "natural-language-inference"], "paperswithcode_id": "scrolls", "configs": ["gov_report", "summ_screen_fd", "qmsum", "qasper", "narrative_qa", "quality", "contract_nli"], "tags": ["query-based-summarization", "long-texts"]}
2024-01-12T09:30:24+00:00
[ "2201.03533", "2104.02112", "2104.07091", "2104.05938", "1712.07040", "2105.03011", "2112.08608", "2110.01799" ]
[ "en" ]
TAGS #task_categories-question-answering #task_categories-summarization #task_categories-text-generation #task_ids-multiple-choice-qa #task_ids-natural-language-inference #language-English #query-based-summarization #long-texts #arxiv-2201.03533 #arxiv-2104.02112 #arxiv-2104.07091 #arxiv-2104.05938 #arxiv-1712.07040 #arxiv-2105.03011 #arxiv-2112.08608 #arxiv-2110.01799 #region-us
## Dataset Description - Homepage: SCROLLS - Repository: SCROLLS Github repository - Paper: SCROLLS: Standardized CompaRison Over Long Language Sequences - Leaderboard: Leaderboard - Point of Contact: scrolls-benchmark-contact@URL # Dataset Card for SCROLLS ## Overview SCROLLS is a suite of datasets that require synthesizing information over long texts. The benchmark includes seven natural language tasks across multiple domains, including summarization, question answering, and natural language inference. ## Leaderboard The SCROLLS benchmark leaderboard can be found here. ## Tasks SCROLLS comprises the following tasks: #### GovReport (Huang et al., 2021) GovReport is a summarization dataset of reports addressing various national policy issues published by the Congressional Research Service and the U.S. Government Accountability Office, where each document is paired with a hand-written executive summary. The reports and their summaries are longer than their equivalents in other popular long-document summarization datasets; for example, GovReport's documents are approximately 1.5 and 2.5 times longer than the documents in Arxiv and PubMed, respectively. #### SummScreenFD (Chen et al., 2021) SummScreenFD is a summarization dataset in the domain of TV shows (e.g. Friends, Game of Thrones). Given a transcript of a specific episode, the goal is to produce the episode's recap. The original dataset is divided into two complementary subsets, based on the source of its community contributed transcripts. For SCROLLS, we use the ForeverDreaming (FD) subset, as it incorporates 88 different shows, making it a more diverse alternative to the TV MegaSite (TMS) subset, which has only 10 shows. Community-authored recaps for the ForeverDreaming transcripts were collected from English Wikipedia and TVMaze. #### QMSum (Zhong et al., 2021) QMSum is a query-based summarization dataset, consisting of 232 meetings transcripts from multiple domains. The corpus covers academic group meetings at the International Computer Science Institute and their summaries, industrial product meetings for designing a remote control, and committee meetings of the Welsh and Canadian Parliaments, dealing with a variety of public policy issues. Annotators were tasked with writing queries about the broad contents of the meetings, as well as specific questions about certain topics or decisions, while ensuring that the relevant text for answering each query spans at least 200 words or 10 turns. #### NarrativeQA (Kočiský et al., 2018) NarrativeQA (Kočiský et al., 2021) is an established question answering dataset over entire books from Project Gutenberg and movie scripts from different websites. Annotators were given summaries of the books and scripts obtained from Wikipedia, and asked to generate question-answer pairs, resulting in about 30 questions and answers for each of the 1,567 books and scripts. They were encouraged to use their own words rather then copying, and avoid asking yes/no questions or ones about the cast. Each question was then answered by an additional annotator, providing each question with two reference answers (unless both answers are identical). #### Qasper (Dasigi et al., 2021) Qasper is a question answering dataset over NLP papers filtered from the Semantic Scholar Open Research Corpus (S2ORC). Questions were written by NLP practitioners after reading only the title and abstract of the papers, while another set of NLP practitioners annotated the answers given the entire document. 
Qasper contains abstractive, extractive, and yes/no questions, as well as unanswerable ones. #### QuALITY (Pang et al., 2021) QuALITY is a multiple-choice question answering dataset over articles and stories sourced from Project Gutenberg, the Open American National Corpus, and more. Experienced writers wrote questions and distractors, and were incentivized to write answerable, unambiguous questions such that in order to correctly answer them, human annotators must read large portions of the given document. Reference answers were then calculated using the majority vote between of the annotators and writer's answers. To measure the difficulty of their questions, Pang et al. conducted a speed validation process, where another set of annotators were asked to answer questions given only a short period of time to skim through the document. As a result, 50% of the questions in QuALITY are labeled as hard, i.e. the majority of the annotators in the speed validation setting chose the wrong answer. #### ContractNLI (Koreeda and Manning, 2021) Contract NLI is a natural language inference dataset in the legal domain. Given a non-disclosure agreement (the premise), the task is to predict whether a particular legal statement (the hypothesis) is entailed, not entailed (neutral), or cannot be entailed (contradiction) from the contract. The NDAs were manually picked after simple filtering from the Electronic Data Gathering, Analysis, and Retrieval system (EDGAR) and Google. The dataset contains a total of 607 contracts and 17 unique hypotheses, which were combined to produce the dataset's 10,319 examples. ## Data Fields All the datasets in the benchmark are in the same input-output format - 'input': a 'string' feature. The input document. - 'output': a 'string' feature. The target. - 'id': a 'string' feature. Unique per input. - 'pid': a 'string' feature. Unique per input-output pair (can differ from 'id' in NarrativeQA and Qasper, where there is more then one valid target). If you use the SCROLLS data, please make sure to cite all of the original dataset papers. [bibtex]
[ "## Dataset Description\n\n- Homepage: SCROLLS\n- Repository: SCROLLS Github repository\n- Paper: SCROLLS: Standardized CompaRison Over Long Language Sequences\n\n- Leaderboard: Leaderboard\n- Point of Contact: scrolls-benchmark-contact@URL", "# Dataset Card for SCROLLS", "## Overview\nSCROLLS is a suite of datasets that require synthesizing information over long texts. The benchmark includes seven natural language tasks across multiple domains, including summarization, question answering, and natural language inference.", "## Leaderboard\nThe SCROLLS benchmark leaderboard can be found here.", "## Tasks\nSCROLLS comprises the following tasks:", "#### GovReport (Huang et al., 2021)\nGovReport is a summarization dataset of reports addressing various national policy issues published by the \nCongressional Research Service and the U.S. Government Accountability Office, where each document is paired with a hand-written executive summary.\nThe reports and their summaries are longer than their equivalents in other popular long-document summarization datasets; \nfor example, GovReport's documents are approximately 1.5 and 2.5 times longer than the documents in Arxiv and PubMed, respectively.", "#### SummScreenFD (Chen et al., 2021)\nSummScreenFD is a summarization dataset in the domain of TV shows (e.g. Friends, Game of Thrones).\nGiven a transcript of a specific episode, the goal is to produce the episode's recap.\nThe original dataset is divided into two complementary subsets, based on the source of its community contributed transcripts. \nFor SCROLLS, we use the ForeverDreaming (FD) subset, as it incorporates 88 different shows, \nmaking it a more diverse alternative to the TV MegaSite (TMS) subset, which has only 10 shows. \nCommunity-authored recaps for the ForeverDreaming transcripts were collected from English Wikipedia and TVMaze.", "#### QMSum (Zhong et al., 2021)\nQMSum is a query-based summarization dataset, consisting of 232 meetings transcripts from multiple domains. 
\nThe corpus covers academic group meetings at the International Computer Science Institute and their summaries, industrial product meetings for designing a remote control, \nand committee meetings of the Welsh and Canadian Parliaments, dealing with a variety of public policy issues.\nAnnotators were tasked with writing queries about the broad contents of the meetings, as well as specific questions about certain topics or decisions, \nwhile ensuring that the relevant text for answering each query spans at least 200 words or 10 turns.", "#### NarrativeQA (Kočiský et al., 2018)\nNarrativeQA (Kočiský et al., 2021) is an established question answering dataset over entire books from Project Gutenberg and movie scripts from different websites.\nAnnotators were given summaries of the books and scripts obtained from Wikipedia, and asked to generate question-answer pairs, \nresulting in about 30 questions and answers for each of the 1,567 books and scripts.\nThey were encouraged to use their own words rather then copying, and avoid asking yes/no questions or ones about the cast.\nEach question was then answered by an additional annotator, providing each question with two reference answers (unless both answers are identical).", "#### Qasper (Dasigi et al., 2021)\nQasper is a question answering dataset over NLP papers filtered from the Semantic Scholar Open Research Corpus (S2ORC).\nQuestions were written by NLP practitioners after reading only the title and abstract of the papers, \nwhile another set of NLP practitioners annotated the answers given the entire document.\nQasper contains abstractive, extractive, and yes/no questions, as well as unanswerable ones.", "#### QuALITY (Pang et al., 2021)\nQuALITY is a multiple-choice question answering dataset over articles and stories sourced from Project Gutenberg, \nthe Open American National Corpus, and more.\nExperienced writers wrote questions and distractors, and were incentivized to write answerable, unambiguous questions such that in order to correctly answer them, \nhuman annotators must read large portions of the given document. \nReference answers were then calculated using the majority vote between of the annotators and writer's answers.\nTo measure the difficulty of their questions, Pang et al. conducted a speed validation process, \nwhere another set of annotators were asked to answer questions given only a short period of time to skim through the document.\nAs a result, 50% of the questions in QuALITY are labeled as hard, i.e. the majority of the annotators in the speed validation setting chose the wrong answer.", "#### ContractNLI (Koreeda and Manning, 2021)\nContract NLI is a natural language inference dataset in the legal domain.\nGiven a non-disclosure agreement (the premise), the task is to predict whether a particular legal statement (the hypothesis) is entailed, not entailed (neutral), or cannot be entailed (contradiction) from the contract.\nThe NDAs were manually picked after simple filtering from the Electronic Data Gathering, Analysis, and Retrieval system (EDGAR) and Google.\nThe dataset contains a total of 607 contracts and 17 unique hypotheses, which were combined to produce the dataset's 10,319 examples.", "## Data Fields\n\nAll the datasets in the benchmark are in the same input-output format\n\n- 'input': a 'string' feature. The input document.\n- 'output': a 'string' feature. The target.\n- 'id': a 'string' feature. Unique per input.\n- 'pid': a 'string' feature. 
Unique per input-output pair (can differ from 'id' in NarrativeQA and Qasper, where there is more then one valid target).\n\nIf you use the SCROLLS data, please make sure to cite all of the original dataset papers. [bibtex]" ]
[ "TAGS\n#task_categories-question-answering #task_categories-summarization #task_categories-text-generation #task_ids-multiple-choice-qa #task_ids-natural-language-inference #language-English #query-based-summarization #long-texts #arxiv-2201.03533 #arxiv-2104.02112 #arxiv-2104.07091 #arxiv-2104.05938 #arxiv-1712.07040 #arxiv-2105.03011 #arxiv-2112.08608 #arxiv-2110.01799 #region-us \n", "## Dataset Description\n\n- Homepage: SCROLLS\n- Repository: SCROLLS Github repository\n- Paper: SCROLLS: Standardized CompaRison Over Long Language Sequences\n\n- Leaderboard: Leaderboard\n- Point of Contact: scrolls-benchmark-contact@URL", "# Dataset Card for SCROLLS", "## Overview\nSCROLLS is a suite of datasets that require synthesizing information over long texts. The benchmark includes seven natural language tasks across multiple domains, including summarization, question answering, and natural language inference.", "## Leaderboard\nThe SCROLLS benchmark leaderboard can be found here.", "## Tasks\nSCROLLS comprises the following tasks:", "#### GovReport (Huang et al., 2021)\nGovReport is a summarization dataset of reports addressing various national policy issues published by the \nCongressional Research Service and the U.S. Government Accountability Office, where each document is paired with a hand-written executive summary.\nThe reports and their summaries are longer than their equivalents in other popular long-document summarization datasets; \nfor example, GovReport's documents are approximately 1.5 and 2.5 times longer than the documents in Arxiv and PubMed, respectively.", "#### SummScreenFD (Chen et al., 2021)\nSummScreenFD is a summarization dataset in the domain of TV shows (e.g. Friends, Game of Thrones).\nGiven a transcript of a specific episode, the goal is to produce the episode's recap.\nThe original dataset is divided into two complementary subsets, based on the source of its community contributed transcripts. \nFor SCROLLS, we use the ForeverDreaming (FD) subset, as it incorporates 88 different shows, \nmaking it a more diverse alternative to the TV MegaSite (TMS) subset, which has only 10 shows. \nCommunity-authored recaps for the ForeverDreaming transcripts were collected from English Wikipedia and TVMaze.", "#### QMSum (Zhong et al., 2021)\nQMSum is a query-based summarization dataset, consisting of 232 meetings transcripts from multiple domains. 
\nThe corpus covers academic group meetings at the International Computer Science Institute and their summaries, industrial product meetings for designing a remote control, \nand committee meetings of the Welsh and Canadian Parliaments, dealing with a variety of public policy issues.\nAnnotators were tasked with writing queries about the broad contents of the meetings, as well as specific questions about certain topics or decisions, \nwhile ensuring that the relevant text for answering each query spans at least 200 words or 10 turns.", "#### NarrativeQA (Kočiský et al., 2018)\nNarrativeQA (Kočiský et al., 2021) is an established question answering dataset over entire books from Project Gutenberg and movie scripts from different websites.\nAnnotators were given summaries of the books and scripts obtained from Wikipedia, and asked to generate question-answer pairs, \nresulting in about 30 questions and answers for each of the 1,567 books and scripts.\nThey were encouraged to use their own words rather then copying, and avoid asking yes/no questions or ones about the cast.\nEach question was then answered by an additional annotator, providing each question with two reference answers (unless both answers are identical).", "#### Qasper (Dasigi et al., 2021)\nQasper is a question answering dataset over NLP papers filtered from the Semantic Scholar Open Research Corpus (S2ORC).\nQuestions were written by NLP practitioners after reading only the title and abstract of the papers, \nwhile another set of NLP practitioners annotated the answers given the entire document.\nQasper contains abstractive, extractive, and yes/no questions, as well as unanswerable ones.", "#### QuALITY (Pang et al., 2021)\nQuALITY is a multiple-choice question answering dataset over articles and stories sourced from Project Gutenberg, \nthe Open American National Corpus, and more.\nExperienced writers wrote questions and distractors, and were incentivized to write answerable, unambiguous questions such that in order to correctly answer them, \nhuman annotators must read large portions of the given document. \nReference answers were then calculated using the majority vote between of the annotators and writer's answers.\nTo measure the difficulty of their questions, Pang et al. conducted a speed validation process, \nwhere another set of annotators were asked to answer questions given only a short period of time to skim through the document.\nAs a result, 50% of the questions in QuALITY are labeled as hard, i.e. the majority of the annotators in the speed validation setting chose the wrong answer.", "#### ContractNLI (Koreeda and Manning, 2021)\nContract NLI is a natural language inference dataset in the legal domain.\nGiven a non-disclosure agreement (the premise), the task is to predict whether a particular legal statement (the hypothesis) is entailed, not entailed (neutral), or cannot be entailed (contradiction) from the contract.\nThe NDAs were manually picked after simple filtering from the Electronic Data Gathering, Analysis, and Retrieval system (EDGAR) and Google.\nThe dataset contains a total of 607 contracts and 17 unique hypotheses, which were combined to produce the dataset's 10,319 examples.", "## Data Fields\n\nAll the datasets in the benchmark are in the same input-output format\n\n- 'input': a 'string' feature. The input document.\n- 'output': a 'string' feature. The target.\n- 'id': a 'string' feature. Unique per input.\n- 'pid': a 'string' feature. 
Unique per input-output pair (can differ from 'id' in NarrativeQA and Qasper, where there is more than one valid target).\n\nIf you use the SCROLLS data, please make sure to cite all of the original dataset papers. [bibtex]" ]
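The shared input/output format described above maps directly onto the Hugging Face `datasets` API. A minimal, hedged sketch of reading one task follows; the hub path `tau/scrolls` and the config name `qasper` are assumptions not stated in this card, so substitute whatever identifier and config you actually use.

```python
# Sketch: load one SCROLLS task and inspect the shared input/output fields.
# "tau/scrolls" and the "qasper" config are assumed names, not taken from the card.
from datasets import load_dataset

qasper = load_dataset("tau/scrolls", "qasper", split="validation")

example = qasper[0]
print(example["id"], example["pid"])   # 'pid' is unique per input-output pair
print(example["input"][:300])          # the document (plus the query, for the QA tasks)
print(example["output"])               # one reference target per pid
```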
cf9350c25cc6c7bdf1ef3b894735f5dbd1017b17
test111111111 222222
testOrganization01/test05
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-10-27T05:40:44+00:00
[]
[]
TAGS #region-us
test111111111 222222
[]
[ "TAGS\n#region-us \n" ]
5edaf8a0223732d1f92e92ddcf96fe0e9175a502
# MOLD - {M}arathi {O}ffensive {L}anguage {D}ataset The {M}arathi {O}ffensive {L}anguage {D}ataset (MOLD) contains a collection of 2500 annotated Marathi tweets. The files included are: ``` MOLD │ README.md └───data │ MOLD_train.csv │ MOLD_test.csv ``` - `MOLD_train.csv`: contains 1,875 annotated tweets for the training set. - `MOLD_test.csv`: contains 625 annotated tweets for the test set. The dataset was annotated using crowdsourcing. The gold labels were assigned taking the agreement of six annotators into consideration. No correction has been carried out on the crowdsourcing annotations. Each instance in MOLD has been annotated as offensive or not_offensive ## Citation If you used MOLD, please refer to this paper: ```bash @InProceedings{mold, author = {Gaikwad, Saurabh and Ranasinghe, Tharindu and Zampieri, Marcos and Homan, Christopher M.}, title = {Cross-lingual Offensive Language Identification for Low Resource Languages: The Case of Marathi}, booktitle = {Proceedings of RANLP}, year = {2021} } ```
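A minimal sketch of loading the two splits described above with pandas; the column names (`tweet` for the text and `label` for the offensive/not_offensive tag) are assumptions, since the card does not list the CSV headers — adjust them to whatever the files actually contain.

```python
# Sketch: load the MOLD train/test CSVs and check split sizes and label balance.
# The paths and the "tweet"/"label" column names are assumptions (not documented above).
import pandas as pd

train = pd.read_csv("data/MOLD_train.csv")
test = pd.read_csv("data/MOLD_test.csv")

print(len(train), len(test))          # expected: 1875 and 625 rows
print(train["label"].value_counts())  # offensive vs. not_offensive
```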
tharindu/MOLD
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-09-12T18:25:26+00:00
[]
[]
TAGS #region-us
# MOLD - {M}arathi {O}ffensive {L}anguage {D}ataset The {M}arathi {O}ffensive {L}anguage {D}ataset (MOLD) contains a collection of 2500 annotated Marathi tweets. The files included are: - 'MOLD_train.csv': contains 1,875 annotated tweets for the training set. - 'MOLD_test.csv': contains 625 annotated tweets for the test set. The dataset was annotated using crowdsourcing. The gold labels were assigned taking the agreement of six annotators into consideration. No correction has been carried out on the crowdsourcing annotations. Each instance in MOLD has been annotated as offensive or not_offensive If you used MOLD, please refer to this paper:
[ "# MOLD - {M}arathi {O}ffensive {L}anguage {D}ataset\n\nThe {M}arathi {O}ffensive {L}anguage {D}ataset (MOLD) contains a collection of 2500 annotated Marathi tweets.\n\nThe files included are: \n\n- 'MOLD_train.csv': contains 1,875 annotated tweets for the training set. \n- 'MOLD_test.csv': contains 625 annotated tweets for the test set. \n\n\nThe dataset was annotated using crowdsourcing. The gold labels were assigned taking the agreement of six annotators into consideration. No correction has been carried out on the crowdsourcing annotations. \nEach instance in MOLD has been annotated as offensive or not_offensive\n\n\n\n\nIf you used MOLD, please refer to this paper:" ]
[ "TAGS\n#region-us \n", "# MOLD - {M}arathi {O}ffensive {L}anguage {D}ataset\n\nThe {M}arathi {O}ffensive {L}anguage {D}ataset (MOLD) contains a collection of 2500 annotated Marathi tweets.\n\nThe files included are: \n\n- 'MOLD_train.csv': contains 1,875 annotated tweets for the training set. \n- 'MOLD_test.csv': contains 625 annotated tweets for the test set. \n\n\nThe dataset was annotated using crowdsourcing. The gold labels were assigned taking the agreement of six annotators into consideration. No correction has been carried out on the crowdsourcing annotations. \nEach instance in MOLD has been annotated as offensive or not_offensive\n\n\n\n\nIf you used MOLD, please refer to this paper:" ]
7d6103712341593ad655569c9ec37c669a691a83
# SOLID: A Large-Scale Semi-Supervised Dataset for Offensive Language Identification The widespread use of offensive content in social media has led to an abundance of research in detecting language such as hate speech, cyberbullying, and cyber-aggression. Recent work presented the OLID dataset, which follows a taxonomy for offensive language identification that provides meaningful information for understanding the type and the target of offensive messages. However, it is limited in size and it might be biased towards offensive language as it was collected using keywords. In this work, we present SOLID, an expanded dataset, where the tweets were collected in a more principled manner. SOLID contains over nine million English tweets labelled in a semisupervised fashion. If you are using this dataset, please cite the following paper. ```bibtex @inproceedings{rosenthal-etal-2021-solid, title = "{SOLID}: A Large-Scale Semi-Supervised Dataset for Offensive Language Identification", author = "Rosenthal, Sara and Atanasova, Pepa and Karadzhov, Georgi and Zampieri, Marcos and Nakov, Preslav", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.80", doi = "10.18653/v1/2021.findings-acl.80", pages = "915--928", } ```
tharindu/SOLID
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-01-03T13:34:59+00:00
[]
[]
TAGS #region-us
# SOLID: A Large-Scale Semi-Supervised Dataset for Offensive Language Identification The widespread use of offensive content in social media has led to an abundance of research in detecting language such as hate speech, cyberbullying, and cyber-aggression. Recent work presented the OLID dataset, which follows a taxonomy for offensive language identification that provides meaningful information for understanding the type and the target of offensive messages. However, it is limited in size and it might be biased towards offensive language as it was collected using keywords. In this work, we present SOLID, an expanded dataset, where the tweets were collected in a more principled manner. SOLID contains over nine million English tweets labelled in a semisupervised fashion. If you are using this dataset, please cite the following paper.
[ "# SOLID: A Large-Scale Semi-Supervised Dataset for Offensive Language Identification\n\nThe widespread use of offensive content in social media has led to an abundance of research in detecting language such as hate speech, cyberbullying, and cyber-aggression. Recent work presented the OLID dataset, which follows a taxonomy for offensive language identification that provides meaningful information for understanding the type and the target of offensive messages. However, it is limited in size and it might be biased towards offensive language as it was collected using keywords. In this work, we present SOLID, an expanded dataset, where the tweets were collected in a more principled manner. SOLID contains over nine million English tweets labelled in a semisupervised fashion.\n\nIf you are using this dataset, please cite the following paper." ]
[ "TAGS\n#region-us \n", "# SOLID: A Large-Scale Semi-Supervised Dataset for Offensive Language Identification\n\nThe widespread use of offensive content in social media has led to an abundance of research in detecting language such as hate speech, cyberbullying, and cyber-aggression. Recent work presented the OLID dataset, which follows a taxonomy for offensive language identification that provides meaningful information for understanding the type and the target of offensive messages. However, it is limited in size and it might be biased towards offensive language as it was collected using keywords. In this work, we present SOLID, an expanded dataset, where the tweets were collected in a more principled manner. SOLID contains over nine million English tweets labelled in a semisupervised fashion.\n\nIf you are using this dataset, please cite the following paper." ]
fc01af8abe76468122cdc6a04262c22805bda2f6
Argumentation Annotated Student Peer Reviews Corpus (AASPRC) version 1.0
-----------------------------------------------------
Free and full access: https://github.com/thiemowa/argumentative_student_peer_reviews

The corpus contains 1000 persuasive student peer reviews about business model feedback, annotated for their argumentative components and argumentative relations.

The folder contains the following files:

1. guideline.pdf: the annotation guidelines used in this study
2. Corpus.zip: the corpus including the txt files and the ann (annotation) files for each student review

For annotating the texts, we used the brat annotation tool (version 1.3 "Crunchy Frog"), which can be downloaded from http://brat.nlplab.org

Citation
--------
If you use the data, cite the following publication:

T. Wambsganss, C. Niklaus, M. Söllner, S. Handschuh and J. M. Leimeister, “A Corpus for Argumentative Writing Support in German” In: 28th International Conference on Computational Linguistics (Coling), 2020.
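The card names paired txt/ann files in brat's standoff format but does not show their layout. Below is a small, hedged sketch of reading one annotation file; it assumes the standard brat conventions (T-lines for text-bound components, R-lines for relations, contiguous character spans) and a hypothetical filename, none of which the card confirms.

```python
# Sketch: parse a brat .ann file into argument components and relations.
# Assumes standard standoff format, e.g. "T1<TAB>Claim 10 25<TAB>span text" and
# "R1<TAB>supports Arg1:T2 Arg2:T1"; the real AASPRC label names may differ.
def parse_ann(path):
    components, relations = {}, []
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            parts = line.rstrip("\n").split("\t")
            if parts[0].startswith("T"):
                label, start, end = parts[1].split(" ")[:3]
                components[parts[0]] = (label, int(start), int(end), parts[2])
            elif parts[0].startswith("R"):
                relations.append((parts[0], parts[1].split(" ")))
    return components, relations

components, relations = parse_ann("review_001.ann")   # hypothetical file name
print(len(components), "components,", len(relations), "relations")
```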
thiemowa/argumentationreviewcorpus
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-05-18T12:39:41+00:00
[]
[]
TAGS #region-us
Argumentation Annotated Student Peer Reviews Corpus (AASPRC) version 1.0
-----------------------------------------------------
Free and full access: URL

The corpus contains 1000 persuasive student peer reviews about business model feedback, annotated for their argumentative components and argumentative relations.

The folder contains the following files:

1. URL: the annotation guidelines used in this study
2. URL: the corpus including the txt files and the ann (annotation) files for each student review

For annotating the texts, we used the brat annotation tool (version 1.3 "Crunchy Frog"), which can be downloaded from URL

Citation
--------
If you use the data, cite the following publication:

T. Wambsganss, C. Niklaus, M. Söllner, S. Handschuh and J. M. Leimeister, “A Corpus for Argumentative Writing Support in German” In: 28th International Conference on Computational Linguistics (Coling), 2020.
[]
[ "TAGS\n#region-us \n" ]
6ca80725a337e86cd90878abf64805b181d75854
Empathy Annotated Student Peer Reviews Corpus (EASPRC) version 1.0
-----------------------------------------------------
Free and full access: https://github.com/thiemowa/empathy_annotated_peer_reviews

The corpus contains 500 student peer reviews about business model feedback, annotated for their cognitive and emotional empathy levels based on three types of review components (strength, weakness and suggestions for improvement).

The folder contains the following files:

1. guideline.pdf: the annotation guidelines used in this study
2. Corpus.zip: the corpus including the txt files and the ann (annotation) files for each student review

For annotating the texts, we used the tagtog annotation tool (https://www.tagtog.net/).

Citation
--------
If you use the data, cite the following publication:

T. Wambsganss, C. Niklaus, M. Söllner, S. Handschuh and J. M. Leimeister, “Supporting Cognitive and Emotional Empathic Writing of Students” In: _The Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing_
thiemowa/empathyreviewcorpus
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-05-18T12:39:12+00:00
[]
[]
TAGS #region-us
Empathy Annotated Student Peer Reviews Corpus (EASPRC) version 1.0
-----------------------------------------------------
Free and full access: URL

The corpus contains 500 student peer reviews about business model feedback, annotated for their cognitive and emotional empathy levels based on three types of review components (strength, weakness and suggestions for improvement).

The folder contains the following files:

1. URL: the annotation guidelines used in this study
2. URL: the corpus including the txt files and the ann (annotation) files for each student review

For annotating the texts, we used the tagtog annotation tool (URL).

Citation
--------
If you use the data, cite the following publication:

T. Wambsganss, C. Niklaus, M. Söllner, S. Handschuh and J. M. Leimeister, “Supporting Cognitive and Emotional Empathic Writing of Students” In: _The Joint Conference of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing_
[]
[ "TAGS\n#region-us \n" ]
99f7693ac2ad86604d60f1f3dafdeea1d4f6ba0a
Dataset # My very good dataset This dataset was carefully crafted in my home with a lot of coffee By thomwolf
thomwolf/very-good-dataset
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-09-17T11:35:43+00:00
[]
[]
TAGS #region-us
Dataset # My very good dataset This dataset was carefully crafted in my home with a lot of coffee By thomwolf
[ "# My very good dataset\n\nThis dataset was carefully crafted in my home with a lot of coffee\n\nBy thomwolf" ]
[ "TAGS\n#region-us \n", "# My very good dataset\n\nThis dataset was carefully crafted in my home with a lot of coffee\n\nBy thomwolf" ]
ed27a8d299e06b62b812ea4538e74883635e531a
# My very good dataset This dataset was carefully crafted in my home with a lot of coffee By thomwolf
thomwolf/very-test-dataset-2
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-09-17T11:28:45+00:00
[]
[]
TAGS #region-us
# My very good dataset This dataset was carefully crafted in my home with a lot of coffee By thomwolf
[ "# My very good dataset\n\nThis dataset was carefully crafted in my home with a lot of coffee\n\nBy thomwolf" ]
[ "TAGS\n#region-us \n", "# My very good dataset\n\nThis dataset was carefully crafted in my home with a lot of coffee\n\nBy thomwolf" ]
fe8bdba33903ab9490da4170b1159a3e8e8860bc
# My great dataset
thomwolf/very-test-dataset
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-09-17T11:11:26+00:00
[]
[]
TAGS #region-us
# My great dataset
[ "# My great dataset" ]
[ "TAGS\n#region-us \n", "# My great dataset" ]
37efd20748c667376df719d997b83ac9ecf1c3e6
Temporary files, used as a personal cloud drive.
tianxing1994/temp
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-07-07T12:08:08+00:00
[]
[]
TAGS #region-us
Temporary files, used as a personal cloud drive.
[]
[ "TAGS\n#region-us \n" ]
a472740248f3c7ff1b6f2cb2938215d9f2df4c11
# Dataset Card for GitHub Issues ## Dataset Summary GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets repository. It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond.
toddmorrill/github-issues
[ "task_categories:text-classification", "task_categories:text-retrieval", "task_ids:multi-class-classification", "task_ids:multi-label-classification", "task_ids:document-retrieval", "annotations_creators:no-annotation", "multilinguality:monolingual", "size_categories:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": [], "language": ["'en-US'"], "license": [], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["text-classification", "text-retrieval"], "task_ids": ["multi-class-classification", "multi-label-classification", "document-retrieval"], "pretty_name": "Hugging Face Github Issues"}
2022-10-25T08:56:49+00:00
[]
[ "'en-US'" ]
TAGS #task_categories-text-classification #task_categories-text-retrieval #task_ids-multi-class-classification #task_ids-multi-label-classification #task_ids-document-retrieval #annotations_creators-no-annotation #multilinguality-monolingual #size_categories-unknown #region-us
# Dataset Card for GitHub Issues ## Dataset Summary GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the Datasets repository. It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond.
[ "# Dataset Card for GitHub Issues", "## Dataset Summary\nGitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the Datasets repository. It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond." ]
[ "TAGS\n#task_categories-text-classification #task_categories-text-retrieval #task_ids-multi-class-classification #task_ids-multi-label-classification #task_ids-document-retrieval #annotations_creators-no-annotation #multilinguality-monolingual #size_categories-unknown #region-us \n", "# Dataset Card for GitHub Issues", "## Dataset Summary\nGitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the Datasets repository. It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond." ]
20278feeecc9421d90704ea04ce2e455e76e3ba9
# Dataset Card for CrowdSpeech ## Dataset Description - **Repository:** [GitHub](https://github.com/Toloka/CrowdSpeech) - **Paper:** [Paper](https://openreview.net/forum?id=3_hgF1NAXU7) - **Point of Contact:** [email protected] ### Dataset Summary CrowdSpeech is the first publicly available large-scale dataset of crowdsourced audio transcriptions. The dataset was constructed by annotation [LibriSpeech](https://www.openslr.org/12) on [Toloka crowdsourcing platform](https://toloka.ai). CrowdSpeech consists of 22K instances having around 155K annotations obtained from crowd workers. ### Supported Tasks and Leaderboards Aggregation of crowd transcriptions. ### Languages English ## Dataset Structure ### Data Instances A data instance contains a url to the audio recording, a list of transcriptions along with the corresponding performers identifiers and ground truth. For each data instance, seven crowdsourced transcriptions are provided. ``` {'task': 'https://tlk.s3.yandex.net/annotation_tasks/librispeech/train-clean/0.mp3', 'transcriptions': "had laid before her a pair of alternatives now of course you're completely your own mistress and are as free as the bird on the bough i don't mean you were not so before but you're at present on a different footing | had laid before her a pair of alternatives now of course you are completely your own mistress and are as free as the bird on the bowl i don't mean you were not so before but you were present on a different footing | had laid before her a pair of alternatives now of course you're completely your own mistress and are as free as the bird on the bow i don't mean you are not so before but you're at present on a different footing | had laid before her a pair of alternatives now of course you're completely your own mistress and are as free as the bird on the bow i don't mean you are not so before but you're at present on a different footing | laid before her a pair of alternativesnow of course you're completely your own mistress and are as free as the bird on the bow i don't mean you're not so before but you're at present on a different footing | had laid before her a peril alternatives now of course your completely your own mistress and as free as a bird as the back bowl i don't mean you were not so before but you are present on a different footing | a lady before her a pair of alternatives now of course you're completely your own mistress and rs free as the bird on the ball i don't need you or not so before but you're at present on a different footing", 'performers': '1154 | 3449 | 3097 | 461 | 3519 | 920 | 3660', 'gt': "had laid before her a pair of alternatives now of course you're completely your own mistress and are as free as the bird on the bough i don't mean you were not so before but you're at present on a different footing"} ``` ### Data Fields * task: a string containing a url of the audio recording * transcriptions: a list of the crowdsourced transcriptions separated by '|' * performers: the corresponding performers' identifiers. * gt: ground truth transcription ### Data Splits There are five splits in the data: train, test, test.other, dev.clean and dev.other. Splits train, test and dev.clean correspond to *clean* part of LibriSpeech that contains audio recordings of higher quality with accents of the speaker being closer to the US English. Splits dev.other and test.other correspond to *other* part of LibriSpeech with the recordings more challenging for recognition. The audio recordings are gender-balanced. 
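A minimal sketch of unpacking one instance into per-worker transcriptions; the pipe-with-spaces separator is an assumption read off the example instance above, and the instance below is abbreviated rather than copied from the dataset.

```python
# Sketch: split a CrowdSpeech instance into (worker id, transcription) pairs.
# The " | " separator is assumed from the example instance shown above.
instance = {
    "task": "https://tlk.s3.yandex.net/annotation_tasks/librispeech/train-clean/0.mp3",
    "transcriptions": "first crowd hypothesis | second crowd hypothesis | third crowd hypothesis",
    "performers": "1154 | 3449 | 3097",
    "gt": "reference transcription",
}

hypotheses = [t.strip() for t in instance["transcriptions"].split("|")]
workers = [w.strip() for w in instance["performers"].split("|")]
for worker, text in zip(workers, hypotheses):
    print(worker, "->", text)
```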
## Dataset Creation ### Source Data [LibriSpeech](https://www.openslr.org/12) is a corpus of approximately 1000 hours of 16kHz read English speech. ### Annotations Annotation was done on [Toloka crowdsourcing platform](https://toloka.ai) with overlap of 7 (that is, each task was performed by 7 annotators). Only annotators who self-reported the knowledge of English had access to the annotation task. Additionally, annotators had to pass *Entrance Exam*. For this, we ask all incoming eligible workers to annotate ten audio recordings. We then compute our target metric — Word Error Rate (WER) — on these recordings and accept to the main task all workers who achieve WER of 40% or less (the smaller the value of the metric, the higher the quality of annotation). The Toloka crowdsourcing platform associates workers with unique identifiers and returns these identifiers to the requester. To further protect the data, we additionally encode each identifier with an integer that is eventually reported in our released datasets. See more details in the [paper](https://arxiv.org/pdf/2107.01091.pdf). ### Citation Information ``` @inproceedings{CrowdSpeech, author = {Pavlichenko, Nikita and Stelmakh, Ivan and Ustalov, Dmitry}, title = {{CrowdSpeech and Vox~DIY: Benchmark Dataset for Crowdsourced Audio Transcription}}, year = {2021}, booktitle = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks}, eprint = {2107.01091}, eprinttype = {arxiv}, eprintclass = {cs.SD}, url = {https://openreview.net/forum?id=3_hgF1NAXU7}, language = {english}, pubstate = {forthcoming}, } ```
toloka/CrowdSpeech
[ "task_categories:summarization", "task_categories:automatic-speech-recognition", "task_categories:text2text-generation", "annotations_creators:found", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:en", "license:cc-by-4.0", "conditional-text-generation", "stuctured-to-text", "speech-recognition", "arxiv:2107.01091", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["summarization", "automatic-speech-recognition", "text2text-generation"], "task_ids": [], "paperswithcode_id": "crowdspeech", "pretty_name": "CrowdSpeech", "language_bcp47": ["en-US"], "tags": ["conditional-text-generation", "stuctured-to-text", "speech-recognition"]}
2022-12-06T15:24:36+00:00
[ "2107.01091" ]
[ "en" ]
TAGS #task_categories-summarization #task_categories-automatic-speech-recognition #task_categories-text2text-generation #annotations_creators-found #language_creators-crowdsourced #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #license-cc-by-4.0 #conditional-text-generation #stuctured-to-text #speech-recognition #arxiv-2107.01091 #region-us
# Dataset Card for CrowdSpeech ## Dataset Description - Repository: GitHub - Paper: Paper - Point of Contact: research@URL ### Dataset Summary CrowdSpeech is the first publicly available large-scale dataset of crowdsourced audio transcriptions. The dataset was constructed by annotation LibriSpeech on Toloka crowdsourcing platform. CrowdSpeech consists of 22K instances having around 155K annotations obtained from crowd workers. ### Supported Tasks and Leaderboards Aggregation of crowd transcriptions. ### Languages English ## Dataset Structure ### Data Instances A data instance contains a url to the audio recording, a list of transcriptions along with the corresponding performers identifiers and ground truth. For each data instance, seven crowdsourced transcriptions are provided. ### Data Fields * task: a string containing a url of the audio recording * transcriptions: a list of the crowdsourced transcriptions separated by '|' * performers: the corresponding performers' identifiers. * gt: ground truth transcription ### Data Splits There are five splits in the data: train, test, URL, URL and URL. Splits train, test and URL correspond to *clean* part of LibriSpeech that contains audio recordings of higher quality with accents of the speaker being closer to the US English. Splits URL and URL correspond to *other* part of LibriSpeech with the recordings more challenging for recognition. The audio recordings are gender-balanced. ## Dataset Creation ### Source Data LibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech. ### Annotations Annotation was done on Toloka crowdsourcing platform with overlap of 7 (that is, each task was performed by 7 annotators). Only annotators who self-reported the knowledge of English had access to the annotation task. Additionally, annotators had to pass *Entrance Exam*. For this, we ask all incoming eligible workers to annotate ten audio recordings. We then compute our target metric — Word Error Rate (WER) — on these recordings and accept to the main task all workers who achieve WER of 40% or less (the smaller the value of the metric, the higher the quality of annotation). The Toloka crowdsourcing platform associates workers with unique identifiers and returns these identifiers to the requester. To further protect the data, we additionally encode each identifier with an integer that is eventually reported in our released datasets. See more details in the paper.
[ "# Dataset Card for CrowdSpeech", "## Dataset Description\n- Repository: GitHub\n- Paper: Paper\n- Point of Contact: research@URL", "### Dataset Summary\n\nCrowdSpeech is the first publicly available large-scale dataset of crowdsourced audio transcriptions.\nThe dataset was constructed by annotation LibriSpeech on Toloka crowdsourcing platform.\nCrowdSpeech consists of 22K instances having around 155K annotations obtained from crowd workers.", "### Supported Tasks and Leaderboards\nAggregation of crowd transcriptions.", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances\n\nA data instance contains a url to the audio recording, a list of transcriptions along with the corresponding performers identifiers and ground truth.\nFor each data instance, seven crowdsourced transcriptions are provided.", "### Data Fields\n\n* task: a string containing a url of the audio recording\n* transcriptions: a list of the crowdsourced transcriptions separated by '|'\n* performers: the corresponding performers' identifiers.\n* gt: ground truth transcription", "### Data Splits\n\nThere are five splits in the data: train, test, URL, URL and URL.\nSplits train, test and URL correspond to *clean* part of LibriSpeech that contains audio recordings of higher quality with accents \nof the speaker being closer to the US English. Splits URL and URL correspond to *other* part of LibriSpeech with \nthe recordings more challenging for recognition. The audio recordings are gender-balanced.", "## Dataset Creation", "### Source Data\n\nLibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech.", "### Annotations\n\nAnnotation was done on Toloka crowdsourcing platform with overlap of 7 (that is, each task was performed by 7 annotators).\n\nOnly annotators who self-reported the knowledge of English had access to the annotation task.\nAdditionally, annotators had to pass *Entrance Exam*. For this, we ask all incoming eligible workers to annotate ten audio\nrecordings. We then compute our target metric — Word Error Rate (WER) — on these recordings and accept to the main task all workers \nwho achieve WER of 40% or less (the smaller the value of the metric, the higher the quality of annotation).\n\nThe Toloka crowdsourcing platform associates workers with unique identifiers and returns these identifiers to the requester. \nTo further protect the data, we additionally encode each identifier with an integer that is eventually reported in our released datasets.\n\nSee more details in the paper." ]
[ "TAGS\n#task_categories-summarization #task_categories-automatic-speech-recognition #task_categories-text2text-generation #annotations_creators-found #language_creators-crowdsourced #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #license-cc-by-4.0 #conditional-text-generation #stuctured-to-text #speech-recognition #arxiv-2107.01091 #region-us \n", "# Dataset Card for CrowdSpeech", "## Dataset Description\n- Repository: GitHub\n- Paper: Paper\n- Point of Contact: research@URL", "### Dataset Summary\n\nCrowdSpeech is the first publicly available large-scale dataset of crowdsourced audio transcriptions.\nThe dataset was constructed by annotation LibriSpeech on Toloka crowdsourcing platform.\nCrowdSpeech consists of 22K instances having around 155K annotations obtained from crowd workers.", "### Supported Tasks and Leaderboards\nAggregation of crowd transcriptions.", "### Languages\nEnglish", "## Dataset Structure", "### Data Instances\n\nA data instance contains a url to the audio recording, a list of transcriptions along with the corresponding performers identifiers and ground truth.\nFor each data instance, seven crowdsourced transcriptions are provided.", "### Data Fields\n\n* task: a string containing a url of the audio recording\n* transcriptions: a list of the crowdsourced transcriptions separated by '|'\n* performers: the corresponding performers' identifiers.\n* gt: ground truth transcription", "### Data Splits\n\nThere are five splits in the data: train, test, URL, URL and URL.\nSplits train, test and URL correspond to *clean* part of LibriSpeech that contains audio recordings of higher quality with accents \nof the speaker being closer to the US English. Splits URL and URL correspond to *other* part of LibriSpeech with \nthe recordings more challenging for recognition. The audio recordings are gender-balanced.", "## Dataset Creation", "### Source Data\n\nLibriSpeech is a corpus of approximately 1000 hours of 16kHz read English speech.", "### Annotations\n\nAnnotation was done on Toloka crowdsourcing platform with overlap of 7 (that is, each task was performed by 7 annotators).\n\nOnly annotators who self-reported the knowledge of English had access to the annotation task.\nAdditionally, annotators had to pass *Entrance Exam*. For this, we ask all incoming eligible workers to annotate ten audio\nrecordings. We then compute our target metric — Word Error Rate (WER) — on these recordings and accept to the main task all workers \nwho achieve WER of 40% or less (the smaller the value of the metric, the higher the quality of annotation).\n\nThe Toloka crowdsourcing platform associates workers with unique identifiers and returns these identifiers to the requester. \nTo further protect the data, we additionally encode each identifier with an integer that is eventually reported in our released datasets.\n\nSee more details in the paper." ]
0849275e6db59dd3b66c68ca63d848d55cd897f8
# Dataset Card for VoxDIY RusNews ## Dataset Description - **Repository:** [GitHub](https://github.com/Toloka/CrowdSpeech) - **Paper:** [Paper](https://openreview.net/forum?id=3_hgF1NAXU7) - **Point of Contact:** [email protected] ### Dataset Summary VoxDIY RusNews is the first publicly available large-scale dataset of crowdsourced audio transcriptions in Russian language. The dataset was constructed by annotating audio recordings of Russian sentences from news domain on [Toloka crowdsourcing platform](https://toloka.ai). VoxDIY RusNews consists of 3091 instances having around 21K annotations obtained from crowd workers. ### Supported Tasks and Leaderboards Aggregation of crowd transcriptions. ### Languages Russian ## Dataset Structure ### Data Instances A data instance contains a url to the audio recording, a list of transcriptions along with the corresponding performers identifiers and ground truth. For each data instance, seven crowdsourced transcriptions are provided. ``` {'task': 'https://tlk.s3.yandex.net/annotation_tasks/russian/1003.wav', 'transcriptions': 'в список так же попали мэрлин монро джон ленон и альберт эйнштейн | в список также попали мерлин монро джон ленон и альберт энштейн | в список также попали мерилин монро джон леннон и альберт энтштейн | в список также попали мэрилин монро джон леннон и альберт эпштейн | в список также попали мэрилин монро джон леннон и альберт эйнштейн | в список так же попали мерелин монро джон ленон и альберт нштейн | в список также попали мэрилин монро джон леннон и альберт эйнштейн', 'performers': '1743 | 784 | 1014 | 1572 | 744 | 2187 | 1208', 'gt': 'в список также попали мэрилин монро джон леннон и альберт эйнштейн'} ``` ### Data Fields * task: a string containing a url of the audio recording * transcriptions: a list of the crowdsourced transcriptions separated by '|' * performers: the corresponding performers' identifiers. * gt: ground truth transcription ## Dataset Creation ### Source Data The audio recordings were obtained using a [speech synthesis tool](https://cloud.yandex.com/en-ru/services/speechkit). The source sentences come from the Russian test set of the machine translation shared task executed as a part of the Eights and Ninth Workshops on Statistical Machine Translation ([WMT 2013](https://www.statmt.org/wmt13/) and [WMT 2014](https://www.statmt.org/wmt14/)). ### Annotations Annotation was done on [Toloka crowdsourcing platform](https://toloka.ai) with overlap of 7 (that is, each task was performed by 7 annotators). Only annotators who self-reported the knowledge of Russian had access to the annotation task. Additionally, annotators had to pass *Entrance Exam*. For this, we ask all incoming eligible workers to annotate ten audio recordings. We then compute our target metric — Word Error Rate (WER) — on these recordings and accept to the main task all workers who achieve WER of 40% or less (the smaller the value of the metric, the higher the quality of annotation). The Toloka crowdsourcing platform associates workers with unique identifiers and returns these identifiers to the requester. To further protect the data, we additionally encode each identifier with an integer that is eventually reported in our released datasets. See more details in the [paper](https://arxiv.org/pdf/2107.01091.pdf). 
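The entrance-exam filter described above hinges on Word Error Rate; a hedged sketch of that check follows. It is a plain word-level edit distance with the 40% threshold, not the exact scoring code used by the dataset authors, and the sample strings are shortened variants of the instance shown earlier.

```python
# Sketch: word-level WER via dynamic-programming edit distance, with the
# acceptance threshold (WER <= 0.4) mentioned in the annotation setup above.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

error_rate = wer("в список также попали мэрилин монро джон леннон",
                 "в список так же попали мерлин монро джон ленон")
print(round(error_rate, 2), "accepted:", error_rate <= 0.4)
```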
### Citation Information ``` @inproceedings{CrowdSpeech, author = {Pavlichenko, Nikita and Stelmakh, Ivan and Ustalov, Dmitry}, title = {{CrowdSpeech and Vox~DIY: Benchmark Dataset for Crowdsourced Audio Transcription}}, year = {2021}, booktitle = {Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks}, eprint = {2107.01091}, eprinttype = {arxiv}, eprintclass = {cs.SD}, url = {https://openreview.net/forum?id=3_hgF1NAXU7}, language = {english}, pubstate = {forthcoming}, } ```
toloka/VoxDIY-RusNews
[ "task_categories:summarization", "task_categories:automatic-speech-recognition", "task_categories:text2text-generation", "annotations_creators:found", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:ru", "license:cc-by-4.0", "conditional-text-generation", "stuctured-to-text", "speech-recognition", "arxiv:2107.01091", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["crowdsourced"], "language": ["ru"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["summarization", "automatic-speech-recognition", "text2text-generation"], "task_ids": [], "pretty_name": "VoxDIY RusNews", "language_bcp47": ["ru-RU"], "tags": ["conditional-text-generation", "stuctured-to-text", "speech-recognition"]}
2022-12-06T15:24:30+00:00
[ "2107.01091" ]
[ "ru" ]
TAGS #task_categories-summarization #task_categories-automatic-speech-recognition #task_categories-text2text-generation #annotations_creators-found #language_creators-crowdsourced #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-Russian #license-cc-by-4.0 #conditional-text-generation #stuctured-to-text #speech-recognition #arxiv-2107.01091 #region-us
# Dataset Card for VoxDIY RusNews ## Dataset Description - Repository: GitHub - Paper: Paper - Point of Contact: research@URL ### Dataset Summary VoxDIY RusNews is the first publicly available large-scale dataset of crowdsourced audio transcriptions in Russian language. The dataset was constructed by annotating audio recordings of Russian sentences from news domain on Toloka crowdsourcing platform. VoxDIY RusNews consists of 3091 instances having around 21K annotations obtained from crowd workers. ### Supported Tasks and Leaderboards Aggregation of crowd transcriptions. ### Languages Russian ## Dataset Structure ### Data Instances A data instance contains a url to the audio recording, a list of transcriptions along with the corresponding performers identifiers and ground truth. For each data instance, seven crowdsourced transcriptions are provided. ### Data Fields * task: a string containing a url of the audio recording * transcriptions: a list of the crowdsourced transcriptions separated by '|' * performers: the corresponding performers' identifiers. * gt: ground truth transcription ## Dataset Creation ### Source Data The audio recordings were obtained using a speech synthesis tool. The source sentences come from the Russian test set of the machine translation shared task executed as a part of the Eights and Ninth Workshops on Statistical Machine Translation (WMT 2013 and WMT 2014). ### Annotations Annotation was done on Toloka crowdsourcing platform with overlap of 7 (that is, each task was performed by 7 annotators). Only annotators who self-reported the knowledge of Russian had access to the annotation task. Additionally, annotators had to pass *Entrance Exam*. For this, we ask all incoming eligible workers to annotate ten audio recordings. We then compute our target metric — Word Error Rate (WER) — on these recordings and accept to the main task all workers who achieve WER of 40% or less (the smaller the value of the metric, the higher the quality of annotation). The Toloka crowdsourcing platform associates workers with unique identifiers and returns these identifiers to the requester. To further protect the data, we additionally encode each identifier with an integer that is eventually reported in our released datasets. See more details in the paper.
[ "# Dataset Card for VoxDIY RusNews", "## Dataset Description\n- Repository: GitHub\n- Paper: Paper\n- Point of Contact: research@URL", "### Dataset Summary\n\nVoxDIY RusNews is the first publicly available large-scale dataset of crowdsourced audio transcriptions in Russian language.\nThe dataset was constructed by annotating audio recordings of Russian sentences from news domain on Toloka crowdsourcing platform.\nVoxDIY RusNews consists of 3091 instances having around 21K annotations obtained from crowd workers.", "### Supported Tasks and Leaderboards\nAggregation of crowd transcriptions.", "### Languages\nRussian", "## Dataset Structure", "### Data Instances\n\nA data instance contains a url to the audio recording, a list of transcriptions along with the corresponding performers identifiers and \nground truth. For each data instance, seven crowdsourced transcriptions are provided.", "### Data Fields\n\n* task: a string containing a url of the audio recording\n* transcriptions: a list of the crowdsourced transcriptions separated by '|'\n* performers: the corresponding performers' identifiers.\n* gt: ground truth transcription", "## Dataset Creation", "### Source Data\n\nThe audio recordings were obtained using a speech synthesis tool.\nThe source sentences come from the Russian test set of the machine translation shared task executed as a part of the \nEights and Ninth Workshops on Statistical Machine Translation (WMT 2013 and WMT 2014).", "### Annotations\n\nAnnotation was done on Toloka crowdsourcing platform with overlap of 7 (that is, each task was performed by 7 annotators).\n\nOnly annotators who self-reported the knowledge of Russian had access to the annotation task.\nAdditionally, annotators had to pass *Entrance Exam*. For this, we ask all incoming eligible workers to annotate ten audio\nrecordings. We then compute our target metric — Word Error Rate (WER) — on these recordings and accept to the main task all workers \nwho achieve WER of 40% or less (the smaller the value of the metric, the higher the quality of annotation).\n\nThe Toloka crowdsourcing platform associates workers with unique identifiers and returns these identifiers to the requester. \nTo further protect the data, we additionally encode each identifier with an integer that is eventually reported in our released datasets.\n\nSee more details in the paper." ]
[ "TAGS\n#task_categories-summarization #task_categories-automatic-speech-recognition #task_categories-text2text-generation #annotations_creators-found #language_creators-crowdsourced #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-Russian #license-cc-by-4.0 #conditional-text-generation #stuctured-to-text #speech-recognition #arxiv-2107.01091 #region-us \n", "# Dataset Card for VoxDIY RusNews", "## Dataset Description\n- Repository: GitHub\n- Paper: Paper\n- Point of Contact: research@URL", "### Dataset Summary\n\nVoxDIY RusNews is the first publicly available large-scale dataset of crowdsourced audio transcriptions in Russian language.\nThe dataset was constructed by annotating audio recordings of Russian sentences from news domain on Toloka crowdsourcing platform.\nVoxDIY RusNews consists of 3091 instances having around 21K annotations obtained from crowd workers.", "### Supported Tasks and Leaderboards\nAggregation of crowd transcriptions.", "### Languages\nRussian", "## Dataset Structure", "### Data Instances\n\nA data instance contains a url to the audio recording, a list of transcriptions along with the corresponding performers identifiers and \nground truth. For each data instance, seven crowdsourced transcriptions are provided.", "### Data Fields\n\n* task: a string containing a url of the audio recording\n* transcriptions: a list of the crowdsourced transcriptions separated by '|'\n* performers: the corresponding performers' identifiers.\n* gt: ground truth transcription", "## Dataset Creation", "### Source Data\n\nThe audio recordings were obtained using a speech synthesis tool.\nThe source sentences come from the Russian test set of the machine translation shared task executed as a part of the \nEights and Ninth Workshops on Statistical Machine Translation (WMT 2013 and WMT 2014).", "### Annotations\n\nAnnotation was done on Toloka crowdsourcing platform with overlap of 7 (that is, each task was performed by 7 annotators).\n\nOnly annotators who self-reported the knowledge of Russian had access to the annotation task.\nAdditionally, annotators had to pass *Entrance Exam*. For this, we ask all incoming eligible workers to annotate ten audio\nrecordings. We then compute our target metric — Word Error Rate (WER) — on these recordings and accept to the main task all workers \nwho achieve WER of 40% or less (the smaller the value of the metric, the higher the quality of annotation).\n\nThe Toloka crowdsourcing platform associates workers with unique identifiers and returns these identifiers to the requester. \nTo further protect the data, we additionally encode each identifier with an integer that is eventually reported in our released datasets.\n\nSee more details in the paper." ]
5f1671c0b6836490847ec9e06fefd00f24a8a794
[Needs More Information] # Dataset Card for common_voice ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://commonvoice.mozilla.org/en/datasets - **Repository:** https://github.com/common-voice/common-voice - **Paper:** https://commonvoice.mozilla.org/en/datasets - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary The Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 9,283 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help train the accuracy of speech recognition engines. The dataset currently consists of 7,335 validated hours in 60 languages, but were always adding more voices and languages. Take a look at our Languages page to request a language or start contributing. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages English ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields [Needs More Information] ### Data Splits [Needs More Information] ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. ## Considerations for Using the Data ### Social Impact of Dataset The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed]
tommy19970714/common_voice
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-02-27T06:51:27+00:00
[]
[]
TAGS #region-us
# Dataset Card for common_voice ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: ### Dataset Summary The Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 9,283 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help train the accuracy of speech recognition engines. The dataset currently consists of 7,335 validated hours in 60 languages, but were always adding more voices and languages. Take a look at our Languages page to request a language or start contributing. ### Supported Tasks and Leaderboards ### Languages English ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. ## Considerations for Using the Data ### Social Impact of Dataset The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset. ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information
[ "# Dataset Card for common_voice", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThe Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 9,283 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help train the accuracy of speech recognition engines.\n\nThe dataset currently consists of 7,335 validated hours in 60 languages, but were always adding more voices and languages. Take a look at our Languages page to request a language or start contributing.", "### Supported Tasks and Leaderboards", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information" ]
[ "TAGS\n#region-us \n", "# Dataset Card for common_voice", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThe Common Voice dataset consists of a unique MP3 and corresponding text file. Many of the 9,283 recorded hours in the dataset also include demographic metadata like age, sex, and accent that can help train the accuracy of speech recognition engines.\n\nThe dataset currently consists of 7,335 validated hours in 60 languages, but were always adding more voices and languages. Take a look at our Languages page to request a language or start contributing.", "### Supported Tasks and Leaderboards", "### Languages\n\nEnglish", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information" ]
0933803eb0f5956b2da9d2d7b6805fa31b18a6c8
# CodeParrot Dataset This is the train split of the CodeParrot dataset. It contains Python files used to train the code generation model in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb). See the [full dataset](https://huggingface.co/datasets/transformersbook/codeparrot) for more information.
transformersbook/codeparrot-train
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-02-05T16:23:03+00:00
[]
[]
TAGS #region-us
# CodeParrot Dataset This is the train split of the CodeParrot dataset. It contains Python files used to train the code generation model in Chapter 10: Training Transformers from Scratch in the NLP with Transformers book. You can find the full code in the accompanying Github repository. See the full dataset for more information.
[ "# CodeParrot Dataset \n\nThis is the train split of the CodeParrot dataset. It contains Python files used to train the code generation model in Chapter 10: Training Transformers from Scratch in the NLP with Transformers book. You can find the full code in the accompanying Github repository.\n\nSee the full dataset for more information." ]
[ "TAGS\n#region-us \n", "# CodeParrot Dataset \n\nThis is the train split of the CodeParrot dataset. It contains Python files used to train the code generation model in Chapter 10: Training Transformers from Scratch in the NLP with Transformers book. You can find the full code in the accompanying Github repository.\n\nSee the full dataset for more information." ]
08cfe185552c1c9c8880d696862feb52506458fc
# CodeParrot Dataset This is the validation split of the CodeParrot dataset. It contains Python files used to train the code generation model in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb). See the [full dataset](https://huggingface.co/datasets/transformersbook/codeparrot) for more information.
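The train and validation data above are published as two separate dataset repositories rather than as splits of a single dataset, so loading them together looks roughly like the sketch below. The dataset IDs come from the two cards; the `train` split name, the streaming flag, and the `content` column name are assumptions carried over from the full dataset's BigQuery schema.

```python
from datasets import load_dataset

# The CodeParrot train and validation data live in two separate dataset repos,
# so each one is loaded on its own. Streaming avoids downloading everything up front.
train_ds = load_dataset("transformersbook/codeparrot-train", split="train", streaming=True)
valid_ds = load_dataset("transformersbook/codeparrot-valid", split="train", streaming=True)

# Peek at one raw Python file from the training stream.
# The "content" column mirrors the BigQuery export described in the full
# dataset's card below; adjust if the actual schema differs.
example = next(iter(train_ds))
print(sorted(example.keys()))
print(example.get("content", "")[:200])
```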
transformersbook/codeparrot-valid
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-02-05T16:23:18+00:00
[]
[]
TAGS #region-us
# CodeParrot Dataset This is the validation split of the CodeParrot dataset. It contains Python files used to train the code generation model in Chapter 10: Training Transformers from Scratch in the NLP with Transformers book. You can find the full code in the accompanying Github repository. See the full dataset for more information.
[ "# CodeParrot Dataset \n\nThis is the validation split of the CodeParrot dataset. It contains Python files used to train the code generation model in Chapter 10: Training Transformers from Scratch in the NLP with Transformers book. You can find the full code in the accompanying Github repository.\n\nSee the full dataset for more information." ]
[ "TAGS\n#region-us \n", "# CodeParrot Dataset \n\nThis is the validation split of the CodeParrot dataset. It contains Python files used to train the code generation model in Chapter 10: Training Transformers from Scratch in the NLP with Transformers book. You can find the full code in the accompanying Github repository.\n\nSee the full dataset for more information." ]
1525880546992f12c04c5ae4cf5c4d1e80ca04a4
# CodeParrot 🦜 Dataset

## What is it?

This is the full CodeParrot dataset. It contains Python files used to train the code generation model in Chapter 10: Training Transformers from Scratch in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/10_transformers-from-scratch.ipynb).

## Creation

It was created with the GitHub dataset available via Google's BigQuery. It contains approximately 22 million Python files and totals 180 GB (50 GB compressed). The SQL query used to create the dataset is the following:

```sql
SELECT
  f.repo_name, f.path, c.copies, c.size, c.content, l.license
FROM
  `bigquery-public-data.github_repos.files` AS f
JOIN
  `bigquery-public-data.github_repos.contents` AS c ON f.id = c.id
JOIN
  `bigquery-public-data.github_repos.licenses` AS l ON f.repo_name = l.repo_name
WHERE
  NOT c.binary
  AND ((f.path LIKE '%.py') AND (c.size BETWEEN 1024 AND 1048575))
```

## Duplication

Note that about 70% of the dataset consists of duplicated files. If you use the dataset, make sure to handle the duplicates appropriately. See [codeparrot-clean](https://huggingface.co/datasets/lvwerra/codeparrot-clean) for a deduplicated version of this dataset.
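Since roughly 70% of the files are duplicates, a common first step is exact deduplication on file contents. The sketch below is only illustrative and is not the procedure used to build codeparrot-clean: it streams the dataset and keeps the first occurrence of each MD5 content hash. The `train` split name and the streaming flag are assumptions; the column names follow the BigQuery query above.

```python
import hashlib
from datasets import load_dataset

# Stream the full dataset so the ~180 GB corpus never has to fit on disk or in memory at once.
ds = load_dataset("transformersbook/codeparrot", split="train", streaming=True)

seen_hashes = set()

def is_first_occurrence(example):
    """Keep a file only the first time its exact content appears."""
    digest = hashlib.md5(example["content"].encode("utf-8")).hexdigest()
    if digest in seen_hashes:
        return False
    seen_hashes.add(digest)
    return True

# Lazily drop exact duplicates while iterating.
# Note: over all ~22 million files the hash set itself grows large; a full run would
# use a more memory-efficient structure or near-deduplication (see codeparrot-clean).
unique_files = filter(is_first_occurrence, ds)

for i, example in enumerate(unique_files):
    if i >= 3:  # just peek at a few unique files
        break
    print(example["repo_name"], example["path"])
```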
transformersbook/codeparrot
[ "python", "code", "region:us" ]
2022-03-02T23:29:22+00:00
{"tags": ["python", "code"]}
2022-02-05T16:15:40+00:00
[]
[]
TAGS #python #code #region-us
# CodeParrot Dataset ## What is it? This is the full CodeParrot dataset. It contains Python files used to train the code generation model in Chapter 10: Training Transformers from Scratch in the NLP with Transformers book. You can find the full code in the accompanying Github repository. ## Creation It was created with the GitHub dataset available via Google's BigQuery. It contains approximately 22 million Python files and is 180 GB (50 GB compressed) big. The SQL query to create the dataset is the following: ## Duplication Note that about 70% of the dataset is duplicated. If you use the dataset make sure to deal with them appropriately. See codeparrot-clean for a deduplicated version of this dataset.
[ "# CodeParrot Dataset", "## What is it?\n\nThis is the full CodeParrot dataset. It contains Python files used to train the code generation model in Chapter 10: Training Transformers from Scratch in the NLP with Transformers book. You can find the full code in the accompanying Github repository.", "## Creation\n\nIt was created with the GitHub dataset available via Google's BigQuery. It contains approximately 22 million Python files and is 180 GB (50 GB compressed) big. The SQL query to create the dataset is the following:", "## Duplication\nNote that about 70% of the dataset is duplicated. If you use the dataset make sure to deal with them appropriately. See codeparrot-clean for a deduplicated version of this dataset." ]
[ "TAGS\n#python #code #region-us \n", "# CodeParrot Dataset", "## What is it?\n\nThis is the full CodeParrot dataset. It contains Python files used to train the code generation model in Chapter 10: Training Transformers from Scratch in the NLP with Transformers book. You can find the full code in the accompanying Github repository.", "## Creation\n\nIt was created with the GitHub dataset available via Google's BigQuery. It contains approximately 22 million Python files and is 180 GB (50 GB compressed) big. The SQL query to create the dataset is the following:", "## Duplication\nNote that about 70% of the dataset is duplicated. If you use the dataset make sure to deal with them appropriately. See codeparrot-clean for a deduplicated version of this dataset." ]
940fa0c860325e199aeaa8d6dbfceec69f13a7e8
# Dataset Card for [TuringBench] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/TuringBench/TuringBench - **Repository:** https://github.com/TuringBench/TuringBench - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@TuringBench](https://github.com/TuringBench) for adding this dataset.
turingbench/TuringBench
[ "task_categories:text-classification", "task_ids:multi-class-classification", "annotations_creators:found", "language_creators:found", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:original", "language:en", "license:apache-2.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["found", "machine-generated"], "language": ["en"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"]}
2022-10-25T08:56:51+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-found #language_creators-found #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #license-apache-2.0 #region-us
# Dataset Card for [TuringBench] ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @TuringBench for adding this dataset.
[ "# Dataset Card for [TuringBench]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @TuringBench for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #annotations_creators-found #language_creators-found #language_creators-machine-generated #multilinguality-monolingual #size_categories-unknown #source_datasets-original #language-English #license-apache-2.0 #region-us \n", "# Dataset Card for [TuringBench]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @TuringBench for adding this dataset." ]
253bb695d8e78f96a95589f19ba54bddbd080c8a
https://teacher.desmos.com/activitybuilder/teacherguide/6040a7a7da1fce0c129ae5be https://teacher.desmos.com/activitybuilder/teacherguide/604249659240440d25a27d0c https://teacher.desmos.com/activitybuilder/teacherguide/604249a365ecd40d30b4ad18 https://teacher.desmos.com/activitybuilder/teacherguide/604249e2cfb0a20d51e13768 https://teacher.desmos.com/activitybuilder/teacherguide/60424a1c9240440d25a27e22 https://teacher.desmos.com/activitybuilder/teacherguide/60424a58cefbd00d5da96390 https://teacher.desmos.com/activitybuilder/teacherguide/60424a90229a7d0cfb807295 https://teacher.desmos.com/activitybuilder/teacherguide/60424ad532e0730c4bdcbbab https://teacher.desmos.com/activitybuilder/teacherguide/60424b0f1d780b0b7395f36d https://teacher.desmos.com/activitybuilder/teacherguide/60424c01534b110d262d4d46 https://teacher.desmos.com/activitybuilder/teacherguide/60424c47969a440d13c62ffb https://teacher.desmos.com/activitybuilder/teacherguide/60424cd7f17f6b0d4550c269 https://teacher.desmos.com/activitybuilder/teacherguide/60424d0dcfb0a20d51e13c97 https://teacher.desmos.com/activitybuilder/teacherguide/60424d5796540a0cf95ff215 https://teacher.desmos.com/activitybuilder/teacherguide/60424d9163a2220bc4c8f2be https://teacher.desmos.com/activitybuilder/teacherguide/60424e030d98a80d53856ab2 https://teacher.desmos.com/activitybuilder/teacherguide/60424e37ed488c0cfbbaab2f
uasoyasser/rgfes
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-03-05T15:42:19+00:00
[]
[]
TAGS #region-us
URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL
[]
[ "TAGS\n#region-us \n" ]
c65a95699c29e1374686b4bfe041dd3d31e89a24
## Dataset Description - **Homepage:** http://hatespeech.berkeley.edu - **Paper:** https://arxiv.org/abs/2009.10277 # Dataset card for _Measuring Hate Speech_ This is a public release of the dataset described in Kennedy et al. (2020) and Sachdeva et al. (2022), consisting of 39,565 comments annotated by 7,912 annotators, for 135,556 combined rows. The primary outcome variable is the "hate speech score" but the 10 constituent ordinal labels (sentiment, (dis)respect, insult, humiliation, inferior status, violence, dehumanization, genocide, attack/defense, hate speech benchmark) can also be treated as outcomes. Includes 8 target identity groups (race/ethnicity, religion, national origin/citizenship, gender, sexual orientation, age, disability, political ideology) and 42 target identity subgroups, as well as 6 annotator demographics and 40 subgroups. The hate speech score incorporates an IRT adjustment by estimating variation in annotator interpretation of the labeling guidelines. This dataset card is a work in progress and will be improved over time. ## Key dataset columns * hate_speech_score - continuous hate speech measure, where higher = more hateful and lower = less hateful. > 0.5 is approximately hate speech, < -1 is counter or supportive speech, and -1 to +0.5 is neutral or ambiguous. * text - lightly processed text of a social media post * comment\_id - unique ID for each comment * annotator\_id - unique ID for each annotator * sentiment - ordinal label that is combined into the continuous score * respect - ordinal label that is combined into the continuous score * insult - ordinal label that is combined into the continuous score * humiliate - ordinal label that is combined into the continuous score * status - ordinal label that is combined into the continuous score * dehumanize - ordinal label that is combined into the continuous score * violence - ordinal label that is combined into the continuous score * genocide - ordinal label that is combined into the continuous score * attack\_defend - ordinal label that is combined into the continuous score * hatespeech - ordinal label that is combined into the continuous score * annotator_severity - annotator's estimated survey interpretation bias ## Code to download The dataset can be downloaded using the following python code: ```python import datasets dataset = datasets.load_dataset('ucberkeley-dlab/measuring-hate-speech', 'binary') df = dataset['train'].to_pandas() df.describe() ``` ## Citation ``` @article{kennedy2020constructing, title={Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application}, author={Kennedy, Chris J and Bacon, Geoff and Sahn, Alexander and von Vacano, Claudia}, journal={arXiv preprint arXiv:2009.10277}, year={2020} } ``` ## Contributions Dataset curated by [@ck37](https://github.com/ck37), [@pssachdeva](https://github.com/pssachdeva), et al. ## References Kennedy, C. J., Bacon, G., Sahn, A., & von Vacano, C. (2020). [Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application](https://arxiv.org/abs/2009.10277). arXiv preprint arXiv:2009.10277. Pratik Sachdeva, Renata Barreto, Geoff Bacon, Alexander Sahn, Claudia von Vacano, and Chris Kennedy. 2022. [The Measuring Hate Speech Corpus: Leveraging Rasch Measurement Theory for Data Perspectivism](https://aclanthology.org/2022.nlperspectives-1.11/). 
In *Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022*, pages 83–94, Marseille, France. European Language Resources Association.
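As a follow-up to the download snippet above, the score bands described for `hate_speech_score` (roughly > 0.5 hate speech, < -1 counter or supportive speech, in between neutral or ambiguous) can be applied directly to the resulting dataframe. This is a minimal sketch; the `score_band` column and the binning function are not part of the original release.

```python
import datasets

# Load the corpus exactly as shown in the card and convert to a dataframe.
dataset = datasets.load_dataset('ucberkeley-dlab/measuring-hate-speech', 'binary')
df = dataset['train'].to_pandas()

def bin_score(score):
    """Map hate_speech_score to the coarse bands described in the card."""
    if score > 0.5:
        return "hate speech"
    if score < -1:
        return "counter or supportive speech"
    return "neutral or ambiguous"

df["score_band"] = df["hate_speech_score"].apply(bin_score)
print(df["score_band"].value_counts())
```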
ucberkeley-dlab/measuring-hate-speech
[ "task_categories:text-classification", "task_ids:hate-speech-detection", "task_ids:sentiment-classification", "task_ids:multi-label-classification", "annotations_creators:crowdsourced", "multilinguality:monolingual", "source_datasets:original", "language:en", "license:cc-by-4.0", "arxiv:2009.10277", "counterspeech", "hate-speech", "text-regression", "irt", "arxiv:2009.10277", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["hate-speech-detection", "sentiment-classification", "multi-label-classification"], "pretty_name": "measuring-hate-speech", "tags": ["arxiv:2009.10277", "counterspeech", "hate-speech", "text-regression", "irt"]}
2022-11-15T15:44:31+00:00
[ "2009.10277", "2009.10277" ]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-hate-speech-detection #task_ids-sentiment-classification #task_ids-multi-label-classification #annotations_creators-crowdsourced #multilinguality-monolingual #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2009.10277 #counterspeech #hate-speech #text-regression #irt #arxiv-2009.10277 #region-us
## Dataset Description - Homepage: URL - Paper: URL # Dataset card for _Measuring Hate Speech_ This is a public release of the dataset described in Kennedy et al. (2020) and Sachdeva et al. (2022), consisting of 39,565 comments annotated by 7,912 annotators, for 135,556 combined rows. The primary outcome variable is the "hate speech score" but the 10 constituent ordinal labels (sentiment, (dis)respect, insult, humiliation, inferior status, violence, dehumanization, genocide, attack/defense, hate speech benchmark) can also be treated as outcomes. Includes 8 target identity groups (race/ethnicity, religion, national origin/citizenship, gender, sexual orientation, age, disability, political ideology) and 42 target identity subgroups, as well as 6 annotator demographics and 40 subgroups. The hate speech score incorporates an IRT adjustment by estimating variation in annotator interpretation of the labeling guidelines. This dataset card is a work in progress and will be improved over time. ## Key dataset columns * hate_speech_score - continuous hate speech measure, where higher = more hateful and lower = less hateful. > 0.5 is approximately hate speech, < -1 is counter or supportive speech, and -1 to +0.5 is neutral or ambiguous. * text - lightly processed text of a social media post * comment\_id - unique ID for each comment * annotator\_id - unique ID for each annotator * sentiment - ordinal label that is combined into the continuous score * respect - ordinal label that is combined into the continuous score * insult - ordinal label that is combined into the continuous score * humiliate - ordinal label that is combined into the continuous score * status - ordinal label that is combined into the continuous score * dehumanize - ordinal label that is combined into the continuous score * violence - ordinal label that is combined into the continuous score * genocide - ordinal label that is combined into the continuous score * attack\_defend - ordinal label that is combined into the continuous score * hatespeech - ordinal label that is combined into the continuous score * annotator_severity - annotator's estimated survey interpretation bias ## Code to download The dataset can be downloaded using the following python code: ## Contributions Dataset curated by @ck37, @pssachdeva, et al. ## References Kennedy, C. J., Bacon, G., Sahn, A., & von Vacano, C. (2020). Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application. arXiv preprint arXiv:2009.10277. Pratik Sachdeva, Renata Barreto, Geoff Bacon, Alexander Sahn, Claudia von Vacano, and Chris Kennedy. 2022. The Measuring Hate Speech Corpus: Leveraging Rasch Measurement Theory for Data Perspectivism. In *Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022*, pages 83–94, Marseille, France. European Language Resources Association.
[ "## Dataset Description\n\n- Homepage: URL\n- Paper: URL", "# Dataset card for _Measuring Hate Speech_\n\nThis is a public release of the dataset described in Kennedy et al. (2020) and Sachdeva et al. (2022), consisting of 39,565 comments annotated by 7,912 annotators, for 135,556 combined rows. The primary outcome variable is the \"hate speech score\" but the 10 constituent ordinal labels (sentiment, (dis)respect, insult, humiliation, inferior status, violence, dehumanization, genocide, attack/defense, hate speech benchmark) can also be treated as outcomes. Includes 8 target identity groups (race/ethnicity, religion, national origin/citizenship, gender, sexual orientation, age, disability, political ideology) and 42 target identity subgroups, as well as 6 annotator demographics and 40 subgroups. The hate speech score incorporates an IRT adjustment by estimating variation in annotator interpretation of the labeling guidelines.\n\nThis dataset card is a work in progress and will be improved over time.", "## Key dataset columns\n\n * hate_speech_score - continuous hate speech measure, where higher = more hateful and lower = less hateful. > 0.5 is approximately hate speech, < -1 is counter or supportive speech, and -1 to +0.5 is neutral or ambiguous.\n * text - lightly processed text of a social media post\n * comment\\_id - unique ID for each comment\n * annotator\\_id - unique ID for each annotator\n * sentiment - ordinal label that is combined into the continuous score\n * respect - ordinal label that is combined into the continuous score\n * insult - ordinal label that is combined into the continuous score\n * humiliate - ordinal label that is combined into the continuous score\n * status - ordinal label that is combined into the continuous score\n * dehumanize - ordinal label that is combined into the continuous score\n * violence - ordinal label that is combined into the continuous score\n * genocide - ordinal label that is combined into the continuous score\n * attack\\_defend - ordinal label that is combined into the continuous score\n * hatespeech - ordinal label that is combined into the continuous score\n * annotator_severity - annotator's estimated survey interpretation bias", "## Code to download\n\nThe dataset can be downloaded using the following python code:", "## Contributions\n\nDataset curated by @ck37, @pssachdeva, et al.", "## References\n\nKennedy, C. J., Bacon, G., Sahn, A., & von Vacano, C. (2020). Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application. arXiv preprint arXiv:2009.10277.\n\nPratik Sachdeva, Renata Barreto, Geoff Bacon, Alexander Sahn, Claudia von Vacano, and Chris Kennedy. 2022. The Measuring Hate Speech Corpus: Leveraging Rasch Measurement Theory for Data Perspectivism. In *Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022*, pages 83–94, Marseille, France. European Language Resources Association." ]
[ "TAGS\n#task_categories-text-classification #task_ids-hate-speech-detection #task_ids-sentiment-classification #task_ids-multi-label-classification #annotations_creators-crowdsourced #multilinguality-monolingual #source_datasets-original #language-English #license-cc-by-4.0 #arxiv-2009.10277 #counterspeech #hate-speech #text-regression #irt #arxiv-2009.10277 #region-us \n", "## Dataset Description\n\n- Homepage: URL\n- Paper: URL", "# Dataset card for _Measuring Hate Speech_\n\nThis is a public release of the dataset described in Kennedy et al. (2020) and Sachdeva et al. (2022), consisting of 39,565 comments annotated by 7,912 annotators, for 135,556 combined rows. The primary outcome variable is the \"hate speech score\" but the 10 constituent ordinal labels (sentiment, (dis)respect, insult, humiliation, inferior status, violence, dehumanization, genocide, attack/defense, hate speech benchmark) can also be treated as outcomes. Includes 8 target identity groups (race/ethnicity, religion, national origin/citizenship, gender, sexual orientation, age, disability, political ideology) and 42 target identity subgroups, as well as 6 annotator demographics and 40 subgroups. The hate speech score incorporates an IRT adjustment by estimating variation in annotator interpretation of the labeling guidelines.\n\nThis dataset card is a work in progress and will be improved over time.", "## Key dataset columns\n\n * hate_speech_score - continuous hate speech measure, where higher = more hateful and lower = less hateful. > 0.5 is approximately hate speech, < -1 is counter or supportive speech, and -1 to +0.5 is neutral or ambiguous.\n * text - lightly processed text of a social media post\n * comment\\_id - unique ID for each comment\n * annotator\\_id - unique ID for each annotator\n * sentiment - ordinal label that is combined into the continuous score\n * respect - ordinal label that is combined into the continuous score\n * insult - ordinal label that is combined into the continuous score\n * humiliate - ordinal label that is combined into the continuous score\n * status - ordinal label that is combined into the continuous score\n * dehumanize - ordinal label that is combined into the continuous score\n * violence - ordinal label that is combined into the continuous score\n * genocide - ordinal label that is combined into the continuous score\n * attack\\_defend - ordinal label that is combined into the continuous score\n * hatespeech - ordinal label that is combined into the continuous score\n * annotator_severity - annotator's estimated survey interpretation bias", "## Code to download\n\nThe dataset can be downloaded using the following python code:", "## Contributions\n\nDataset curated by @ck37, @pssachdeva, et al.", "## References\n\nKennedy, C. J., Bacon, G., Sahn, A., & von Vacano, C. (2020). Constructing interval variables via faceted Rasch measurement and multitask deep learning: a hate speech application. arXiv preprint arXiv:2009.10277.\n\nPratik Sachdeva, Renata Barreto, Geoff Bacon, Alexander Sahn, Claudia von Vacano, and Chris Kennedy. 2022. The Measuring Hate Speech Corpus: Leveraging Rasch Measurement Theory for Data Perspectivism. In *Proceedings of the 1st Workshop on Perspectivist Approaches to NLP @LREC2022*, pages 83–94, Marseille, France. European Language Resources Association." ]
7b56c6cb1c9c8523249f407044c838660df3811a
# Dataset Card for Vietnamese Students’ Feedback Corpus

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://sites.google.com/uit.edu.vn/uit-nlp/datasets-projects#h.p_4Brw8L-cbfTe
- **Repository:**
- **Paper:** [UIT-VSFC: Vietnamese Students’ Feedback Corpus for Sentiment Analysis](https://www.researchgate.net/publication/329645066_UIT-VSFC_Vietnamese_Students'_Feedback_Corpus_for_Sentiment_Analysis)
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

Students’ feedback is a vital resource for interdisciplinary research that combines two different fields: sentiment analysis and education.

The Vietnamese Students’ Feedback Corpus (UIT-VSFC) is a resource consisting of over 16,000 sentences that are human-annotated for two different tasks: sentiment-based and topic-based classification.

To assess the quality of our corpus, we measure annotator agreement and classification performance on the UIT-VSFC corpus. As a result, we obtained inter-annotator agreements of over 91% for sentiments and over 71% for topics. In addition, we built a baseline model with the Maximum Entropy classifier and achieved approximately 88% sentiment F1-score and over 84% topic F1-score.

### Supported Tasks and Leaderboards

[More Information Needed]

### Languages

The language of the dataset text is Vietnamese (`vi`).

## Dataset Structure

### Data Instances

An instance example:

```
{
  'sentence': 'slide giáo trình đầy đủ .',
  'sentiment': 2,
  'topic': 1
}
```

### Data Fields

- `sentence` (str): Text sentence.
- `sentiment`: Sentiment class, with values 0 (negative), 1 (neutral) and 2 (positive).
- `topic`: Topic class, with values 0 (lecturer), 1 (training_program), 2 (facility) and 3 (others).

### Data Splits

The dataset is split into train, validation and test sets.

|                    | Train | Validation | Test |
|--------------------|------:|-----------:|-----:|
| Number of examples | 11426 |       1583 | 3166 |

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Unknown. ### Citation Information ``` @InProceedings{8573337, author={Nguyen, Kiet Van and Nguyen, Vu Duc and Nguyen, Phu X. V. and Truong, Tham T. H. and Nguyen, Ngan Luu-Thuy}, booktitle={2018 10th International Conference on Knowledge and Systems Engineering (KSE)}, title={UIT-VSFC: Vietnamese Students’ Feedback Corpus for Sentiment Analysis}, year={2018}, volume={}, number={}, pages={19-24}, doi={10.1109/KSE.2018.8573337} } ``` ### Contributions Thanks to [@albertvillanova](https://github.com/albertvillanova) for adding this dataset.
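A minimal sketch of loading the corpus and mapping the integer labels back to their names, based on the Data Fields and Data Splits sections above. The label-name lists mirror the card; the exact split keys exposed by the loader are an assumption.

```python
from datasets import load_dataset

# Split names follow the card's Data Splits table; adjust if the repository differs.
dataset = load_dataset("uitnlp/vietnamese_students_feedback")

# Label names taken from the Data Fields section of the card.
SENTIMENT_NAMES = ["negative", "neutral", "positive"]
TOPIC_NAMES = ["lecturer", "training_program", "facility", "others"]

example = dataset["train"][0]
print(example["sentence"])
print("sentiment:", SENTIMENT_NAMES[example["sentiment"]])
print("topic:", TOPIC_NAMES[example["topic"]])
```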
uitnlp/vietnamese_students_feedback
[ "task_categories:text-classification", "task_ids:sentiment-classification", "task_ids:topic-classification", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:vi", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["vi"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification", "topic-classification"], "pretty_name": "Vietnamese Students\u2019 Feedback Corpus"}
2022-10-13T14:39:37+00:00
[]
[ "vi" ]
TAGS #task_categories-text-classification #task_ids-sentiment-classification #task_ids-topic-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Vietnamese #license-unknown #region-us
Dataset Card for Vietnamese Students’ Feedback Corpus ===================================================== Table of Contents ----------------- * Table of Contents * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: * Paper: UIT-VSFC: Vietnamese Students’ Feedback Corpus for Sentiment Analysis * Leaderboard: * Point of Contact: ### Dataset Summary Students’ feedback is a vital resource for the interdisciplinary research involving the combining of two different research fields between sentiment analysis and education. Vietnamese Students’ Feedback Corpus (UIT-VSFC) is the resource consists of over 16,000 sentences which are human-annotated with two different tasks: sentiment-based and topic-based classifications. To assess the quality of our corpus, we measure the annotator agreements and classification evaluation on the UIT-VSFC corpus. As a result, we obtained the inter-annotator agreement of sentiments and topics with more than over 91% and 71% respectively. In addition, we built the baseline model with the Maximum Entropy classifier and achieved approximately 88% of the sentiment F1-score and over 84% of the topic F1-score. ### Supported Tasks and Leaderboards ### Languages The language of the dataset text sentence is Vietnamese ('vi'). Dataset Structure ----------------- ### Data Instances An instance example: ### Data Fields * 'sentence' (str): Text sentence. * 'sentiment': Sentiment class, with values 0 (negative), 1 (neutral) and 2 (positive). * 'topic': Topic class, with values 0 (lecturer), 1 (training\_program), 2 (facility) and 3 (others). ### Data Splits The dataset is split in train, validation and test. Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information Unknown. ### Contributions Thanks to @albertvillanova for adding this dataset.
[ "### Dataset Summary\n\n\nStudents’ feedback is a vital resource for the interdisciplinary research involving the combining of two different\nresearch fields between sentiment analysis and education.\n\n\nVietnamese Students’ Feedback Corpus (UIT-VSFC) is the resource consists of over 16,000 sentences which are\nhuman-annotated with two different tasks: sentiment-based and topic-based classifications.\n\n\nTo assess the quality of our corpus, we measure the annotator agreements and classification evaluation on the\nUIT-VSFC corpus. As a result, we obtained the inter-annotator agreement of sentiments and topics with more than over\n91% and 71% respectively. In addition, we built the baseline model with the Maximum Entropy classifier and achieved\napproximately 88% of the sentiment F1-score and over 84% of the topic F1-score.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe language of the dataset text sentence is Vietnamese ('vi').\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn instance example:", "### Data Fields\n\n\n* 'sentence' (str): Text sentence.\n* 'sentiment': Sentiment class, with values 0 (negative), 1 (neutral) and 2 (positive).\n* 'topic': Topic class, with values 0 (lecturer), 1 (training\\_program), 2 (facility) and 3 (others).", "### Data Splits\n\n\nThe dataset is split in train, validation and test.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nUnknown.", "### Contributions\n\n\nThanks to @albertvillanova for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #task_ids-topic-classification #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Vietnamese #license-unknown #region-us \n", "### Dataset Summary\n\n\nStudents’ feedback is a vital resource for the interdisciplinary research involving the combining of two different\nresearch fields between sentiment analysis and education.\n\n\nVietnamese Students’ Feedback Corpus (UIT-VSFC) is the resource consists of over 16,000 sentences which are\nhuman-annotated with two different tasks: sentiment-based and topic-based classifications.\n\n\nTo assess the quality of our corpus, we measure the annotator agreements and classification evaluation on the\nUIT-VSFC corpus. As a result, we obtained the inter-annotator agreement of sentiments and topics with more than over\n91% and 71% respectively. In addition, we built the baseline model with the Maximum Entropy classifier and achieved\napproximately 88% of the sentiment F1-score and over 84% of the topic F1-score.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nThe language of the dataset text sentence is Vietnamese ('vi').\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn instance example:", "### Data Fields\n\n\n* 'sentence' (str): Text sentence.\n* 'sentiment': Sentiment class, with values 0 (negative), 1 (neutral) and 2 (positive).\n* 'topic': Topic class, with values 0 (lecturer), 1 (training\\_program), 2 (facility) and 3 (others).", "### Data Splits\n\n\nThe dataset is split in train, validation and test.\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nUnknown.", "### Contributions\n\n\nThanks to @albertvillanova for adding this dataset." ]
8b1b4e10a63160594dcf6bd4ad38e06d1fc3bef8
# Dataset Summary **mMARCO** is a multilingual version of the [MS MARCO passage ranking dataset](https://microsoft.github.io/msmarco/). For more information, checkout our papers: * [**mMARCO: A Multilingual Version of the MS MARCO Passage Ranking Dataset**](https://arxiv.org/abs/2108.13897) * [**A cost-benefit analysis of cross-lingual transfer methods**](https://arxiv.org/abs/2105.06813) The first (deprecated) version comprises 8 languages: Chinese, French, German, Indonesian, Italian, Portuguese, Russian and Spanish. The current version included translations for Japanese, Dutch, Vietnamese, Hindi and Arabic. The current version is composed of 14 languages (including the original English version). ### Supported languages | Language name | Language code | |---------------|---------------| | English | english | | Chinese | chinese | | French | french | | German | german | | Indonesian | indonesian | | Italian | italian | | Portuguese | portuguese | | Russian | russian | | Spanish | spanish | | Arabic | arabic | | Dutch | dutch | | Hindi | hindi | | Japanese | japanese | | Vietnamese | vietnamese | # Dataset Structure You can load mMARCO dataset by choosing a specific language. We include training triples (query, positive and negative example), the translated collections of documents and queries. #### Training triples ```python >>> dataset = load_dataset('unicamp-dl/mmarco', 'english') >>> dataset['train'][1] {'query': 'what fruit is native to australia', 'positive': 'Passiflora herbertiana. A rare passion fruit native to Australia. Fruits are green-skinned, white fleshed, with an unknown edible rating. Some sources list the fruit as edible, sweet and tasty, while others list the fruits as being bitter and inedible.assiflora herbertiana. A rare passion fruit native to Australia. Fruits are green-skinned, white fleshed, with an unknown edible rating. Some sources list the fruit as edible, sweet and tasty, while others list the fruits as being bitter and inedible.', 'negative': 'The kola nut is the fruit of the kola tree, a genus (Cola) of trees that are native to the tropical rainforests of Africa.'} ``` #### Queries ```python >>> dataset = load_dataset('unicamp-dl/mmarco', 'queries-spanish') >>> dataset['train'][1] {'id': 634306, 'text': '¿Qué significa Chattel en el historial de crédito'} ``` #### Collection ```python >>> dataset = load_dataset('unicamp-dl/mmarco', 'collection-portuguese') >>> dataset['collection'][100] {'id': 100, 'text': 'Antonín Dvorák (1841-1904) Antonin Dvorak era filho de açougueiro, mas ele não seguiu o negócio de seu pai. Enquanto ajudava seu pai a meio tempo, estudou música e se formou na Escola de Órgãos de Praga em 1859.'} ``` # Citation Information ``` @misc{bonifacio2021mmarco, title={mMARCO: A Multilingual Version of MS MARCO Passage Ranking Dataset}, author={Luiz Henrique Bonifacio and Vitor Jeronymo and Hugo Queiroz Abonizio and Israel Campiotti and Marzieh Fadaee and and Roberto Lotufo and Rodrigo Nogueira}, year={2021}, eprint={2108.13897}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
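A further sketch, not from the original paper: expanding the (query, positive, negative) training triples into labeled query-passage pairs, as one would for training a reranker. The `portuguese` config name follows the language table above (only `english` is shown explicitly in the examples), and the streaming flag is an assumption to avoid downloading the full triples file.

```python
from datasets import load_dataset

# Config names are assumed to follow the language-code table in the card.
triples = load_dataset('unicamp-dl/mmarco', 'portuguese', split='train', streaming=True)

def to_pairs(example):
    # Expand one (query, positive, negative) triple into two labeled pairs for a reranker.
    return [
        {"query": example["query"], "passage": example["positive"], "label": 1},
        {"query": example["query"], "passage": example["negative"], "label": 0},
    ]

first = next(iter(triples))
for pair in to_pairs(first):
    print(pair["label"], pair["query"][:50], "->", pair["passage"][:60])
```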
unicamp-dl/mmarco
[ "arxiv:2108.13897", "arxiv:2105.06813", "region:us" ]
2022-03-02T23:29:22+00:00
{}
2024-02-05T09:48:44+00:00
[ "2108.13897", "2105.06813" ]
[]
TAGS #arxiv-2108.13897 #arxiv-2105.06813 #region-us
Dataset Summary =============== mMARCO is a multilingual version of the MS MARCO passage ranking dataset. For more information, checkout our papers: * mMARCO: A Multilingual Version of the MS MARCO Passage Ranking Dataset * A cost-benefit analysis of cross-lingual transfer methods The first (deprecated) version comprises 8 languages: Chinese, French, German, Indonesian, Italian, Portuguese, Russian and Spanish. The current version included translations for Japanese, Dutch, Vietnamese, Hindi and Arabic. The current version is composed of 14 languages (including the original English version). ### Supported languages Dataset Structure ================= You can load mMARCO dataset by choosing a specific language. We include training triples (query, positive and negative example), the translated collections of documents and queries. #### Training triples #### Queries #### Collection
[ "### Supported languages\n\n\n\nDataset Structure\n=================\n\n\nYou can load mMARCO dataset by choosing a specific language. We include training triples (query, positive and negative example), the translated collections of documents and queries.", "#### Training triples", "#### Queries", "#### Collection" ]
[ "TAGS\n#arxiv-2108.13897 #arxiv-2105.06813 #region-us \n", "### Supported languages\n\n\n\nDataset Structure\n=================\n\n\nYou can load mMARCO dataset by choosing a specific language. We include training triples (query, positive and negative example), the translated collections of documents and queries.", "#### Training triples", "#### Queries", "#### Collection" ]
fda452a7fbfd9550db2f78d9d98e6b3ec16734df
# Dataset Summary

**mRobust** is a multilingual version of the [TREC 2004 Robust passage ranking dataset](https://trec.nist.gov/data/robust/04.guidelines.html).
For more information, check out our papers:
* [**mRobust04: A Multilingual Version of the TREC Robust 2004 Benchmark**](https://arxiv.org/abs/2209.13738)
* [**A cost-benefit analysis of cross-lingual transfer methods**](https://arxiv.org/abs/2105.06813)

The current version is composed of 10 languages: Chinese, French, German, Indonesian, Italian, Portuguese, Russian, Spanish, Dutch and Vietnamese.

### Supported languages

| Language name | Language code |
|---------------|---------------|
| English       | english       |
| Chinese       | chinese       |
| French        | french        |
| German        | german        |
| Indonesian    | indonesian    |
| Italian       | italian       |
| Portuguese    | portuguese    |
| Russian       | russian       |
| Spanish       | spanish       |
| Dutch         | dutch         |
| Vietnamese    | vietnamese    |

# Dataset Structure

You can load the mRobust dataset by choosing a specific language. We include the translated collections of documents and queries.

#### Queries

```python
>>> dataset = load_dataset('unicamp-dl/mrobust', 'queries-spanish')
>>> dataset['queries'][1]
{'id': '302', 'text': '¿Está controlada la enfermedad de la poliomielitis (polio) en el mundo?'}
```

#### Collection

```python
>>> dataset = load_dataset('unicamp-dl/mrobust', 'collection-portuguese')
>>> dataset['collection'][5]
{'id': 'FT931-16660', 'text': '930105 FT 05 JAN 93 / Cenelec: Correção O endereço do Cenelec, Comitê Europeu de Normalização Eletrotécnica, estava incorreto na edição de ontem. É Rue de Stassart 35, B-1050, Bruxelas, Tel (322) 519 6871. CEN, Comitê Europeu de Normalização, está localizado na Rue de Stassart 36, B-1050, Bruxelas, Tel 519 6811.'}
```

# Citation Information

```
@misc{https://doi.org/10.48550/arxiv.2209.13738,
  doi = {10.48550/ARXIV.2209.13738},
  url = {https://arxiv.org/abs/2209.13738},
  author = {Jeronymo, Vitor and Nascimento, Mauricio and Lotufo, Roberto and Nogueira, Rodrigo},
  title = {mRobust04: A Multilingual Version of the TREC Robust 2004 Benchmark},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
unicamp-dl/mrobust
[ "arxiv:2209.13738", "arxiv:2105.06813", "region:us" ]
2022-03-02T23:29:22+00:00
{}
2023-11-23T10:45:25+00:00
[ "2209.13738", "2105.06813" ]
[]
TAGS #arxiv-2209.13738 #arxiv-2105.06813 #region-us
Dataset Summary =============== mRobust is a multilingual version of the TREC 2004 Robust passage ranking dataset. For more information, checkout our papers: * mRobust: A Multilingual Version of the MS MARCO Passage Ranking Dataset * A cost-benefit analysis of cross-lingual transfer methods The current version is composed 10 languages: Chinese, French, German, Indonesian, Italian, Portuguese, Russian, Spanish, Dutch and Vietnamese. ### Supported languages Dataset Structure ================= You can load mRobust dataset by choosing a specific language. We include the translated collections of documents and queries. #### Queries #### Collection
[ "### Supported languages\n\n\n\nDataset Structure\n=================\n\n\nYou can load mRobust dataset by choosing a specific language. We include the translated collections of documents and queries.", "#### Queries", "#### Collection" ]
[ "TAGS\n#arxiv-2209.13738 #arxiv-2105.06813 #region-us \n", "### Supported languages\n\n\n\nDataset Structure\n=================\n\n\nYou can load mRobust dataset by choosing a specific language. We include the translated collections of documents and queries.", "#### Queries", "#### Collection" ]
839230b8ff8b06ae3707e3b4ea418b34600061f5
# Dataset Card for Wiki-Convert

## Table of Contents
- [Dataset Card for Wiki-Convert](#dataset-card-for-wiki-convert)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Repository:** [Github](https://github.com/avi-jit/numeracy-literacy)
- **Paper:** [Anthology](https://aclanthology.org/2021.emnlp-main.557)
- **Point of Contact:** [Avijit Thawani](mailto:[email protected])

### Dataset Summary

Wiki-Convert is a 900,000+ sentence dataset of precise number annotations from English Wikipedia. It relies on Wiki contributors' annotations in the form of a [{{Convert}}](https://en.wikipedia.org/wiki/Template:Convert) template.

### Supported Tasks and Leaderboards

- `sequence-modeling`: The dataset can be used to train a model for language modeling, which consists in predicting words (including numbers) from their surrounding context. Success on this task is typically measured by achieving a low [perplexity](https://huggingface.co/transformers/perplexity.html).

### Languages

The dataset is extracted from English Wikipedia, hence overwhelmingly contains English text.

## Dataset Structure

### Data Instances

Each row in the json file contains metadata about the source Wikipedia sentence, along with annotations for a single number, e.g., `number: 10` in the example below. The annotations are inspired by Numeracy-600K and are in the form of `length` and `offset` from the beginning of the sentence.

```
{
  'id': 1080801,
  'UNIQUE_STORY_INDEX': '1080801',
  'offset': 83,
  'length': 2,
  'magnitude': 0,
  'comment': "Like all Type UB III submarines, UB-117 carried 10 torpedoes and was armed with a  10 cms deck gun. ''",
  'number': 10
}
```

Please refer to https://github.com/avi-jit/numeracy-literacy for more details.

### Data Splits

|                 |  Train  |  Dev   |  Test  |
| --------------- | :-----: | :----: | :----: |
| Input Sentences | 739,583 | 92,447 | 92,449 |

## License

Provided under MIT License.
## Citation ``` @inproceedings{thawani-etal-2021-numeracy, title = "Numeracy enhances the Literacy of Language Models", author = "Thawani, Avijit and Pujara, Jay and Ilievski, Filip", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.557", pages = "6960--6967", abstract = "Specialized number representations in NLP have shown improvements on numerical reasoning tasks like arithmetic word problems and masked number prediction. But humans also use numeracy to make better sense of world concepts, e.g., you can seat 5 people in your {`}room{'} but not 500. Does a better grasp of numbers improve a model{'}s understanding of other concepts and words? This paper studies the effect of using six different number encoders on the task of masked word prediction (MWP), as a proxy for evaluating literacy. To support this investigation, we develop Wiki-Convert, a 900,000 sentence dataset annotated with numbers and units, to avoid conflating nominal and ordinal number occurrences. We find a significant improvement in MWP for sentences containing numbers, that exponent embeddings are the best number encoders, yielding over 2 points jump in prediction accuracy over a BERT baseline, and that these enhanced literacy skills also generalize to contexts without annotated numbers. We release all code at https://git.io/JuZXn.", } ``` Thanks to [@avi-jit](https://github.com/avi-jit) for adding this dataset.
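Given the `offset` and `length` fields, the annotated number span can be recovered directly from the `comment` string. The sketch below uses the instance shown in the Data Instances section above; interpreting `offset` as a 0-based character index is an inference from that example.

```python
# Example instance copied from the card above (note the double space before the second "10").
row = {
    "offset": 83,
    "length": 2,
    "comment": "Like all Type UB III submarines, UB-117 carried 10 torpedoes and was armed with a  10 cms deck gun. ''",
    "number": 10,
}

# Recover the annotated number span by character offset and length
# (0-based indexing into `comment`, inferred from this example).
span = row["comment"][row["offset"]:row["offset"] + row["length"]]
print(span)                        # -> '10'
print(span == str(row["number"]))  # -> True
```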
usc-isi/WikiConvert
[ "task_categories:fill-mask", "task_categories:other", "task_categories:text-generation", "task_ids:language-modeling", "task_ids:masked-language-modeling", "language_creators:found", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:extended|wikipedia", "language:en", "license:mit", "numeracy", "natural-language-understanding", "tokenization", "region:us" ]
2022-03-02T23:29:22+00:00
{"language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["extended|wikipedia"], "task_categories": ["fill-mask", "other", "text-generation"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "Wiki-Convert", "YAML tags": [{}, "found"], "language_bcp47": ["en-US"], "tags": ["numeracy", "natural-language-understanding", "tokenization"]}
2022-10-24T16:40:43+00:00
[]
[ "en" ]
TAGS #task_categories-fill-mask #task_categories-other #task_categories-text-generation #task_ids-language-modeling #task_ids-masked-language-modeling #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|wikipedia #language-English #license-mit #numeracy #natural-language-understanding #tokenization #region-us
Dataset Card Creation Guide =========================== Table of Contents ----------------- * Dataset Card Creation Guide + Table of Contents + Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages + Dataset Structure - Data Instances - Data Fields - Data Splits + Dataset Creation - Curation Rationale - Source Data * Initial Data Collection and Normalization * Who are the source language producers? - Annotations * Annotation process * Who are the annotators? - Personal and Sensitive Information + Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations + Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions Dataset Description ------------------- * Repository: Github * Paper: Anthology * Point of Contact: Avijit Thawani ### Dataset Summary Wiki-Convert is a 900,000+ sentences dataset of precise number annotations from English Wikipedia. It relies on Wiki contributors' annotations in the form of a {{Convert}} template. ### Supported Tasks and Leaderboards * 'sequence-modeling': The dataset can be used to train a model for [Language Mddeling], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a low perplexity. ### Languages The dataset is extracted from English Wikipedia, hence overwhelmingly contains English text. Dataset Structure ----------------- ### Data Instances Each row in the json file contains metadata about the source Wikipedia sentence, along with annotations for a single number, e.g., 'number: 10' in the below example. The annotations are inspired by Numeracy-600K and are in the form of 'length' and 'offset' from the beginning of the sentence. Please refer to URL for more details. ### Data Splits License ------- Provided under MIT License. Thanks to @avi-jit for adding this dataset.
[ "### Dataset Summary\n\n\nWiki-Convert is a 900,000+ sentences dataset of precise number annotations from English Wikipedia. It relies on Wiki contributors' annotations in the form of a {{Convert}} template.", "### Supported Tasks and Leaderboards\n\n\n* 'sequence-modeling': The dataset can be used to train a model for [Language Mddeling], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a low perplexity.", "### Languages\n\n\nThe dataset is extracted from English Wikipedia, hence overwhelmingly contains English text.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nEach row in the json file contains metadata about the source Wikipedia sentence, along with annotations for a single number, e.g., 'number: 10' in the below example. The annotations are inspired by Numeracy-600K and are in the form of 'length' and 'offset' from the beginning of the sentence.\n\n\nPlease refer to URL for more details.", "### Data Splits\n\n\n\nLicense\n-------\n\n\nProvided under MIT License.\n\n\nThanks to @avi-jit for adding this dataset." ]
[ "TAGS\n#task_categories-fill-mask #task_categories-other #task_categories-text-generation #task_ids-language-modeling #task_ids-masked-language-modeling #language_creators-found #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-extended|wikipedia #language-English #license-mit #numeracy #natural-language-understanding #tokenization #region-us \n", "### Dataset Summary\n\n\nWiki-Convert is a 900,000+ sentences dataset of precise number annotations from English Wikipedia. It relies on Wiki contributors' annotations in the form of a {{Convert}} template.", "### Supported Tasks and Leaderboards\n\n\n* 'sequence-modeling': The dataset can be used to train a model for [Language Mddeling], which consists in [TASK DESCRIPTION]. Success on this task is typically measured by achieving a low perplexity.", "### Languages\n\n\nThe dataset is extracted from English Wikipedia, hence overwhelmingly contains English text.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nEach row in the json file contains metadata about the source Wikipedia sentence, along with annotations for a single number, e.g., 'number: 10' in the below example. The annotations are inspired by Numeracy-600K and are in the form of 'length' and 'offset' from the beginning of the sentence.\n\n\nPlease refer to URL for more details.", "### Data Splits\n\n\n\nLicense\n-------\n\n\nProvided under MIT License.\n\n\nThanks to @avi-jit for adding this dataset." ]
ca7a00ac18d9c120c43e240aa1fc1bcec2a854e0
# Preprocessed CANARD Voskarides et al. have trained a Query Resolution Term Classification (QuReTeC) model using the CANARD data set. CANARD is a dataset for question-in-context rewriting that consists of questions each given in a dialog context together with a context-independent rewriting of the question. The context of each question is the dialog utterances that precede the question. CANARD can be used to evaluate question rewriting models that handle important linguistic phenomena such as coreference and ellipsis resolution. QuReTeC is trained to label the relevant terms in the conversation history for the current contextless question. The relevant terms are the terms that occur in both the rewritten question and the history. For example: **History:** \ Where was Bennett born?\ Bennett was born Michael Bennett DiFiglia in Buffalo, New York. When was he born? CANNOTANSWER \ **Current question**: \ Who are his parents? \ **Rewritten question**: \ Who are Michael Bennett's parents? The **gold/relevant terms** from the question history are: michael, bennett ## Data subsets This repository contains the following subsets: - gold_supervision (default): \ the gold terms are the overlapping terms between the question history and the rewritten question. - distant_supervision: \ the gold terms are the overlapping terms between the question history and the passage in which the answer to the question can be found. ## Data structure Each entry contains the following keys: ``` prev_questions: string e.g.: Where was Bennett born? Bennett was born Michael Bennett DiFiglia in Buffalo, New York. When was he born? CANNOTANSWER. cur_question: string e.g.: Who are his parents? gold_terms: string[] e.g.: ["michael", "bennett"] bert_ner_overlap: 2-dimensional array. The first entry lists all the terms and the second one lists the labels for those terms. e.g.: [ ["where", "was", "bennett", "born", "?", "bennett", "was", "born", "michael", "bennett", "difiglia", "in", "buffalo", ",", "new", "york", ".", "when", "was", "he", "born", "?", "cannotanswer", ".", "[SEP]", "who", "are", "his", "parents", "?"], ["O", "O", "REL", "O", "O", "REL", "O", "O", "REL", "REL", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "O", "[SEP]", "O", "O", "O", "O", "O"] ] answer_text_with_window: string. For the 'gold_supervision' subset this field contains the rewritten question, e.g.: Who are Michael Bennett's parents? For the 'distant_supervision' subset this field contains the passage relevant to the question: e.g.: Bennett was born Michael Bennett DiFiglia in Buffalo, New York, the son of Helen (nee Ternoff), a secretary, and Salvatore Joseph DiFiglia, a factory worker. Michael Bennett (theater)'s father was Roman Catholic and Italian American and Michael Bennett (theater)'s mother was Jewish. Michael Bennett (theater) studied dance and ``` # Original authors QuReTeC model from the published SIGIR 2020 paper: Query Resolution for Conversational Search with Limited Supervision by N. Voskarides, D. Li, P. Ren, E. Kanoulas and M. de Rijke. [[pdf]](https://arxiv.org/abs/2005.11723). # Contributions Uploaded by G. Scheuer ([website](https://giguruscheuer.com))
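A minimal loading sketch for the subsets and keys described above is given below. The repository id is the one this card is published under, the configuration name comes from the card, and the split name is an assumption.

```python
from datasets import load_dataset

# Sketch only: configuration name from the card; the split name is assumed.
canard = load_dataset("uva-irlab/canard_quretec", "gold_supervision", split="train")

entry = canard[0]
print(entry["cur_question"])                 # contextless current question
print(entry["gold_terms"])                   # terms to copy over from the history
tokens, labels = entry["bert_ner_overlap"]   # parallel token / label lists
relevant = [t for t, l in zip(tokens, labels) if l == "REL"]
print(relevant)
```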
uva-irlab/canard_quretec
[ "arxiv:2005.11723", "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-06-26T15:33:00+00:00
[ "2005.11723" ]
[]
TAGS #arxiv-2005.11723 #region-us
# Preprocessed CANARD Voskarides et al. have trained a Query Resolution Term Classification (QuReTec) model using the CANARD data set. CANARD is a dataset for question-in-context rewriting that consists of questions each given in a dialog context together with a context-independent rewriting of the question. The context of each question is the dialog utterences that precede the question. CANARD can be used to evaluate question rewriting models that handle important linguistic phenomena such as coreference and ellipsis resolution. QuReTeC is trained to label the relevant terms in the conversation history for the current contextless question. The relevant terms are the terms that occur in both the rewritten question and the history. For example: History: \ Where was Bennett born?\ Bennett was born Michael Bennett DiFiglia in Buffalo, New York. When was he born? CANNOTANSWER \ Current question: \ Who are his parents? \ Rewritten question: \ Who are Michael Bennett's parents? The gold/relevant terms from the question history are: michael, bennett ## Data subsets This repository contains the following subsets: - gold_supervision (default): \ the gold terms are the overlapping terms between the question history from the rewritten question. - distant_supervision: \ the gold terms are the overlapping terms between the question history and the passage in which the answer to question can be found. ## Data structure Each entry contains the following keys: # Original authors QuReTeC model from the published SIGIR 2020 paper: Query Resolution for Conversational Search with Limited Supervision by N. Voskarides, D. Li, P. Ren, E. Kanoulas and M. de Rijke. [[pdf]](URL # Contributions Uploaded by G. Scheuer (website)
[ "# Preprocessed CANARD\nVoskarides et al. have trained a Query Resolution Term Classification (QuReTec) model using the CANARD data set.\n\nCANARD is a dataset for question-in-context rewriting that consists of questions each given in a dialog context together with a context-independent rewriting of the question. The context of each question is the dialog utterences that precede the question. CANARD can be used to evaluate question rewriting models that handle important linguistic phenomena such as coreference and ellipsis resolution.\n\nQuReTeC is trained to label the relevant terms in the conversation history for the current contextless question. The relevant terms are the terms that occur in both the rewritten question and the history. For example:\n\nHistory: \\\nWhere was Bennett born?\\\nBennett was born Michael Bennett DiFiglia in Buffalo, New York. When was he born? CANNOTANSWER \\\nCurrent question: \\\nWho are his parents? \\\nRewritten question: \\\nWho are Michael Bennett's parents?\n\nThe gold/relevant terms from the question history are: michael, bennett", "## Data subsets\nThis repository contains the following subsets:\n- gold_supervision (default): \\\n the gold terms are the overlapping terms between the question history from the rewritten question. \n- distant_supervision: \\\n the gold terms are the overlapping terms between the question history and the passage in which the answer to question can be found.", "## Data structure\nEach entry contains the following keys:", "# Original authors\n\nQuReTeC model from the published SIGIR 2020 paper: Query Resolution for Conversational Search with Limited Supervision by N. Voskarides, D. Li, P. Ren, E. Kanoulas and M. de Rijke. [[pdf]](URL", "# Contributions\n\nUploaded by G. Scheuer (website)" ]
[ "TAGS\n#arxiv-2005.11723 #region-us \n", "# Preprocessed CANARD\nVoskarides et al. have trained a Query Resolution Term Classification (QuReTec) model using the CANARD data set.\n\nCANARD is a dataset for question-in-context rewriting that consists of questions each given in a dialog context together with a context-independent rewriting of the question. The context of each question is the dialog utterences that precede the question. CANARD can be used to evaluate question rewriting models that handle important linguistic phenomena such as coreference and ellipsis resolution.\n\nQuReTeC is trained to label the relevant terms in the conversation history for the current contextless question. The relevant terms are the terms that occur in both the rewritten question and the history. For example:\n\nHistory: \\\nWhere was Bennett born?\\\nBennett was born Michael Bennett DiFiglia in Buffalo, New York. When was he born? CANNOTANSWER \\\nCurrent question: \\\nWho are his parents? \\\nRewritten question: \\\nWho are Michael Bennett's parents?\n\nThe gold/relevant terms from the question history are: michael, bennett", "## Data subsets\nThis repository contains the following subsets:\n- gold_supervision (default): \\\n the gold terms are the overlapping terms between the question history from the rewritten question. \n- distant_supervision: \\\n the gold terms are the overlapping terms between the question history and the passage in which the answer to question can be found.", "## Data structure\nEach entry contains the following keys:", "# Original authors\n\nQuReTeC model from the published SIGIR 2020 paper: Query Resolution for Conversational Search with Limited Supervision by N. Voskarides, D. Li, P. Ren, E. Kanoulas and M. de Rijke. [[pdf]](URL", "# Contributions\n\nUploaded by G. Scheuer (website)" ]
4e82e7eb2b051c93a927d9483afe9741d46d226d
# TREC Cast 2019 [TREC Cast](http://www.treccast.ai) has released a document collection with topics and qrels of which a subset has been annotated such that it is suitable for multi-turn conversational search. ## Dataset statistics - # Passages: 38,426,252 - # Topics: 20 - # Queries: 173 ## Subsets ### CAR + MSMARCO Collection Together CAR and MSMARCO have a size of 6.13 GB, so downloading will take a while. You can use the collection as follows: ```python collection = load_dataset('trec-cast-2019-multi-turn', 'test_collection') ``` The collection has the following data format: ``` docno: str The document id format is [collection_id_paragraph_id] with collection id and paragraph id separated by an underscore. The collection ids are in the set: {MARCO, CAR}. E.g.: CAR_6869dee46ab12f0f7060874f7fc7b1c57d53144a text: str The content of the passage. ``` #### Sample Instead of using the entire dataset, you can also download a sample set containing only 200,000 items: ```python collection = load_dataset('trec-cast-2019-multi-turn', 'test_collection_sample') ``` ### Topics You can get the topics as follows: ```python topics = load_dataset('trec-cast-2019-multi-turn', 'topics') ``` The topics have the following data format: ``` qid: str Query ID of the format "topicId_questionNumber" history: str[] A list of queries. It can be empty for the first question in a topic. query: str The query ``` ### Qrels You can get the qrels as follows: ```python qrels = load_dataset('trec-cast-2019-multi-turn', 'qrels') ``` The qrels have the following data format: ``` qid: str Query ID of the format "topicId_questionNumber" qrels: List[dict] A list of dictionaries with the keys 'docno' and 'relevance'. Relevance is an integer in the range [0, 4] ```
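As a small usage sketch, the snippet below lines up each topic query with its relevance judgements by `qid`. It uses the full repository id rather than the short name shown in the snippets above, and it assumes both configurations load as a single default split.

```python
from datasets import load_dataset

# Assumption: the 'topics' and 'qrels' configs each expose a single default split.
topics = load_dataset("uva-irlab/trec-cast-2019-multi-turn", "topics", split="train")
qrels = load_dataset("uva-irlab/trec-cast-2019-multi-turn", "qrels", split="train")

# Map each query id to its list of {'docno', 'relevance'} judgements.
judgements = {row["qid"]: row["qrels"] for row in qrels}

for topic in topics:
    relevant = [j["docno"] for j in judgements.get(topic["qid"], []) if j["relevance"] > 0]
    print(topic["qid"], topic["query"], len(relevant))
```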
uva-irlab/trec-cast-2019-multi-turn
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "multilinguality:monolingual", "size_categories:10M<n<100M", "language:en", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "language_bcp47": ["en-US"]}
2022-10-25T08:56:59+00:00
[]
[ "en" ]
TAGS #task_categories-text-retrieval #task_ids-document-retrieval #multilinguality-monolingual #size_categories-10M<n<100M #language-English #region-us
# TREC Cast 2019 TREC Cast have released a document collection with topics and qrels of which a subset has been annotated such that it is suitable for multi-turn conversational search. ## Dataset statistics - # Passages: 38,426,252 - # Topics: 20 - # Queries: 173 ## Subsets ### CAR + MSMARCO Collection Together CAR and MSMARCO have a size of 6,13G, so downloading will take a while. You can use the collection as followed: The collection has the following data format: #### Sample Instead of using the entire data set, you can also download a sample set containing only 200,000 items: ### Topics You can get the topics as followed: The topics have the following dataformat: ### Qrels You can get the qrels as followed: The qrels have the following data format:
[ "# TREC Cast 2019 \n\nTREC Cast have released a document collection with topics and qrels of which a subset has been annotated such that it is suitable for multi-turn conversational search.", "## Dataset statistics\n\n- # Passages: 38,426,252\n- # Topics: 20\n- # Queries: 173", "## Subsets", "### CAR + MSMARCO Collection\nTogether CAR and MSMARCO have a size of 6,13G, so downloading will take a while. You can use the collection as followed:\n\n\nThe collection has the following data format:", "#### Sample\nInstead of using the entire data set, you can also download a sample set containing only 200,000 items:", "### Topics\nYou can get the topics as followed:\n\n\nThe topics have the following dataformat:", "### Qrels\nYou can get the qrels as followed:\n\n\nThe qrels have the following data format:" ]
[ "TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #multilinguality-monolingual #size_categories-10M<n<100M #language-English #region-us \n", "# TREC Cast 2019 \n\nTREC Cast have released a document collection with topics and qrels of which a subset has been annotated such that it is suitable for multi-turn conversational search.", "## Dataset statistics\n\n- # Passages: 38,426,252\n- # Topics: 20\n- # Queries: 173", "## Subsets", "### CAR + MSMARCO Collection\nTogether CAR and MSMARCO have a size of 6,13G, so downloading will take a while. You can use the collection as followed:\n\n\nThe collection has the following data format:", "#### Sample\nInstead of using the entire data set, you can also download a sample set containing only 200,000 items:", "### Topics\nYou can get the topics as followed:\n\n\nThe topics have the following dataformat:", "### Qrels\nYou can get the qrels as followed:\n\n\nThe qrels have the following data format:" ]
248f523261b443cb10f619c1f42ae7d1b6895eb0
# Dataset Card for 12-factor ## Table of Contents - [Dataset Description](#dataset-description) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Source Data](#source-data) ## Dataset Description 100+ news article URLs, each scored on 12 different factors and assigned a single score. ## Languages The text in the dataset is in English. ## Source Data The dataset is manually scraped and annotated by Alex.
valurank/12-factor
[ "multilinguality:monolingual", "language:en", "license:other", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "task_categories": ["classification"], "task_ids": ["classification"]}
2022-10-21T12:39:15+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #language-English #license-other #region-us
# Dataset Card for 12-factor ## Table of Contents - Dataset Description - Languages - Dataset Structure - Source Data ## Dataset Description 100+ news article URL scored on 12 different factors and assigned a single score ## Languages The text in the dataset is in English ## Source Data The dataset is manually scraped and annotated by Alex
[ "# Dataset Card for 12-factor", "## Table of Contents\n- Dataset Description\n- Languages\n- Dataset Structure\n- Source Data", "## Dataset Description\n\n100+ news article URL scored on 12 different factors and assigned a single score", "## Languages\n\nThe text in the dataset is in English", "## Source Data\n\nThe dataset is manually scraped and annotated by Alex" ]
[ "TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n", "# Dataset Card for 12-factor", "## Table of Contents\n- Dataset Description\n- Languages\n- Dataset Structure\n- Source Data", "## Dataset Description\n\n100+ news article URL scored on 12 different factors and assigned a single score", "## Languages\n\nThe text in the dataset is in English", "## Source Data\n\nThe dataset is manually scraped and annotated by Alex" ]
59e427a5a2ed756bb9ceb121a41ba595c24c7e9e
# Dataset Card for PoliticalBias ## Table of Contents - [Dataset Description](#dataset-description) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Source Data](#source-data) ## Dataset Description Roughly 8,200 articles written by the website’s editors, each article covering one topic with 3 links that describe the same piece of news from different angles (usually one from the right, one from the left, and one from the center). ## Languages The text in the dataset is in English. ## Dataset Structure The dataset consists of four columns, namely Left, Right, Center, and Main URL. ## Source Data The dataset is scraped from http://allsides.com/
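A rough sketch of consuming the four columns with the standard `csv` module is shown below; the file name `allsides_data.csv` is hypothetical and should be replaced with the actual file shipped in this repository.

```python
import csv

# Hypothetical file name; the column names come from the card.
with open("allsides_data.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        # Each row links three framings of the same story plus the AllSides topic page.
        print(row["Main URL"])
        print("  left:  ", row["Left"])
        print("  center:", row["Center"])
        print("  right: ", row["Right"])
        break
```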
valurank/PoliticalBias
[ "multilinguality:monolingual", "language:en", "license:other", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "task_categories": ["classification"], "task_ids": ["classification"]}
2022-10-21T12:38:13+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #language-English #license-other #region-us
# Dataset Card for PoliticalBias ## Table of Contents - Dataset Description - Languages - Dataset Structure - Source Data ## Dataset Description roughly 8200 articles written by the website’s editors, each article covering one topic with 3 links that describe the same piece of news from different angles (usually one from the right, one from the left, and one from the center) ## Languages The text in the dataset is in English ## Dataset Structure The dataset consists of four columns namely Left, Right, Center, and Main URL ## Source Data The dataset is scrapped from URL
[ "# Dataset Card for PoliticalBias", "## Table of Contents\n- Dataset Description\n- Languages\n- Dataset Structure\n- Source Data", "## Dataset Description\n roughly 8200 articles written by the website’s editors, each article covering one topic with 3 links that describe the same piece of news from different angles (usually one from the right, one from the left, and one from the center)", "## Languages\nThe text in the dataset is in English", "## Dataset Structure\nThe dataset consists of four columns namely Left, Right, Center, and Main URL", "## Source Data\nThe dataset is scrapped from URL" ]
[ "TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n", "# Dataset Card for PoliticalBias", "## Table of Contents\n- Dataset Description\n- Languages\n- Dataset Structure\n- Source Data", "## Dataset Description\n roughly 8200 articles written by the website’s editors, each article covering one topic with 3 links that describe the same piece of news from different angles (usually one from the right, one from the left, and one from the center)", "## Languages\nThe text in the dataset is in English", "## Dataset Structure\nThe dataset consists of four columns namely Left, Right, Center, and Main URL", "## Source Data\nThe dataset is scrapped from URL" ]
6d9846d674e8a4faec1062539fa59cde158dd3c3
# Dataset Card for PoliticalBias_AllSides_Txt ## Table of Contents - [Dataset Description](#dataset-description) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Source Data](#source-data) - [Annotations](#annotations) ## Dataset Description ~20k articles labeled left, right, or center by the editors of allsides.com. ## Languages The text in the dataset is in English. ## Dataset Structure 3 folders, with many text files in each. Each text file represents the body text of one article. ## Source Data URL data was scraped using https://github.com/mozilla/readability ## Annotations Articles were manually annotated by news editors who were attempting to select representative articles from the left, right and center of each article topic. In other words, the dataset should generally be balanced - the left/right/center articles cover the same set of topics, and have roughly the same number of articles in each.
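One plausible way to turn the folder-per-label layout into (text, label) pairs is sketched below; the folder names `left`, `center` and `right` are assumptions, since the card does not spell them out.

```python
from pathlib import Path

# Assumed folder names; each .txt file holds the body text of one article.
root = Path("PoliticalBias_AllSides_Txt")
examples = []
for label_dir in ["left", "center", "right"]:
    for txt_file in (root / label_dir).glob("*.txt"):
        examples.append({"text": txt_file.read_text(encoding="utf-8"), "label": label_dir})

print(len(examples))
```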
valurank/PoliticalBias_AllSides_Txt
[ "multilinguality:monolingual", "language:en", "license:other", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "task_categories": ["classification"], "task_ids": ["classification"]}
2022-10-21T12:37:02+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #language-English #license-other #region-us
# Dataset Card for news-12factor ## Table of Contents - Dataset Description - Languages - Dataset Structure - Source Data - Annotations ## Dataset Description ~20k articles labeled left, right, or center by the editors of URL. ## Languages The text in the dataset is in English ## Dataset Structure 3 folders, with many text files in each. Each text file represent the body text of one article. ## Source Data URL data was scraped using URL ## Annotations Articles were manually annotated by news editors who were attempting to select representative articles from the left, right and center of each article topic. In other words, the dataset should generally be balanced - the left/right/center articles cover the same set of topics, and have roughly the same amount of articles in each.
[ "# Dataset Card for news-12factor", "## Table of Contents\n- Dataset Description\n- Languages\n- Dataset Structure\n- Source Data\n- Annotations", "## Dataset Description\n\n~20k articles labeled left, right, or center by the editors of URL.", "## Languages\n\nThe text in the dataset is in English", "## Dataset Structure\n\n3 folders, with many text files in each. Each text file represent the body text of one article.", "## Source Data\n\nURL data was scraped using URL", "## Annotations\n\nArticles were manually annotated by news editors who were attempting to select representative articles from the left, right and center of each article topic. In other words, the dataset should generally be balanced - the left/right/center articles cover the same set of topics, and have roughly the same amount of articles in each." ]
[ "TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n", "# Dataset Card for news-12factor", "## Table of Contents\n- Dataset Description\n- Languages\n- Dataset Structure\n- Source Data\n- Annotations", "## Dataset Description\n\n~20k articles labeled left, right, or center by the editors of URL.", "## Languages\n\nThe text in the dataset is in English", "## Dataset Structure\n\n3 folders, with many text files in each. Each text file represent the body text of one article.", "## Source Data\n\nURL data was scraped using URL", "## Annotations\n\nArticles were manually annotated by news editors who were attempting to select representative articles from the left, right and center of each article topic. In other words, the dataset should generally be balanced - the left/right/center articles cover the same set of topics, and have roughly the same amount of articles in each." ]
f1f20a3da7b392df34fbb0e6760c45a76a654736
# Dataset Card for PoliticalBias_Sources ## Table of Contents - [Dataset Description](#dataset-description) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Source Data](#source-data) ## Dataset Description 908 rows of data containing the source name of an article, the source bias and the type of source. ## Languages The text in the dataset is in English. ## Dataset Structure The dataset consists of three columns, namely Source Name, Source Bias and Source Type. ## Source Data The dataset is scraped from https://www.allsides.com/media-bias
valurank/PoliticalBias_Sources
[ "multilinguality:monolingual", "language:en", "license:other", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "task_categories": ["classification"], "task_ids": ["classification"]}
2022-10-21T12:34:55+00:00
[]
[ "en" ]
TAGS #multilinguality-monolingual #language-English #license-other #region-us
# Dataset Card for PoliticalBias_Sources ## Table of Contents - Dataset Description - Languages - Dataset Structure - Source Data ## Dataset Description 908 rows of data containing source name of an article, the source bias and the type of source ## Languages The text in the dataset is in English ## Dataset Structure The dataset consists of three columns namely Source Name, Source Bias and Source Typ ## Source Data The dataset is scrapped from URL
[ "# Dataset Card for PoliticalBias_Sources", "## Table of Contents\n- Dataset Description\n- Languages\n- Dataset Structure\n- Source Data", "## Dataset Description\n\n908 rows of data containing source name of an article, the source bias and the type of source", "## Languages\n\nThe text in the dataset is in English", "## Dataset Structure\n\nThe dataset consists of three columns namely Source Name, Source Bias and Source Typ", "## Source Data\n\nThe dataset is scrapped from URL" ]
[ "TAGS\n#multilinguality-monolingual #language-English #license-other #region-us \n", "# Dataset Card for PoliticalBias_Sources", "## Table of Contents\n- Dataset Description\n- Languages\n- Dataset Structure\n- Source Data", "## Dataset Description\n\n908 rows of data containing source name of an article, the source bias and the type of source", "## Languages\n\nThe text in the dataset is in English", "## Dataset Structure\n\nThe dataset consists of three columns namely Source Name, Source Bias and Source Typ", "## Source Data\n\nThe dataset is scrapped from URL" ]
07b6fb00c0591cbe67bfe089426f53eae947fa57
# Dataset Card for hate-multi ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Creation](#dataset-creation) - [Source Data](#source-data) ## Dataset Description ### Dataset Summary This dataset contains a collection of text labeled as hate speech (class 1) or not (class 0). ## Dataset Creation The dataset was created by aggregating multiple publicly available datasets. ### Source Data The following datasets were used: * https://huggingface.co/datasets/hate_speech18 - Filtered to remove examples labeled as 'idk/skip' or 'relation' * https://huggingface.co/datasets/hate_speech_offensive - Tweet text cleaned by lower casing, removing mentions and urls. Dropped instances labeled as 'offensive language' * https://huggingface.co/datasets/ucberkeley-dlab/measuring-hate-speech - Tweet text cleaned by lower casing, removing mentions and urls. Dropped instances with hatespeech == 1
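The tweet normalisation applied to the Twitter-based sources (lower casing, removing mentions and URLs) could look roughly like the sketch below; the exact regular expressions used by the curators are not documented, so these are approximations.

```python
import re

MENTION_RE = re.compile(r"@\w+")
URL_RE = re.compile(r"https?://\S+|www\.\S+")

def clean_tweet(text: str) -> str:
    """Lower-case a tweet and strip @mentions and URLs (approximation of the card's description)."""
    text = text.lower()
    text = MENTION_RE.sub("", text)
    text = URL_RE.sub("", text)
    return re.sub(r"\s+", " ", text).strip()

print(clean_tweet("Check this out @someone https://example.com NOW"))
# -> "check this out now"
```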
valurank/hate-multi
[ "task_categories:text-classification", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:derived", "language:en", "license:other", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "license": "other", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["derived"], "task_categories": ["text-classification"]}
2022-10-25T08:57:06+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-derived #language-English #license-other #region-us
# Dataset Card for hate-multi ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Dataset Creation - Source Data ## Dataset Description ### Dataset Summary This dataset contains a collection of text labeled as hate speech (class 1) or not (class 0). ## Dataset Creation The dataset was creating by aggregating multiple publicly available datasets. ### Source Data The following datasets were used: * URL - Filtered to remove examples labeled as 'idk/skip', 'relation' * URL - Tweet text cleaned by lower casing, removing mentions and urls. Dropped instanced labeled as 'offensive language' * URL - Tweet text cleaned by lower casing, removing mentions and urls. Dropped instanced with hatespeech == 1
[ "# Dataset Card for hate-multi", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n- Dataset Creation\n - Source Data", "## Dataset Description", "### Dataset Summary\n\nThis dataset contains a collection of text labeled as hate speech (class 1) or not (class 0).", "## Dataset Creation\n\nThe dataset was creating by aggregating multiple publicly available datasets.", "### Source Data\n\nThe following datasets were used:\n* URL - Filtered to remove examples labeled as 'idk/skip', 'relation'\n* URL - Tweet text cleaned by lower casing, removing mentions and urls. Dropped instanced labeled as 'offensive language'\n* URL - Tweet text cleaned by lower casing, removing mentions and urls. Dropped instanced with hatespeech == 1" ]
[ "TAGS\n#task_categories-text-classification #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-derived #language-English #license-other #region-us \n", "# Dataset Card for hate-multi", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n- Dataset Creation\n - Source Data", "## Dataset Description", "### Dataset Summary\n\nThis dataset contains a collection of text labeled as hate speech (class 1) or not (class 0).", "## Dataset Creation\n\nThe dataset was creating by aggregating multiple publicly available datasets.", "### Source Data\n\nThe following datasets were used:\n* URL - Filtered to remove examples labeled as 'idk/skip', 'relation'\n* URL - Tweet text cleaned by lower casing, removing mentions and urls. Dropped instanced labeled as 'offensive language'\n* URL - Tweet text cleaned by lower casing, removing mentions and urls. Dropped instanced with hatespeech == 1" ]
f4573b39169bf35b045a334d3a8af21aaf705933
# Dataset Card for news-12factor ## Table of Contents - [Dataset Description](#dataset-description) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Source Data](#source-data) - [Annotations](#annotations) ## Dataset Description 80+ news articles with url, title, body text, scored on 12 quality factors and assigned a single rank. ## Languages The text in the dataset is in English ## Dataset Structure [Needs More Information] ## Source Data URL data was scraped using [news-please](https://github.com/fhamborg/news-please) ## Annotations Articles were manually annotated by Alex on a 12-factor score card.
valurank/news-12factor
[ "task_categories:text-classification", "task_ids:multi-class-classification", "multilinguality:monolingual", "language:en", "license:other", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "license": ["other"], "multilinguality": ["monolingual"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification"]}
2022-10-21T12:35:36+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-multi-class-classification #multilinguality-monolingual #language-English #license-other #region-us
# Dataset Card for news-12factor ## Table of Contents - Dataset Description - Languages - Dataset Structure - Source Data - Annotations ## Dataset Description 80+ news articles with url, title, body text, scored on 12 quality factors and assigned a single rank. ## Languages The text in the dataset is in English ## Dataset Structure ## Source Data URL data was scraped using news-please ## Annotations Articles were manually annotated by Alex on a 12-factor score card.
[ "# Dataset Card for news-12factor", "## Table of Contents\n- Dataset Description\n- Languages\n- Dataset Structure\n- Source Data\n- Annotations", "## Dataset Description\n\n80+ news articles with url, title, body text, scored on 12 quality factors and assigned a single rank.", "## Languages\n\nThe text in the dataset is in English", "## Dataset Structure", "## Source Data\n\nURL data was scraped using news-please", "## Annotations\n\nArticles were manually annotated by Alex on a 12-factor score card." ]
[ "TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #multilinguality-monolingual #language-English #license-other #region-us \n", "# Dataset Card for news-12factor", "## Table of Contents\n- Dataset Description\n- Languages\n- Dataset Structure\n- Source Data\n- Annotations", "## Dataset Description\n\n80+ news articles with url, title, body text, scored on 12 quality factors and assigned a single rank.", "## Languages\n\nThe text in the dataset is in English", "## Dataset Structure", "## Source Data\n\nURL data was scraped using news-please", "## Annotations\n\nArticles were manually annotated by Alex on a 12-factor score card." ]
e753de990611be1c86814db557ec8a8fecccea58
# Dataset Card for offensive-multi ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Dataset Creation](#dataset-creation) - [Source Data](#source-data) ## Dataset Description ### Dataset Summary This dataset contains a collection of text labeled as offensive (class 1) or not (class 0). ## Dataset Creation The dataset was created by aggregating multiple publicly available datasets. ### Source Data The following datasets were used: * https://huggingface.co/datasets/hate_speech_offensive - Tweet text cleaned by lower casing, removing mentions and urls. Dropped instances labeled as 'hate speech' * https://sites.google.com/site/offensevalsharedtask/olid - Tweet text cleaned by lower casing, removing mentions and urls. Used 'subtask_a' column for labeling.
valurank/offensive-multi
[ "task_categories:text-classification", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:derived", "language:en", "license:other", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "license": "other", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["derived"], "task_categories": ["text-classification"]}
2022-10-25T08:57:14+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-derived #language-English #license-other #region-us
# Dataset Card for hate-multi ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Dataset Creation - Source Data ## Dataset Description ### Dataset Summary This dataset contains a collection of text labeled as offensive (class 1) or not (class 0). ## Dataset Creation The dataset was creating by aggregating multiple publicly available datasets. ### Source Data The following datasets were used: * URL - Tweet text cleaned by lower casing, removing mentions and urls. Dropped instanced labeled as 'hate speech' * URL - Tweet text cleaned by lower casing, removing mentions and urls. Used 'subtask_a' column for labeling.
[ "# Dataset Card for hate-multi", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n- Dataset Creation\n - Source Data", "## Dataset Description", "### Dataset Summary\nThis dataset contains a collection of text labeled as offensive (class 1) or not (class 0).", "## Dataset Creation\nThe dataset was creating by aggregating multiple publicly available datasets.", "### Source Data\nThe following datasets were used:\n* URL - Tweet text cleaned by lower casing, removing mentions and urls. Dropped instanced labeled as 'hate speech'\n* URL - Tweet text cleaned by lower casing, removing mentions and urls. Used 'subtask_a' column for labeling." ]
[ "TAGS\n#task_categories-text-classification #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-derived #language-English #license-other #region-us \n", "# Dataset Card for hate-multi", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n- Dataset Creation\n - Source Data", "## Dataset Description", "### Dataset Summary\nThis dataset contains a collection of text labeled as offensive (class 1) or not (class 0).", "## Dataset Creation\nThe dataset was creating by aggregating multiple publicly available datasets.", "### Source Data\nThe following datasets were used:\n* URL - Tweet text cleaned by lower casing, removing mentions and urls. Dropped instanced labeled as 'hate speech'\n* URL - Tweet text cleaned by lower casing, removing mentions and urls. Used 'subtask_a' column for labeling." ]
9315319176c9787952fe616e77547287ee7cf875
The script used to create this dataset can be found here: https://github.com/vasudevgupta7/bigbird ```python DOC_STRIDE = 2048 MAX_LENGTH = 4096 SEED = 42 ```
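For context, these constants are the usual arguments of a sliding-window tokenization for long-document question answering. A rough sketch of how they might be used with a Hugging Face tokenizer follows; the checkpoint name is an assumption, and the actual preprocessing lives in the linked repository.

```python
from transformers import AutoTokenizer

DOC_STRIDE = 2048
MAX_LENGTH = 4096
SEED = 42

# Checkpoint name is an assumption; any BigBird tokenizer with a 4096-token context works similarly.
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")

encoded = tokenizer(
    "who wrote the declaration of independence",        # question
    "The Declaration of Independence was drafted ...",  # (long) Wikipedia context
    truncation="only_second",
    max_length=MAX_LENGTH,
    stride=DOC_STRIDE,
    return_overflowing_tokens=True,
)
print(len(encoded["input_ids"]))  # number of 4096-token windows produced
```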
vasudevgupta/bigbird-tokenized-natural-questions
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-05-04T05:47:51+00:00
[]
[]
TAGS #region-us
script can be found here: URL
[]
[ "TAGS\n#region-us \n" ]
db71355e3b5bc188ea1eaf16e3e8846c5415b888
Test data for my `Quick`
vasudevgupta/data
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-05-01T10:31:24+00:00
[]
[]
TAGS #region-us
Test data for my 'Quick'
[]
[ "TAGS\n#region-us \n" ]
6064dd00b89457d5b0104472c9e47705312184c8
Obtained using the following code: ```python from datasets import load_dataset dataset = load_dataset("natural_questions", split="validation") dataset.save_to_disk("natural-questions-validation") ```
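The saved directory can then be reloaded without re-downloading Natural Questions:

```python
from datasets import load_from_disk

# Reload the split that was saved above.
dataset = load_from_disk("natural-questions-validation")
print(dataset)
```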
vasudevgupta/natural-questions-validation
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-05-04T17:25:07+00:00
[]
[]
TAGS #region-us
Obtained using following code:
[]
[ "TAGS\n#region-us \n" ]
c75445b51b432ba9c7f95514e410b8fbadb892a3
Support documents for building the https://huggingface.co/vblagoje/bart_lfqa model
vblagoje/lfqa_support_docs
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-12-30T10:28:31+00:00
[]
[]
TAGS #region-us
Support documents for building URL model
[]
[ "TAGS\n#region-us \n" ]
fb8c5af3e97f49a827f2eeac4b4677bce882f92f
The ShadowLink dataset is designed to evaluate the impact of entity overshadowing on the task of entity disambiguation. Paper: "Robustness Evaluation of Entity Disambiguation Using Prior Probes: the Case of Entity Overshadowing" by Vera Provatorova, Svitlana Vakulenko, Samarth Bhargav, Evangelos Kanoulas. EMNLP 2021. This version includes the test set used in our experiments (one short context example per entity). If you need an extended version with full-text context examples, please contact the authors.
vera-pro/ShadowLink
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-12-14T16:38:53+00:00
[]
[]
TAGS #region-us
ShadowLink dataset is designed to evaluate the impact of entity overshadowing on the task of entity disambiguation. Paper: "Robustness Evaluation of Entity Disambiguation Using Prior Probes: the Case of Entity Overshadowing" by Vera Provatorova, Svitlana Vakulenko, Samarth Bhargav, Evangelos Kanoulas. EMNLP 2021. This version includes the test set used in our experiments (one short context example per entity). If you need an extended version with full-text context examples, please contact the authors.
[]
[ "TAGS\n#region-us \n" ]
f9be396f46b5f060005eb199d73de6ecb6fce2b2
# NER for Icelandic - MIM-GOLD-NER splits ## MIM-GOLD-NER The original MIM-GOLD-NER data is found at http://hdl.handle.net/20.500.12537/42 This repository packages the data for use with the Datasets library from Hugging Face. ## Old splits *This is no longer in use.* At the time of creation, the original data did not have train, dev and test splits. `create_splits.py` was used to create temporary splits.
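Since the repository packages the data for the Datasets library, loading should look roughly like the sketch below; the split name is an assumption.

```python
from datasets import load_dataset

# Repo id from this card; split name assumed.
mim_gold_ner = load_dataset("vesteinn/icelandic-ner-MIM-GOLD-NER", split="train")
print(mim_gold_ner[0])
```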
vesteinn/icelandic-ner-MIM-GOLD-NER
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-09-29T14:03:17+00:00
[]
[]
TAGS #region-us
# NER for Icelandic - MIM-GOLD-NER splits ## MIM-GOLD-NER The original MIM-GOLD-NER data is found at URL This repository packages the data for use with the Datasets library from hugginface. ## Old splits *This is no longer in use.* At the time of creation, the original data did not have train, dev and test splits. 'create_splits.py' was used to create temporary splits.
[ "# NER for Icelandic - MIM-GOLD-NER splits", "## MIM-GOLD-NER\n\nThe original MIM-GOLD-NER data is found at URL \n\nThis repository packages the data for use with the Datasets library from hugginface.", "## Old splits\n\n*This is no longer in use.*\n\nAt the time of creation, the original data did not have train, dev and test splits. 'create_splits.py' was used to create temporary splits." ]
[ "TAGS\n#region-us \n", "# NER for Icelandic - MIM-GOLD-NER splits", "## MIM-GOLD-NER\n\nThe original MIM-GOLD-NER data is found at URL \n\nThis repository packages the data for use with the Datasets library from hugginface.", "## Old splits\n\n*This is no longer in use.*\n\nAt the time of creation, the original data did not have train, dev and test splits. 'create_splits.py' was used to create temporary splits." ]
61eae7391a132cd6323cfb25a96f9c981a2f615a
# Natural Questions in Icelandic
vesteinn/icelandic-qa-NQiI
[ "task_categories:question-answering", "task_ids:open-domain-qa", "task_ids:extractive-qa", "annotations_creators:curated", "language_creators:curated", "multilinguality:monolingual", "source_datasets:original", "language:is", "license:cc-by-sa-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["curated"], "language_creators": ["curated"], "language": ["is"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa", "extractive-qa"], "paperswithcode_id": "nqii", "pretty_name": "NQiI"}
2022-07-04T15:32:26+00:00
[]
[ "is" ]
TAGS #task_categories-question-answering #task_ids-open-domain-qa #task_ids-extractive-qa #annotations_creators-curated #language_creators-curated #multilinguality-monolingual #source_datasets-original #language-Icelandic #license-cc-by-sa-4.0 #region-us
# Natural Questions in Icelandic
[ "# Natural Questions in Icelandic" ]
[ "TAGS\n#task_categories-question-answering #task_ids-open-domain-qa #task_ids-extractive-qa #annotations_creators-curated #language_creators-curated #multilinguality-monolingual #source_datasets-original #language-Icelandic #license-cc-by-sa-4.0 #region-us \n", "# Natural Questions in Icelandic" ]
7aa770fdd6a2639befc2867f7feadfc4a791db41
# wiki-en-passages-20210101 This is a processed dump of the English Wikipedia from 2021-01-01. Each page has been split into paragraphs as they appear in the text. Lists, tables and headlines have been removed. In total it has 38,080,804 passages. Further, each article contains metadata on the number of languages this article exists in and on the number of views this article received over a 1 year period. The articles are sorted from most popular (most languages available, most views) to least popular.
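If the passages are exposed through the `datasets` library, streaming is the practical way to inspect 38 million rows without downloading everything; the sketch below assumes only that, and prints the first record rather than guessing column names.

```python
from datasets import load_dataset

# Assumption: the dump can be opened with load_dataset; streaming avoids fetching all 38M passages.
passages = load_dataset("vocab-transformers/wiki-en-passages-20210101", split="train", streaming=True)

first = next(iter(passages))
print(first.keys())
print(first)
```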
vocab-transformers/wiki-en-passages-20210101
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-02-24T17:09:32+00:00
[]
[]
TAGS #region-us
# wiki-en-passages-20210101 This is a processed dump of the English Wikipedia from 2021-01-01. Each page has been splitted into paragraphs as they appear in the text. Lists, tables and headlines had been removed. In total it has 38,080,804 passages. Further, each article contain meta-data on the number of languages this article exists in and on the number of views this article received over a 1 year period. The articles are sorted from most popular (most languages available, most views) to least popular.
[ "# wiki-en-passages-20210101\r\nThis is a processed dump of the English Wikipedia from 2021-01-01. Each page has been splitted into paragraphs as they appear in the text. Lists, tables and headlines had been removed. In total it has 38,080,804 passages.\r\n\r\nFurther, each article contain meta-data on the number of languages this article exists in and on the number of views this article received over a 1 year period.\r\n\r\nThe articles are sorted from most popular (most languages available, most views) to least popular." ]
[ "TAGS\n#region-us \n", "# wiki-en-passages-20210101\r\nThis is a processed dump of the English Wikipedia from 2021-01-01. Each page has been splitted into paragraphs as they appear in the text. Lists, tables and headlines had been removed. In total it has 38,080,804 passages.\r\n\r\nFurther, each article contain meta-data on the number of languages this article exists in and on the number of views this article received over a 1 year period.\r\n\r\nThe articles are sorted from most popular (most languages available, most views) to least popular." ]
fe2b27aeec9a4e74e359082a9a5c2f40ad111180
# Dataset Card for vumichien/common_voice_large_jsut_jsss_css10
vumichien/common_voice_large_jsut_jsss_css10
[ "task_categories:automatic-speech-recognition", "language_creators:expert-generated", "multilinguality:monolingual", "language:ja", "license:cc-by-nc-nd-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": [], "language_creators": ["expert-generated"], "language": ["ja"], "license": ["cc-by-nc-nd-4.0"], "multilinguality": ["monolingual"], "source_datasets": {"common_voice": ["mozilla-foundation/common_voice_7_0"], "JSUT": ["Japanese speech corpus of Saruwatari-lab"], "JSSS": ["Japanese speech corpus for summarization and simplification"], "CSS10": ["A Collection of Single Speaker Speech Datasets for 10 Languages"]}, "task_categories": ["automatic-speech-recognition"], "task_ids": []}
2022-10-24T23:35:20+00:00
[]
[ "ja" ]
TAGS #task_categories-automatic-speech-recognition #language_creators-expert-generated #multilinguality-monolingual #language-Japanese #license-cc-by-nc-nd-4.0 #region-us
# Dataset Card for vumichien/common_voice_large_jsut_jsss_css10
[ "# Dataset Card for vumichien/common_voice_large_jsut_jsss_css10" ]
[ "TAGS\n#task_categories-automatic-speech-recognition #language_creators-expert-generated #multilinguality-monolingual #language-Japanese #license-cc-by-nc-nd-4.0 #region-us \n", "# Dataset Card for vumichien/common_voice_large_jsut_jsss_css10" ]
11bef3dfce0ce107eb5e276373dcd28759ce85ee
# Dataset Card for "imdb-javanese" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits Sample Size](#data-instances-sample-size) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Github](https://github.com/w11wo/nlp-datasets#javanese-imdb) - **Repository:** [Github](https://github.com/w11wo/nlp-datasets#javanese-imdb) - **Paper:** [Aclweb](http://www.aclweb.org/anthology/P11-1015) - **Point of Contact:** [Wilson Wongso](https://github.com/w11wo) - **Size of downloaded dataset files:** 17.0 MB - **Size of the generated dataset:** 47.5 MB - **Total amount of disk used:** 64.5 MB ### Dataset Summary Large Movie Review Dataset translated to Javanese. This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well. We translated the [original IMDB Dataset](https://huggingface.co/datasets/imdb) to Javanese using the multi-lingual MarianMT Transformer model from [`Helsinki-NLP/opus-mt-en-mul`](https://huggingface.co/Helsinki-NLP/opus-mt-en-mul). ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure We show detailed information for up to 5 configurations of the dataset. ### Data Instances An example of `javanese_imdb_train.csv` looks as follows. | label | text | | ----- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | 1 | "Drama romantik sing digawé karo direktur Martin Ritt kuwi ora dingertèni, nanging ana momen-momen sing marahi karisma lintang Jane Fonda lan Robert De Niro (kelompok sing luar biasa). Dhèwèké dadi randha sing ora isa mlaku, iso anu anyar lan anyar-inventor-- kowé isa nganggep isiné. Adapsi novel Pat Barker ""Union Street"" (yak titel sing apik!) 
arep dinggo-back-back it on bland, lan pendidikan film kuwi gampang, nanging isih nyenengké; a rosy-hued-inventor-fantasi. Ora ana sing ngganggu gambar sing sejati ding kok iso dinggo nggawe gambar sing paling nyeneng." | | 0 | "Pengalaman wong lanang sing nduwé perasaan sing ora lumrah kanggo babi. Mulai nganggo tuladha sing luar biasa yaiku komedia. Wong orkestra termel digawé dadi wong gila, sing kasar merga nyanyian nyanyi. Sayangé, kuwi tetep absurd wektu WHOLE tanpa ceramah umum sing mung digawé. Malah, sing ana ing jaman kuwi kudu ditinggalké. Diyalog kryptik sing nggawé Shakespeare marah gampang kanggo kelas telu. Pak teknis kuwi luwih apik timbang kowe mikir nganggo cinematografi sing apik sing jenengé Vilmos Zsmond. Masa depan bintang Saly Kirkland lan Frederic Forrest isa ndelok." | ### Data Fields - `text`: The movie review translated into Javanese. - `label`: The sentiment exhibited in the review, either `1` (positive) or `0` (negative). ### Data Splits Sample Size | train | unsupervised | test | | ----: | -----------: | ----: | | 25000 | 50000 | 25000 | ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Citation Information If you use this dataset in your research, please cite: ``` @inproceedings{wongso2021causal, title={Causal and Masked Language Modeling of Javanese Language using Transformer-based Architectures}, author={Wongso, Wilson and Setiawan, David Samuel and Suhartono, Derwin}, booktitle={2021 International Conference on Advanced Computer Science and Information Systems (ICACSIS)}, pages={1--7}, year={2021}, organization={IEEE} } ``` ``` @InProceedings{maas-EtAl:2011:ACL-HLT2011, author = {Maas, Andrew L. 
and Daly, Raymond E. and Pham, Peter T. and Huang, Dan and Ng, Andrew Y. and Potts, Christopher}, title = {Learning Word Vectors for Sentiment Analysis}, booktitle = {Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies}, month = {June}, year = {2011}, address = {Portland, Oregon, USA}, publisher = {Association for Computational Linguistics}, pages = {142--150}, url = {http://www.aclweb.org/anthology/P11-1015} } ```
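For a quick start, the dataset can be loaded with the `datasets` library. The sketch below assumes the corpus is hosted on the Hugging Face Hub under the id `w11wo/imdb-javanese` and uses the split and field names documented above.

```python
from collections import Counter
from datasets import load_dataset

# Hub id and split names are assumed from this card (train / test / unsupervised)
imdb_jv = load_dataset("w11wo/imdb-javanese")

label_names = {0: "negative", 1: "positive"}  # per the Data Fields section

sample = imdb_jv["train"][0]
print(sample["text"][:200])
print("label:", sample["label"], "->", label_names[sample["label"]])

# quick class-balance check on the labelled training split
print(Counter(imdb_jv["train"]["label"]))
```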
w11wo/imdb-javanese
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:found", "language_creators:machine-generated", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:jv", "license:odbl", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["found"], "language_creators": ["machine-generated"], "language": ["jv"], "license": ["odbl"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "extended": ["original"]}
2022-10-25T09:01:48+00:00
[]
[ "jv" ]
TAGS #task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-found #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Javanese #license-odbl #region-us
Dataset Card for "imdb-javanese" ================================ Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits Sample Size * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information Dataset Description ------------------- * Homepage: Github * Repository: Github * Paper: Aclweb * Point of Contact: Wilson Wongso * Size of downloaded dataset files: 17.0 MB * Size of the generated dataset: 47.5 MB * Total amount of disk used: 64.5 MB ### Dataset Summary Large Movie Review Dataset translated to Javanese. This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well. We translated the original IMDB Dataset to Javanese using the multi-lingual MarianMT Transformer model from 'Helsinki-NLP/opus-mt-en-mul'. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- We show detailed information for up to 5 configurations of the dataset. ### Data Instances An example of 'javanese\_imdb\_train.csv' looks as follows. ### Data Fields * 'text': The movie review translated into Javanese. * 'label': The sentiment exhibited in the review, either '1' (positive) or '0' (negative). ### Data Splits Sample Size Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information If you use this dataset in your research, please cite:
[ "### Dataset Summary\n\n\nLarge Movie Review Dataset translated to Javanese. This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well. We translated the original IMDB Dataset to Javanese using the multi-lingual MarianMT Transformer model from 'Helsinki-NLP/opus-mt-en-mul'.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------\n\n\nWe show detailed information for up to 5 configurations of the dataset.", "### Data Instances\n\n\nAn example of 'javanese\\_imdb\\_train.csv' looks as follows.", "### Data Fields\n\n\n* 'text': The movie review translated into Javanese.\n* 'label': The sentiment exhibited in the review, either '1' (positive) or '0' (negative).", "### Data Splits Sample Size\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nIf you use this dataset in your research, please cite:" ]
[ "TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-found #language_creators-machine-generated #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Javanese #license-odbl #region-us \n", "### Dataset Summary\n\n\nLarge Movie Review Dataset translated to Javanese. This is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well. We translated the original IMDB Dataset to Javanese using the multi-lingual MarianMT Transformer model from 'Helsinki-NLP/opus-mt-en-mul'.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------\n\n\nWe show detailed information for up to 5 configurations of the dataset.", "### Data Instances\n\n\nAn example of 'javanese\\_imdb\\_train.csv' looks as follows.", "### Data Fields\n\n\n* 'text': The movie review translated into Javanese.\n* 'label': The sentiment exhibited in the review, either '1' (positive) or '0' (negative).", "### Data Splits Sample Size\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nIf you use this dataset in your research, please cite:" ]
decea5bf4ca7c23d66f55702070fb325528d4b7d
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** annotations_creators: - expert-generated language_creators: - expert-generated languages: - english licenses: - unknown multilinguality: - monolingual pretty_name: maslow-stories size_categories: - unknown source_datasets: [] task_categories: - question-answering task_ids: - multiple-choice-qa
wanagenst/maslow-stories
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-12-29T23:14:44+00:00
[]
[]
TAGS #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: annotations_creators: - expert-generated language_creators: - expert-generated languages: - english licenses: - unknown multilinguality: - monolingual pretty_name: maslow-stories size_categories: - unknown source_datasets: [] task_categories: - question-answering task_ids: - multiple-choice-qa
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:\n\nannotations_creators:\n- expert-generated\nlanguage_creators:\n- expert-generated\nlanguages:\n- english\nlicenses:\n- unknown\nmultilinguality:\n- monolingual\npretty_name: maslow-stories\nsize_categories:\n- unknown\nsource_datasets: []\ntask_categories:\n- question-answering\ntask_ids:\n- multiple-choice-qa" ]
[ "TAGS\n#region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:\n\nannotations_creators:\n- expert-generated\nlanguage_creators:\n- expert-generated\nlanguages:\n- english\nlicenses:\n- unknown\nmultilinguality:\n- monolingual\npretty_name: maslow-stories\nsize_categories:\n- unknown\nsource_datasets: []\ntask_categories:\n- question-answering\ntask_ids:\n- multiple-choice-qa" ]
0c31d59390719a2d53332194f2e768db99ac0474
# Dataset Card for LSOIE ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://github.com/Jacobsolawetz/large-scale-oie - **Repository:** https://github.com/Jacobsolawetz/large-scale-oie - **Paper:** https://arxiv.org/abs/2101.11177 - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Needs More Information] ### Dataset Summary The Large Scale Open Information Extraction Dataset (LSOIE) is a dataset 20 times larger than the next largest human-annotated Open Information Extraction (OIE) dataset. LSOIE is built upon the QA-SRL 2.0 dataset by transforming the list of questions and answers for each predicate into a tuple representing a fact. ### Supported Tasks and Leaderboards Open Information Extraction ### Languages The text in this dataset is English. ## Dataset Structure ### Data Instances A datapoint comprises one fact together with the sentence it was extracted from. There can be multiple facts for each sentence. Each fact is represented by a tuple $(a_0, p, a_1, \dots, a_n)$, where $a_0$ is the head entity, $p$ is the predicate, and $a_1, \dots, a_n$ represent the tail. ### Data Fields - word_ids : sequence of indices (int) representing tokens in a sentence, - words : a sequence of strings, the tokens in the sentence, - pred : the predicate of the fact, - pred_ids : ids of the tokens in the predicate, - head_pred_id : id of the head token in the predicate, - sent_id : sentence id, - run_id : , - label : sequence of tags (BIO) representing the fact, i.e. if the fact is given by $(a_0, p, a_1, \dots, a_n)$, each token is tagged as part of one of the arguments $a_0, \dots, a_n$, as part of the predicate $p$, or as outside the fact ### Data Splits [Needs More Information] ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information]
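To make the BIO encoding concrete, the sketch below folds a tagged sentence back into the fact tuple described above. The tag names (`B-A0`/`I-A0` for arguments, `B-P`/`I-P` for the predicate, `O` for other tokens) are an assumption for illustration; check the loaded examples for the exact vocabulary.

```python
def bio_to_fact(words, labels):
    """Group BIO-tagged tokens into fact parts (assumed tags: B-X / I-X and O)."""
    spans = {}
    for word, label in zip(words, labels):
        if label == "O":
            continue
        parts = label.split("-", 1)
        prefix = parts[0] if len(parts) == 2 else "B"
        tag = parts[-1]
        if prefix == "B" or tag not in spans:
            spans.setdefault(tag, []).append([])  # start a new span for this tag
        spans[tag][-1].append(word)
    # keep the first span per tag and join its tokens
    return {tag: " ".join(chunks[0]) for tag, chunks in spans.items()}

words = ["Obama", "visited", "Berlin", "in", "2013"]
labels = ["B-A0", "B-P", "B-A1", "B-A2", "I-A2"]
print(bio_to_fact(words, labels))
# {'A0': 'Obama', 'P': 'visited', 'A1': 'Berlin', 'A2': 'in 2013'}
```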
wardenga/lsoie
[ "task_categories:text-retrieval", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:monolingual", "size_categories:unknown", "source_datasets:extended|qa_srl", "language:en", "license:mit", "Open Information Extraction", "arxiv:2101.11177", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["en"], "license": ["mit"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": ["extended|qa_srl"], "task_categories": ["text-retrieval"], "task_ids": [], "pretty_name": "LSOIE", "tags": ["Open Information Extraction"]}
2022-10-21T04:51:54+00:00
[ "2101.11177" ]
[ "en" ]
TAGS #task_categories-text-retrieval #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-unknown #source_datasets-extended|qa_srl #language-English #license-mit #Open Information Extraction #arxiv-2101.11177 #region-us
# Dataset Card for LSOIE ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: URL - Leaderboard: - Point of Contact: ### Dataset Summary The Large Scale Open Information Extraction Dataset (LSOIE), is a dataset 20 times larger than the next largest human-annotated Open Information Extraction (OIE) dataset. LSOIE is a built upon the QA-SRL 2.0 dataset by transforming the list of Questions and answers for each predicate to a tuple representing a fact. ### Supported Tasks and Leaderboards Open Information Extraction ### Languages The text in this dataset is english. ## Dataset Structure ### Data Instances A datapoint comprises one fact together with the sentence it was extracted from. There can be multiple facts for each Sentence. Each fact is represented by a tuple $(a_0, p, a_1,\dots a_n)$ where $a_0$ is the head entity $p$ is the predicate and $a_1, \dots,a_n$ represent the tail. ### Data Fields - word_ids : sequence of indices (int) representing tokens in a sentence, - words : a sequence of strings, the tokens in the sentence, - pred : the predicate of the fact, - pred_ids : ids of the tokens in the predicate, - head_pred_id : id of the head token in the predicate, - sent_id : sentence id, - run_id : , - label : Sequence of tags (BIO) representing the fact, e.g. if the fact is given by $(a_0, p, a_1, \dots, a_n) $ ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information
[ "# Dataset Card for LSOIE", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThe Large Scale Open Information Extraction Dataset (LSOIE), is a dataset 20 times larger than the next largest human-annotated Open Information Extraction (OIE) dataset. LSOIE is a built upon the QA-SRL 2.0 dataset by transforming the list of Questions and answers for each predicate to a tuple representing a fact.", "### Supported Tasks and Leaderboards\n\nOpen Information Extraction", "### Languages\n\nThe text in this dataset is english.", "## Dataset Structure", "### Data Instances\n\nA datapoint comprises one fact together with the sentence it was extracted from. There can be multiple facts for each Sentence. Each fact is represented by a tuple $(a_0, p, a_1,\\dots a_n)$ where $a_0$ is the head entity $p$ is the predicate and $a_1, \\dots,a_n$ represent the tail.", "### Data Fields\n\n- word_ids : sequence of indices (int) representing tokens in a sentence,\n- words : a sequence of strings, the tokens in the sentence,\n- pred : the predicate of the fact,\n- pred_ids : ids of the tokens in the predicate,\n- head_pred_id : id of the head token in the predicate,\n- sent_id : sentence id,\n- run_id : ,\n- label : Sequence of tags (BIO) representing the fact, e.g. if the fact is given by $(a_0, p, a_1, \\dots, a_n) $", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information" ]
[ "TAGS\n#task_categories-text-retrieval #annotations_creators-machine-generated #language_creators-found #multilinguality-monolingual #size_categories-unknown #source_datasets-extended|qa_srl #language-English #license-mit #Open Information Extraction #arxiv-2101.11177 #region-us \n", "# Dataset Card for LSOIE", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: \n- Point of Contact:", "### Dataset Summary\n\nThe Large Scale Open Information Extraction Dataset (LSOIE), is a dataset 20 times larger than the next largest human-annotated Open Information Extraction (OIE) dataset. LSOIE is a built upon the QA-SRL 2.0 dataset by transforming the list of Questions and answers for each predicate to a tuple representing a fact.", "### Supported Tasks and Leaderboards\n\nOpen Information Extraction", "### Languages\n\nThe text in this dataset is english.", "## Dataset Structure", "### Data Instances\n\nA datapoint comprises one fact together with the sentence it was extracted from. There can be multiple facts for each Sentence. Each fact is represented by a tuple $(a_0, p, a_1,\\dots a_n)$ where $a_0$ is the head entity $p$ is the predicate and $a_1, \\dots,a_n$ represent the tail.", "### Data Fields\n\n- word_ids : sequence of indices (int) representing tokens in a sentence,\n- words : a sequence of strings, the tokens in the sentence,\n- pred : the predicate of the fact,\n- pred_ids : ids of the tokens in the predicate,\n- head_pred_id : id of the head token in the predicate,\n- sent_id : sentence id,\n- run_id : ,\n- label : Sequence of tags (BIO) representing the fact, e.g. if the fact is given by $(a_0, p, a_1, \\dots, a_n) $", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information" ]
9c4fd2e7c4968bc5b8b26e94e0bd2eaa69849873
# Dataset Card for COVID-19-vaccine-attitude-tweets ## Dataset Description - **Paper:** [Be Careful Who You Follow. The Impact of the Initial Set of Friends on COVID-19 Vaccine Tweets](https://www.researchgate.net/publication/355726080_Be_Careful_Who_You_Follow_The_Impact_of_the_Initial_Set_of_Friends_on_COVID-19_Vaccine_Tweets) - **Point of Contact:** [Izabela Krysinska]([email protected]) ### Dataset Summary The dataset consists of 2564 manually annotated tweets related to COVID-19 vaccines. The dataset can be used to discover the attitude expressed in the tweet towards the subject of COVID-19 vaccines. Tweets are in English. The dataset was curated in such a way as to maximize the likelihood of tweets with a strong emotional tone. We have assumed the existence of three classes: - PRO (label 0): positive, the tweet unequivocally suggests support for getting vaccinated against COVID-19 - NEUTRAL (label 1): the tweet is mostly informative, does not show emotions vs. presented information, contains strong positive or negative emotions but concerning politics (vaccine distribution, vaccine passports, etc.) - AGAINST (label 2): the tweet is clearly against vaccination and contains warnings, conspiracy theories, etc. The dataset does not contain the content of Twitter statuses. Original tweets can be obtained via the Twitter API. You can use the [`twitter`](https://python-twitter.readthedocs.io/en/latest/index.html) library:

```python
import twitter
from datasets import load_dataset

api = twitter.Api(consumer_key=<consumer key>,
                  consumer_secret=<consumer secret>,
                  access_token_key=<access token>,
                  access_token_secret=<access token secret>,
                  sleep_on_rate_limit=True)

tweets = load_dataset('webimmunization/COVID-19-vaccine-attitude-tweets')

def add_tweet_content(example):
    # fetch the tweet text by its status id; deleted tweets and
    # suspended accounts raise a TwitterError
    try:
        status_text = api.GetStatus(example['id']).text
    except twitter.TwitterError as err:
        print(err)
        status_text = None
    return {'status': status_text}

tweets_with_text = tweets.map(add_tweet_content)
```

### Supported Tasks and Leaderboards - `text-classification`: The dataset can be used to discover the attitude expressed in the tweet towards the subject of COVID-19 vaccines, whether the tweet presents a positive, neutral or negative attitude. Success on this task can be measured by achieving a *high* AUROC or [F1](https://huggingface.co/metrics/f1). ### Languages [EN] English. The text that can be accessed via the Twitter API using the identifiers in this dataset is in English. ## Dataset Structure ### Data Instances The 1st column is the Twitter status ID and the 2nd column is the label denoting the attitude towards vaccines against COVID-19. Example:

```
{
  'id': '1387627601955545089',
  'attitude': 0  # positive attitude
}
```

### Data Fields - `attitude`: attitude towards vaccines against COVID-19. `0` denotes positive attitude, `1` denotes neutral attitude, `2` denotes negative attitude. - `id`: Twitter status id ### Data Splits [Needs More Information] ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data Social media posts. #### Initial Data Collection and Normalization We queried the Twitter search engine with manually curated hashtags such as #coronavaccine, #getvaccinated, #mRNA, #PfizerGang, #VaccineNoThankYou, #vaccinesWork, #BillGatesVaccine, #VaccinesKill, etc. to fetch tweets related to COVID-19 vaccines. Then we searched for tweets with conspicuous emotional load, both negative and positive.
Once we had the set of emotionally loaded tweets, we started fetching other tweets posted by the authors of emotional tweets. We collected tweets from mid-April for about a month. Then we filtered out tweets that were not related to the vaccines. In this manner, we collected tweets that are more likely to be emotional than strictly informative. #### Who are the source language producers? The language producers are users of Twitter. ### Annotations #### Annotation process We have manually annotated over 2500 tweets using the following annotation protocol. We have assumed the existence of three classes: - PRO (label 0): positive, the tweet unequivocally suggests support for getting vaccinated against COVID-19 - NEUTRAL (label 1): the tweet is mostly informative, does not show emotions vs. presented information, contains strong positive or negative emotions but concerning politics (vaccine distribution, vaccine passports, etc.) - AGAINST (label 2): the tweet is clearly against vaccination and contains warnings, conspiracy theories, etc. The PRO class consists of tweets which explicitly urge people to go get vaccinated. The AGAINST class contains tweets which explicitly warn people against getting the vaccine. Tweet annotation has been conducted using the [Prodigy](https://prodi.gy) tool. The annotators were provided with the following instructions: - Do not spend too much time on a tweet and try to make a quick decision; a slight discrepancy in labeling (especially if you are deciding between *PRO* and *NEUTRAL*) will not affect the classifier significantly. - Assign tweets that seem to originate from news sites as *NEUTRAL* and use *PRO* for tweets that express unequivocal support for getting the vaccine. - There are many tweets on vaccination and politics. They should fall into the *NEUTRAL* class unless they contain a clear call to action: go get vaccinated! - Use only the contents of the tweet to label it; do not open the links if the content of a tweet is not enough for labeling (e.g., “Hmm, interesting, https://t.co/ki345o2i345”), skip such tweets instead of giving them a label. - Use the option to skip a tweet only when there is nothing in the tweet except for a URL or a few meaningless words, otherwise do not hesitate to put the tweet in the *NEUTRAL* class. To verify the annotation protocol, we asked 8 annotators to annotate the same set of 100 tweets using the proposed guidelines. We have measured the interrater agreement using Fleiss' kappa coefficient <cite>[Fleiss 1971][1]</cite>. The results were as follows: - when measuring the agreement with four possible classes (*PRO*, *NEUTRAL*, *AGAINST*, *NONE*, where the last class represents tweets that were rejected from annotation), the agreement is `kappa=0.3940` - when measuring the agreement after removing tweets that were rejected, the agreement is `kappa=0.3560` - when measuring the agreement if rejected tweets are classified as *NEUTRAL*, the agreement is `kappa=0.3753` - when measuring the agreement for only two classes (using *PRO*, *NEUTRAL* and *NONE* as one class, and *AGAINST* as another class), the agreement is `kappa=0.5419` #### Who are the annotators? [Members of the #WebImmunization project](https://webimmunization.cm-uj.krakow.pl/en/team/) ### Personal and Sensitive Information According to the Twitter developer policy, if displayed content ceases to be available through the Twitter API, it cannot be obtained from other sources.
Thus, we provide tweets' ids to maintain the integrity of all Twitter content with the Twitter service. The proper way to extract tweets' content is via the Twitter API. Whenever Twitter decides to suspend the author of a tweet, or the author decides to delete their tweet, it won't be possible to obtain the tweet's content with this dataset. ## Considerations for Using the Data ### Social Impact of Dataset COVID-19 is a serious global health threat that can be mitigated only by public health interventions that require massive participation. Mass vaccination against COVID-19 is one of the most effective and economically promising solutions to stop the spread of the SARS-CoV-2 virus, which is responsible for the pandemic. Understanding how misinformation about COVID-19 vaccines is spreading in one of the globally most important social networks is paramount. ### Discussion of Biases [Needs More Information] ### Other Known Limitations #### Interannotator agreement According to a popular interpretation of Fleiss' kappa <cite>[Landis 1977][2]</cite>, the annotators are in fair agreement in the first three scenarios and in moderate agreement in the last scenario. These results suggest that the annotators are struggling to distinguish between the *PRO* and *NEUTRAL* classes, and sometimes they have divergent opinions on whether the tweet should be rejected from training. Still, they are coherent when labeling *AGAINST* tweets. #### Suspended accounts & deleted tweets Some of the statuses from the dataset cannot be obtained due to account suspension or tweet deletion. The last time we checked (15th of November, 2021), about 12% of tweets were authored by suspended accounts and about 10% were already deleted. ### Dataset Curators Agata Olejniuk, Poznan University of Technology, Poland The research leading to these results has received funding from the EEA Financial Mechanism 2014-2021. Project registration number: 2019/35/J/HS6/03498. ### Licensing Information [Needs More Information] ### Citation Information ``` @inproceedings{krysinska2021careful, title={Be Careful Who You Follow: The Impact of the Initial Set of Friends on COVID-19 Vaccine Tweets}, author={Krysi{\'n}ska, Izabela and W{\'o}jtowicz, Tomi and Olejniuk, Agata and Morzy, Miko{\l}aj and Piasecki, Jan}, booktitle={Proceedings of the 2021 Workshop on Open Challenges in Online Social Networks}, pages={1--8}, year={2021} } ``` [DOI](https://doi.org/10.1145/3472720.3483619) ### Contributions We would like to cordially thank the [members of the #WebImmunization project](https://webimmunization.cm-uj.krakow.pl/en/team/) for helping with data annotation. ## References [1]: Joseph L Fleiss. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378, 1971. [2]: J Richard Landis and Gary G Koch. The measurement of observer agreement for categorical data. Biometrics, pages 159–174, 1977.
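For readers who want to relate their own annotation rounds to the kappa values reported in the annotation-process section, the agreement can be recomputed from a subjects-by-raters matrix of labels, for instance with `statsmodels`. The annotation matrix below is a made-up toy example, not the project's actual annotations.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# toy matrix: 6 tweets (rows) x 8 annotators (columns),
# labels: 0 = PRO, 1 = NEUTRAL, 2 = AGAINST, 3 = NONE (rejected)
annotations = np.array([
    [0, 0, 0, 1, 0, 0, 1, 0],
    [2, 2, 2, 2, 2, 2, 2, 2],
    [1, 1, 0, 1, 1, 3, 1, 1],
    [0, 1, 1, 1, 0, 1, 1, 1],
    [2, 2, 2, 1, 2, 2, 2, 2],
    [3, 1, 1, 1, 1, 1, 0, 1],
])

# aggregate_raters converts the label matrix into a subjects x categories
# count table, which is what fleiss_kappa expects
table, _ = aggregate_raters(annotations)
print("Fleiss' kappa:", fleiss_kappa(table, method="fleiss"))
```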
webimmunization/COVID-19-vaccine-attitude-tweets
[ "task_categories:text-classification", "task_ids:sentiment-classification", "task_ids:intent-classification", "annotations_creators:crowdsourced", "language_creators:other", "multilinguality:monolingual", "size_categories:54KB", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["other"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["54KB"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification", "intent-classification"], "pretty_name": "twitter covid19 tweets"}
2022-10-25T09:01:50+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #task_ids-sentiment-classification #task_ids-intent-classification #annotations_creators-crowdsourced #language_creators-other #multilinguality-monolingual #size_categories-54KB #source_datasets-original #language-English #license-cc-by-4.0 #region-us
# Dataset Card for COVID-19-vaccine-attitude-tweets ## Dataset Description - Paper: Be Careful Who You Follow. The Impact of the Initial Set of Friends on COVID-19 Vaccine tweets - Point of Contact: Izabela Krysinska ### Dataset Summary The dataset consists of 2564 manually annotated tweets related to COVID-19 vaccines. The dataset can be used to discover the attitude expressed in the tweet towards the subject of COVID-19 vaccines. Tweets are in English. The dataset was curated in such a way as to maximize the likelihood of tweets with a strong emotional tone. We have assumed the existence of three classes: - PRO (label 0): positive, the tweet unequivocally suggests support for getting vaccinated against COVID-19 - NEUTRAL (label 1): the tweet is mostly informative, does not show emotions vs. presented information, contains strong positive or negative emotions but concerning politics (vaccine distribution, vaccine passports, etc.) - AGAINST (label 2): the tweet is clearly against vaccination and contains warnings, conspiracy theories, etc. The dataset does not contain the content of Twitter statuses. Original tweets can be obtained via Twitter API. You can use 'twitter' library: ### Supported Tasks and Leaderboards - 'text-classification': The dataset can be used to discover the attitude expressed in the tweet towards the subject of COVID-19 vaccines, whether the tweet presents a positive, neutral or negative attitude. Success on this task can be measured by achieving a *high* AUROC or F1. ### Languages [EN] English. The text that can be accessed via the Twitter API using the identifiers in this dataset is in English. ## Dataset Structure ### Data Instances The 1st column is Twitter Status ID and the 2nd column is the label denoting the attitude towards vaccines against COVID-19. Example: ### Data Fields - 'attitude': attitude towards vaccines against COVID-19. '0' denotes positive attitude, '1' denotes neutral attitude, '2' dentoes negative attitude. - 'id': Twitter status id ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data Social media posts. #### Initial Data Collection and Normalization We queried the Twitter search engine with manually curated hashtags such as \#coronavaccine, \#getvaccinated, #mRNA, #PfizerGang, #VaccineNoThankYou, #vaccinesWork, #BillGatesVaccine, #VaccinesKill, etc. to fetch tweets related to COVID-19 vaccines. Then we have searched for tweets with conspicuous emotional load, both negative and positive. Once we had the set of emotionally loaded tweets we started fetching other tweets posted by the authors of emotional tweets. We'd been collecting tweets from mid of April for about a month. Then we filtered out tweets that were not related to the vaccines. In this manner, we collected tweets that are more probable to be emotional rather than strictly informative. #### Who are the source language producers? The language producers are users of Twitter. ### Annotations #### Annotation process We have manually annotated over 2500 tweets using the following annotation protocol. We have assumed the existence of three classes: - PRO (label 0): positive, the tweet unequivocally suggests support for getting vaccinated against COVID-19 - NEUTRAL(label 1): the tweet is mostly informative, does not show emotions vs. presented information, contains strong positive or negative emotions but concerning politics (vaccine distribution, vaccine passports, etc.) 
- AGAINST(label 2): the tweet is clearly against vaccination and contains warnings, conspiracy theories, etc. The PRO class consists of tweets which explicitly urge people to go get vaccinated. The AGAINST class contains tweets which explicitly warn people against getting the vaccine. Tweet annotation has been conducted using Prodigy tool. The annotators were provided with the following instructions: - Do not spend too much time on a tweet and try to make a quick decision, the slight discrepancy in labeling (especially if you are deciding between *PRO* and *NEUTRAL*) will not affect the classifier significantly. - Assign tweets that seem to originate from news sites as *NEUTRAL* and use *PRO* for tweets that express unequivocal support for getting the vaccine. - There are many tweets on vaccination and politics. They should fall into the *NEUTRAL* class unless they contain a clear call to action: go get vaccinated! - Use only the contents of the tweet to label it, do not open the links if the content of a tweet is not enough for labeling (e.g., “Hmm, interesting, https://t.co/ki345o2i345”), skip such tweets instead of giving it a label. - Use the option to skip a tweet only when there is nothing in the tweet except for an URL or a few meaningless words, otherwise do not hesitate to put the tweet in the *NEUTRAL* class. We have asked 8 annotators to annotate the same set of 100 tweets using the guidelines proposed in the annotation protocol to verify the annotation protocol. We have measured the interrater agreement using the Fliess' kappa coefficient <cite>[Fleiss 1971][2]</cite>. The results were as follows: - when measuring the agreement with four possible classes (*PRO*, *NEUTRAL*, *AGAINST*, *NONE*, where the last class represents tweets that were rejected from annotation), the agreement is 'kappa=0.3940' - when measuring the agreement after removing tweets that were rejected, the agreement is 'kappa=0.3560' - when measuring the agreement if rejected tweets are classified as *NEUTRAL*, the agreement is 'kappa=0.3753' - when measuring the agreement for only two classes (using *PRO*, *NEUTRAL* and *NONE* as one class, and *AGAINST* as another class), the agreement is 'kappa=0.5419' #### Who are the annotators? Members of the #WebImmunization project ### Personal and Sensitive Information According to the Twitter developer policy, if displayed content ceases to be available through the Twitter API, it can not be obtained from other sources. Thus, we provide tweets' ids to maintain the integrity of all Twitter content with Twitter service. The proper way to extract tweets' content is via Twitter API. Whenever Twitter decided to suspend the author of the tweet, or the author decides to delete their tweet it won't be possible to obtain the tweet's content with this dataset. ## Considerations for Using the Data ### Social Impact of Dataset The COVID-19 is a serious global health threat that can be mitigated only by public health interventions that require massive participation. Mass vaccination against COVID-19 is one of the most effective and economically promising solutions to stop the spread of the Sars-Cov-2 virus, which is responsible for the pandemic. Understanding how misinformation about COVID-19 vaccines is spreading in one of the globally most important social networks is paramount. 
### Discussion of Biases ### Other Known Limitations #### Interannotator agreement According to a popular interpretation of Fleiss' kappa <cite>[Landis 1977][2]</cite>, the annotators are in fair agreement in the first three scenarios and moderate agreement in the last scenario. These results suggest that the annotators are struggling to distinguish between *PRO* and *NEUTRAL* classes, and sometimes they have divergent opinions on whether the tweet should be rejected from training. Still, they are coherent when labeling *AGAINST* tweets. #### Suspended account & deleted tweets Some of the statuses from the dataset can not be obtained due to account suspension or tweet deletion. The last time we check (15th of November, 2021), about 12% of tweets were authored by suspended accounts and about 10% were already deleted. ### Dataset Curators Agata Olejniuk Poznan University of Technology, Poland The research leading to these results has received funding from the EEA Financial Mechanism 2014-2021. Project registration number: 2019/35/J/HS6 /03498. ### Licensing Information DOI ### Contributions We would like to cordially thank the members of the #WebImmunization project for helping with data annotation. ## References [1]: Joseph L Fleiss. Measuring nominal scale agreement among many raters.Psychological bulletin, 76(5):378, 1971. [2]: J Richard Landis and Gary G Koch. The measurement of observer agreement for categorical data. biometrics, pages 159–174, 1977.
[ "# Dataset Card for COVID-19-vaccine-attitude-tweets", "## Dataset Description\n\n- Paper: Be Careful Who You Follow. The Impact of the Initial Set of Friends on COVID-19 Vaccine tweets\n- Point of Contact: Izabela Krysinska", "### Dataset Summary\n\nThe dataset consists of 2564 manually annotated tweets related to COVID-19 vaccines. The dataset can be used to discover the attitude expressed in the tweet towards the subject of COVID-19 vaccines. Tweets are in English. The dataset was curated in such a way as to maximize the likelihood of tweets with a strong emotional tone. We have assumed the existence of three classes:\n\n- PRO (label 0): positive, the tweet unequivocally suggests support for getting vaccinated against COVID-19\n- NEUTRAL (label 1): the tweet is mostly informative, does not show emotions vs. presented information, contains strong positive or negative emotions but concerning politics (vaccine distribution, vaccine passports, etc.)\n- AGAINST (label 2): the tweet is clearly against vaccination and contains warnings, conspiracy theories, etc.\n\n\nThe dataset does not contain the content of Twitter statuses. Original tweets can be obtained via Twitter API.\nYou can use 'twitter' library:", "### Supported Tasks and Leaderboards\n\n- 'text-classification': The dataset can be used to discover the attitude expressed in the tweet towards the subject of COVID-19 vaccines, whether the tweet presents a positive, neutral or negative attitude. Success on this task can be measured by achieving a *high* AUROC or F1.", "### Languages\n[EN] English.\nThe text that can be accessed via the Twitter API using the identifiers in this dataset is in English.", "## Dataset Structure", "### Data Instances\nThe 1st column is Twitter Status ID and the 2nd column is the label denoting the attitude towards vaccines against COVID-19.\nExample:", "### Data Fields\n\n\n- 'attitude': attitude towards vaccines against COVID-19. '0' denotes positive attitude, '1' denotes neutral attitude, '2' dentoes negative attitude.\n\n- 'id': Twitter status id", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data\n\nSocial media posts.", "#### Initial Data Collection and Normalization\n\nWe queried the Twitter search engine with manually curated hashtags such as \\#coronavaccine, \\#getvaccinated, #mRNA, #PfizerGang, #VaccineNoThankYou, #vaccinesWork, #BillGatesVaccine, #VaccinesKill, etc. to fetch tweets related to COVID-19 vaccines. Then we have searched for tweets with conspicuous emotional load, both negative and positive. Once we had the set of emotionally loaded tweets we started fetching other tweets posted by the authors of emotional tweets. We'd been collecting tweets from mid of April for about a month. Then we filtered out tweets that were not related to the vaccines. In this manner, we collected tweets that are more probable to be emotional rather than strictly informative.", "#### Who are the source language producers?\nThe language producers are users of Twitter.", "### Annotations", "#### Annotation process\n\nWe have manually annotated over 2500 tweets using the following annotation protocol. We have assumed the existence of three classes:\n\n- PRO (label 0): positive, the tweet unequivocally suggests support for getting vaccinated against COVID-19\n- NEUTRAL(label 1): the tweet is mostly informative, does not show emotions vs. 
presented information, contains strong positive or negative emotions but concerning politics (vaccine distribution, vaccine passports, etc.)\n- AGAINST(label 2): the tweet is clearly against vaccination and contains warnings, conspiracy theories, etc.\n\n\nThe PRO class consists of tweets which explicitly urge people to go get vaccinated. The AGAINST class contains tweets which explicitly warn people against getting the vaccine.\n\nTweet annotation has been conducted using Prodigy tool. The annotators were provided with the following instructions:\n\n- Do not spend too much time on a tweet and try to make a quick decision, the slight discrepancy in labeling (especially if you are deciding between *PRO* and *NEUTRAL*) will not affect the classifier significantly.\n- Assign tweets that seem to originate from news sites as *NEUTRAL* and use *PRO* for tweets that express unequivocal support for getting the vaccine.\n- There are many tweets on vaccination and politics. They should fall into the *NEUTRAL* class unless they contain a clear call to action: go get vaccinated!\n- Use only the contents of the tweet to label it, do not open the links if the content of a tweet is not enough for labeling (e.g., “Hmm, interesting, https://t.co/ki345o2i345”), skip such tweets instead of giving it a label.\n - Use the option to skip a tweet only when there is nothing in the tweet except for an URL or a few meaningless words, otherwise do not hesitate to put the tweet in the *NEUTRAL* class.\n\n\nWe have asked 8 annotators to annotate the same set of 100 tweets using the guidelines proposed in the annotation protocol to verify the annotation protocol. We have measured the interrater agreement using the Fliess' kappa coefficient <cite>[Fleiss 1971][2]</cite>. The results were as follows:\n- when measuring the agreement with four possible classes (*PRO*, *NEUTRAL*, *AGAINST*, *NONE*, where the last class represents tweets that were rejected from annotation), the agreement is 'kappa=0.3940'\n- when measuring the agreement after removing tweets that were rejected, the agreement is 'kappa=0.3560'\n- when measuring the agreement if rejected tweets are classified as *NEUTRAL*, the agreement is 'kappa=0.3753'\n- when measuring the agreement for only two classes (using *PRO*, *NEUTRAL* and *NONE* as one class, and *AGAINST* as another class), the agreement is 'kappa=0.5419'", "#### Who are the annotators?\nMembers of the #WebImmunization project", "### Personal and Sensitive Information\n\nAccording to the Twitter developer policy, if displayed content ceases to be available through the Twitter API, it can not be obtained from other sources. Thus, we provide tweets' ids to maintain the integrity of all Twitter content with Twitter service. The proper way to extract tweets' content is via Twitter API. Whenever Twitter decided to suspend the author of the tweet, or the author decides to delete their tweet it won't be possible to obtain the tweet's content with this dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe COVID-19 is a serious global health threat that can be mitigated only by public health interventions that require massive participation. Mass vaccination against COVID-19 is one of the most effective and economically promising solutions to stop the spread of the Sars-Cov-2 virus, which is responsible for the pandemic. 
Understanding how misinformation about COVID-19 vaccines is spreading in one of the globally most important social networks is paramount.", "### Discussion of Biases", "### Other Known Limitations", "#### Interannotator agreement\nAccording to a popular interpretation of Fleiss' kappa <cite>[Landis 1977][2]</cite>, the annotators are in fair agreement in the first three scenarios and moderate agreement in the last scenario. These results suggest that the annotators are struggling to distinguish between *PRO* and *NEUTRAL* classes, and sometimes they have divergent opinions on whether the tweet should be rejected from training. Still, they are coherent when labeling *AGAINST* tweets.", "#### Suspended account & deleted tweets\nSome of the statuses from the dataset can not be obtained due to account suspension or tweet deletion. The last time we check (15th of November, 2021), about 12% of tweets were authored by suspended accounts and about 10% were already deleted.", "### Dataset Curators\n\nAgata Olejniuk\nPoznan University of Technology, Poland\n\nThe research leading to these results has received funding from the EEA Financial Mechanism 2014-2021. Project registration number: 2019/35/J/HS6 /03498.", "### Licensing Information\n\n\n\n\n\n\n\nDOI", "### Contributions\n\nWe would like to cordially thank the members of the #WebImmunization project for helping with data annotation.", "## References\n\n[1]: Joseph L Fleiss. Measuring nominal scale agreement among many raters.Psychological bulletin, 76(5):378, 1971.\n\n[2]: J Richard Landis and Gary G Koch. The measurement of observer agreement for categorical data. biometrics, pages 159–174, 1977." ]
[ "TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #task_ids-intent-classification #annotations_creators-crowdsourced #language_creators-other #multilinguality-monolingual #size_categories-54KB #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n", "# Dataset Card for COVID-19-vaccine-attitude-tweets", "## Dataset Description\n\n- Paper: Be Careful Who You Follow. The Impact of the Initial Set of Friends on COVID-19 Vaccine tweets\n- Point of Contact: Izabela Krysinska", "### Dataset Summary\n\nThe dataset consists of 2564 manually annotated tweets related to COVID-19 vaccines. The dataset can be used to discover the attitude expressed in the tweet towards the subject of COVID-19 vaccines. Tweets are in English. The dataset was curated in such a way as to maximize the likelihood of tweets with a strong emotional tone. We have assumed the existence of three classes:\n\n- PRO (label 0): positive, the tweet unequivocally suggests support for getting vaccinated against COVID-19\n- NEUTRAL (label 1): the tweet is mostly informative, does not show emotions vs. presented information, contains strong positive or negative emotions but concerning politics (vaccine distribution, vaccine passports, etc.)\n- AGAINST (label 2): the tweet is clearly against vaccination and contains warnings, conspiracy theories, etc.\n\n\nThe dataset does not contain the content of Twitter statuses. Original tweets can be obtained via Twitter API.\nYou can use 'twitter' library:", "### Supported Tasks and Leaderboards\n\n- 'text-classification': The dataset can be used to discover the attitude expressed in the tweet towards the subject of COVID-19 vaccines, whether the tweet presents a positive, neutral or negative attitude. Success on this task can be measured by achieving a *high* AUROC or F1.", "### Languages\n[EN] English.\nThe text that can be accessed via the Twitter API using the identifiers in this dataset is in English.", "## Dataset Structure", "### Data Instances\nThe 1st column is Twitter Status ID and the 2nd column is the label denoting the attitude towards vaccines against COVID-19.\nExample:", "### Data Fields\n\n\n- 'attitude': attitude towards vaccines against COVID-19. '0' denotes positive attitude, '1' denotes neutral attitude, '2' dentoes negative attitude.\n\n- 'id': Twitter status id", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data\n\nSocial media posts.", "#### Initial Data Collection and Normalization\n\nWe queried the Twitter search engine with manually curated hashtags such as \\#coronavaccine, \\#getvaccinated, #mRNA, #PfizerGang, #VaccineNoThankYou, #vaccinesWork, #BillGatesVaccine, #VaccinesKill, etc. to fetch tweets related to COVID-19 vaccines. Then we have searched for tweets with conspicuous emotional load, both negative and positive. Once we had the set of emotionally loaded tweets we started fetching other tweets posted by the authors of emotional tweets. We'd been collecting tweets from mid of April for about a month. Then we filtered out tweets that were not related to the vaccines. In this manner, we collected tweets that are more probable to be emotional rather than strictly informative.", "#### Who are the source language producers?\nThe language producers are users of Twitter.", "### Annotations", "#### Annotation process\n\nWe have manually annotated over 2500 tweets using the following annotation protocol. 
We have assumed the existence of three classes:\n\n- PRO (label 0): positive, the tweet unequivocally suggests support for getting vaccinated against COVID-19\n- NEUTRAL(label 1): the tweet is mostly informative, does not show emotions vs. presented information, contains strong positive or negative emotions but concerning politics (vaccine distribution, vaccine passports, etc.)\n- AGAINST(label 2): the tweet is clearly against vaccination and contains warnings, conspiracy theories, etc.\n\n\nThe PRO class consists of tweets which explicitly urge people to go get vaccinated. The AGAINST class contains tweets which explicitly warn people against getting the vaccine.\n\nTweet annotation has been conducted using Prodigy tool. The annotators were provided with the following instructions:\n\n- Do not spend too much time on a tweet and try to make a quick decision, the slight discrepancy in labeling (especially if you are deciding between *PRO* and *NEUTRAL*) will not affect the classifier significantly.\n- Assign tweets that seem to originate from news sites as *NEUTRAL* and use *PRO* for tweets that express unequivocal support for getting the vaccine.\n- There are many tweets on vaccination and politics. They should fall into the *NEUTRAL* class unless they contain a clear call to action: go get vaccinated!\n- Use only the contents of the tweet to label it, do not open the links if the content of a tweet is not enough for labeling (e.g., “Hmm, interesting, https://t.co/ki345o2i345”), skip such tweets instead of giving it a label.\n - Use the option to skip a tweet only when there is nothing in the tweet except for an URL or a few meaningless words, otherwise do not hesitate to put the tweet in the *NEUTRAL* class.\n\n\nWe have asked 8 annotators to annotate the same set of 100 tweets using the guidelines proposed in the annotation protocol to verify the annotation protocol. We have measured the interrater agreement using the Fliess' kappa coefficient <cite>[Fleiss 1971][2]</cite>. The results were as follows:\n- when measuring the agreement with four possible classes (*PRO*, *NEUTRAL*, *AGAINST*, *NONE*, where the last class represents tweets that were rejected from annotation), the agreement is 'kappa=0.3940'\n- when measuring the agreement after removing tweets that were rejected, the agreement is 'kappa=0.3560'\n- when measuring the agreement if rejected tweets are classified as *NEUTRAL*, the agreement is 'kappa=0.3753'\n- when measuring the agreement for only two classes (using *PRO*, *NEUTRAL* and *NONE* as one class, and *AGAINST* as another class), the agreement is 'kappa=0.5419'", "#### Who are the annotators?\nMembers of the #WebImmunization project", "### Personal and Sensitive Information\n\nAccording to the Twitter developer policy, if displayed content ceases to be available through the Twitter API, it can not be obtained from other sources. Thus, we provide tweets' ids to maintain the integrity of all Twitter content with Twitter service. The proper way to extract tweets' content is via Twitter API. Whenever Twitter decided to suspend the author of the tweet, or the author decides to delete their tweet it won't be possible to obtain the tweet's content with this dataset.", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nThe COVID-19 is a serious global health threat that can be mitigated only by public health interventions that require massive participation. 
Mass vaccination against COVID-19 is one of the most effective and economically promising solutions to stop the spread of the Sars-Cov-2 virus, which is responsible for the pandemic. Understanding how misinformation about COVID-19 vaccines is spreading in one of the globally most important social networks is paramount.", "### Discussion of Biases", "### Other Known Limitations", "#### Interannotator agreement\nAccording to a popular interpretation of Fleiss' kappa <cite>[Landis 1977][2]</cite>, the annotators are in fair agreement in the first three scenarios and moderate agreement in the last scenario. These results suggest that the annotators are struggling to distinguish between *PRO* and *NEUTRAL* classes, and sometimes they have divergent opinions on whether the tweet should be rejected from training. Still, they are coherent when labeling *AGAINST* tweets.", "#### Suspended account & deleted tweets\nSome of the statuses from the dataset can not be obtained due to account suspension or tweet deletion. The last time we check (15th of November, 2021), about 12% of tweets were authored by suspended accounts and about 10% were already deleted.", "### Dataset Curators\n\nAgata Olejniuk\nPoznan University of Technology, Poland\n\nThe research leading to these results has received funding from the EEA Financial Mechanism 2014-2021. Project registration number: 2019/35/J/HS6 /03498.", "### Licensing Information\n\n\n\n\n\n\n\nDOI", "### Contributions\n\nWe would like to cordially thank the members of the #WebImmunization project for helping with data annotation.", "## References\n\n[1]: Joseph L Fleiss. Measuring nominal scale agreement among many raters.Psychological bulletin, 76(5):378, 1971.\n\n[2]: J Richard Landis and Gary G Koch. The measurement of observer agreement for categorical data. biometrics, pages 159–174, 1977." ]
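The inter-rater agreement figures quoted in the annotation-process notes above (Fleiss' kappa over the PRO/NEUTRAL/AGAINST/NONE labels) can be reproduced with standard tooling. Below is a minimal sketch using statsmodels; the label matrix is an illustrative placeholder, not the project's actual annotations.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = tweets, columns = annotators; values are category ids
# (0 = PRO, 1 = NEUTRAL, 2 = AGAINST, 3 = NONE/rejected).
# These labels are made-up placeholders, not the real annotation data.
labels = np.array([
    [0, 0, 1, 0],
    [2, 2, 2, 2],
    [1, 0, 1, 1],
    [3, 1, 1, 1],
])

# Convert the item-by-rater matrix into an item-by-category count table,
# then compute Fleiss' kappa over it.
table, _ = aggregate_raters(labels)
print(fleiss_kappa(table, method="fleiss"))
```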
042bdf645bc980c751e9b217cc77e47624394390
# Dataset Card for the args.me corpus

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Dataset Usage](#dataset-usage)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://zenodo.org/record/4139439
- **Repository:** https://git.webis.de/code-research/arguana/args/args-framework
- **Paper:** [Building an Argument Search Engine for the Web](https://webis.de/downloads/publications/papers/wachsmuth_2017f.pdf)
- **Leaderboard:** https://touche.webis.de/
- **Point of Contact:** [Webis Group](https://webis.de/people.html)

### Dataset Summary

The args.me corpus (version 1.0, cleaned) comprises 382 545 arguments crawled from four debate portals in the middle of 2019. The debate portals are Debatewise, IDebate.org, Debatepedia, and Debate.org. The arguments are extracted using heuristics that are designed for each debate portal.

### Dataset Usage

```python
import datasets

# Load the corpus configuration as a streaming dataset (single train split).
args = datasets.load_dataset('webis/args_me', 'corpus', split='train', streaming=True)

for arg in args:
    print(arg['conclusion'])
    print(arg['id'])
    print(arg['argument'])
    print(arg['stance'])
    break
```

### Supported Tasks and Leaderboards

Document Retrieval, Argument Retrieval for Controversial Questions

### Languages

The args.me corpus is monolingual; it only includes English (mostly en-US) documents.

## Dataset Structure

### Data Instances

#### Corpus
```
{'conclusion': 'Science is the best!',
 'id': 'd6517702-2019-04-18T12:36:24Z-00000-000',
 'argument': 'Science is aright I guess, but Physical Education (P.E) is better. Think about it, you could sit in a classroom for and hour learning about molecular reconfiguration, or you could play football with your mates. Why would you want to learn about molecular reconfiguration anyway? I think the argument here would be based on, healthy mind or healthy body. With science being the healthy mind and P.E being the healthy body. To work this one out all you got to do is ask Steven Hawkins. Only 500 words',
 'stance': 'CON'}
```

### Data Fields

[More Information Needed]

### Data Splits

[More Information Needed]

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) ### Citation Information ``` @dataset{yamen_ajjour_2020_4139439, author = {Yamen Ajjour and Henning Wachsmuth and Johannes Kiesel and Martin Potthast and Matthias Hagen and Benno Stein}, title = {args.me corpus}, month = oct, year = 2020, publisher = {Zenodo}, version = {1.0-cleaned}, doi = {10.5281/zenodo.4139439}, url = {https://doi.org/10.5281/zenodo.4139439} } ```
webis/args_me
[ "task_categories:text-retrieval", "task_ids:document-retrieval", "annotations_creators:machine-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:100K<n<1M", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "pretty_name": "Webis args.me argument corpus"}
2022-09-21T11:09:09+00:00
[]
[ "en" ]
TAGS #task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
# Dataset Card for the URL corpus ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Dataset Usage - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Paper: Building an Argument Search Engine for the Web - Leaderboard: URL - Point of Contact: Webis Group ### Dataset Summary The URL corpus (version 1.0, cleaned) comprises 382 545 arguments crawled from four debate portals in the middle of 2019. The debate portals are Debatewise, URL, Debatepedia, and URL. The arguments are extracted using heuristics that are designed for each debate portal. ### Dataset Usage ### Supported Tasks and Leaderboards Document Retrieval, Argument Retrieval for Controversial Questions ### Languages The URL corpus is monolingual; it only includes English (mostly en-US) documents. ## Dataset Structure ### Data Instances #### Corpus ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Creative Commons Attribution 4.0 International (CC BY 4.0)
[ "# Dataset Card for the URL corpus", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Dataset Usage\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Building an Argument Search Engine for the Web\n- Leaderboard: URL\n- Point of Contact: Webis Group", "### Dataset Summary\n\nThe URL corpus (version 1.0, cleaned) comprises 382 545 arguments crawled from four debate portals in the middle of 2019. The debate portals are Debatewise, URL, Debatepedia, and URL. The arguments are extracted using heuristics that are designed for each debate portal.", "### Dataset Usage", "### Supported Tasks and Leaderboards\n\nDocument Retrieval, Argument Retrieval for Controversial Questions", "### Languages\n\nThe URL corpus is monolingual; it only includes English (mostly en-US) documents.", "## Dataset Structure", "### Data Instances", "#### Corpus", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\nCreative Commons Attribution 4.0 International (CC BY 4.0)" ]
[ "TAGS\n#task_categories-text-retrieval #task_ids-document-retrieval #annotations_creators-machine-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-100K<n<1M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n", "# Dataset Card for the URL corpus", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Dataset Usage\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: Building an Argument Search Engine for the Web\n- Leaderboard: URL\n- Point of Contact: Webis Group", "### Dataset Summary\n\nThe URL corpus (version 1.0, cleaned) comprises 382 545 arguments crawled from four debate portals in the middle of 2019. The debate portals are Debatewise, URL, Debatepedia, and URL. The arguments are extracted using heuristics that are designed for each debate portal.", "### Dataset Usage", "### Supported Tasks and Leaderboards\n\nDocument Retrieval, Argument Retrieval for Controversial Questions", "### Languages\n\nThe URL corpus is monolingual; it only includes English (mostly en-US) documents.", "## Dataset Structure", "### Data Instances", "#### Corpus", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\nCreative Commons Attribution 4.0 International (CC BY 4.0)" ]
73b3cf841cb759a33cc71d2e4f176605012488d6
# Dataset Card for ConcluGen ## Table of Contents - [Dataset Card for ConcluGen](#dataset-card-for-conclugen) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://zenodo.org/record/4818134 - **Repository:** https://github.com/webis-de/acl21-informative-conclusion-generation - **Paper:** [Generating Informative Conclusions for Argumentative Texts](https://aclanthology.org/2021.findings-acl.306.pdf) - **Leaderboard:** [N/A] - **Point of Contact:** [Shahbaz Syed](mailto:[email protected]) ### Dataset Summary The ConcluGen corpus is constructed for the task of argument summarization. It consists of 136,996 pairs of argumentative texts and their conclusions collected from the ChangeMyView subreddit, a web portal for argumentative discussions on controversial topics. The corpus has three variants: topics, aspects, and targets. Each variation encodes the corresponding information via control codes. These provide additional argumentative knowledge for generating more informative conclusions. ### Supported Tasks and Leaderboards Argument Summarization, Conclusion Generation ### Languages English ('en') as spoken by Reddit users on the [r/changemyview](https://old.reddit.com/r/changemyview/) subreddits. ## Dataset Structure ### Data Instances An example consists of a unique 'id', an 'argument', and its 'conclusion'. **base** Contains only the argument and its conclusion. ``` {'id': 'ee11c116-23df-4795-856e-8b6c6626d5ed', 'argument': "In my opinion, the world would be a better place if alcohol was illegal. I've done a little bit of research to get some numbers, and I was quite shocked at what I found. Source On average, one in three people will be involved in a drunk driving crash in their lifetime. In 2011, 9,878 people died in drunk driving crashes Drunk driving costs each adult in this country almost 500 per year. Drunk driving costs the United States 132 billion a year. Every day in America, another 27 people die as a result of drunk driving crashes. Almost every 90 seconds, a person is injured in a drunk driving crash. These are just the driving related statistics. They would each get reduced by at least 75 if the sale of alcohol was illegal. 
I just don't see enough positives to outweigh all the deaths and injuries that result from irresponsible drinking. Alcohol is quite literally a drug, and is also extremely addicting. It would already be illegal if not for all these pointless ties with culture. Most people wouldn't even think to live in a world without alcohol, but in my opinion that world would be a better, safer, and more productive one. , or at least defend the fact that it's legal.", 'conclusion': 'I think alcohol should be illegal.'} ``` **topic** Argument encoded with the discussion topic. ``` {"id":"b22272fd-00d2-4373-b46c-9c1d9d21e6c2","argument":"<|TOPIC|>Should Planned Parenthood Be Defunded?<|ARGUMENT|>Even the best contraceptive methods such as surgical sterilisation can fail, and even with perfect use the pill may not work.<|CONCLUSION|>","conclusion":"Even with the best intentions and preparation, contraceptives can and do fail."} ``` **aspects** Argument encoded with the discussion topic and argument's aspects. ``` {"id":"adc92826-7892-42d4-9405-855e845bf027","argument":"<|TOPIC|>Gender Neutral Bathrooms: Should They be Standard?<|ARGUMENT|>Men's toilets and women's urine have different odours due to hormone differences in each biological sex. As a result, the urine of one sex may smell much worse to the other sex and vice versa, meaning that it is logical to keep their toilet facilities separate.<|ASPECTS|>hormone differences, urine, separate, facilities, different odours, smell much worse<|CONCLUSION|>","conclusion":"Men and women, because of their different biological characteristics, each need a different type of bathroom. Gender-segregated bathrooms reflect and honour these differences."} ``` **targets** Argument encoded with the discussion topic and possible conclusion targets. ``` {"id":"c9a87a03-edda-42be-9c0d-1e7d2d311816","argument":"<|TOPIC|>Australian republic vs. monarchy<|ARGUMENT|>The monarchy is a direct reflection of Australia's past as a British colony and continues to symbolize Australia's subservience to the British crown. Such symbolism has a powerfully negative effect on Australians' sense of independence and identity. Ending the monarchy and establishing a republic would constitute a substantial stride in the direction of creating a greater sense of independence and national pride and identity.<|TARGETS|>Such symbolism, The monarchy, Ending the monarchy and establishing a republic<|CONCLUSION|>","conclusion":"Ending the monarchy would foster an independent identity in Australia"} ``` ### Data Fields - `id`: a string identifier for each example. - `argument`: the argumentative text. - `conclusion`: the conclusion of the argumentative text. ### Data Splits The data is split into train, validation, and test splits for each variation of the dataset (including base). | | Train | Validation | Test | |--------- |--------- |------------ |------ | | Base | 116,922 | 12,224 | 1373 | | Aspects | 120,142 | 12,174 | 1357 | | Targets | 109,376 | 11,053 | 1237 | | Topic | 121,588 | 12,335 | 1372 | ## Dataset Creation ### Curation Rationale ConcluGen was built as a first step towards argument summarization technology. The [rules of the subreddit](https://old.reddit.com/r/changemyview/wiki/rules) ensure high quality data suitable for the task. ### Source Data #### Initial Data Collection and Normalization Reddit [ChangeMyView](https://old.reddit.com/r/changemyview/) #### Who are the source language producers? Users of the subreddit [r/changemyview](https://old.reddit.com/r/changemyview/). 
Further demographic information is unavailable from the data source. ### Annotations The dataset is augmented with automatically extracted knowledge such as the argument's aspects, the discussion topic, and possible conclusion targets. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information Only the argumentative text and its conclusion are provided. No personal information of the posters is included. ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information The licensing status of the dataset hinges on the legal status of the [Pushshift.io](https://files.pushshift.io/reddit/) data which is unclear. ### Citation Information ``` @inproceedings{syed:2021, author = {Shahbaz Syed and Khalid Al Khatib and Milad Alshomary and Henning Wachsmuth and Martin Potthast}, editor = {Chengqing Zong and Fei Xia and Wenjie Li and Roberto Navigli}, title = {Generating Informative Conclusions for Argumentative Texts}, booktitle = {Findings of the Association for Computational Linguistics: {ACL/IJCNLP} 2021, Online Event, August 1-6, 2021}, pages = {3482--3493}, publisher = {Association for Computational Linguistics}, year = {2021}, url = {https://doi.org/10.18653/v1/2021.findings-acl.306}, doi = {10.18653/v1/2021.findings-acl.306} } ```
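If the four ConcluGen variants described above are exposed as named configurations on the Hugging Face Hub, they can be loaded with the `datasets` library. The configuration and split names in this sketch ("base", "train") are assumptions inferred from the card above, not confirmed identifiers; the field names follow the Data Fields section.

```python
from datasets import load_dataset

# "base" / "topic" / "aspects" / "targets" are assumed config names matching the
# dataset variants described above; "train" is the assumed split name.
conclugen = load_dataset("webis/conclugen", "base", split="train")

# Print a few argument/conclusion pairs.
for example in conclugen.select(range(3)):
    print(example["id"])
    print(example["argument"])
    print("->", example["conclusion"])
```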
webis/conclugen
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-05-03T05:18:33+00:00
[]
[]
TAGS #region-us
Dataset Card for ConcluGen ========================== Table of Contents ----------------- * Dataset Card for ConcluGen + Table of Contents + Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages + Dataset Structure - Data Instances - Data Fields - Data Splits + Dataset Creation - Curation Rationale - Source Data * Initial Data Collection and Normalization * Who are the source language producers? - Annotations * Annotation process * Who are the annotators? - Personal and Sensitive Information + Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations + Additional Information - Dataset Curators - Licensing Information - Citation Information Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: Generating Informative Conclusions for Argumentative Texts * Leaderboard: [N/A] * Point of Contact: Shahbaz Syed ### Dataset Summary The ConcluGen corpus is constructed for the task of argument summarization. It consists of 136,996 pairs of argumentative texts and their conclusions collected from the ChangeMyView subreddit, a web portal for argumentative discussions on controversial topics. The corpus has three variants: topics, aspects, and targets. Each variation encodes the corresponding information via control codes. These provide additional argumentative knowledge for generating more informative conclusions. ### Supported Tasks and Leaderboards Argument Summarization, Conclusion Generation ### Languages English ('en') as spoken by Reddit users on the r/changemyview subreddits. Dataset Structure ----------------- ### Data Instances An example consists of a unique 'id', an 'argument', and its 'conclusion'. base Contains only the argument and its conclusion. topic Argument encoded with the discussion topic. aspects Argument encoded with the discussion topic and argument's aspects. targets Argument encoded with the discussion topic and possible conclusion targets. ### Data Fields * 'id': a string identifier for each example. * 'argument': the argumentative text. * 'conclusion': the conclusion of the argumentative text. ### Data Splits The data is split into train, validation, and test splits for each variation of the dataset (including base). Dataset Creation ---------------- ### Curation Rationale ConcluGen was built as a first step towards argument summarization technology. The rules of the subreddit ensure high quality data suitable for the task. ### Source Data #### Initial Data Collection and Normalization Reddit ChangeMyView #### Who are the source language producers? Users of the subreddit r/changemyview. Further demographic information is unavailable from the data source. ### Annotations The dataset is augmented with automatically extracted knowledge such as the argument's aspects, the discussion topic, and possible conclusion targets. #### Annotation process [N/A] #### Who are the annotators? [N/A] ### Personal and Sensitive Information Only the argumentative text and its conclusion are provided. No personal information of the posters is included. Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information The licensing status of the dataset hinges on the legal status of the URL data which is unclear.
[ "### Dataset Summary\n\n\nThe ConcluGen corpus is constructed for the task of argument summarization. It consists of 136,996 pairs of argumentative texts and their conclusions collected from the ChangeMyView subreddit, a web portal for argumentative discussions on controversial topics.\n\n\nThe corpus has three variants: topics, aspects, and targets. Each variation encodes the corresponding information via control codes. These provide additional argumentative knowledge for generating more informative conclusions.", "### Supported Tasks and Leaderboards\n\n\nArgument Summarization, Conclusion Generation", "### Languages\n\n\nEnglish ('en') as spoken by Reddit users on the r/changemyview subreddits.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example consists of a unique 'id', an 'argument', and its 'conclusion'.\n\n\nbase\n\n\nContains only the argument and its conclusion.\n\n\ntopic\n\n\nArgument encoded with the discussion topic.\n\n\naspects\n\n\nArgument encoded with the discussion topic and argument's aspects.\n\n\ntargets\n\n\nArgument encoded with the discussion topic and possible conclusion targets.", "### Data Fields\n\n\n* 'id': a string identifier for each example.\n* 'argument': the argumentative text.\n* 'conclusion': the conclusion of the argumentative text.", "### Data Splits\n\n\nThe data is split into train, validation, and test splits for each variation of the dataset (including base).\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nConcluGen was built as a first step towards argument summarization technology. The rules of the subreddit ensure high quality data suitable for the task.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nReddit ChangeMyView", "#### Who are the source language producers?\n\n\nUsers of the subreddit r/changemyview. Further demographic information is unavailable from the data source.", "### Annotations\n\n\nThe dataset is augmented with automatically extracted knowledge such as the argument's aspects, the discussion topic, and possible conclusion targets.", "#### Annotation process\n\n\n[N/A]", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\nOnly the argumentative text and its conclusion are provided. No personal information of the posters is included.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe licensing status of the dataset hinges on the legal status of the URL data which is unclear." ]
[ "TAGS\n#region-us \n", "### Dataset Summary\n\n\nThe ConcluGen corpus is constructed for the task of argument summarization. It consists of 136,996 pairs of argumentative texts and their conclusions collected from the ChangeMyView subreddit, a web portal for argumentative discussions on controversial topics.\n\n\nThe corpus has three variants: topics, aspects, and targets. Each variation encodes the corresponding information via control codes. These provide additional argumentative knowledge for generating more informative conclusions.", "### Supported Tasks and Leaderboards\n\n\nArgument Summarization, Conclusion Generation", "### Languages\n\n\nEnglish ('en') as spoken by Reddit users on the r/changemyview subreddits.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example consists of a unique 'id', an 'argument', and its 'conclusion'.\n\n\nbase\n\n\nContains only the argument and its conclusion.\n\n\ntopic\n\n\nArgument encoded with the discussion topic.\n\n\naspects\n\n\nArgument encoded with the discussion topic and argument's aspects.\n\n\ntargets\n\n\nArgument encoded with the discussion topic and possible conclusion targets.", "### Data Fields\n\n\n* 'id': a string identifier for each example.\n* 'argument': the argumentative text.\n* 'conclusion': the conclusion of the argumentative text.", "### Data Splits\n\n\nThe data is split into train, validation, and test splits for each variation of the dataset (including base).\n\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nConcluGen was built as a first step towards argument summarization technology. The rules of the subreddit ensure high quality data suitable for the task.", "### Source Data", "#### Initial Data Collection and Normalization\n\n\nReddit ChangeMyView", "#### Who are the source language producers?\n\n\nUsers of the subreddit r/changemyview. Further demographic information is unavailable from the data source.", "### Annotations\n\n\nThe dataset is augmented with automatically extracted knowledge such as the argument's aspects, the discussion topic, and possible conclusion targets.", "#### Annotation process\n\n\n[N/A]", "#### Who are the annotators?\n\n\n[N/A]", "### Personal and Sensitive Information\n\n\nOnly the argumentative text and its conclusion are provided. No personal information of the posters is included.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nThe licensing status of the dataset hinges on the legal status of the URL data which is unclear." ]
fcf173ce5e3d36bf92c1010e040e93b514ea9685
# Webis MS MARCO Anchor Text 2022 The [Webis MS MARCO Anchor Text 2022 dataset](https://webis.de/data/webis-ms-marco-anchor-text-22.html) enriches Version 1 and 2 of the document collection of [MS MARCO](https://microsoft.github.io/msmarco/) with anchor text extracted from six [Common Crawl](https://commoncrawl.org/) snapshots. The six Common Crawl snapshots cover the years 2016 to 2021 (between 1.7-3.4 billion documents each). We sampled 1,000 anchor texts for documents with more than 1,000 anchor texts at random and all anchor texts for documents with less than 1,000 anchor texts (this sampling yields that all anchor text is included for 94% of the documents in Version 1 and 97% of documents for Version 2). Overall, the MS MARCO Anchor Text 2022 dataset enriches 1,703,834 documents for Version 1 and 4,821,244 documents for Version 2 with anchor text. Cleaned versions of the MS MARCO Anchor Text 2022 dataset are available in [ir_datasets](https://github.com/allenai/ir_datasets/issues/154), [Zenodo](https://zenodo.org/record/5883456) and [Hugging Face](https://huggingface.co/datasets/webis/ms-marco-anchor-text). The raw dataset with additional information and all metadata for the extracted anchor texts (roughly 100GB) is available on [Hugging Face](https://huggingface.co/datasets/webis/ms-marco-anchor-text/tree/main/ms-marco-v1/anchor-text) and [files.webis.de](https://files.webis.de/data-in-progress/ecir22-anchor-text/anchor-text-samples/). The details of the construction of the Webis MS MARCO Anchor Text 2022 dataset are described in the [associated paper](https://webis.de/publications.html#froebe_2022a). If you use this dataset, please cite ``` @InProceedings{froebe:2022a, address = {Berlin Heidelberg New York}, author = {Maik Fr{\"o}be and Sebastian G{\"u}nther and Maximilian Probst and Martin Potthast and Matthias Hagen}, booktitle = {Advances in Information Retrieval. 44th European Conference on IR Research (ECIR 2022)}, editor = {Matthias Hagen and Suzan Verberne and Craig Macdonald and Christin Seifert and Krisztian Balog and Kjetil N{\o}rv\r{a}g and Vinay Setty}, month = apr, publisher = {Springer}, series = {Lecture Notes in Computer Science}, site = {Stavanger, Norway}, title = {{The Power of Anchor Text in the Neural Retrieval Era}}, year = 2022 } ```
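A hedged sketch of reading the cleaned anchor texts through ir_datasets is shown below. The dataset identifier and the record fields are assumptions based on the integration referenced above; check the ir_datasets catalogue for the exact names before use.

```python
import ir_datasets

# The dataset id below is an assumption based on the ir_datasets integration
# linked above; consult the ir_datasets documentation for the exact id.
dataset = ir_datasets.load("msmarco-document/anchor-text")

for doc in dataset.docs_iter():
    # Each record pairs an MS MARCO document id with its sampled anchor texts;
    # the concrete field names depend on the ir_datasets record class.
    print(doc)
    break
```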
webis/ms-marco-anchor-text
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-01-30T19:19:02+00:00
[]
[]
TAGS #region-us
# Webis MS MARCO Anchor Text 2022 The Webis MS MARCO Anchor Text 2022 dataset enriches Version 1 and 2 of the document collection of MS MARCO with anchor text extracted from six Common Crawl snapshots. The six Common Crawl snapshots cover the years 2016 to 2021 (between 1.7-3.4 billion documents each). We sampled 1,000 anchor texts for documents with more than 1,000 anchor texts at random and all anchor texts for documents with less than 1,000 anchor texts (this sampling yields that all anchor text is included for 94% of the documents in Version 1 and 97% of documents for Version 2). Overall, the MS MARCO Anchor Text 2022 dataset enriches 1,703,834 documents for Version 1 and 4,821,244 documents for Version 2 with anchor text. Cleaned versions of the MS MARCO Anchor Text 2022 dataset are available in ir_datasets, Zenodo and Hugging Face. The raw dataset with additional information and all metadata for the extracted anchor texts (roughly 100GB) is available on Hugging Face and URL. The details of the construction of the Webis MS MARCO Anchor Text 2022 dataset are described in the associated paper. If you use this dataset, please cite
[ "# Webis MS MARCO Anchor Text 2022\n\nThe Webis MS MARCO Anchor Text 2022 dataset enriches Version 1 and 2 of the document collection of MS MARCO with anchor text extracted from six Common Crawl snapshots. The six Common Crawl snapshots cover the years 2016 to 2021 (between 1.7-3.4 billion documents each). We sampled 1,000 anchor texts for documents with more than 1,000 anchor texts at random and all anchor texts for documents with less than 1,000 anchor texts (this sampling yields that all anchor text is included for 94% of the documents in Version 1 and 97% of documents for Version 2). Overall, the MS MARCO Anchor Text 2022 dataset enriches 1,703,834 documents for Version 1 and 4,821,244 documents for Version 2 with anchor text.\n\nCleaned versions of the MS MARCO Anchor Text 2022 dataset are available in ir_datasets, Zenodo and Hugging Face. The raw dataset with additional information and all metadata for the extracted anchor texts (roughly 100GB) is available on Hugging Face and URL.\n\nThe details of the construction of the Webis MS MARCO Anchor Text 2022 dataset are described in the associated paper. If you use this dataset, please cite" ]
[ "TAGS\n#region-us \n", "# Webis MS MARCO Anchor Text 2022\n\nThe Webis MS MARCO Anchor Text 2022 dataset enriches Version 1 and 2 of the document collection of MS MARCO with anchor text extracted from six Common Crawl snapshots. The six Common Crawl snapshots cover the years 2016 to 2021 (between 1.7-3.4 billion documents each). We sampled 1,000 anchor texts for documents with more than 1,000 anchor texts at random and all anchor texts for documents with less than 1,000 anchor texts (this sampling yields that all anchor text is included for 94% of the documents in Version 1 and 97% of documents for Version 2). Overall, the MS MARCO Anchor Text 2022 dataset enriches 1,703,834 documents for Version 1 and 4,821,244 documents for Version 2 with anchor text.\n\nCleaned versions of the MS MARCO Anchor Text 2022 dataset are available in ir_datasets, Zenodo and Hugging Face. The raw dataset with additional information and all metadata for the extracted anchor texts (roughly 100GB) is available on Hugging Face and URL.\n\nThe details of the construction of the Webis MS MARCO Anchor Text 2022 dataset are described in the associated paper. If you use this dataset, please cite" ]
b04c8d1ceb2f5cd4588862100d08de323dccfbaa
# Dataset Card for Wikimedia Wikipedia ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://dumps.wikimedia.org](https://dumps.wikimedia.org) - **Repository:** - **Paper:** - **Point of Contact:** ### Dataset Summary Wikipedia dataset containing cleaned articles of all languages. The dataset is built from the Wikipedia dumps (https://dumps.wikimedia.org/) with one subset per language, each containing a single train split. Each example contains the content of one full Wikipedia article with cleaning to strip markdown and unwanted sections (references, etc.). All language subsets have already been processed for recent dump, and you can load them per date and language this way: ```python from datasets import load_dataset ds = load_dataset("wikimedia/wikipedia", "20231101.en") ``` #### Data Visualization Click the [Nomic Atlas](https://atlas.nomic.ai/map/475c26d7-b142-4795-9887-02b6eeb18dc0/0d312be6-a3bb-4586-b6b7-53dcd0cbefa5) map below to visualize the 6.4 million samples in the `20231101.en` split. <a href="https://atlas.nomic.ai/map/475c26d7-b142-4795-9887-02b6eeb18dc0/0d312be6-a3bb-4586-b6b7-53dcd0cbefa5"> <img src="https://cdn-uploads.huggingface.co/production/uploads/6480c476cacb1c4a0696eeb8/sZNN6Vubc0Oue83vKaJUu.webp" alt="Nomic-Atlas Wikipedia Map" width="25%"/> </a> ### Supported Tasks and Leaderboards The dataset is generally used for Language Modeling. ### Languages You can find the list of languages here: https://meta.wikimedia.org/wiki/List_of_Wikipedias ## Dataset Structure ### Data Instances An example looks as follows: ``` {'id': '1', 'url': 'https://simple.wikipedia.org/wiki/April', 'title': 'April', 'text': 'April is the fourth month...' } ``` ### Data Fields The data fields are the same among all configurations: - `id` (`str`): ID of the article. - `url` (`str`): URL of the article. - `title` (`str`): Title of the article. - `text` (`str`): Text content of the article. ### Data Splits All configurations contain a single `train` split. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The dataset is built from the Wikipedia dumps: https://dumps.wikimedia.org You can find the full list of languages and dates here: https://dumps.wikimedia.org/backup-index.html The articles have been parsed using the [`mwparserfromhell`](https://mwparserfromhell.readthedocs.io) tool. 
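For reference, stripping wiki markup with mwparserfromhell looks roughly like the sketch below. The wikitext string is a toy example; the actual dump pipeline additionally removes unwanted sections such as references.

```python
import mwparserfromhell

# A toy wikitext snippet; the real pipeline feeds article source from the XML dumps.
raw = "'''April''' is the fourth [[month]] of the year."

wikicode = mwparserfromhell.parse(raw)
plain_text = wikicode.strip_code()  # strips templates and wiki markup, leaving plain text
print(plain_text)  # -> "April is the fourth month of the year."
```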
When uploading the data files for the 20231101 dump, we noticed that the Wikimedia Dumps website does not contain this date dump for the "bbc", "dga", nor "zgh" Wikipedias. We have reported the issue to the Wikimedia Phabricator: https://phabricator.wikimedia.org/T351761 #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Copyright licensing information: https://dumps.wikimedia.org/legal.html All original textual content is licensed under the [GNU Free Documentation License](https://www.gnu.org/licenses/fdl-1.3.html) (GFDL) and the [Creative Commons Attribution-Share-Alike 3.0 License](https://creativecommons.org/licenses/by-sa/3.0/). Some text may be available only under the Creative Commons license; see their [Terms of Use](https://foundation.wikimedia.org/wiki/Policy:Terms_of_Use) for details. Text written by some authors may be released under additional licenses or into the public domain. ### Citation Information ``` @ONLINE{wikidump, author = "Wikimedia Foundation", title = "Wikimedia Downloads", url = "https://dumps.wikimedia.org" } ```
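Because some subsets (notably English) are large, streaming can be used to iterate over articles without downloading the full split first. The sketch below reuses the date/language pair from the loading example above; all configurations expose a single train split.

```python
from datasets import load_dataset

# Stream the English subset of the 20231101 dump instead of downloading it entirely.
wiki = load_dataset("wikimedia/wikipedia", "20231101.en", split="train", streaming=True)

for article in wiki:
    print(article["title"])
    print(article["text"][:200])  # first 200 characters of the cleaned article text
    break
```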
wikimedia/wikipedia
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "size_categories:n<1K", "size_categories:1K<n<10K", "size_categories:10K<n<100K", "size_categories:100K<n<1M", "size_categories:1M<n<10M", "language:ab", "language:ace", "language:ady", "language:af", "language:alt", "language:am", "language:ami", "language:an", "language:ang", "language:anp", "language:ar", "language:arc", "language:ary", "language:arz", "language:as", "language:ast", "language:atj", "language:av", "language:avk", "language:awa", "language:ay", "language:az", "language:azb", "language:ba", "language:ban", "language:bar", "language:bbc", "language:bcl", "language:be", "language:bg", "language:bh", "language:bi", "language:bjn", "language:blk", "language:bm", "language:bn", "language:bo", "language:bpy", "language:br", "language:bs", "language:bug", "language:bxr", "language:ca", "language:cbk", "language:cdo", "language:ce", "language:ceb", "language:ch", "language:chr", "language:chy", "language:ckb", "language:co", "language:cr", "language:crh", "language:cs", "language:csb", "language:cu", "language:cv", "language:cy", "language:da", "language:dag", "language:de", "language:dga", "language:din", "language:diq", "language:dsb", "language:dty", "language:dv", "language:dz", "language:ee", "language:el", "language:eml", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:ext", "language:fa", "language:fat", "language:ff", "language:fi", "language:fj", "language:fo", "language:fon", "language:fr", "language:frp", "language:frr", "language:fur", "language:fy", "language:ga", "language:gag", "language:gan", "language:gcr", "language:gd", "language:gl", "language:glk", "language:gn", "language:gom", "language:gor", "language:got", "language:gpe", "language:gsw", "language:gu", "language:guc", "language:gur", "language:guw", "language:gv", "language:ha", "language:hak", "language:haw", "language:hbs", "language:he", "language:hi", "language:hif", "language:hr", "language:hsb", "language:ht", "language:hu", "language:hy", "language:hyw", "language:ia", "language:id", "language:ie", "language:ig", "language:ik", "language:ilo", "language:inh", "language:io", "language:is", "language:it", "language:iu", "language:ja", "language:jam", "language:jbo", "language:jv", "language:ka", "language:kaa", "language:kab", "language:kbd", "language:kbp", "language:kcg", "language:kg", "language:ki", "language:kk", "language:kl", "language:km", "language:kn", "language:ko", "language:koi", "language:krc", "language:ks", "language:ksh", "language:ku", "language:kv", "language:kw", "language:ky", "language:la", "language:lad", "language:lb", "language:lbe", "language:lez", "language:lfn", "language:lg", "language:li", "language:lij", "language:lld", "language:lmo", "language:ln", "language:lo", "language:lt", "language:ltg", "language:lv", "language:lzh", "language:mad", "language:mai", "language:map", "language:mdf", "language:mg", "language:mhr", "language:mi", "language:min", "language:mk", "language:ml", "language:mn", "language:mni", "language:mnw", "language:mr", "language:mrj", "language:ms", "language:mt", "language:mwl", "language:my", "language:myv", "language:mzn", "language:nah", "language:nan", "language:nap", "language:nds", "language:ne", "language:new", "language:nia", "language:nl", "language:nn", "language:no", "language:nov", "language:nqo", "language:nrf", "language:nso", "language:nv", "language:ny", 
"language:oc", "language:olo", "language:om", "language:or", "language:os", "language:pa", "language:pag", "language:pam", "language:pap", "language:pcd", "language:pcm", "language:pdc", "language:pfl", "language:pi", "language:pih", "language:pl", "language:pms", "language:pnb", "language:pnt", "language:ps", "language:pt", "language:pwn", "language:qu", "language:rm", "language:rmy", "language:rn", "language:ro", "language:ru", "language:rue", "language:rup", "language:rw", "language:sa", "language:sah", "language:sat", "language:sc", "language:scn", "language:sco", "language:sd", "language:se", "language:sg", "language:sgs", "language:shi", "language:shn", "language:si", "language:sk", "language:skr", "language:sl", "language:sm", "language:smn", "language:sn", "language:so", "language:sq", "language:sr", "language:srn", "language:ss", "language:st", "language:stq", "language:su", "language:sv", "language:sw", "language:szl", "language:szy", "language:ta", "language:tay", "language:tcy", "language:te", "language:tet", "language:tg", "language:th", "language:ti", "language:tk", "language:tl", "language:tly", "language:tn", "language:to", "language:tpi", "language:tr", "language:trv", "language:ts", "language:tt", "language:tum", "language:tw", "language:ty", "language:tyv", "language:udm", "language:ug", "language:uk", "language:ur", "language:uz", "language:ve", "language:vec", "language:vep", "language:vi", "language:vls", "language:vo", "language:vro", "language:wa", "language:war", "language:wo", "language:wuu", "language:xal", "language:xh", "language:xmf", "language:yi", "language:yo", "language:yue", "language:za", "language:zea", "language:zgh", "language:zh", "language:zu", "license:cc-by-sa-3.0", "license:gfdl", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["ab", "ace", "ady", "af", "alt", "am", "ami", "an", "ang", "anp", "ar", "arc", "ary", "arz", "as", "ast", "atj", "av", "avk", "awa", "ay", "az", "azb", "ba", "ban", "bar", "bbc", "bcl", "be", "bg", "bh", "bi", "bjn", "blk", "bm", "bn", "bo", "bpy", "br", "bs", "bug", "bxr", "ca", "cbk", "cdo", "ce", "ceb", "ch", "chr", "chy", "ckb", "co", "cr", "crh", "cs", "csb", "cu", "cv", "cy", "da", "dag", "de", "dga", "din", "diq", "dsb", "dty", "dv", "dz", "ee", "el", "eml", "en", "eo", "es", "et", "eu", "ext", "fa", "fat", "ff", "fi", "fj", "fo", "fon", "fr", "frp", "frr", "fur", "fy", "ga", "gag", "gan", "gcr", "gd", "gl", "glk", "gn", "gom", "gor", "got", "gpe", "gsw", "gu", "guc", "gur", "guw", "gv", "ha", "hak", "haw", "hbs", "he", "hi", "hif", "hr", "hsb", "ht", "hu", "hy", "hyw", "ia", "id", "ie", "ig", "ik", "ilo", "inh", "io", "is", "it", "iu", "ja", "jam", "jbo", "jv", "ka", "kaa", "kab", "kbd", "kbp", "kcg", "kg", "ki", "kk", "kl", "km", "kn", "ko", "koi", "krc", "ks", "ksh", "ku", "kv", "kw", "ky", "la", "lad", "lb", "lbe", "lez", "lfn", "lg", "li", "lij", "lld", "lmo", "ln", "lo", "lt", "ltg", "lv", "lzh", "mad", "mai", "map", "mdf", "mg", "mhr", "mi", "min", "mk", "ml", "mn", "mni", "mnw", "mr", "mrj", "ms", "mt", "mwl", "my", "myv", "mzn", "nah", "nan", "nap", "nds", "ne", "new", "nia", "nl", "nn", "no", "nov", "nqo", "nrf", "nso", "nv", "ny", "oc", "olo", "om", "or", "os", "pa", "pag", "pam", "pap", "pcd", "pcm", "pdc", "pfl", "pi", "pih", "pl", "pms", "pnb", "pnt", "ps", "pt", "pwn", "qu", "rm", "rmy", "rn", "ro", "ru", "rue", "rup", "rw", "sa", "sah", "sat", "sc", "scn", "sco", "sd", "se", "sg", "sgs", "shi", "shn", "si", "sk", "skr", "sl", "sm", "smn", "sn", "so", "sq", "sr", "srn", "ss", "st", "stq", "su", "sv", "sw", "szl", "szy", "ta", "tay", "tcy", "te", "tet", "tg", "th", "ti", "tk", "tl", "tly", "tn", "to", "tpi", "tr", "trv", "ts", "tt", "tum", "tw", "ty", "tyv", "udm", "ug", "uk", "ur", "uz", "ve", "vec", "vep", "vi", "vls", "vo", "vro", "wa", "war", "wo", "wuu", "xal", "xh", "xmf", "yi", "yo", "yue", "za", "zea", "zgh", "zh", "zu"], "license": ["cc-by-sa-3.0", "gfdl"], "size_categories": ["n<1K", "1K<n<10K", "10K<n<100K", "100K<n<1M", "1M<n<10M"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "configs": [{"config_name": "20231101.ab", "data_files": [{"split": "train", "path": "20231101.ab/train-*"}]}, {"config_name": "20231101.ace", "data_files": [{"split": "train", "path": "20231101.ace/train-*"}]}, {"config_name": "20231101.ady", "data_files": [{"split": "train", "path": "20231101.ady/train-*"}]}, {"config_name": "20231101.af", "data_files": [{"split": "train", "path": "20231101.af/train-*"}]}, {"config_name": "20231101.als", "data_files": [{"split": "train", "path": "20231101.als/train-*"}]}, {"config_name": "20231101.alt", "data_files": [{"split": "train", "path": "20231101.alt/train-*"}]}, {"config_name": "20231101.am", "data_files": [{"split": "train", "path": "20231101.am/train-*"}]}, {"config_name": "20231101.ami", "data_files": [{"split": "train", "path": "20231101.ami/train-*"}]}, {"config_name": "20231101.an", "data_files": [{"split": "train", "path": "20231101.an/train-*"}]}, {"config_name": "20231101.ang", "data_files": [{"split": "train", "path": "20231101.ang/train-*"}]}, {"config_name": "20231101.anp", "data_files": [{"split": "train", "path": "20231101.anp/train-*"}]}, {"config_name": "20231101.ar", "data_files": [{"split": "train", "path": "20231101.ar/train-*"}]}, 
{"config_name": "20231101.arc", "data_files": [{"split": "train", "path": "20231101.arc/train-*"}]}, {"config_name": "20231101.ary", "data_files": [{"split": "train", "path": "20231101.ary/train-*"}]}, {"config_name": "20231101.arz", "data_files": [{"split": "train", "path": "20231101.arz/train-*"}]}, {"config_name": "20231101.as", "data_files": [{"split": "train", "path": "20231101.as/train-*"}]}, {"config_name": "20231101.ast", "data_files": [{"split": "train", "path": "20231101.ast/train-*"}]}, {"config_name": "20231101.atj", "data_files": [{"split": "train", "path": "20231101.atj/train-*"}]}, {"config_name": "20231101.av", "data_files": [{"split": "train", "path": "20231101.av/train-*"}]}, {"config_name": "20231101.avk", "data_files": [{"split": "train", "path": "20231101.avk/train-*"}]}, {"config_name": "20231101.awa", "data_files": [{"split": "train", "path": "20231101.awa/train-*"}]}, {"config_name": "20231101.ay", "data_files": [{"split": "train", "path": "20231101.ay/train-*"}]}, {"config_name": "20231101.az", "data_files": [{"split": "train", "path": "20231101.az/train-*"}]}, {"config_name": "20231101.azb", "data_files": [{"split": "train", "path": "20231101.azb/train-*"}]}, {"config_name": "20231101.ba", "data_files": [{"split": "train", "path": "20231101.ba/train-*"}]}, {"config_name": "20231101.ban", "data_files": [{"split": "train", "path": "20231101.ban/train-*"}]}, {"config_name": "20231101.bar", "data_files": [{"split": "train", "path": "20231101.bar/train-*"}]}, {"config_name": "20231101.bat-smg", "data_files": [{"split": "train", "path": "20231101.bat-smg/train-*"}]}, {"config_name": "20231101.bcl", "data_files": [{"split": "train", "path": "20231101.bcl/train-*"}]}, {"config_name": "20231101.be", "data_files": [{"split": "train", "path": "20231101.be/train-*"}]}, {"config_name": "20231101.be-x-old", "data_files": [{"split": "train", "path": "20231101.be-x-old/train-*"}]}, {"config_name": "20231101.bg", "data_files": [{"split": "train", "path": "20231101.bg/train-*"}]}, {"config_name": "20231101.bh", "data_files": [{"split": "train", "path": "20231101.bh/train-*"}]}, {"config_name": "20231101.bi", "data_files": [{"split": "train", "path": "20231101.bi/train-*"}]}, {"config_name": "20231101.bjn", "data_files": [{"split": "train", "path": "20231101.bjn/train-*"}]}, {"config_name": "20231101.blk", "data_files": [{"split": "train", "path": "20231101.blk/train-*"}]}, {"config_name": "20231101.bm", "data_files": [{"split": "train", "path": "20231101.bm/train-*"}]}, {"config_name": "20231101.bn", "data_files": [{"split": "train", "path": "20231101.bn/train-*"}]}, {"config_name": "20231101.bo", "data_files": [{"split": "train", "path": "20231101.bo/train-*"}]}, {"config_name": "20231101.bpy", "data_files": [{"split": "train", "path": "20231101.bpy/train-*"}]}, {"config_name": "20231101.br", "data_files": [{"split": "train", "path": "20231101.br/train-*"}]}, {"config_name": "20231101.bs", "data_files": [{"split": "train", "path": "20231101.bs/train-*"}]}, {"config_name": "20231101.bug", "data_files": [{"split": "train", "path": "20231101.bug/train-*"}]}, {"config_name": "20231101.bxr", "data_files": [{"split": "train", "path": "20231101.bxr/train-*"}]}, {"config_name": "20231101.ca", "data_files": [{"split": "train", "path": "20231101.ca/train-*"}]}, {"config_name": "20231101.cbk-zam", "data_files": [{"split": "train", "path": "20231101.cbk-zam/train-*"}]}, {"config_name": "20231101.cdo", "data_files": [{"split": "train", "path": "20231101.cdo/train-*"}]}, {"config_name": 
"20231101.ce", "data_files": [{"split": "train", "path": "20231101.ce/train-*"}]}, {"config_name": "20231101.ceb", "data_files": [{"split": "train", "path": "20231101.ceb/train-*"}]}, {"config_name": "20231101.ch", "data_files": [{"split": "train", "path": "20231101.ch/train-*"}]}, {"config_name": "20231101.chr", "data_files": [{"split": "train", "path": "20231101.chr/train-*"}]}, {"config_name": "20231101.chy", "data_files": [{"split": "train", "path": "20231101.chy/train-*"}]}, {"config_name": "20231101.ckb", "data_files": [{"split": "train", "path": "20231101.ckb/train-*"}]}, {"config_name": "20231101.co", "data_files": [{"split": "train", "path": "20231101.co/train-*"}]}, {"config_name": "20231101.cr", "data_files": [{"split": "train", "path": "20231101.cr/train-*"}]}, {"config_name": "20231101.crh", "data_files": [{"split": "train", "path": "20231101.crh/train-*"}]}, {"config_name": "20231101.cs", "data_files": [{"split": "train", "path": "20231101.cs/train-*"}]}, {"config_name": "20231101.csb", "data_files": [{"split": "train", "path": "20231101.csb/train-*"}]}, {"config_name": "20231101.cu", "data_files": [{"split": "train", "path": "20231101.cu/train-*"}]}, {"config_name": "20231101.cv", "data_files": [{"split": "train", "path": "20231101.cv/train-*"}]}, {"config_name": "20231101.cy", "data_files": [{"split": "train", "path": "20231101.cy/train-*"}]}, {"config_name": "20231101.da", "data_files": [{"split": "train", "path": "20231101.da/train-*"}]}, {"config_name": "20231101.dag", "data_files": [{"split": "train", "path": "20231101.dag/train-*"}]}, {"config_name": "20231101.de", "data_files": [{"split": "train", "path": "20231101.de/train-*"}]}, {"config_name": "20231101.din", "data_files": [{"split": "train", "path": "20231101.din/train-*"}]}, {"config_name": "20231101.diq", "data_files": [{"split": "train", "path": "20231101.diq/train-*"}]}, {"config_name": "20231101.dsb", "data_files": [{"split": "train", "path": "20231101.dsb/train-*"}]}, {"config_name": "20231101.dty", "data_files": [{"split": "train", "path": "20231101.dty/train-*"}]}, {"config_name": "20231101.dv", "data_files": [{"split": "train", "path": "20231101.dv/train-*"}]}, {"config_name": "20231101.dz", "data_files": [{"split": "train", "path": "20231101.dz/train-*"}]}, {"config_name": "20231101.ee", "data_files": [{"split": "train", "path": "20231101.ee/train-*"}]}, {"config_name": "20231101.el", "data_files": [{"split": "train", "path": "20231101.el/train-*"}]}, {"config_name": "20231101.eml", "data_files": [{"split": "train", "path": "20231101.eml/train-*"}]}, {"config_name": "20231101.en", "data_files": [{"split": "train", "path": "20231101.en/train-*"}]}, {"config_name": "20231101.eo", "data_files": [{"split": "train", "path": "20231101.eo/train-*"}]}, {"config_name": "20231101.es", "data_files": [{"split": "train", "path": "20231101.es/train-*"}]}, {"config_name": "20231101.et", "data_files": [{"split": "train", "path": "20231101.et/train-*"}]}, {"config_name": "20231101.eu", "data_files": [{"split": "train", "path": "20231101.eu/train-*"}]}, {"config_name": "20231101.ext", "data_files": [{"split": "train", "path": "20231101.ext/train-*"}]}, {"config_name": "20231101.fa", "data_files": [{"split": "train", "path": "20231101.fa/train-*"}]}, {"config_name": "20231101.fat", "data_files": [{"split": "train", "path": "20231101.fat/train-*"}]}, {"config_name": "20231101.ff", "data_files": [{"split": "train", "path": "20231101.ff/train-*"}]}, {"config_name": "20231101.fi", "data_files": [{"split": "train", "path": 
"20231101.fi/train-*"}]}, {"config_name": "20231101.fiu-vro", "data_files": [{"split": "train", "path": "20231101.fiu-vro/train-*"}]}, {"config_name": "20231101.fj", "data_files": [{"split": "train", "path": "20231101.fj/train-*"}]}, {"config_name": "20231101.fo", "data_files": [{"split": "train", "path": "20231101.fo/train-*"}]}, {"config_name": "20231101.fon", "data_files": [{"split": "train", "path": "20231101.fon/train-*"}]}, {"config_name": "20231101.fr", "data_files": [{"split": "train", "path": "20231101.fr/train-*"}]}, {"config_name": "20231101.frp", "data_files": [{"split": "train", "path": "20231101.frp/train-*"}]}, {"config_name": "20231101.frr", "data_files": [{"split": "train", "path": "20231101.frr/train-*"}]}, {"config_name": "20231101.fur", "data_files": [{"split": "train", "path": "20231101.fur/train-*"}]}, {"config_name": "20231101.fy", "data_files": [{"split": "train", "path": "20231101.fy/train-*"}]}, {"config_name": "20231101.ga", "data_files": [{"split": "train", "path": "20231101.ga/train-*"}]}, {"config_name": "20231101.gag", "data_files": [{"split": "train", "path": "20231101.gag/train-*"}]}, {"config_name": "20231101.gan", "data_files": [{"split": "train", "path": "20231101.gan/train-*"}]}, {"config_name": "20231101.gcr", "data_files": [{"split": "train", "path": "20231101.gcr/train-*"}]}, {"config_name": "20231101.gd", "data_files": [{"split": "train", "path": "20231101.gd/train-*"}]}, {"config_name": "20231101.gl", "data_files": [{"split": "train", "path": "20231101.gl/train-*"}]}, {"config_name": "20231101.glk", "data_files": [{"split": "train", "path": "20231101.glk/train-*"}]}, {"config_name": "20231101.gn", "data_files": [{"split": "train", "path": "20231101.gn/train-*"}]}, {"config_name": "20231101.gom", "data_files": [{"split": "train", "path": "20231101.gom/train-*"}]}, {"config_name": "20231101.gor", "data_files": [{"split": "train", "path": "20231101.gor/train-*"}]}, {"config_name": "20231101.got", "data_files": [{"split": "train", "path": "20231101.got/train-*"}]}, {"config_name": "20231101.gpe", "data_files": [{"split": "train", "path": "20231101.gpe/train-*"}]}, {"config_name": "20231101.gu", "data_files": [{"split": "train", "path": "20231101.gu/train-*"}]}, {"config_name": "20231101.guc", "data_files": [{"split": "train", "path": "20231101.guc/train-*"}]}, {"config_name": "20231101.gur", "data_files": [{"split": "train", "path": "20231101.gur/train-*"}]}, {"config_name": "20231101.guw", "data_files": [{"split": "train", "path": "20231101.guw/train-*"}]}, {"config_name": "20231101.gv", "data_files": [{"split": "train", "path": "20231101.gv/train-*"}]}, {"config_name": "20231101.ha", "data_files": [{"split": "train", "path": "20231101.ha/train-*"}]}, {"config_name": "20231101.hak", "data_files": [{"split": "train", "path": "20231101.hak/train-*"}]}, {"config_name": "20231101.haw", "data_files": [{"split": "train", "path": "20231101.haw/train-*"}]}, {"config_name": "20231101.he", "data_files": [{"split": "train", "path": "20231101.he/train-*"}]}, {"config_name": "20231101.hi", "data_files": [{"split": "train", "path": "20231101.hi/train-*"}]}, {"config_name": "20231101.hif", "data_files": [{"split": "train", "path": "20231101.hif/train-*"}]}, {"config_name": "20231101.hr", "data_files": [{"split": "train", "path": "20231101.hr/train-*"}]}, {"config_name": "20231101.hsb", "data_files": [{"split": "train", "path": "20231101.hsb/train-*"}]}, {"config_name": "20231101.ht", "data_files": [{"split": "train", "path": "20231101.ht/train-*"}]}, {"config_name": 
"20231101.hu", "data_files": [{"split": "train", "path": "20231101.hu/train-*"}]}, {"config_name": "20231101.hy", "data_files": [{"split": "train", "path": "20231101.hy/train-*"}]}, {"config_name": "20231101.hyw", "data_files": [{"split": "train", "path": "20231101.hyw/train-*"}]}, {"config_name": "20231101.ia", "data_files": [{"split": "train", "path": "20231101.ia/train-*"}]}, {"config_name": "20231101.id", "data_files": [{"split": "train", "path": "20231101.id/train-*"}]}, {"config_name": "20231101.ie", "data_files": [{"split": "train", "path": "20231101.ie/train-*"}]}, {"config_name": "20231101.ig", "data_files": [{"split": "train", "path": "20231101.ig/train-*"}]}, {"config_name": "20231101.ik", "data_files": [{"split": "train", "path": "20231101.ik/train-*"}]}, {"config_name": "20231101.ilo", "data_files": [{"split": "train", "path": "20231101.ilo/train-*"}]}, {"config_name": "20231101.inh", "data_files": [{"split": "train", "path": "20231101.inh/train-*"}]}, {"config_name": "20231101.io", "data_files": [{"split": "train", "path": "20231101.io/train-*"}]}, {"config_name": "20231101.is", "data_files": [{"split": "train", "path": "20231101.is/train-*"}]}, {"config_name": "20231101.it", "data_files": [{"split": "train", "path": "20231101.it/train-*"}]}, {"config_name": "20231101.iu", "data_files": [{"split": "train", "path": "20231101.iu/train-*"}]}, {"config_name": "20231101.ja", "data_files": [{"split": "train", "path": "20231101.ja/train-*"}]}, {"config_name": "20231101.jam", "data_files": [{"split": "train", "path": "20231101.jam/train-*"}]}, {"config_name": "20231101.jbo", "data_files": [{"split": "train", "path": "20231101.jbo/train-*"}]}, {"config_name": "20231101.jv", "data_files": [{"split": "train", "path": "20231101.jv/train-*"}]}, {"config_name": "20231101.ka", "data_files": [{"split": "train", "path": "20231101.ka/train-*"}]}, {"config_name": "20231101.kaa", "data_files": [{"split": "train", "path": "20231101.kaa/train-*"}]}, {"config_name": "20231101.kab", "data_files": [{"split": "train", "path": "20231101.kab/train-*"}]}, {"config_name": "20231101.kbd", "data_files": [{"split": "train", "path": "20231101.kbd/train-*"}]}, {"config_name": "20231101.kbp", "data_files": [{"split": "train", "path": "20231101.kbp/train-*"}]}, {"config_name": "20231101.kcg", "data_files": [{"split": "train", "path": "20231101.kcg/train-*"}]}, {"config_name": "20231101.kg", "data_files": [{"split": "train", "path": "20231101.kg/train-*"}]}, {"config_name": "20231101.ki", "data_files": [{"split": "train", "path": "20231101.ki/train-*"}]}, {"config_name": "20231101.kk", "data_files": [{"split": "train", "path": "20231101.kk/train-*"}]}, {"config_name": "20231101.kl", "data_files": [{"split": "train", "path": "20231101.kl/train-*"}]}, {"config_name": "20231101.km", "data_files": [{"split": "train", "path": "20231101.km/train-*"}]}, {"config_name": "20231101.kn", "data_files": [{"split": "train", "path": "20231101.kn/train-*"}]}, {"config_name": "20231101.ko", "data_files": [{"split": "train", "path": "20231101.ko/train-*"}]}, {"config_name": "20231101.koi", "data_files": [{"split": "train", "path": "20231101.koi/train-*"}]}, {"config_name": "20231101.krc", "data_files": [{"split": "train", "path": "20231101.krc/train-*"}]}, {"config_name": "20231101.ks", "data_files": [{"split": "train", "path": "20231101.ks/train-*"}]}, {"config_name": "20231101.ksh", "data_files": [{"split": "train", "path": "20231101.ksh/train-*"}]}, {"config_name": "20231101.ku", "data_files": [{"split": "train", "path": 
"20231101.ku/train-*"}]}, {"config_name": "20231101.kv", "data_files": [{"split": "train", "path": "20231101.kv/train-*"}]}, {"config_name": "20231101.kw", "data_files": [{"split": "train", "path": "20231101.kw/train-*"}]}, {"config_name": "20231101.ky", "data_files": [{"split": "train", "path": "20231101.ky/train-*"}]}, {"config_name": "20231101.la", "data_files": [{"split": "train", "path": "20231101.la/train-*"}]}, {"config_name": "20231101.lad", "data_files": [{"split": "train", "path": "20231101.lad/train-*"}]}, {"config_name": "20231101.lb", "data_files": [{"split": "train", "path": "20231101.lb/train-*"}]}, {"config_name": "20231101.lbe", "data_files": [{"split": "train", "path": "20231101.lbe/train-*"}]}, {"config_name": "20231101.lez", "data_files": [{"split": "train", "path": "20231101.lez/train-*"}]}, {"config_name": "20231101.lfn", "data_files": [{"split": "train", "path": "20231101.lfn/train-*"}]}, {"config_name": "20231101.lg", "data_files": [{"split": "train", "path": "20231101.lg/train-*"}]}, {"config_name": "20231101.li", "data_files": [{"split": "train", "path": "20231101.li/train-*"}]}, {"config_name": "20231101.lij", "data_files": [{"split": "train", "path": "20231101.lij/train-*"}]}, {"config_name": "20231101.lld", "data_files": [{"split": "train", "path": "20231101.lld/train-*"}]}, {"config_name": "20231101.lmo", "data_files": [{"split": "train", "path": "20231101.lmo/train-*"}]}, {"config_name": "20231101.ln", "data_files": [{"split": "train", "path": "20231101.ln/train-*"}]}, {"config_name": "20231101.lo", "data_files": [{"split": "train", "path": "20231101.lo/train-*"}]}, {"config_name": "20231101.lt", "data_files": [{"split": "train", "path": "20231101.lt/train-*"}]}, {"config_name": "20231101.ltg", "data_files": [{"split": "train", "path": "20231101.ltg/train-*"}]}, {"config_name": "20231101.lv", "data_files": [{"split": "train", "path": "20231101.lv/train-*"}]}, {"config_name": "20231101.mad", "data_files": [{"split": "train", "path": "20231101.mad/train-*"}]}, {"config_name": "20231101.mai", "data_files": [{"split": "train", "path": "20231101.mai/train-*"}]}, {"config_name": "20231101.map-bms", "data_files": [{"split": "train", "path": "20231101.map-bms/train-*"}]}, {"config_name": "20231101.mdf", "data_files": [{"split": "train", "path": "20231101.mdf/train-*"}]}, {"config_name": "20231101.mg", "data_files": [{"split": "train", "path": "20231101.mg/train-*"}]}, {"config_name": "20231101.mhr", "data_files": [{"split": "train", "path": "20231101.mhr/train-*"}]}, {"config_name": "20231101.mi", "data_files": [{"split": "train", "path": "20231101.mi/train-*"}]}, {"config_name": "20231101.min", "data_files": [{"split": "train", "path": "20231101.min/train-*"}]}, {"config_name": "20231101.mk", "data_files": [{"split": "train", "path": "20231101.mk/train-*"}]}, {"config_name": "20231101.ml", "data_files": [{"split": "train", "path": "20231101.ml/train-*"}]}, {"config_name": "20231101.mn", "data_files": [{"split": "train", "path": "20231101.mn/train-*"}]}, {"config_name": "20231101.mni", "data_files": [{"split": "train", "path": "20231101.mni/train-*"}]}, {"config_name": "20231101.mnw", "data_files": [{"split": "train", "path": "20231101.mnw/train-*"}]}, {"config_name": "20231101.mr", "data_files": [{"split": "train", "path": "20231101.mr/train-*"}]}, {"config_name": "20231101.mrj", "data_files": [{"split": "train", "path": "20231101.mrj/train-*"}]}, {"config_name": "20231101.ms", "data_files": [{"split": "train", "path": "20231101.ms/train-*"}]}, {"config_name": 
"20231101.mt", "data_files": [{"split": "train", "path": "20231101.mt/train-*"}]}, {"config_name": "20231101.mwl", "data_files": [{"split": "train", "path": "20231101.mwl/train-*"}]}, {"config_name": "20231101.my", "data_files": [{"split": "train", "path": "20231101.my/train-*"}]}, {"config_name": "20231101.myv", "data_files": [{"split": "train", "path": "20231101.myv/train-*"}]}, {"config_name": "20231101.mzn", "data_files": [{"split": "train", "path": "20231101.mzn/train-*"}]}, {"config_name": "20231101.nah", "data_files": [{"split": "train", "path": "20231101.nah/train-*"}]}, {"config_name": "20231101.nap", "data_files": [{"split": "train", "path": "20231101.nap/train-*"}]}, {"config_name": "20231101.nds", "data_files": [{"split": "train", "path": "20231101.nds/train-*"}]}, {"config_name": "20231101.nds-nl", "data_files": [{"split": "train", "path": "20231101.nds-nl/train-*"}]}, {"config_name": "20231101.ne", "data_files": [{"split": "train", "path": "20231101.ne/train-*"}]}, {"config_name": "20231101.new", "data_files": [{"split": "train", "path": "20231101.new/train-*"}]}, {"config_name": "20231101.nia", "data_files": [{"split": "train", "path": "20231101.nia/train-*"}]}, {"config_name": "20231101.nl", "data_files": [{"split": "train", "path": "20231101.nl/train-*"}]}, {"config_name": "20231101.nn", "data_files": [{"split": "train", "path": "20231101.nn/train-*"}]}, {"config_name": "20231101.no", "data_files": [{"split": "train", "path": "20231101.no/train-*"}]}, {"config_name": "20231101.nov", "data_files": [{"split": "train", "path": "20231101.nov/train-*"}]}, {"config_name": "20231101.nqo", "data_files": [{"split": "train", "path": "20231101.nqo/train-*"}]}, {"config_name": "20231101.nrm", "data_files": [{"split": "train", "path": "20231101.nrm/train-*"}]}, {"config_name": "20231101.nso", "data_files": [{"split": "train", "path": "20231101.nso/train-*"}]}, {"config_name": "20231101.nv", "data_files": [{"split": "train", "path": "20231101.nv/train-*"}]}, {"config_name": "20231101.ny", "data_files": [{"split": "train", "path": "20231101.ny/train-*"}]}, {"config_name": "20231101.oc", "data_files": [{"split": "train", "path": "20231101.oc/train-*"}]}, {"config_name": "20231101.olo", "data_files": [{"split": "train", "path": "20231101.olo/train-*"}]}, {"config_name": "20231101.om", "data_files": [{"split": "train", "path": "20231101.om/train-*"}]}, {"config_name": "20231101.or", "data_files": [{"split": "train", "path": "20231101.or/train-*"}]}, {"config_name": "20231101.os", "data_files": [{"split": "train", "path": "20231101.os/train-*"}]}, {"config_name": "20231101.pa", "data_files": [{"split": "train", "path": "20231101.pa/train-*"}]}, {"config_name": "20231101.pag", "data_files": [{"split": "train", "path": "20231101.pag/train-*"}]}, {"config_name": "20231101.pam", "data_files": [{"split": "train", "path": "20231101.pam/train-*"}]}, {"config_name": "20231101.pap", "data_files": [{"split": "train", "path": "20231101.pap/train-*"}]}, {"config_name": "20231101.pcd", "data_files": [{"split": "train", "path": "20231101.pcd/train-*"}]}, {"config_name": "20231101.pcm", "data_files": [{"split": "train", "path": "20231101.pcm/train-*"}]}, {"config_name": "20231101.pdc", "data_files": [{"split": "train", "path": "20231101.pdc/train-*"}]}, {"config_name": "20231101.pfl", "data_files": [{"split": "train", "path": "20231101.pfl/train-*"}]}, {"config_name": "20231101.pi", "data_files": [{"split": "train", "path": "20231101.pi/train-*"}]}, {"config_name": "20231101.pih", "data_files": [{"split": 
"train", "path": "20231101.pih/train-*"}]}, {"config_name": "20231101.pl", "data_files": [{"split": "train", "path": "20231101.pl/train-*"}]}, {"config_name": "20231101.pms", "data_files": [{"split": "train", "path": "20231101.pms/train-*"}]}, {"config_name": "20231101.pnb", "data_files": [{"split": "train", "path": "20231101.pnb/train-*"}]}, {"config_name": "20231101.pnt", "data_files": [{"split": "train", "path": "20231101.pnt/train-*"}]}, {"config_name": "20231101.ps", "data_files": [{"split": "train", "path": "20231101.ps/train-*"}]}, {"config_name": "20231101.pt", "data_files": [{"split": "train", "path": "20231101.pt/train-*"}]}, {"config_name": "20231101.pwn", "data_files": [{"split": "train", "path": "20231101.pwn/train-*"}]}, {"config_name": "20231101.qu", "data_files": [{"split": "train", "path": "20231101.qu/train-*"}]}, {"config_name": "20231101.rm", "data_files": [{"split": "train", "path": "20231101.rm/train-*"}]}, {"config_name": "20231101.rmy", "data_files": [{"split": "train", "path": "20231101.rmy/train-*"}]}, {"config_name": "20231101.rn", "data_files": [{"split": "train", "path": "20231101.rn/train-*"}]}, {"config_name": "20231101.ro", "data_files": [{"split": "train", "path": "20231101.ro/train-*"}]}, {"config_name": "20231101.roa-rup", "data_files": [{"split": "train", "path": "20231101.roa-rup/train-*"}]}, {"config_name": "20231101.roa-tara", "data_files": [{"split": "train", "path": "20231101.roa-tara/train-*"}]}, {"config_name": "20231101.ru", "data_files": [{"split": "train", "path": "20231101.ru/train-*"}]}, {"config_name": "20231101.rue", "data_files": [{"split": "train", "path": "20231101.rue/train-*"}]}, {"config_name": "20231101.rw", "data_files": [{"split": "train", "path": "20231101.rw/train-*"}]}, {"config_name": "20231101.sa", "data_files": [{"split": "train", "path": "20231101.sa/train-*"}]}, {"config_name": "20231101.sah", "data_files": [{"split": "train", "path": "20231101.sah/train-*"}]}, {"config_name": "20231101.sat", "data_files": [{"split": "train", "path": "20231101.sat/train-*"}]}, {"config_name": "20231101.sc", "data_files": [{"split": "train", "path": "20231101.sc/train-*"}]}, {"config_name": "20231101.scn", "data_files": [{"split": "train", "path": "20231101.scn/train-*"}]}, {"config_name": "20231101.sco", "data_files": [{"split": "train", "path": "20231101.sco/train-*"}]}, {"config_name": "20231101.sd", "data_files": [{"split": "train", "path": "20231101.sd/train-*"}]}, {"config_name": "20231101.se", "data_files": [{"split": "train", "path": "20231101.se/train-*"}]}, {"config_name": "20231101.sg", "data_files": [{"split": "train", "path": "20231101.sg/train-*"}]}, {"config_name": "20231101.sh", "data_files": [{"split": "train", "path": "20231101.sh/train-*"}]}, {"config_name": "20231101.shi", "data_files": [{"split": "train", "path": "20231101.shi/train-*"}]}, {"config_name": "20231101.shn", "data_files": [{"split": "train", "path": "20231101.shn/train-*"}]}, {"config_name": "20231101.si", "data_files": [{"split": "train", "path": "20231101.si/train-*"}]}, {"config_name": "20231101.simple", "data_files": [{"split": "train", "path": "20231101.simple/train-*"}]}, {"config_name": "20231101.sk", "data_files": [{"split": "train", "path": "20231101.sk/train-*"}]}, {"config_name": "20231101.skr", "data_files": [{"split": "train", "path": "20231101.skr/train-*"}]}, {"config_name": "20231101.sl", "data_files": [{"split": "train", "path": "20231101.sl/train-*"}]}, {"config_name": "20231101.sm", "data_files": [{"split": "train", "path": 
"20231101.sm/train-*"}]}, {"config_name": "20231101.smn", "data_files": [{"split": "train", "path": "20231101.smn/train-*"}]}, {"config_name": "20231101.sn", "data_files": [{"split": "train", "path": "20231101.sn/train-*"}]}, {"config_name": "20231101.so", "data_files": [{"split": "train", "path": "20231101.so/train-*"}]}, {"config_name": "20231101.sq", "data_files": [{"split": "train", "path": "20231101.sq/train-*"}]}, {"config_name": "20231101.sr", "data_files": [{"split": "train", "path": "20231101.sr/train-*"}]}, {"config_name": "20231101.srn", "data_files": [{"split": "train", "path": "20231101.srn/train-*"}]}, {"config_name": "20231101.ss", "data_files": [{"split": "train", "path": "20231101.ss/train-*"}]}, {"config_name": "20231101.st", "data_files": [{"split": "train", "path": "20231101.st/train-*"}]}, {"config_name": "20231101.stq", "data_files": [{"split": "train", "path": "20231101.stq/train-*"}]}, {"config_name": "20231101.su", "data_files": [{"split": "train", "path": "20231101.su/train-*"}]}, {"config_name": "20231101.sv", "data_files": [{"split": "train", "path": "20231101.sv/train-*"}]}, {"config_name": "20231101.sw", "data_files": [{"split": "train", "path": "20231101.sw/train-*"}]}, {"config_name": "20231101.szl", "data_files": [{"split": "train", "path": "20231101.szl/train-*"}]}, {"config_name": "20231101.szy", "data_files": [{"split": "train", "path": "20231101.szy/train-*"}]}, {"config_name": "20231101.ta", "data_files": [{"split": "train", "path": "20231101.ta/train-*"}]}, {"config_name": "20231101.tay", "data_files": [{"split": "train", "path": "20231101.tay/train-*"}]}, {"config_name": "20231101.tcy", "data_files": [{"split": "train", "path": "20231101.tcy/train-*"}]}, {"config_name": "20231101.te", "data_files": [{"split": "train", "path": "20231101.te/train-*"}]}, {"config_name": "20231101.tet", "data_files": [{"split": "train", "path": "20231101.tet/train-*"}]}, {"config_name": "20231101.tg", "data_files": [{"split": "train", "path": "20231101.tg/train-*"}]}, {"config_name": "20231101.th", "data_files": [{"split": "train", "path": "20231101.th/train-*"}]}, {"config_name": "20231101.ti", "data_files": [{"split": "train", "path": "20231101.ti/train-*"}]}, {"config_name": "20231101.tk", "data_files": [{"split": "train", "path": "20231101.tk/train-*"}]}, {"config_name": "20231101.tl", "data_files": [{"split": "train", "path": "20231101.tl/train-*"}]}, {"config_name": "20231101.tly", "data_files": [{"split": "train", "path": "20231101.tly/train-*"}]}, {"config_name": "20231101.tn", "data_files": [{"split": "train", "path": "20231101.tn/train-*"}]}, {"config_name": "20231101.to", "data_files": [{"split": "train", "path": "20231101.to/train-*"}]}, {"config_name": "20231101.tpi", "data_files": [{"split": "train", "path": "20231101.tpi/train-*"}]}, {"config_name": "20231101.tr", "data_files": [{"split": "train", "path": "20231101.tr/train-*"}]}, {"config_name": "20231101.trv", "data_files": [{"split": "train", "path": "20231101.trv/train-*"}]}, {"config_name": "20231101.ts", "data_files": [{"split": "train", "path": "20231101.ts/train-*"}]}, {"config_name": "20231101.tt", "data_files": [{"split": "train", "path": "20231101.tt/train-*"}]}, {"config_name": "20231101.tum", "data_files": [{"split": "train", "path": "20231101.tum/train-*"}]}, {"config_name": "20231101.tw", "data_files": [{"split": "train", "path": "20231101.tw/train-*"}]}, {"config_name": "20231101.ty", "data_files": [{"split": "train", "path": "20231101.ty/train-*"}]}, {"config_name": "20231101.tyv", 
"data_files": [{"split": "train", "path": "20231101.tyv/train-*"}]}, {"config_name": "20231101.udm", "data_files": [{"split": "train", "path": "20231101.udm/train-*"}]}, {"config_name": "20231101.ug", "data_files": [{"split": "train", "path": "20231101.ug/train-*"}]}, {"config_name": "20231101.uk", "data_files": [{"split": "train", "path": "20231101.uk/train-*"}]}, {"config_name": "20231101.ur", "data_files": [{"split": "train", "path": "20231101.ur/train-*"}]}, {"config_name": "20231101.uz", "data_files": [{"split": "train", "path": "20231101.uz/train-*"}]}, {"config_name": "20231101.ve", "data_files": [{"split": "train", "path": "20231101.ve/train-*"}]}, {"config_name": "20231101.vec", "data_files": [{"split": "train", "path": "20231101.vec/train-*"}]}, {"config_name": "20231101.vep", "data_files": [{"split": "train", "path": "20231101.vep/train-*"}]}, {"config_name": "20231101.vi", "data_files": [{"split": "train", "path": "20231101.vi/train-*"}]}, {"config_name": "20231101.vls", "data_files": [{"split": "train", "path": "20231101.vls/train-*"}]}, {"config_name": "20231101.vo", "data_files": [{"split": "train", "path": "20231101.vo/train-*"}]}, {"config_name": "20231101.wa", "data_files": [{"split": "train", "path": "20231101.wa/train-*"}]}, {"config_name": "20231101.war", "data_files": [{"split": "train", "path": "20231101.war/train-*"}]}, {"config_name": "20231101.wo", "data_files": [{"split": "train", "path": "20231101.wo/train-*"}]}, {"config_name": "20231101.wuu", "data_files": [{"split": "train", "path": "20231101.wuu/train-*"}]}, {"config_name": "20231101.xal", "data_files": [{"split": "train", "path": "20231101.xal/train-*"}]}, {"config_name": "20231101.xh", "data_files": [{"split": "train", "path": "20231101.xh/train-*"}]}, {"config_name": "20231101.xmf", "data_files": [{"split": "train", "path": "20231101.xmf/train-*"}]}, {"config_name": "20231101.yi", "data_files": [{"split": "train", "path": "20231101.yi/train-*"}]}, {"config_name": "20231101.yo", "data_files": [{"split": "train", "path": "20231101.yo/train-*"}]}, {"config_name": "20231101.za", "data_files": [{"split": "train", "path": "20231101.za/train-*"}]}, {"config_name": "20231101.zea", "data_files": [{"split": "train", "path": "20231101.zea/train-*"}]}, {"config_name": "20231101.zh", "data_files": [{"split": "train", "path": "20231101.zh/train-*"}]}, {"config_name": "20231101.zh-classical", "data_files": [{"split": "train", "path": "20231101.zh-classical/train-*"}]}, {"config_name": "20231101.zh-min-nan", "data_files": [{"split": "train", "path": "20231101.zh-min-nan/train-*"}]}, {"config_name": "20231101.zh-yue", "data_files": [{"split": "train", "path": "20231101.zh-yue/train-*"}]}, {"config_name": "20231101.zu", "data_files": [{"split": "train", "path": "20231101.zu/train-*"}]}], "dataset_info": [{"config_name": "20231101.ab", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4334455, "num_examples": 6152}], "download_size": 1237796, "dataset_size": 4334455}, {"config_name": "20231101.ace", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5065801, "num_examples": 13003}], "download_size": 1574258, "dataset_size": 5065801}, {"config_name": "20231101.ady", "features": [{"name": "id", "dtype": "string"}, {"name": 
"url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 765030, "num_examples": 706}], "download_size": 347450, "dataset_size": 765030}, {"config_name": "20231101.af", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 226672176, "num_examples": 112518}], "download_size": 124485544, "dataset_size": 226672176}, {"config_name": "20231101.als", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 81450196, "num_examples": 30013}], "download_size": 49452211, "dataset_size": 81450196}, {"config_name": "20231101.alt", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6819963, "num_examples": 1087}], "download_size": 2910477, "dataset_size": 6819963}, {"config_name": "20231101.am", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24218002, "num_examples": 13906}], "download_size": 10720027, "dataset_size": 24218002}, {"config_name": "20231101.ami", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4460174, "num_examples": 1628}], "download_size": 2261859, "dataset_size": 4460174}, {"config_name": "20231101.an", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 57572050, "num_examples": 44249}], "download_size": 29573020, "dataset_size": 57572050}, {"config_name": "20231101.ang", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2913906, "num_examples": 4121}], "download_size": 1789811, "dataset_size": 2913906}, {"config_name": "20231101.anp", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9226211, "num_examples": 2749}], "download_size": 3355979, "dataset_size": 9226211}, {"config_name": "20231101.ar", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3124486159, "num_examples": 1219201}], "download_size": 1323304271, "dataset_size": 3124486159}, {"config_name": "20231101.arc", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 849731, "num_examples": 1936}], "download_size": 369584, "dataset_size": 849731}, {"config_name": "20231101.ary", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": 
"string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12049878, "num_examples": 8087}], "download_size": 4672257, "dataset_size": 12049878}, {"config_name": "20231101.arz", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1402294447, "num_examples": 1620194}], "download_size": 317231585, "dataset_size": 1402294447}, {"config_name": "20231101.as", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 90312333, "num_examples": 12338}], "download_size": 34581561, "dataset_size": 90312333}, {"config_name": "20231101.ast", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 470575521, "num_examples": 133419}], "download_size": 271196430, "dataset_size": 470575521}, {"config_name": "20231101.atj", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1012467, "num_examples": 1971}], "download_size": 513962, "dataset_size": 1012467}, {"config_name": "20231101.av", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6084045, "num_examples": 3426}], "download_size": 2573436, "dataset_size": 6084045}, {"config_name": "20231101.avk", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32119428, "num_examples": 28353}], "download_size": 7984474, "dataset_size": 32119428}, {"config_name": "20231101.awa", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3703396, "num_examples": 3679}], "download_size": 1269824, "dataset_size": 3703396}, {"config_name": "20231101.ay", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4395813, "num_examples": 5384}], "download_size": 1756131, "dataset_size": 4395813}, {"config_name": "20231101.az", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 433663157, "num_examples": 196158}], "download_size": 230064038, "dataset_size": 433663157}, {"config_name": "20231101.azb", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 187041147, "num_examples": 243376}], "download_size": 46739926, "dataset_size": 187041147}, {"config_name": "20231101.ba", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": 
"string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 297738837, "num_examples": 63319}], "download_size": 122595805, "dataset_size": 297738837}, {"config_name": "20231101.ban", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18012727, "num_examples": 20986}], "download_size": 6715876, "dataset_size": 18012727}, {"config_name": "20231101.bar", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 36317102, "num_examples": 27096}], "download_size": 21799389, "dataset_size": 36317102}, {"config_name": "20231101.bat-smg", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7212849, "num_examples": 17221}], "download_size": 3348765, "dataset_size": 7212849}, {"config_name": "20231101.bcl", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20394331, "num_examples": 15743}], "download_size": 11369234, "dataset_size": 20394331}, {"config_name": "20231101.be", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 624718980, "num_examples": 236165}], "download_size": 284921288, "dataset_size": 624718980}, {"config_name": "20231101.be-x-old", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 252510447, "num_examples": 84361}], "download_size": 114318588, "dataset_size": 252510447}, {"config_name": "20231101.bg", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1103334425, "num_examples": 294275}], "download_size": 512344058, "dataset_size": 1103334425}, {"config_name": "20231101.bh", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16675295, "num_examples": 8612}], "download_size": 5880458, "dataset_size": 16675295}, {"config_name": "20231101.bi", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 404249, "num_examples": 1548}], "download_size": 203610, "dataset_size": 404249}, {"config_name": "20231101.bjn", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6884860, "num_examples": 10519}], "download_size": 3323032, "dataset_size": 6884860}, {"config_name": "20231101.blk", "features": [{"name": "id", "dtype": "string"}, {"name": "url", 
"dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 26566991, "num_examples": 2946}], "download_size": 8028430, "dataset_size": 26566991}, {"config_name": "20231101.bm", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 623659, "num_examples": 1258}], "download_size": 343812, "dataset_size": 623659}, {"config_name": "20231101.bn", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 962624238, "num_examples": 143069}], "download_size": 343885999, "dataset_size": 962624238}, {"config_name": "20231101.bo", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 132723880, "num_examples": 12881}], "download_size": 38851784, "dataset_size": 132723880}, {"config_name": "20231101.bpy", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 42975314, "num_examples": 25165}], "download_size": 6568483, "dataset_size": 42975314}, {"config_name": "20231101.br", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 85635744, "num_examples": 84340}], "download_size": 49768597, "dataset_size": 85635744}, {"config_name": "20231101.bs", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 193734399, "num_examples": 92596}], "download_size": 107858627, "dataset_size": 193734399}, {"config_name": "20231101.bug", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3434889, "num_examples": 15880}], "download_size": 817034, "dataset_size": 3434889}, {"config_name": "20231101.bxr", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6687172, "num_examples": 2791}], "download_size": 3078699, "dataset_size": 6687172}, {"config_name": "20231101.ca", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1958810542, "num_examples": 737409}], "download_size": 1116799343, "dataset_size": 1958810542}, {"config_name": "20231101.cbk-zam", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2061944, "num_examples": 3285}], "download_size": 825899, "dataset_size": 2061944}, {"config_name": "20231101.cdo", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": 
"string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5109207, "num_examples": 16449}], "download_size": 1982914, "dataset_size": 5109207}, {"config_name": "20231101.ce", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 730387049, "num_examples": 601271}], "download_size": 88393330, "dataset_size": 730387049}, {"config_name": "20231101.ceb", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4568256711, "num_examples": 6122708}], "download_size": 828085216, "dataset_size": 4568256711}, {"config_name": "20231101.ch", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 178002, "num_examples": 576}], "download_size": 89277, "dataset_size": 178002}, {"config_name": "20231101.chr", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 767618, "num_examples": 1113}], "download_size": 343140, "dataset_size": 767618}, {"config_name": "20231101.chy", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 148139, "num_examples": 802}], "download_size": 75865, "dataset_size": 148139}, {"config_name": "20231101.ckb", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 107150420, "num_examples": 52024}], "download_size": 42964544, "dataset_size": 107150420}, {"config_name": "20231101.co", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11104243, "num_examples": 7799}], "download_size": 5794731, "dataset_size": 11104243}, {"config_name": "20231101.cr", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 57257, "num_examples": 187}], "download_size": 36081, "dataset_size": 57257}, {"config_name": "20231101.crh", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9689171, "num_examples": 27691}], "download_size": 3654461, "dataset_size": 9689171}, {"config_name": "20231101.cs", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1566286962, "num_examples": 534044}], "download_size": 976484249, "dataset_size": 1566286962}, {"config_name": "20231101.csb", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", 
"dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3748643, "num_examples": 5480}], "download_size": 2055233, "dataset_size": 3748643}, {"config_name": "20231101.cu", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 981592, "num_examples": 1235}], "download_size": 398252, "dataset_size": 981592}, {"config_name": "20231101.cv", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 81873026, "num_examples": 51863}], "download_size": 29640641, "dataset_size": 81873026}, {"config_name": "20231101.cy", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 305837783, "num_examples": 279455}], "download_size": 112257456, "dataset_size": 305837783}, {"config_name": "20231101.da", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 547068330, "num_examples": 295347}], "download_size": 327688122, "dataset_size": 547068330}, {"config_name": "20231101.dag", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21618973, "num_examples": 10071}], "download_size": 9026986, "dataset_size": 21618973}, {"config_name": "20231101.de", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9622925305, "num_examples": 2845308}], "download_size": 5771317942, "dataset_size": 9622925305}, {"config_name": "20231101.din", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 564398, "num_examples": 512}], "download_size": 340530, "dataset_size": 564398}, {"config_name": "20231101.diq", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19671441, "num_examples": 41775}], "download_size": 7616839, "dataset_size": 19671441}, {"config_name": "20231101.dsb", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3315228, "num_examples": 3379}], "download_size": 1931937, "dataset_size": 3315228}, {"config_name": "20231101.dty", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7030648, "num_examples": 3632}], "download_size": 2521250, "dataset_size": 7030648}, {"config_name": "20231101.dv", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, 
{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13934393, "num_examples": 4352}], "download_size": 5283133, "dataset_size": 13934393}, {"config_name": "20231101.dz", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8855969, "num_examples": 788}], "download_size": 2583520, "dataset_size": 8855969}, {"config_name": "20231101.ee", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 898491, "num_examples": 1181}], "download_size": 492813, "dataset_size": 898491}, {"config_name": "20231101.el", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1345589075, "num_examples": 226834}], "download_size": 637372489, "dataset_size": 1345589075}, {"config_name": "20231101.eml", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3625415, "num_examples": 12961}], "download_size": 1689575, "dataset_size": 3625415}, {"config_name": "20231101.en", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20200062385, "num_examples": 6407814}], "download_size": 11630929031, "dataset_size": 20200062385}, {"config_name": "20231101.eo", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 523113804, "num_examples": 344851}], "download_size": 297738138, "dataset_size": 523113804}, {"config_name": "20231101.es", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6033536133, "num_examples": 1841155}], "download_size": 3493595869, "dataset_size": 6033536133}, {"config_name": "20231101.et", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 440177170, "num_examples": 240397}], "download_size": 265444734, "dataset_size": 440177170}, {"config_name": "20231101.eu", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 565567318, "num_examples": 416347}], "download_size": 270355505, "dataset_size": 565567318}, {"config_name": "20231101.ext", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4389633, "num_examples": 3785}], "download_size": 2761099, "dataset_size": 4389633}, {"config_name": "20231101.fa", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": 
"string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1899154938, "num_examples": 979869}], "download_size": 759368283, "dataset_size": 1899154938}, {"config_name": "20231101.fat", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2032812, "num_examples": 1122}], "download_size": 1124684, "dataset_size": 2032812}, {"config_name": "20231101.ff", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1867995, "num_examples": 2419}], "download_size": 1087702, "dataset_size": 1867995}, {"config_name": "20231101.fi", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1146146663, "num_examples": 561598}], "download_size": 680512230, "dataset_size": 1146146663}, {"config_name": "20231101.fiu-vro", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4636361, "num_examples": 6590}], "download_size": 2434159, "dataset_size": 4636361}, {"config_name": "20231101.fj", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 604791, "num_examples": 1294}], "download_size": 328059, "dataset_size": 604791}, {"config_name": "20231101.fo", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15415249, "num_examples": 14080}], "download_size": 8857239, "dataset_size": 15415249}, {"config_name": "20231101.fon", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 592216, "num_examples": 705}], "download_size": 317444, "dataset_size": 592216}, {"config_name": "20231101.fr", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8065794826, "num_examples": 2564646}], "download_size": 4614488286, "dataset_size": 8065794826}, {"config_name": "20231101.frp", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3676441, "num_examples": 5766}], "download_size": 1914046, "dataset_size": 3676441}, {"config_name": "20231101.frr", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10819914, "num_examples": 18666}], "download_size": 5317694, "dataset_size": 10819914}, {"config_name": "20231101.fur", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, 
{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4090412, "num_examples": 4001}], "download_size": 2421238, "dataset_size": 4090412}, {"config_name": "20231101.fy", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 134196708, "num_examples": 52416}], "download_size": 76002257, "dataset_size": 134196708}, {"config_name": "20231101.ga", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 60640820, "num_examples": 59156}], "download_size": 34136733, "dataset_size": 60640820}, {"config_name": "20231101.gag", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2428849, "num_examples": 2968}], "download_size": 1331866, "dataset_size": 2428849}, {"config_name": "20231101.gan", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2915229, "num_examples": 6743}], "download_size": 1508844, "dataset_size": 2915229}, {"config_name": "20231101.gcr", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2338277, "num_examples": 2399}], "download_size": 1345482, "dataset_size": 2338277}, {"config_name": "20231101.gd", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14051607, "num_examples": 15979}], "download_size": 7190137, "dataset_size": 14051607}, {"config_name": "20231101.gl", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 493905881, "num_examples": 200092}], "download_size": 291104907, "dataset_size": 493905881}, {"config_name": "20231101.glk", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6086185, "num_examples": 7049}], "download_size": 2382997, "dataset_size": 6086185}, {"config_name": "20231101.gn", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6921948, "num_examples": 5519}], "download_size": 3806548, "dataset_size": 6921948}, {"config_name": "20231101.gom", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 30889533, "num_examples": 4259}], "download_size": 11306217, "dataset_size": 30889533}, {"config_name": "20231101.gor", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": 
"string"}], "splits": [{"name": "train", "num_bytes": 6369540, "num_examples": 15359}], "download_size": 2101154, "dataset_size": 6369540}, {"config_name": "20231101.got", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1533770, "num_examples": 1013}], "download_size": 636307, "dataset_size": 1533770}, {"config_name": "20231101.gpe", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2017667, "num_examples": 1110}], "download_size": 1141261, "dataset_size": 2017667}, {"config_name": "20231101.gu", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 121282557, "num_examples": 30445}], "download_size": 39554078, "dataset_size": 121282557}, {"config_name": "20231101.guc", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 978923, "num_examples": 679}], "download_size": 578311, "dataset_size": 978923}, {"config_name": "20231101.gur", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2325435, "num_examples": 1383}], "download_size": 1068954, "dataset_size": 2325435}, {"config_name": "20231101.guw", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1913143, "num_examples": 1312}], "download_size": 1042328, "dataset_size": 1913143}, {"config_name": "20231101.gv", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6307253, "num_examples": 6206}], "download_size": 3347095, "dataset_size": 6307253}, {"config_name": "20231101.ha", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 77906472, "num_examples": 36492}], "download_size": 43131815, "dataset_size": 77906472}, {"config_name": "20231101.hak", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4523680, "num_examples": 10246}], "download_size": 1878558, "dataset_size": 4523680}, {"config_name": "20231101.haw", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1677790, "num_examples": 2612}], "download_size": 696781, "dataset_size": 1677790}, {"config_name": "20231101.he", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", 
"num_bytes": 1950200381, "num_examples": 333874}], "download_size": 979183998, "dataset_size": 1950200381}, {"config_name": "20231101.hi", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 672817362, "num_examples": 163093}], "download_size": 237834604, "dataset_size": 672817362}, {"config_name": "20231101.hif", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5685329, "num_examples": 10986}], "download_size": 2715682, "dataset_size": 5685329}, {"config_name": "20231101.hr", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 443636903, "num_examples": 202848}], "download_size": 275245343, "dataset_size": 443636903}, {"config_name": "20231101.hsb", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15667118, "num_examples": 13957}], "download_size": 7437491, "dataset_size": 15667118}, {"config_name": "20231101.ht", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 55088040, "num_examples": 70159}], "download_size": 21993952, "dataset_size": 55088040}, {"config_name": "20231101.hu", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1515899113, "num_examples": 532427}], "download_size": 904857314, "dataset_size": 1515899113}, {"config_name": "20231101.hy", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1179459973, "num_examples": 303036}], "download_size": 490121120, "dataset_size": 1179459973}, {"config_name": "20231101.hyw", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 59564550, "num_examples": 11725}], "download_size": 27450541, "dataset_size": 59564550}, {"config_name": "20231101.ia", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16409449, "num_examples": 28247}], "download_size": 8237640, "dataset_size": 16409449}, {"config_name": "20231101.id", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1125928594, "num_examples": 665622}], "download_size": 583801799, "dataset_size": 1125928594}, {"config_name": "20231101.ie", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": 
[{"name": "train", "num_bytes": 6737711, "num_examples": 11877}], "download_size": 3019044, "dataset_size": 6737711}, {"config_name": "20231101.ig", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 66086115, "num_examples": 22908}], "download_size": 34663540, "dataset_size": 66086115}, {"config_name": "20231101.ik", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 199773, "num_examples": 846}], "download_size": 115758, "dataset_size": 199773}, {"config_name": "20231101.ilo", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16854494, "num_examples": 15371}], "download_size": 7352572, "dataset_size": 16854494}, {"config_name": "20231101.inh", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2727253, "num_examples": 2123}], "download_size": 1279524, "dataset_size": 2727253}, {"config_name": "20231101.io", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 38735196, "num_examples": 40930}], "download_size": 17106040, "dataset_size": 38735196}, {"config_name": "20231101.is", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 87856729, "num_examples": 57453}], "download_size": 52286137, "dataset_size": 87856729}, {"config_name": "20231101.it", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4924856310, "num_examples": 1833639}], "download_size": 2931265519, "dataset_size": 4924856310}, {"config_name": "20231101.iu", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 291185, "num_examples": 562}], "download_size": 136987, "dataset_size": 291185}, {"config_name": "20231101.ja", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7039610767, "num_examples": 1389467}], "download_size": 3941998526, "dataset_size": 7039610767}, {"config_name": "20231101.jam", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1142348, "num_examples": 1780}], "download_size": 702664, "dataset_size": 1142348}, {"config_name": "20231101.jbo", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", 
"num_bytes": 2523538, "num_examples": 1394}], "download_size": 890356, "dataset_size": 2523538}, {"config_name": "20231101.jv", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 72786688, "num_examples": 73380}], "download_size": 36852134, "dataset_size": 72786688}, {"config_name": "20231101.ka", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 699872960, "num_examples": 169602}], "download_size": 239987665, "dataset_size": 699872960}, {"config_name": "20231101.kaa", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5139436, "num_examples": 4074}], "download_size": 2913134, "dataset_size": 5139436}, {"config_name": "20231101.kab", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4392542, "num_examples": 5830}], "download_size": 2580584, "dataset_size": 4392542}, {"config_name": "20231101.kbd", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3014575, "num_examples": 1670}], "download_size": 1304580, "dataset_size": 3014575}, {"config_name": "20231101.kbp", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3584563, "num_examples": 1931}], "download_size": 1806400, "dataset_size": 3584563}, {"config_name": "20231101.kcg", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 914665, "num_examples": 1151}], "download_size": 513904, "dataset_size": 914665}, {"config_name": "20231101.kg", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 390163, "num_examples": 1329}], "download_size": 209059, "dataset_size": 390163}, {"config_name": "20231101.ki", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 760980, "num_examples": 1668}], "download_size": 427003, "dataset_size": 760980}, {"config_name": "20231101.kk", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 497917145, "num_examples": 238615}], "download_size": 180750520, "dataset_size": 497917145}, {"config_name": "20231101.kl", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 313658, "num_examples": 
301}], "download_size": 193719, "dataset_size": 313658}, {"config_name": "20231101.km", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 103252582, "num_examples": 11994}], "download_size": 35567417, "dataset_size": 103252582}, {"config_name": "20231101.kn", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 402848197, "num_examples": 31437}], "download_size": 147156434, "dataset_size": 402848197}, {"config_name": "20231101.ko", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1412099944, "num_examples": 647897}], "download_size": 782677061, "dataset_size": 1412099944}, {"config_name": "20231101.koi", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5103799, "num_examples": 3504}], "download_size": 1888392, "dataset_size": 5103799}, {"config_name": "20231101.krc", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4589808, "num_examples": 2100}], "download_size": 2022144, "dataset_size": 4589808}, {"config_name": "20231101.ks", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2868186, "num_examples": 4307}], "download_size": 1094458, "dataset_size": 2868186}, {"config_name": "20231101.ksh", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3117003, "num_examples": 2945}], "download_size": 2009928, "dataset_size": 3117003}, {"config_name": "20231101.ku", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 44523131, "num_examples": 63076}], "download_size": 22938233, "dataset_size": 44523131}, {"config_name": "20231101.kv", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9245577, "num_examples": 5595}], "download_size": 3690978, "dataset_size": 9245577}, {"config_name": "20231101.kw", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4687165, "num_examples": 6995}], "download_size": 2711398, "dataset_size": 4687165}, {"config_name": "20231101.ky", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 166911089, "num_examples": 79438}], "download_size": 
63947035, "dataset_size": 166911089}, {"config_name": "20231101.la", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 141080163, "num_examples": 138263}], "download_size": 76588430, "dataset_size": 141080163}, {"config_name": "20231101.lad", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4901343, "num_examples": 3663}], "download_size": 2754531, "dataset_size": 4901343}, {"config_name": "20231101.lb", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 88826996, "num_examples": 62414}], "download_size": 50515020, "dataset_size": 88826996}, {"config_name": "20231101.lbe", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 745140, "num_examples": 1279}], "download_size": 304394, "dataset_size": 745140}, {"config_name": "20231101.lez", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9794637, "num_examples": 4264}], "download_size": 3864848, "dataset_size": 9794637}, {"config_name": "20231101.lfn", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8870685, "num_examples": 4832}], "download_size": 5207546, "dataset_size": 8870685}, {"config_name": "20231101.lg", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6891539, "num_examples": 4048}], "download_size": 3708097, "dataset_size": 6891539}, {"config_name": "20231101.li", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 29633678, "num_examples": 14849}], "download_size": 17727918, "dataset_size": 29633678}, {"config_name": "20231101.lij", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11448686, "num_examples": 11203}], "download_size": 6255409, "dataset_size": 11448686}, {"config_name": "20231101.lld", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 50163974, "num_examples": 180677}], "download_size": 13866243, "dataset_size": 50163974}, {"config_name": "20231101.lmo", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 43496783, "num_examples": 73510}], "download_size": 19142356, 
"dataset_size": 43496783}, {"config_name": "20231101.ln", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2035050, "num_examples": 3534}], "download_size": 1122138, "dataset_size": 2035050}, {"config_name": "20231101.lo", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15283258, "num_examples": 5014}], "download_size": 5646554, "dataset_size": 15283258}, {"config_name": "20231101.lt", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 336559824, "num_examples": 211292}], "download_size": 194873569, "dataset_size": 336559824}, {"config_name": "20231101.ltg", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 915364, "num_examples": 1070}], "download_size": 530299, "dataset_size": 915364}, {"config_name": "20231101.lv", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 227272112, "num_examples": 123413}], "download_size": 129739227, "dataset_size": 227272112}, {"config_name": "20231101.mad", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1596836, "num_examples": 1192}], "download_size": 908630, "dataset_size": 1596836}, {"config_name": "20231101.mai", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21562856, "num_examples": 14714}], "download_size": 6180231, "dataset_size": 21562856}, {"config_name": "20231101.map-bms", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5341068, "num_examples": 13580}], "download_size": 2377123, "dataset_size": 5341068}, {"config_name": "20231101.mdf", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4694770, "num_examples": 4257}], "download_size": 1725294, "dataset_size": 4694770}, {"config_name": "20231101.mg", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 73767229, "num_examples": 96316}], "download_size": 22117304, "dataset_size": 73767229}, {"config_name": "20231101.mhr", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19249450, "num_examples": 11347}], "download_size": 6902162, "dataset_size": 
19249450}, {"config_name": "20231101.mi", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4169094, "num_examples": 7919}], "download_size": 1044444, "dataset_size": 4169094}, {"config_name": "20231101.min", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 118995918, "num_examples": 227143}], "download_size": 25691303, "dataset_size": 118995918}, {"config_name": "20231101.mk", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 651422351, "num_examples": 139559}], "download_size": 271265486, "dataset_size": 651422351}, {"config_name": "20231101.ml", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 494135127, "num_examples": 85791}], "download_size": 183071274, "dataset_size": 494135127}, {"config_name": "20231101.mn", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 91943210, "num_examples": 24048}], "download_size": 41521786, "dataset_size": 91943210}, {"config_name": "20231101.mni", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9820483, "num_examples": 10894}], "download_size": 2208525, "dataset_size": 9820483}, {"config_name": "20231101.mnw", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 47237206, "num_examples": 3295}], "download_size": 13765461, "dataset_size": 47237206}, {"config_name": "20231101.mr", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 261879018, "num_examples": 94133}], "download_size": 81991233, "dataset_size": 261879018}, {"config_name": "20231101.mrj", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8732281, "num_examples": 10542}], "download_size": 3283618, "dataset_size": 8732281}, {"config_name": "20231101.ms", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 423352360, "num_examples": 368628}], "download_size": 210149264, "dataset_size": 423352360}, {"config_name": "20231101.mt", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32009639, "num_examples": 5743}], "download_size": 18686521, "dataset_size": 
32009639}, {"config_name": "20231101.mwl", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19353725, "num_examples": 4500}], "download_size": 11521563, "dataset_size": 19353725}, {"config_name": "20231101.my", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 314417700, "num_examples": 109310}], "download_size": 85497205, "dataset_size": 314417700}, {"config_name": "20231101.myv", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11145865, "num_examples": 7958}], "download_size": 4600620, "dataset_size": 11145865}, {"config_name": "20231101.mzn", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16335757, "num_examples": 18717}], "download_size": 5419390, "dataset_size": 16335757}, {"config_name": "20231101.nah", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2503320, "num_examples": 6218}], "download_size": 1191779, "dataset_size": 2503320}, {"config_name": "20231101.nap", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6395706, "num_examples": 14884}], "download_size": 3188122, "dataset_size": 6395706}, {"config_name": "20231101.nds", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 92990126, "num_examples": 84285}], "download_size": 48106879, "dataset_size": 92990126}, {"config_name": "20231101.nds-nl", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13582403, "num_examples": 7847}], "download_size": 8354427, "dataset_size": 13582403}, {"config_name": "20231101.ne", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 109032486, "num_examples": 32885}], "download_size": 37548833, "dataset_size": 109032486}, {"config_name": "20231101.new", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 159095610, "num_examples": 73003}], "download_size": 20517810, "dataset_size": 159095610}, {"config_name": "20231101.nia", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2117902, "num_examples": 1714}], "download_size": 1086670, "dataset_size": 2117902}, 
{"config_name": "20231101.nl", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2646316266, "num_examples": 2135977}], "download_size": 1436843432, "dataset_size": 2646316266}, {"config_name": "20231101.nn", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 237467406, "num_examples": 167653}], "download_size": 134751873, "dataset_size": 237467406}, {"config_name": "20231101.no", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1033188011, "num_examples": 617937}], "download_size": 590970350, "dataset_size": 1033188011}, {"config_name": "20231101.nov", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 965640, "num_examples": 1693}], "download_size": 493500, "dataset_size": 965640}, {"config_name": "20231101.nqo", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8261058, "num_examples": 1580}], "download_size": 3508645, "dataset_size": 8261058}, {"config_name": "20231101.nrm", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3216817, "num_examples": 4902}], "download_size": 1507257, "dataset_size": 3216817}, {"config_name": "20231101.nso", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2796467, "num_examples": 8650}], "download_size": 936349, "dataset_size": 2796467}, {"config_name": "20231101.nv", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16993060, "num_examples": 22460}], "download_size": 3304031, "dataset_size": 16993060}, {"config_name": "20231101.ny", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1691825, "num_examples": 1129}], "download_size": 938621, "dataset_size": 1691825}, {"config_name": "20231101.oc", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 120092607, "num_examples": 89101}], "download_size": 64043588, "dataset_size": 120092607}, {"config_name": "20231101.olo", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3173332, "num_examples": 4640}], "download_size": 1724315, "dataset_size": 3173332}, {"config_name": 
"20231101.om", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3604768, "num_examples": 1970}], "download_size": 1982849, "dataset_size": 3604768}, {"config_name": "20231101.or", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 75078226, "num_examples": 17375}], "download_size": 26706212, "dataset_size": 75078226}, {"config_name": "20231101.os", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13182881, "num_examples": 17663}], "download_size": 5572799, "dataset_size": 13182881}, {"config_name": "20231101.pa", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 212972877, "num_examples": 51423}], "download_size": 81452929, "dataset_size": 212972877}, {"config_name": "20231101.pag", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1391816, "num_examples": 2665}], "download_size": 455808, "dataset_size": 1391816}, {"config_name": "20231101.pam", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8294902, "num_examples": 9006}], "download_size": 4277038, "dataset_size": 8294902}, {"config_name": "20231101.pap", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4251480, "num_examples": 3520}], "download_size": 2435005, "dataset_size": 4251480}, {"config_name": "20231101.pcd", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5704321, "num_examples": 5717}], "download_size": 3145572, "dataset_size": 5704321}, {"config_name": "20231101.pcm", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1886987, "num_examples": 1238}], "download_size": 1160762, "dataset_size": 1886987}, {"config_name": "20231101.pdc", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1225978, "num_examples": 2176}], "download_size": 698254, "dataset_size": 1225978}, {"config_name": "20231101.pfl", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3694464, "num_examples": 2762}], "download_size": 1971214, "dataset_size": 3694464}, {"config_name": "20231101.pi", "features": [{"name": 
"id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1144100, "num_examples": 3057}], "download_size": 200764, "dataset_size": 1144100}, {"config_name": "20231101.pih", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 278139, "num_examples": 934}], "download_size": 177092, "dataset_size": 278139}, {"config_name": "20231101.pl", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2950148809, "num_examples": 1587721}], "download_size": 1765059986, "dataset_size": 2950148809}, {"config_name": "20231101.pms", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 34340217, "num_examples": 67980}], "download_size": 12008880, "dataset_size": 34340217}, {"config_name": "20231101.pnb", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 304117649, "num_examples": 72307}], "download_size": 133266242, "dataset_size": 304117649}, {"config_name": "20231101.pnt", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 630636, "num_examples": 533}], "download_size": 275639, "dataset_size": 630636}, {"config_name": "20231101.ps", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 114259737, "num_examples": 20529}], "download_size": 53312545, "dataset_size": 114259737}, {"config_name": "20231101.pt", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2758783436, "num_examples": 1112246}], "download_size": 1579641059, "dataset_size": 2758783436}, {"config_name": "20231101.pwn", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 811954, "num_examples": 408}], "download_size": 444109, "dataset_size": 811954}, {"config_name": "20231101.qu", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16828457, "num_examples": 24196}], "download_size": 7688106, "dataset_size": 16828457}, {"config_name": "20231101.rm", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18053014, "num_examples": 3822}], "download_size": 10483970, "dataset_size": 18053014}, {"config_name": "20231101.rmy", "features": [{"name": "id", "dtype": 
"string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 611778, "num_examples": 1279}], "download_size": 356457, "dataset_size": 611778}, {"config_name": "20231101.rn", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 530318, "num_examples": 819}], "download_size": 301252, "dataset_size": 530318}, {"config_name": "20231101.ro", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 847410736, "num_examples": 442389}], "download_size": 466937380, "dataset_size": 847410736}, {"config_name": "20231101.roa-rup", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1687829, "num_examples": 1432}], "download_size": 951677, "dataset_size": 1687829}, {"config_name": "20231101.roa-tara", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7470331, "num_examples": 9367}], "download_size": 4003095, "dataset_size": 7470331}, {"config_name": "20231101.ru", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10277958919, "num_examples": 1945063}], "download_size": 4876849588, "dataset_size": 10277958919}, {"config_name": "20231101.rue", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13128572, "num_examples": 8759}], "download_size": 6346106, "dataset_size": 13128572}, {"config_name": "20231101.rw", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11898854, "num_examples": 8063}], "download_size": 6623388, "dataset_size": 11898854}, {"config_name": "20231101.sa", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 69854997, "num_examples": 12156}], "download_size": 23850161, "dataset_size": 69854997}, {"config_name": "20231101.sah", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 48562374, "num_examples": 17098}], "download_size": 21675888, "dataset_size": 48562374}, {"config_name": "20231101.sat", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 45247783, "num_examples": 9767}], "download_size": 15428584, "dataset_size": 45247783}, {"config_name": "20231101.sc", "features": [{"name": "id", "dtype": "string"}, 
{"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12776438, "num_examples": 7586}], "download_size": 7711996, "dataset_size": 12776438}, {"config_name": "20231101.scn", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 17685098, "num_examples": 26530}], "download_size": 10223816, "dataset_size": 17685098}, {"config_name": "20231101.sco", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 42808738, "num_examples": 35276}], "download_size": 24287944, "dataset_size": 42808738}, {"config_name": "20231101.sd", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 37021659, "num_examples": 16928}], "download_size": 17591997, "dataset_size": 37021659}, {"config_name": "20231101.se", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3600527, "num_examples": 8043}], "download_size": 1816006, "dataset_size": 3600527}, {"config_name": "20231101.sg", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 140127, "num_examples": 564}], "download_size": 72486, "dataset_size": 140127}, {"config_name": "20231101.sh", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 569225870, "num_examples": 458392}], "download_size": 266379293, "dataset_size": 569225870}, {"config_name": "20231101.shi", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2369002, "num_examples": 1779}], "download_size": 1359828, "dataset_size": 2369002}, {"config_name": "20231101.shn", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 33553593, "num_examples": 13945}], "download_size": 8163231, "dataset_size": 33553593}, {"config_name": "20231101.si", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 138806443, "num_examples": 23065}], "download_size": 54229127, "dataset_size": 138806443}, {"config_name": "20231101.simple", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 291254232, "num_examples": 241787}], "download_size": 156885218, "dataset_size": 291254232}, {"config_name": "20231101.sk", "features": [{"name": "id", "dtype": "string"}, {"name": 
"url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 416804817, "num_examples": 242235}], "download_size": 239513292, "dataset_size": 416804817}, {"config_name": "20231101.skr", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22705446, "num_examples": 5819}], "download_size": 9978607, "dataset_size": 22705446}, {"config_name": "20231101.sl", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 454829910, "num_examples": 183006}], "download_size": 267485569, "dataset_size": 454829910}, {"config_name": "20231101.sm", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 902927, "num_examples": 1151}], "download_size": 492349, "dataset_size": 902927}, {"config_name": "20231101.smn", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5764244, "num_examples": 5383}], "download_size": 2813872, "dataset_size": 5764244}, {"config_name": "20231101.sn", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9790528, "num_examples": 11621}], "download_size": 4979456, "dataset_size": 9790528}, {"config_name": "20231101.so", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13663784, "num_examples": 9021}], "download_size": 7940363, "dataset_size": 13663784}, {"config_name": "20231101.sq", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 208779652, "num_examples": 104854}], "download_size": 116945494, "dataset_size": 208779652}, {"config_name": "20231101.sr", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1721596392, "num_examples": 676605}], "download_size": 697391786, "dataset_size": 1721596392}, {"config_name": "20231101.srn", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 649317, "num_examples": 1219}], "download_size": 215103, "dataset_size": 649317}, {"config_name": "20231101.ss", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1076102, "num_examples": 945}], "download_size": 600997, "dataset_size": 1076102}, {"config_name": "20231101.st", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": 
"string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 968161, "num_examples": 1099}], "download_size": 530165, "dataset_size": 968161}, {"config_name": "20231101.stq", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4942784, "num_examples": 4134}], "download_size": 2884429, "dataset_size": 4942784}, {"config_name": "20231101.su", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 48066965, "num_examples": 61555}], "download_size": 19806020, "dataset_size": 48066965}, {"config_name": "20231101.sv", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2153690744, "num_examples": 2574513}], "download_size": 974261228, "dataset_size": 2153690744}, {"config_name": "20231101.sw", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 73119299, "num_examples": 78587}], "download_size": 35936177, "dataset_size": 73119299}, {"config_name": "20231101.szl", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21439309, "num_examples": 57035}], "download_size": 7347967, "dataset_size": 21439309}, {"config_name": "20231101.szy", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11355780, "num_examples": 4885}], "download_size": 6192815, "dataset_size": 11355780}, {"config_name": "20231101.ta", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 810734099, "num_examples": 160651}], "download_size": 265652020, "dataset_size": 810734099}, {"config_name": "20231101.tay", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2974229, "num_examples": 2747}], "download_size": 1232811, "dataset_size": 2974229}, {"config_name": "20231101.tcy", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12166612, "num_examples": 2202}], "download_size": 4611006, "dataset_size": 12166612}, {"config_name": "20231101.te", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 730376585, "num_examples": 87854}], "download_size": 215097076, "dataset_size": 730376585}, {"config_name": "20231101.tet", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, 
{"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1466200, "num_examples": 1468}], "download_size": 744390, "dataset_size": 1466200}, {"config_name": "20231101.tg", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 148256281, "num_examples": 110962}], "download_size": 49825647, "dataset_size": 148256281}, {"config_name": "20231101.th", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1014547923, "num_examples": 159719}], "download_size": 371916105, "dataset_size": 1014547923}, {"config_name": "20231101.ti", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 729995, "num_examples": 435}], "download_size": 363723, "dataset_size": 729995}, {"config_name": "20231101.tk", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13326412, "num_examples": 7918}], "download_size": 7383654, "dataset_size": 13326412}, {"config_name": "20231101.tl", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 85794472, "num_examples": 45341}], "download_size": 45797527, "dataset_size": 85794472}, {"config_name": "20231101.tly", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2590482, "num_examples": 8086}], "download_size": 1070456, "dataset_size": 2590482}, {"config_name": "20231101.tn", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4380768, "num_examples": 1585}], "download_size": 1708110, "dataset_size": 4380768}, {"config_name": "20231101.to", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1090611, "num_examples": 1887}], "download_size": 518244, "dataset_size": 1090611}, {"config_name": "20231101.tpi", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 460420, "num_examples": 1399}], "download_size": 241908, "dataset_size": 460420}, {"config_name": "20231101.tr", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 997254242, "num_examples": 534988}], "download_size": 552923659, "dataset_size": 997254242}, {"config_name": "20231101.trv", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": 
"string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4971204, "num_examples": 1880}], "download_size": 2706664, "dataset_size": 4971204}, {"config_name": "20231101.ts", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 847032, "num_examples": 785}], "download_size": 455648, "dataset_size": 847032}, {"config_name": "20231101.tt", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 681325421, "num_examples": 501116}], "download_size": 129141056, "dataset_size": 681325421}, {"config_name": "20231101.tum", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13429984, "num_examples": 18708}], "download_size": 5459856, "dataset_size": 13429984}, {"config_name": "20231101.tw", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7982767, "num_examples": 3978}], "download_size": 4118530, "dataset_size": 7982767}, {"config_name": "20231101.ty", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 338743, "num_examples": 1355}], "download_size": 150963, "dataset_size": 338743}, {"config_name": "20231101.tyv", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14324694, "num_examples": 3491}], "download_size": 6528290, "dataset_size": 14324694}, {"config_name": "20231101.udm", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7036113, "num_examples": 5677}], "download_size": 2982821, "dataset_size": 7036113}, {"config_name": "20231101.ug", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 42254159, "num_examples": 8634}], "download_size": 17741860, "dataset_size": 42254159}, {"config_name": "20231101.uk", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4969483901, "num_examples": 1294720}], "download_size": 2276769383, "dataset_size": 4969483901}, {"config_name": "20231101.ur", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 410511855, "num_examples": 200154}], "download_size": 167627869, "dataset_size": 410511855}, {"config_name": "20231101.uz", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": 
"text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 397176774, "num_examples": 246729}], "download_size": 210262652, "dataset_size": 397176774}, {"config_name": "20231101.ve", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 359542, "num_examples": 840}], "download_size": 163318, "dataset_size": 359542}, {"config_name": "20231101.vec", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 37917528, "num_examples": 69268}], "download_size": 16179506, "dataset_size": 37917528}, {"config_name": "20231101.vep", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11643856, "num_examples": 6960}], "download_size": 6423002, "dataset_size": 11643856}, {"config_name": "20231101.vi", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1617830227, "num_examples": 1288680}], "download_size": 729557588, "dataset_size": 1617830227}, {"config_name": "20231101.vls", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11336278, "num_examples": 7872}], "download_size": 6985406, "dataset_size": 11336278}, {"config_name": "20231101.vo", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19521708, "num_examples": 35193}], "download_size": 6582571, "dataset_size": 19521708}, {"config_name": "20231101.wa", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12268826, "num_examples": 12038}], "download_size": 7327616, "dataset_size": 12268826}, {"config_name": "20231101.war", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 467647882, "num_examples": 1266394}], "download_size": 104588442, "dataset_size": 467647882}, {"config_name": "20231101.wo", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3525303, "num_examples": 1746}], "download_size": 2094574, "dataset_size": 3525303}, {"config_name": "20231101.wuu", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 25029545, "num_examples": 43010}], "download_size": 15985963, "dataset_size": 25029545}, {"config_name": "20231101.xal", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", 
"dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1391731, "num_examples": 2295}], "download_size": 507198, "dataset_size": 1391731}, {"config_name": "20231101.xh", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3665998, "num_examples": 1883}], "download_size": 2505472, "dataset_size": 3665998}, {"config_name": "20231101.xmf", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 37712629, "num_examples": 18099}], "download_size": 12948576, "dataset_size": 37712629}, {"config_name": "20231101.yi", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 36038273, "num_examples": 15179}], "download_size": 16218296, "dataset_size": 36038273}, {"config_name": "20231101.yo", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 19081408, "num_examples": 33819}], "download_size": 8861465, "dataset_size": 19081408}, {"config_name": "20231101.za", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1365300, "num_examples": 2993}], "download_size": 666521, "dataset_size": 1365300}, {"config_name": "20231101.zea", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5224563, "num_examples": 6082}], "download_size": 2620396, "dataset_size": 5224563}, {"config_name": "20231101.zh", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2790577882, "num_examples": 1384748}], "download_size": 1721150260, "dataset_size": 2790577882}, {"config_name": "20231101.zh-classical", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14869227, "num_examples": 12708}], "download_size": 10098073, "dataset_size": 14869227}, {"config_name": "20231101.zh-min-nan", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 153672031, "num_examples": 432798}], "download_size": 37122048, "dataset_size": 153672031}, {"config_name": "20231101.zh-yue", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 109936351, "num_examples": 134140}], "download_size": 64950815, "dataset_size": 109936351}, {"config_name": "20231101.zu", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": 
"text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7088246, "num_examples": 11561}], "download_size": 3792429, "dataset_size": 7088246}], "language_bcp47": ["be-tarask", "en-simple"]}
2024-01-09T09:40:51+00:00
[]
[ "ab", "ace", "ady", "af", "alt", "am", "ami", "an", "ang", "anp", "ar", "arc", "ary", "arz", "as", "ast", "atj", "av", "avk", "awa", "ay", "az", "azb", "ba", "ban", "bar", "bbc", "bcl", "be", "bg", "bh", "bi", "bjn", "blk", "bm", "bn", "bo", "bpy", "br", "bs", "bug", "bxr", "ca", "cbk", "cdo", "ce", "ceb", "ch", "chr", "chy", "ckb", "co", "cr", "crh", "cs", "csb", "cu", "cv", "cy", "da", "dag", "de", "dga", "din", "diq", "dsb", "dty", "dv", "dz", "ee", "el", "eml", "en", "eo", "es", "et", "eu", "ext", "fa", "fat", "ff", "fi", "fj", "fo", "fon", "fr", "frp", "frr", "fur", "fy", "ga", "gag", "gan", "gcr", "gd", "gl", "glk", "gn", "gom", "gor", "got", "gpe", "gsw", "gu", "guc", "gur", "guw", "gv", "ha", "hak", "haw", "hbs", "he", "hi", "hif", "hr", "hsb", "ht", "hu", "hy", "hyw", "ia", "id", "ie", "ig", "ik", "ilo", "inh", "io", "is", "it", "iu", "ja", "jam", "jbo", "jv", "ka", "kaa", "kab", "kbd", "kbp", "kcg", "kg", "ki", "kk", "kl", "km", "kn", "ko", "koi", "krc", "ks", "ksh", "ku", "kv", "kw", "ky", "la", "lad", "lb", "lbe", "lez", "lfn", "lg", "li", "lij", "lld", "lmo", "ln", "lo", "lt", "ltg", "lv", "lzh", "mad", "mai", "map", "mdf", "mg", "mhr", "mi", "min", "mk", "ml", "mn", "mni", "mnw", "mr", "mrj", "ms", "mt", "mwl", "my", "myv", "mzn", "nah", "nan", "nap", "nds", "ne", "new", "nia", "nl", "nn", "no", "nov", "nqo", "nrf", "nso", "nv", "ny", "oc", "olo", "om", "or", "os", "pa", "pag", "pam", "pap", "pcd", "pcm", "pdc", "pfl", "pi", "pih", "pl", "pms", "pnb", "pnt", "ps", "pt", "pwn", "qu", "rm", "rmy", "rn", "ro", "ru", "rue", "rup", "rw", "sa", "sah", "sat", "sc", "scn", "sco", "sd", "se", "sg", "sgs", "shi", "shn", "si", "sk", "skr", "sl", "sm", "smn", "sn", "so", "sq", "sr", "srn", "ss", "st", "stq", "su", "sv", "sw", "szl", "szy", "ta", "tay", "tcy", "te", "tet", "tg", "th", "ti", "tk", "tl", "tly", "tn", "to", "tpi", "tr", "trv", "ts", "tt", "tum", "tw", "ty", "tyv", "udm", "ug", "uk", "ur", "uz", "ve", "vec", "vep", "vi", "vls", "vo", "vro", "wa", "war", "wo", "wuu", "xal", "xh", "xmf", "yi", "yo", "yue", "za", "zea", "zgh", "zh", "zu" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #size_categories-n<1K #size_categories-1K<n<10K #size_categories-10K<n<100K #size_categories-100K<n<1M #size_categories-1M<n<10M #language-Abkhazian #language-Achinese #language-Adyghe #language-Afrikaans #language-Southern Altai #language-Amharic #language-Amis #language-Aragonese #language-Old English (ca. 450-1100) #language-Angika #language-Arabic #language-Official Aramaic (700-300 BCE) #language-Moroccan Arabic #language-Egyptian Arabic #language-Assamese #language-Asturian #language-Atikamekw #language-Avaric #language-Kotava #language-Awadhi #language-Aymara #language-Azerbaijani #language-South Azerbaijani #language-Bashkir #language-Balinese #language-Bavarian #language-Batak Toba #language-Central Bikol #language-Belarusian #language-Bulgarian #language-bh #language-Bislama #language-Banjar #language-Pa'o Karen #language-Bambara #language-Bengali #language-Tibetan #language-Bishnupriya #language-Breton #language-Bosnian #language-Buginese #language-Russia Buriat #language-Catalan #language-Chavacano #language-Min Dong Chinese #language-Chechen #language-Cebuano #language-Chamorro #language-Cherokee #language-Cheyenne #language-Central Kurdish #language-Corsican #language-Cree #language-Crimean Tatar #language-Czech #language-Kashubian #language-Church Slavic #language-Chuvash #language-Welsh #language-Danish #language-Dagbani #language-German #language-Southern Dagaare #language-Dinka #language-Dimli (individual language) #language-Lower Sorbian #language-Dotyali #language-Dhivehi #language-Dzongkha #language-Ewe #language-Modern Greek (1453-) #language-Emiliano-Romagnolo #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Extremaduran #language-Persian #language-Fanti #language-Fulah #language-Finnish #language-Fijian #language-Faroese #language-Fon #language-French #language-Arpitan #language-Northern Frisian #language-Friulian #language-Western Frisian #language-Irish #language-Gagauz #language-Gan Chinese #language-Guianese Creole French #language-Scottish Gaelic #language-Galician #language-Gilaki #language-Guarani #language-Goan Konkani #language-Gorontalo #language-Gothic #language-Ghanaian Pidgin English #language-Swiss German #language-Gujarati #language-Wayuu #language-Farefare #language-Gun #language-Manx #language-Hausa #language-Hakka Chinese #language-Hawaiian #language-Serbo-Croatian #language-Hebrew #language-Hindi #language-Fiji Hindi #language-Croatian #language-Upper Sorbian #language-Haitian #language-Hungarian #language-Armenian #language-Western Armenian #language-Interlingua (International Auxiliary Language Association) #language-Indonesian #language-Interlingue #language-Igbo #language-Inupiaq #language-Iloko #language-Ingush #language-Ido #language-Icelandic #language-Italian #language-Inuktitut #language-Japanese #language-Jamaican Creole English #language-Lojban #language-Javanese #language-Georgian #language-Kara-Kalpak #language-Kabyle #language-Kabardian #language-Kabiyè #language-Tyap #language-Kongo #language-Kikuyu #language-Kazakh #language-Kalaallisut #language-Khmer #language-Kannada #language-Korean #language-Komi-Permyak #language-Karachay-Balkar #language-Kashmiri #language-Kölsch #language-Kurdish #language-Komi #language-Cornish #language-Kirghiz #language-Latin #language-Ladino #language-Luxembourgish #language-Lak #language-Lezghian #language-Lingua Franca Nova 
#language-Ganda #language-Limburgan #language-Ligurian #language-Ladin #language-Lombard #language-Lingala #language-Lao #language-Lithuanian #language-Latgalian #language-Latvian #language-Literary Chinese #language-Madurese #language-Maithili #language-map #language-Moksha #language-Malagasy #language-Eastern Mari #language-Maori #language-Minangkabau #language-Macedonian #language-Malayalam #language-Mongolian #language-Manipuri #language-Mon #language-Marathi #language-Western Mari #language-Malay (macrolanguage) #language-Maltese #language-Mirandese #language-Burmese #language-Erzya #language-Mazanderani #language-nah #language-Min Nan Chinese #language-Neapolitan #language-Low German #language-Nepali (macrolanguage) #language-Newari #language-Nias #language-Dutch #language-Norwegian Nynorsk #language-Norwegian #language-Novial #language-N'Ko #language-Jèrriais #language-Pedi #language-Navajo #language-Nyanja #language-Occitan (post 1500) #language-Livvi #language-Oromo #language-Oriya (macrolanguage) #language-Ossetian #language-Panjabi #language-Pangasinan #language-Pampanga #language-Papiamento #language-Picard #language-Nigerian Pidgin #language-Pennsylvania German #language-Pfaelzisch #language-Pali #language-Pitcairn-Norfolk #language-Polish #language-Piemontese #language-Western Panjabi #language-Pontic #language-Pushto #language-Portuguese #language-Paiwan #language-Quechua #language-Romansh #language-Vlax Romani #language-Rundi #language-Romanian #language-Russian #language-Rusyn #language-Macedo-Romanian #language-Kinyarwanda #language-Sanskrit #language-Yakut #language-Santali #language-Sardinian #language-Sicilian #language-Scots #language-Sindhi #language-Northern Sami #language-Sango #language-Samogitian #language-Tachelhit #language-Shan #language-Sinhala #language-Slovak #language-Saraiki #language-Slovenian #language-Samoan #language-Inari Sami #language-Shona #language-Somali #language-Albanian #language-Serbian #language-Sranan Tongo #language-Swati #language-Southern Sotho #language-Saterfriesisch #language-Sundanese #language-Swedish #language-Swahili (macrolanguage) #language-Silesian #language-Sakizaya #language-Tamil #language-Atayal #language-Tulu #language-Telugu #language-Tetum #language-Tajik #language-Thai #language-Tigrinya #language-Turkmen #language-Tagalog #language-Talysh #language-Tswana #language-Tonga (Tonga Islands) #language-Tok Pisin #language-Turkish #language-Sediq #language-Tsonga #language-Tatar #language-Tumbuka #language-Twi #language-Tahitian #language-Tuvinian #language-Udmurt #language-Uighur #language-Ukrainian #language-Urdu #language-Uzbek #language-Venda #language-Venetian #language-Veps #language-Vietnamese #language-Vlaams #language-Volapük #language-Võro #language-Walloon #language-Waray (Philippines) #language-Wolof #language-Wu Chinese #language-Kalmyk #language-Xhosa #language-Mingrelian #language-Yiddish #language-Yoruba #language-Yue Chinese #language-Zhuang #language-Zeeuws #language-Standard Moroccan Tamazight #language-Chinese #language-Zulu #license-cc-by-sa-3.0 #license-gfdl #region-us
# Dataset Card for Wikimedia Wikipedia ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Point of Contact: ### Dataset Summary Wikipedia dataset containing cleaned articles of all languages. The dataset is built from the Wikipedia dumps (URL with one subset per language, each containing a single train split. Each example contains the content of one full Wikipedia article with cleaning to strip markdown and unwanted sections (references, etc.). All language subsets have already been processed for recent dump, and you can load them per date and language this way: #### Data Visualization Click the Nomic Atlas map below to visualize the 6.4 million samples in the 'URL' split. <a href="URL <img src="URL alt="Nomic-Atlas Wikipedia Map" width="25%"/> </a> ### Supported Tasks and Leaderboards The dataset is generally used for Language Modeling. ### Languages You can find the list of languages here: URL ## Dataset Structure ### Data Instances An example looks as follows: ### Data Fields The data fields are the same among all configurations: - 'id' ('str'): ID of the article. - 'url' ('str'): URL of the article. - 'title' ('str'): Title of the article. - 'text' ('str'): Text content of the article. ### Data Splits All configurations contain a single 'train' split. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization The dataset is built from the Wikipedia dumps: URL You can find the full list of languages and dates here: URL The articles have been parsed using the 'mwparserfromhell' tool. When uploading the data files for the 20231101 dump, we noticed that the Wikimedia Dumps website does not contain this date dump for the "bbc", "dga", nor "zgh" Wikipedias. We have reported the issue to the Wikimedia Phabricator: URL #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Copyright licensing information: URL All original textual content is licensed under the GNU Free Documentation License (GFDL) and the Creative Commons Attribution-Share-Alike 3.0 License. Some text may be available only under the Creative Commons license; see their Terms of Use for details. Text written by some authors may be released under additional licenses or into the public domain.
[ "# Dataset Card for Wikimedia Wikipedia", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Point of Contact:", "### Dataset Summary\n\nWikipedia dataset containing cleaned articles of all languages.\n\nThe dataset is built from the Wikipedia dumps (URL\nwith one subset per language, each containing a single train split.\n\nEach example contains the content of one full Wikipedia article with cleaning to strip\nmarkdown and unwanted sections (references, etc.).\n\n\nAll language subsets have already been processed for recent dump, and you can load them per date and language this way:", "#### Data Visualization\nClick the Nomic Atlas map below to visualize the 6.4 million samples in the 'URL' split.\n\n<a href=\"URL\n <img src=\"URL alt=\"Nomic-Atlas Wikipedia Map\" width=\"25%\"/>\n</a>", "### Supported Tasks and Leaderboards\n\nThe dataset is generally used for Language Modeling.", "### Languages\n\nYou can find the list of languages here: URL", "## Dataset Structure", "### Data Instances\n\nAn example looks as follows:", "### Data Fields\n\nThe data fields are the same among all configurations:\n- 'id' ('str'): ID of the article.\n- 'url' ('str'): URL of the article.\n- 'title' ('str'): Title of the article.\n- 'text' ('str'): Text content of the article.", "### Data Splits\n\nAll configurations contain a single 'train' split.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe dataset is built from the Wikipedia dumps: URL\n\nYou can find the full list of languages and dates here: URL\n\nThe articles have been parsed using the 'mwparserfromhell' tool.\n\nWhen uploading the data files for the 20231101 dump, we noticed that the Wikimedia Dumps website does not contain this date dump\nfor the \"bbc\", \"dga\", nor \"zgh\" Wikipedias. We have reported the issue to the Wikimedia Phabricator: URL", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCopyright licensing information: URL\n\nAll original textual content is licensed under the GNU Free Documentation License (GFDL)\nand the Creative Commons Attribution-Share-Alike 3.0 License.\nSome text may be available only under the Creative Commons license; see their Terms of Use for details.\nText written by some authors may be released under additional licenses or into the public domain." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #size_categories-n<1K #size_categories-1K<n<10K #size_categories-10K<n<100K #size_categories-100K<n<1M #size_categories-1M<n<10M #language-Abkhazian #language-Achinese #language-Adyghe #language-Afrikaans #language-Southern Altai #language-Amharic #language-Amis #language-Aragonese #language-Old English (ca. 450-1100) #language-Angika #language-Arabic #language-Official Aramaic (700-300 BCE) #language-Moroccan Arabic #language-Egyptian Arabic #language-Assamese #language-Asturian #language-Atikamekw #language-Avaric #language-Kotava #language-Awadhi #language-Aymara #language-Azerbaijani #language-South Azerbaijani #language-Bashkir #language-Balinese #language-Bavarian #language-Batak Toba #language-Central Bikol #language-Belarusian #language-Bulgarian #language-bh #language-Bislama #language-Banjar #language-Pa'o Karen #language-Bambara #language-Bengali #language-Tibetan #language-Bishnupriya #language-Breton #language-Bosnian #language-Buginese #language-Russia Buriat #language-Catalan #language-Chavacano #language-Min Dong Chinese #language-Chechen #language-Cebuano #language-Chamorro #language-Cherokee #language-Cheyenne #language-Central Kurdish #language-Corsican #language-Cree #language-Crimean Tatar #language-Czech #language-Kashubian #language-Church Slavic #language-Chuvash #language-Welsh #language-Danish #language-Dagbani #language-German #language-Southern Dagaare #language-Dinka #language-Dimli (individual language) #language-Lower Sorbian #language-Dotyali #language-Dhivehi #language-Dzongkha #language-Ewe #language-Modern Greek (1453-) #language-Emiliano-Romagnolo #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Extremaduran #language-Persian #language-Fanti #language-Fulah #language-Finnish #language-Fijian #language-Faroese #language-Fon #language-French #language-Arpitan #language-Northern Frisian #language-Friulian #language-Western Frisian #language-Irish #language-Gagauz #language-Gan Chinese #language-Guianese Creole French #language-Scottish Gaelic #language-Galician #language-Gilaki #language-Guarani #language-Goan Konkani #language-Gorontalo #language-Gothic #language-Ghanaian Pidgin English #language-Swiss German #language-Gujarati #language-Wayuu #language-Farefare #language-Gun #language-Manx #language-Hausa #language-Hakka Chinese #language-Hawaiian #language-Serbo-Croatian #language-Hebrew #language-Hindi #language-Fiji Hindi #language-Croatian #language-Upper Sorbian #language-Haitian #language-Hungarian #language-Armenian #language-Western Armenian #language-Interlingua (International Auxiliary Language Association) #language-Indonesian #language-Interlingue #language-Igbo #language-Inupiaq #language-Iloko #language-Ingush #language-Ido #language-Icelandic #language-Italian #language-Inuktitut #language-Japanese #language-Jamaican Creole English #language-Lojban #language-Javanese #language-Georgian #language-Kara-Kalpak #language-Kabyle #language-Kabardian #language-Kabiyè #language-Tyap #language-Kongo #language-Kikuyu #language-Kazakh #language-Kalaallisut #language-Khmer #language-Kannada #language-Korean #language-Komi-Permyak #language-Karachay-Balkar #language-Kashmiri #language-Kölsch #language-Kurdish #language-Komi #language-Cornish #language-Kirghiz #language-Latin #language-Ladino #language-Luxembourgish #language-Lak #language-Lezghian #language-Lingua Franca 
Nova #language-Ganda #language-Limburgan #language-Ligurian #language-Ladin #language-Lombard #language-Lingala #language-Lao #language-Lithuanian #language-Latgalian #language-Latvian #language-Literary Chinese #language-Madurese #language-Maithili #language-map #language-Moksha #language-Malagasy #language-Eastern Mari #language-Maori #language-Minangkabau #language-Macedonian #language-Malayalam #language-Mongolian #language-Manipuri #language-Mon #language-Marathi #language-Western Mari #language-Malay (macrolanguage) #language-Maltese #language-Mirandese #language-Burmese #language-Erzya #language-Mazanderani #language-nah #language-Min Nan Chinese #language-Neapolitan #language-Low German #language-Nepali (macrolanguage) #language-Newari #language-Nias #language-Dutch #language-Norwegian Nynorsk #language-Norwegian #language-Novial #language-N'Ko #language-Jèrriais #language-Pedi #language-Navajo #language-Nyanja #language-Occitan (post 1500) #language-Livvi #language-Oromo #language-Oriya (macrolanguage) #language-Ossetian #language-Panjabi #language-Pangasinan #language-Pampanga #language-Papiamento #language-Picard #language-Nigerian Pidgin #language-Pennsylvania German #language-Pfaelzisch #language-Pali #language-Pitcairn-Norfolk #language-Polish #language-Piemontese #language-Western Panjabi #language-Pontic #language-Pushto #language-Portuguese #language-Paiwan #language-Quechua #language-Romansh #language-Vlax Romani #language-Rundi #language-Romanian #language-Russian #language-Rusyn #language-Macedo-Romanian #language-Kinyarwanda #language-Sanskrit #language-Yakut #language-Santali #language-Sardinian #language-Sicilian #language-Scots #language-Sindhi #language-Northern Sami #language-Sango #language-Samogitian #language-Tachelhit #language-Shan #language-Sinhala #language-Slovak #language-Saraiki #language-Slovenian #language-Samoan #language-Inari Sami #language-Shona #language-Somali #language-Albanian #language-Serbian #language-Sranan Tongo #language-Swati #language-Southern Sotho #language-Saterfriesisch #language-Sundanese #language-Swedish #language-Swahili (macrolanguage) #language-Silesian #language-Sakizaya #language-Tamil #language-Atayal #language-Tulu #language-Telugu #language-Tetum #language-Tajik #language-Thai #language-Tigrinya #language-Turkmen #language-Tagalog #language-Talysh #language-Tswana #language-Tonga (Tonga Islands) #language-Tok Pisin #language-Turkish #language-Sediq #language-Tsonga #language-Tatar #language-Tumbuka #language-Twi #language-Tahitian #language-Tuvinian #language-Udmurt #language-Uighur #language-Ukrainian #language-Urdu #language-Uzbek #language-Venda #language-Venetian #language-Veps #language-Vietnamese #language-Vlaams #language-Volapük #language-Võro #language-Walloon #language-Waray (Philippines) #language-Wolof #language-Wu Chinese #language-Kalmyk #language-Xhosa #language-Mingrelian #language-Yiddish #language-Yoruba #language-Yue Chinese #language-Zhuang #language-Zeeuws #language-Standard Moroccan Tamazight #language-Chinese #language-Zulu #license-cc-by-sa-3.0 #license-gfdl #region-us \n", "# Dataset Card for Wikimedia Wikipedia", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n 
- Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Point of Contact:", "### Dataset Summary\n\nWikipedia dataset containing cleaned articles of all languages.\n\nThe dataset is built from the Wikipedia dumps (URL\nwith one subset per language, each containing a single train split.\n\nEach example contains the content of one full Wikipedia article with cleaning to strip\nmarkdown and unwanted sections (references, etc.).\n\n\nAll language subsets have already been processed for recent dump, and you can load them per date and language this way:", "#### Data Visualization\nClick the Nomic Atlas map below to visualize the 6.4 million samples in the 'URL' split.\n\n<a href=\"URL\n <img src=\"URL alt=\"Nomic-Atlas Wikipedia Map\" width=\"25%\"/>\n</a>", "### Supported Tasks and Leaderboards\n\nThe dataset is generally used for Language Modeling.", "### Languages\n\nYou can find the list of languages here: URL", "## Dataset Structure", "### Data Instances\n\nAn example looks as follows:", "### Data Fields\n\nThe data fields are the same among all configurations:\n- 'id' ('str'): ID of the article.\n- 'url' ('str'): URL of the article.\n- 'title' ('str'): Title of the article.\n- 'text' ('str'): Text content of the article.", "### Data Splits\n\nAll configurations contain a single 'train' split.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe dataset is built from the Wikipedia dumps: URL\n\nYou can find the full list of languages and dates here: URL\n\nThe articles have been parsed using the 'mwparserfromhell' tool.\n\nWhen uploading the data files for the 20231101 dump, we noticed that the Wikimedia Dumps website does not contain this date dump\nfor the \"bbc\", \"dga\", nor \"zgh\" Wikipedias. We have reported the issue to the Wikimedia Phabricator: URL", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCopyright licensing information: URL\n\nAll original textual content is licensed under the GNU Free Documentation License (GFDL)\nand the Creative Commons Attribution-Share-Alike 3.0 License.\nSome text may be available only under the Creative Commons license; see their Terms of Use for details.\nText written by some authors may be released under additional licenses or into the public domain." ]
f31a033f5f3d2107b3e864e578710df104a00baa
# Dataset Card for Wikimedia Wikisource ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://dumps.wikimedia.org - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary Wikisource dataset containing cleaned articles of all languages. The dataset is built from the Wikisource dumps (https://dumps.wikimedia.org/) with one subset per language, each containing a single train split. Each example contains the content of one full Wikisource text with cleaning to strip markdown and unwanted sections (references, etc.). All language subsets have already been processed for the most recent dump, and you can load them by date and language like this: ```python from datasets import load_dataset ds = load_dataset("wikimedia/wikisource", "20231201.en") ``` ### Supported Tasks and Leaderboards The dataset is generally used for Language Modeling. ### Languages You can find the list of all languages here: https://meta.wikimedia.org/wiki/Wikisource#List_of_Wikisources Note that the wiki code "www" contains multilingual texts. You can find the list of languages at the "www" Multilingual Wikisource here: https://wikisource.org/wiki/Wikisource:Languages ## Dataset Structure ### Data Instances An example looks as follows: ``` {'id': '36', 'url': 'https://ca.wikisource.org/wiki/Comunicat%20de%20Berl%C3%ADn', 'title': 'Comunicat de Berlín', 'text': "\n\nPreàmbul \nEl 19 de juny de 1999, un any després de la Declaració de la Sorbona,..." } ``` ### Data Fields The data fields are the same among all language configurations: - `id` (`str`): ID of the text. - `url` (`str`): URL of the text. - `title` (`str`): Title of the text. - `text` (`str`): Content of the text. ### Data Splits All language configurations contain a single `train` split. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The dataset is built from the Wikisource dumps: https://dumps.wikimedia.org You can find the full list of languages and dates here: https://dumps.wikimedia.org/backup-index.html The articles have been parsed using the [`mwparserfromhell`](https://mwparserfromhell.readthedocs.io) tool. #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators?
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Copyright licensing information: https://dumps.wikimedia.org/legal.html All original textual content is licensed under the [GNU Free Documentation License](https://www.gnu.org/licenses/fdl-1.3.html) (GFDL) and the [Creative Commons Attribution-Share-Alike 3.0 License](https://creativecommons.org/licenses/by-sa/3.0/). Some text may be available only under the Creative Commons license; see their [Terms of Use](https://foundation.wikimedia.org/wiki/Policy:Terms_of_Use) for details. Text written by some authors may be released under additional licenses or into the public domain. ### Citation Information ``` @ONLINE{wikidump, author = "Wikimedia Foundation", title = "Wikimedia Downloads", url = "https://dumps.wikimedia.org" } ``` ### Contributions Thanks to [@albertvillanova](https://huggingface.co/albertvillanova) for adding this dataset.
wikimedia/wikisource
[ "task_categories:text-generation", "task_categories:fill-mask", "task_ids:language-modeling", "task_ids:masked-language-modeling", "size_categories:n<1K", "size_categories:1K<n<10K", "size_categories:10K<n<100K", "size_categories:100K<n<1M", "language:ar", "language:as", "language:az", "language:ban", "language:be", "language:bg", "language:bn", "language:br", "language:bs", "language:ca", "language:cs", "language:cy", "language:da", "language:de", "language:el", "language:en", "language:eo", "language:es", "language:et", "language:eu", "language:fa", "language:fi", "language:fo", "language:fr", "language:gl", "language:gu", "language:he", "language:hi", "language:hr", "language:hu", "language:hy", "language:id", "language:is", "language:it", "language:ja", "language:jv", "language:kn", "language:ko", "language:la", "language:li", "language:lij", "language:lt", "language:mk", "language:ml", "language:mr", "language:nan", "language:nap", "language:nl", "language:no", "language:or", "language:pa", "language:pl", "language:pms", "language:pt", "language:ro", "language:ru", "language:sa", "language:sah", "language:sk", "language:sl", "language:sr", "language:su", "language:sv", "language:ta", "language:te", "language:th", "language:tr", "language:uk", "language:vec", "language:vi", "language:wa", "language:yi", "language:zh", "license:cc-by-sa-3.0", "license:gfdl", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["ar", "as", "az", "ban", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fo", "fr", "gl", "gu", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "kn", "ko", "la", "li", "lij", "lt", "mk", "ml", "mr", "nan", "nap", "nl", "no", "or", "pa", "pl", "pms", "pt", "ro", "ru", "sa", "sah", "sk", "sl", "sr", "su", "sv", "ta", "te", "th", "tr", "uk", "vec", "vi", "wa", "yi", "zh"], "license": ["cc-by-sa-3.0", "gfdl"], "size_categories": ["n<1K", "1K<n<10K", "10K<n<100K", "100K<n<1M"], "task_categories": ["text-generation", "fill-mask"], "task_ids": ["language-modeling", "masked-language-modeling"], "dataset_info": [{"config_name": "20231201.ar", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1027384499, "num_examples": 38235}], "download_size": 471633595, "dataset_size": 1027384499}, {"config_name": "20231201.as", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10334689, "num_examples": 1191}], "download_size": 3976908, "dataset_size": 10334689}, {"config_name": "20231201.az", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 37612618, "num_examples": 9706}], "download_size": 20953203, "dataset_size": 37612618}, {"config_name": "20231201.ban", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 468189, "num_examples": 591}], "download_size": 169732, "dataset_size": 468189}, {"config_name": "20231201.be", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 52555230, "num_examples": 4876}], "download_size": 26356864, "dataset_size": 52555230}, {"config_name": "20231201.bg", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 33320786, "num_examples": 2316}], "download_size": 14416495, "dataset_size": 33320786}, {"config_name": "20231201.bn", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12256, "num_examples": 5}], "download_size": 11958, "dataset_size": 12256}, {"config_name": "20231201.br", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 179457, "num_examples": 314}], "download_size": 89388, "dataset_size": 179457}, {"config_name": "20231201.bs", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15735639, "num_examples": 1918}], "download_size": 9427044, "dataset_size": 
15735639}, {"config_name": "20231201.ca", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9470138, "num_examples": 1229}], "download_size": 5021947, "dataset_size": 9470138}, {"config_name": "20231201.cs", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 190358421, "num_examples": 42735}], "download_size": 124249346, "dataset_size": 190358421}, {"config_name": "20231201.cy", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2161046, "num_examples": 1090}], "download_size": 1251259, "dataset_size": 2161046}, {"config_name": "20231201.da", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18564343, "num_examples": 1043}], "download_size": 10957998, "dataset_size": 18564343}, {"config_name": "20231201.de", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 527146739, "num_examples": 141657}], "download_size": 312816088, "dataset_size": 527146739}, {"config_name": "20231201.el", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 215554120, "num_examples": 8024}], "download_size": 103217935, "dataset_size": 215554120}, {"config_name": "20231201.en", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2470274084, "num_examples": 208279}], "download_size": 1382960909, "dataset_size": 2470274084}, {"config_name": "20231201.eo", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3207308, "num_examples": 384}], "download_size": 2009128, "dataset_size": 3207308}, {"config_name": "20231201.es", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 381152287, "num_examples": 37831}], "download_size": 224097690, "dataset_size": 381152287}, {"config_name": "20231201.et", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3906488, "num_examples": 722}], "download_size": 2316406, "dataset_size": 3906488}, {"config_name": "20231201.eu", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 17014224, "num_examples": 923}], "download_size": 9473130, "dataset_size": 17014224}, 
{"config_name": "20231201.fa", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 79812303, "num_examples": 5751}], "download_size": 33916994, "dataset_size": 79812303}, {"config_name": "20231201.fi", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 55271379, "num_examples": 13414}], "download_size": 33265827, "dataset_size": 55271379}, {"config_name": "20231201.fo", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 174113, "num_examples": 62}], "download_size": 112092, "dataset_size": 174113}, {"config_name": "20231201.fr", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 90126375, "num_examples": 23201}], "download_size": 49429480, "dataset_size": 90126375}, {"config_name": "20231201.gl", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6679826, "num_examples": 747}], "download_size": 3712275, "dataset_size": 6679826}, {"config_name": "20231201.gu", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2430315, "num_examples": 797}], "download_size": 948872, "dataset_size": 2430315}, {"config_name": "20231201.he", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1166312812, "num_examples": 107248}], "download_size": 519792862, "dataset_size": 1166312812}, {"config_name": "20231201.hi", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2203936, "num_examples": 3494}], "download_size": 443194, "dataset_size": 2203936}, {"config_name": "20231201.hr", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 61069921, "num_examples": 8278}], "download_size": 38797697, "dataset_size": 61069921}, {"config_name": "20231201.hu", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 94429364, "num_examples": 20846}], "download_size": 62012894, "dataset_size": 94429364}, {"config_name": "20231201.hy", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39941751, "num_examples": 2248}], "download_size": 18574182, "dataset_size": 39941751}, {"config_name": "20231201.id", 
"features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 40100527, "num_examples": 2234}], "download_size": 18175030, "dataset_size": 40100527}, {"config_name": "20231201.is", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20657687, "num_examples": 4880}], "download_size": 11620112, "dataset_size": 20657687}, {"config_name": "20231201.it", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 211472789, "num_examples": 65047}], "download_size": 115227856, "dataset_size": 211472789}, {"config_name": "20231201.ja", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 202476246, "num_examples": 11879}], "download_size": 90838204, "dataset_size": 202476246}, {"config_name": "20231201.jv", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6977954, "num_examples": 534}], "download_size": 3409151, "dataset_size": 6977954}, {"config_name": "20231201.kn", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 202914626, "num_examples": 14980}], "download_size": 73290389, "dataset_size": 202914626}, {"config_name": "20231201.ko", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 661997703, "num_examples": 24858}], "download_size": 302950424, "dataset_size": 661997703}, {"config_name": "20231201.la", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 583348181, "num_examples": 11032}], "download_size": 351767028, "dataset_size": 583348181}, {"config_name": "20231201.li", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2143869, "num_examples": 1857}], "download_size": 1191398, "dataset_size": 2143869}, {"config_name": "20231201.lij", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1281480, "num_examples": 1185}], "download_size": 651083, "dataset_size": 1281480}, {"config_name": "20231201.lt", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7513991, "num_examples": 1874}], "download_size": 4637316, "dataset_size": 7513991}, {"config_name": "20231201.mk", "features": 
[{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12706090, "num_examples": 2166}], "download_size": 5077478, "dataset_size": 12706090}, {"config_name": "20231201.ml", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 81611041, "num_examples": 6052}], "download_size": 29462281, "dataset_size": 81611041}, {"config_name": "20231201.mr", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 35302346, "num_examples": 1485}], "download_size": 13300483, "dataset_size": 35302346}, {"config_name": "20231201.nap", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 91852, "num_examples": 155}], "download_size": 53478, "dataset_size": 91852}, {"config_name": "20231201.nl", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 48325965, "num_examples": 5260}], "download_size": 27915130, "dataset_size": 48325965}, {"config_name": "20231201.no", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2289098, "num_examples": 379}], "download_size": 1397633, "dataset_size": 2289098}, {"config_name": "20231201.or", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18535382, "num_examples": 693}], "download_size": 7348706, "dataset_size": 18535382}, {"config_name": "20231201.pa", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6536266, "num_examples": 107}], "download_size": 2583902, "dataset_size": 6536266}, {"config_name": "20231201.pl", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 56457491, "num_examples": 12020}], "download_size": 34312764, "dataset_size": 56457491}, {"config_name": "20231201.pms", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 16256157, "num_examples": 4093}], "download_size": 9703819, "dataset_size": 16256157}, {"config_name": "20231201.pt", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 106619107, "num_examples": 23171}], "download_size": 62791422, "dataset_size": 106619107}, {"config_name": "20231201.ro", "features": [{"name": "id", "dtype": "string"}, 
{"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 134273629, "num_examples": 12921}], "download_size": 81375524, "dataset_size": 134273629}, {"config_name": "20231201.ru", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9393299725, "num_examples": 372768}], "download_size": 4601162148, "dataset_size": 9393299725}, {"config_name": "20231201.sa", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 636225055, "num_examples": 22986}], "download_size": 231955608, "dataset_size": 636225055}, {"config_name": "20231201.sah", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 17305188, "num_examples": 903}], "download_size": 7654932, "dataset_size": 17305188}, {"config_name": "20231201.sk", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3532173, "num_examples": 390}], "download_size": 2217851, "dataset_size": 3532173}, {"config_name": "20231201.sl", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 366151991, "num_examples": 17267}], "download_size": 242655257, "dataset_size": 366151991}, {"config_name": "20231201.sr", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 195710992, "num_examples": 38987}], "download_size": 86833442, "dataset_size": 195710992}, {"config_name": "20231201.su", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 356902, "num_examples": 20}], "download_size": 220452, "dataset_size": 356902}, {"config_name": "20231201.sv", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 27912113, "num_examples": 6296}], "download_size": 16513469, "dataset_size": 27912113}, {"config_name": "20231201.ta", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 113836204, "num_examples": 4702}], "download_size": 40070603, "dataset_size": 113836204}, {"config_name": "20231201.te", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 94840451, "num_examples": 9012}], "download_size": 36668092, "dataset_size": 94840451}, {"config_name": "20231201.th", "features": [{"name": "id", "dtype": "string"}, {"name": 
"url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 73437990, "num_examples": 2383}], "download_size": 23644914, "dataset_size": 73437990}, {"config_name": "20231201.tr", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 64957772, "num_examples": 7220}], "download_size": 34039502, "dataset_size": 64957772}, {"config_name": "20231201.uk", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 46059083, "num_examples": 4171}], "download_size": 21135029, "dataset_size": 46059083}, {"config_name": "20231201.vec", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5700371, "num_examples": 3492}], "download_size": 3097037, "dataset_size": 5700371}, {"config_name": "20231201.vi", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 48099940, "num_examples": 5471}], "download_size": 17336608, "dataset_size": 48099940}, {"config_name": "20231201.wa", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3735624, "num_examples": 897}], "download_size": 2222694, "dataset_size": 3735624}, {"config_name": "20231201.yi", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 24802558, "num_examples": 1669}], "download_size": 10686751, "dataset_size": 24802558}, {"config_name": "20231201.zh", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5070438246, "num_examples": 265669}], "download_size": 3309500049, "dataset_size": 5070438246}, {"config_name": "20231201.zh-min-nan", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21109492, "num_examples": 2360}], "download_size": 10288524, "dataset_size": 21109492}], "configs": [{"config_name": "20231201.ar", "data_files": [{"split": "train", "path": "20231201.ar/train-*"}]}, {"config_name": "20231201.as", "data_files": [{"split": "train", "path": "20231201.as/train-*"}]}, {"config_name": "20231201.az", "data_files": [{"split": "train", "path": "20231201.az/train-*"}]}, {"config_name": "20231201.ban", "data_files": [{"split": "train", "path": "20231201.ban/train-*"}]}, {"config_name": "20231201.be", "data_files": [{"split": "train", "path": "20231201.be/train-*"}]}, {"config_name": "20231201.bg", "data_files": [{"split": "train", "path": "20231201.bg/train-*"}]}, {"config_name": "20231201.bn", "data_files": [{"split": "train", "path": "20231201.bn/train-*"}]}, {"config_name": "20231201.br", 
"data_files": [{"split": "train", "path": "20231201.br/train-*"}]}, {"config_name": "20231201.bs", "data_files": [{"split": "train", "path": "20231201.bs/train-*"}]}, {"config_name": "20231201.ca", "data_files": [{"split": "train", "path": "20231201.ca/train-*"}]}, {"config_name": "20231201.cs", "data_files": [{"split": "train", "path": "20231201.cs/train-*"}]}, {"config_name": "20231201.cy", "data_files": [{"split": "train", "path": "20231201.cy/train-*"}]}, {"config_name": "20231201.da", "data_files": [{"split": "train", "path": "20231201.da/train-*"}]}, {"config_name": "20231201.de", "data_files": [{"split": "train", "path": "20231201.de/train-*"}]}, {"config_name": "20231201.el", "data_files": [{"split": "train", "path": "20231201.el/train-*"}]}, {"config_name": "20231201.en", "data_files": [{"split": "train", "path": "20231201.en/train-*"}]}, {"config_name": "20231201.eo", "data_files": [{"split": "train", "path": "20231201.eo/train-*"}]}, {"config_name": "20231201.es", "data_files": [{"split": "train", "path": "20231201.es/train-*"}]}, {"config_name": "20231201.et", "data_files": [{"split": "train", "path": "20231201.et/train-*"}]}, {"config_name": "20231201.eu", "data_files": [{"split": "train", "path": "20231201.eu/train-*"}]}, {"config_name": "20231201.fa", "data_files": [{"split": "train", "path": "20231201.fa/train-*"}]}, {"config_name": "20231201.fi", "data_files": [{"split": "train", "path": "20231201.fi/train-*"}]}, {"config_name": "20231201.fo", "data_files": [{"split": "train", "path": "20231201.fo/train-*"}]}, {"config_name": "20231201.fr", "data_files": [{"split": "train", "path": "20231201.fr/train-*"}]}, {"config_name": "20231201.gl", "data_files": [{"split": "train", "path": "20231201.gl/train-*"}]}, {"config_name": "20231201.gu", "data_files": [{"split": "train", "path": "20231201.gu/train-*"}]}, {"config_name": "20231201.he", "data_files": [{"split": "train", "path": "20231201.he/train-*"}]}, {"config_name": "20231201.hi", "data_files": [{"split": "train", "path": "20231201.hi/train-*"}]}, {"config_name": "20231201.hr", "data_files": [{"split": "train", "path": "20231201.hr/train-*"}]}, {"config_name": "20231201.hu", "data_files": [{"split": "train", "path": "20231201.hu/train-*"}]}, {"config_name": "20231201.hy", "data_files": [{"split": "train", "path": "20231201.hy/train-*"}]}, {"config_name": "20231201.id", "data_files": [{"split": "train", "path": "20231201.id/train-*"}]}, {"config_name": "20231201.is", "data_files": [{"split": "train", "path": "20231201.is/train-*"}]}, {"config_name": "20231201.it", "data_files": [{"split": "train", "path": "20231201.it/train-*"}]}, {"config_name": "20231201.ja", "data_files": [{"split": "train", "path": "20231201.ja/train-*"}]}, {"config_name": "20231201.jv", "data_files": [{"split": "train", "path": "20231201.jv/train-*"}]}, {"config_name": "20231201.kn", "data_files": [{"split": "train", "path": "20231201.kn/train-*"}]}, {"config_name": "20231201.ko", "data_files": [{"split": "train", "path": "20231201.ko/train-*"}]}, {"config_name": "20231201.la", "data_files": [{"split": "train", "path": "20231201.la/train-*"}]}, {"config_name": "20231201.li", "data_files": [{"split": "train", "path": "20231201.li/train-*"}]}, {"config_name": "20231201.lij", "data_files": [{"split": "train", "path": "20231201.lij/train-*"}]}, {"config_name": "20231201.lt", "data_files": [{"split": "train", "path": "20231201.lt/train-*"}]}, {"config_name": "20231201.mk", "data_files": [{"split": "train", "path": "20231201.mk/train-*"}]}, {"config_name": 
"20231201.ml", "data_files": [{"split": "train", "path": "20231201.ml/train-*"}]}, {"config_name": "20231201.mr", "data_files": [{"split": "train", "path": "20231201.mr/train-*"}]}, {"config_name": "20231201.nap", "data_files": [{"split": "train", "path": "20231201.nap/train-*"}]}, {"config_name": "20231201.nl", "data_files": [{"split": "train", "path": "20231201.nl/train-*"}]}, {"config_name": "20231201.no", "data_files": [{"split": "train", "path": "20231201.no/train-*"}]}, {"config_name": "20231201.or", "data_files": [{"split": "train", "path": "20231201.or/train-*"}]}, {"config_name": "20231201.pa", "data_files": [{"split": "train", "path": "20231201.pa/train-*"}]}, {"config_name": "20231201.pl", "data_files": [{"split": "train", "path": "20231201.pl/train-*"}]}, {"config_name": "20231201.pms", "data_files": [{"split": "train", "path": "20231201.pms/train-*"}]}, {"config_name": "20231201.pt", "data_files": [{"split": "train", "path": "20231201.pt/train-*"}]}, {"config_name": "20231201.ro", "data_files": [{"split": "train", "path": "20231201.ro/train-*"}]}, {"config_name": "20231201.ru", "data_files": [{"split": "train", "path": "20231201.ru/train-*"}]}, {"config_name": "20231201.sa", "data_files": [{"split": "train", "path": "20231201.sa/train-*"}]}, {"config_name": "20231201.sah", "data_files": [{"split": "train", "path": "20231201.sah/train-*"}]}, {"config_name": "20231201.sk", "data_files": [{"split": "train", "path": "20231201.sk/train-*"}]}, {"config_name": "20231201.sl", "data_files": [{"split": "train", "path": "20231201.sl/train-*"}]}, {"config_name": "20231201.sr", "data_files": [{"split": "train", "path": "20231201.sr/train-*"}]}, {"config_name": "20231201.su", "data_files": [{"split": "train", "path": "20231201.su/train-*"}]}, {"config_name": "20231201.sv", "data_files": [{"split": "train", "path": "20231201.sv/train-*"}]}, {"config_name": "20231201.ta", "data_files": [{"split": "train", "path": "20231201.ta/train-*"}]}, {"config_name": "20231201.te", "data_files": [{"split": "train", "path": "20231201.te/train-*"}]}, {"config_name": "20231201.th", "data_files": [{"split": "train", "path": "20231201.th/train-*"}]}, {"config_name": "20231201.tr", "data_files": [{"split": "train", "path": "20231201.tr/train-*"}]}, {"config_name": "20231201.uk", "data_files": [{"split": "train", "path": "20231201.uk/train-*"}]}, {"config_name": "20231201.vec", "data_files": [{"split": "train", "path": "20231201.vec/train-*"}]}, {"config_name": "20231201.vi", "data_files": [{"split": "train", "path": "20231201.vi/train-*"}]}, {"config_name": "20231201.wa", "data_files": [{"split": "train", "path": "20231201.wa/train-*"}]}, {"config_name": "20231201.yi", "data_files": [{"split": "train", "path": "20231201.yi/train-*"}]}, {"config_name": "20231201.zh", "data_files": [{"split": "train", "path": "20231201.zh/train-*"}]}, {"config_name": "20231201.zh-min-nan", "data_files": [{"split": "train", "path": "20231201.zh-min-nan/train-*"}]}]}
2023-12-08T13:36:41+00:00
[]
[ "ar", "as", "az", "ban", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fo", "fr", "gl", "gu", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "kn", "ko", "la", "li", "lij", "lt", "mk", "ml", "mr", "nan", "nap", "nl", "no", "or", "pa", "pl", "pms", "pt", "ro", "ru", "sa", "sah", "sk", "sl", "sr", "su", "sv", "ta", "te", "th", "tr", "uk", "vec", "vi", "wa", "yi", "zh" ]
TAGS #task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #size_categories-n<1K #size_categories-1K<n<10K #size_categories-10K<n<100K #size_categories-100K<n<1M #language-Arabic #language-Assamese #language-Azerbaijani #language-Balinese #language-Belarusian #language-Bulgarian #language-Bengali #language-Breton #language-Bosnian #language-Catalan #language-Czech #language-Welsh #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-Faroese #language-French #language-Galician #language-Gujarati #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Icelandic #language-Italian #language-Japanese #language-Javanese #language-Kannada #language-Korean #language-Latin #language-Limburgan #language-Ligurian #language-Lithuanian #language-Macedonian #language-Malayalam #language-Marathi #language-Min Nan Chinese #language-Neapolitan #language-Dutch #language-Norwegian #language-Oriya (macrolanguage) #language-Panjabi #language-Polish #language-Piemontese #language-Portuguese #language-Romanian #language-Russian #language-Sanskrit #language-Yakut #language-Slovak #language-Slovenian #language-Serbian #language-Sundanese #language-Swedish #language-Tamil #language-Telugu #language-Thai #language-Turkish #language-Ukrainian #language-Venetian #language-Vietnamese #language-Walloon #language-Yiddish #language-Chinese #license-cc-by-sa-3.0 #license-gfdl #region-us
# Dataset Card for Wikimedia Wikisource ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary Wikisource dataset containing cleaned articles of all languages. The dataset is built from the Wikisource dumps (URL with one subset per language, each containing a single train split. Each example contains the content of one full Wikisource text with cleaning to strip markdown and unwanted sections (references, etc.). All language subsets have already been processed for recent dump, and you can load them by date and language like this: ### Supported Tasks and Leaderboards The dataset is generally used for Language Modeling. ### Languages You can find the list of all languages here: URL Note that the wiki code "www" contains multilingual texts. You can find the list of languages at the "www" Multilingual Wikisource here: URL ## Dataset Structure ### Data Instances An example looks as follows: ### Data Fields The data fields are the same among all language configurations: - 'id' ('str'): ID of the text. - 'url' ('str'): URL of the text. - 'title' ('str'): Title of the text. - 'text' ('str'): Content of the text. ### Data Splits All language configurations contain a single 'train' split. ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization The dataset is built from the Wikisource dumps: URL You can find the full list of languages and dates here: URL The articles have been parsed using the 'mwparserfromhell' tool. #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information Copyright licensing information: URL All original textual content is licensed under the GNU Free Documentation License (GFDL) and the Creative Commons Attribution-Share-Alike 3.0 License. Some text may be available only under the Creative Commons license; see their Terms of Use for details. Text written by some authors may be released under additional licenses or into the public domain. ### Contributions Thanks to @albertvillanova for adding this dataset.
[ "# Dataset Card for Wikimedia Wikisource", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nWikisource dataset containing cleaned articles of all languages.\n\nThe dataset is built from the Wikisource dumps (URL\nwith one subset per language, each containing a single train split.\n\nEach example contains the content of one full Wikisource text with cleaning to strip\nmarkdown and unwanted sections (references, etc.).\n\n\nAll language subsets have already been processed for recent dump, and you can load them by date and language like this:", "### Supported Tasks and Leaderboards\n\nThe dataset is generally used for Language Modeling.", "### Languages\n\nYou can find the list of all languages here: URL\n\nNote that the wiki code \"www\" contains multilingual texts. You can find the list of languages at the \"www\" Multilingual\nWikisource here: URL", "## Dataset Structure", "### Data Instances\n\nAn example looks as follows:", "### Data Fields\n\nThe data fields are the same among all language configurations:\n- 'id' ('str'): ID of the text.\n- 'url' ('str'): URL of the text.\n- 'title' ('str'): Title of the text.\n- 'text' ('str'): Content of the text.", "### Data Splits\n\nAll language configurations contain a single 'train' split.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe dataset is built from the Wikisource dumps: URL\n\nYou can find the full list of languages and dates here: URL\n\nThe articles have been parsed using the 'mwparserfromhell' tool.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCopyright licensing information: URL\n\nAll original textual content is licensed under the GNU Free Documentation License (GFDL)\nand the Creative Commons Attribution-Share-Alike 3.0 License.\nSome text may be available only under the Creative Commons license; see their Terms of Use for details.\nText written by some authors may be released under additional licenses or into the public domain.", "### Contributions\n\nThanks to @albertvillanova for adding this dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-fill-mask #task_ids-language-modeling #task_ids-masked-language-modeling #size_categories-n<1K #size_categories-1K<n<10K #size_categories-10K<n<100K #size_categories-100K<n<1M #language-Arabic #language-Assamese #language-Azerbaijani #language-Balinese #language-Belarusian #language-Bulgarian #language-Bengali #language-Breton #language-Bosnian #language-Catalan #language-Czech #language-Welsh #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-Faroese #language-French #language-Galician #language-Gujarati #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Icelandic #language-Italian #language-Japanese #language-Javanese #language-Kannada #language-Korean #language-Latin #language-Limburgan #language-Ligurian #language-Lithuanian #language-Macedonian #language-Malayalam #language-Marathi #language-Min Nan Chinese #language-Neapolitan #language-Dutch #language-Norwegian #language-Oriya (macrolanguage) #language-Panjabi #language-Polish #language-Piemontese #language-Portuguese #language-Romanian #language-Russian #language-Sanskrit #language-Yakut #language-Slovak #language-Slovenian #language-Serbian #language-Sundanese #language-Swedish #language-Tamil #language-Telugu #language-Thai #language-Turkish #language-Ukrainian #language-Venetian #language-Vietnamese #language-Walloon #language-Yiddish #language-Chinese #license-cc-by-sa-3.0 #license-gfdl #region-us \n", "# Dataset Card for Wikimedia Wikisource", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nWikisource dataset containing cleaned articles of all languages.\n\nThe dataset is built from the Wikisource dumps (URL\nwith one subset per language, each containing a single train split.\n\nEach example contains the content of one full Wikisource text with cleaning to strip\nmarkdown and unwanted sections (references, etc.).\n\n\nAll language subsets have already been processed for recent dump, and you can load them by date and language like this:", "### Supported Tasks and Leaderboards\n\nThe dataset is generally used for Language Modeling.", "### Languages\n\nYou can find the list of all languages here: URL\n\nNote that the wiki code \"www\" contains multilingual texts. 
You can find the list of languages at the \"www\" Multilingual\nWikisource here: URL", "## Dataset Structure", "### Data Instances\n\nAn example looks as follows:", "### Data Fields\n\nThe data fields are the same among all language configurations:\n- 'id' ('str'): ID of the text.\n- 'url' ('str'): URL of the text.\n- 'title' ('str'): Title of the text.\n- 'text' ('str'): Content of the text.", "### Data Splits\n\nAll language configurations contain a single 'train' split.", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe dataset is built from the Wikisource dumps: URL\n\nYou can find the full list of languages and dates here: URL\n\nThe articles have been parsed using the 'mwparserfromhell' tool.", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nCopyright licensing information: URL\n\nAll original textual content is licensed under the GNU Free Documentation License (GFDL)\nand the Creative Commons Attribution-Share-Alike 3.0 License.\nSome text may be available only under the Creative Commons license; see their Terms of Use for details.\nText written by some authors may be released under additional licenses or into the public domain.", "### Contributions\n\nThanks to @albertvillanova for adding this dataset." ]
4bd9103ad5f24d44db806d40c94c8b9ec116ad05
# Dataset
This dataset contains positive, negative and neutral ("notr") sentences from several data sources given in the references. In most sentiment models, there are only two labels: positive and negative. However, user input can be a completely neutral sentence. For such cases there was no data I could find, so I created this dataset with 3 classes. The sources of the positive and negative sentences are listed in the references below. Neutral (notr) examples are extracted from the Turkish Wikipedia dump. In addition, some random text inputs like "Lorem ipsum dolor sit amet." were added.

There are 492,782 labeled sentences. 10% of them are used for testing.

# Türkçe Duygu Analizi Veriseti
Bu veriseti, farklı kaynaklardan derlenmiş pozitif, negatif ve nötr sınıflardan örnekler içerir. Bir çok verisetinde sadece pozitif ve negatif bulunur. Fakat kullanıcı input'u nötr olabilir. Bu tarz durumlar için türkçe bir dataset bulmakta zorlandım. Dolayısıyla, 3 sınıftan oluşan bu dataseti oluşturdum. Pozitif ve negatif örnekleri aldığın kaynaklar referans kısmında listelenmiştir. Nötr cümleler ise wikipedia datasından alınmıştır. Ek olarak bazı rastgele inputlar nötr olarak eklenmiştir. Örneğin: "Lorem ipsum dolor sit amet.".

There are 492,782 labeled sentences. 10% of them are used for testing.

# References
- https://www.kaggle.com/burhanbilenn/duygu-analizi-icin-urun-yorumlari
- https://github.com/fthbrmnby/turkish-text-data
- https://www.kaggle.com/mustfkeskin/turkish-wikipedia-dump
- https://github.com/ezgisubasi/turkish-tweets-sentiment-analysis
- http://humirapps.cs.hacettepe.edu.tr/

You can reach me via LinkedIn.
https://www.linkedin.com/in/batuhanayhan/

A minimal sketch of loading the dataset with the Hugging Face `datasets` library is given below; the repository id comes from this card, while the exact configuration and split names are assumptions based on the 90/10 train/test split described above.

```
from datasets import load_dataset

# Sketch only: the repository id is taken from this card; the presence of a
# default configuration and a "train" split is an assumption, not confirmed above.
dataset = load_dataset("winvoker/turkish-sentiment-analysis-dataset")

print(dataset)              # inspect the available splits
print(dataset["train"][0])  # look at one labeled example
```
winvoker/turkish-sentiment-analysis-dataset
[ "task_categories:text-classification", "task_ids:sentiment-classification", "annotations_creators:crowdsourced", "annotations_creators:expert-generated", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:unknown", "language:tr", "license:cc-by-sa-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["crowdsourced", "expert-generated"], "language_creators": ["crowdsourced"], "language": ["tr"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "Turkish Sentiment Dataset"}
2023-07-19T12:15:13+00:00
[]
[ "tr" ]
TAGS #task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-unknown #language-Turkish #license-cc-by-sa-4.0 #region-us
# Dataset This dataset contains positive , negative and notr sentences from several data sources given in the references. In the most sentiment models , there are only two labels; positive and negative. However , user input can be totally notr sentence. For such cases there were no data I could find. Therefore I created this dataset with 3 class. Positive and negative sentences are listed below. Notr examples are extraced from turkish wiki dump. In addition, added some random text inputs like "Lorem ipsum dolor sit amet.". There are 492.782 labeled sentences. %10 of them used for testing. # Türkçe Duygu Analizi Veriseti Bu veriseti , farklı kaynaklardan derlenmiş pozitif , negatif ve nötr sınıflardan örnekler içerir. Bir çok verisetinde sadece pozitif ve negatif bulunur. Fakat kullanıcı input'u nötr olabilir. Bu tarz durumlar için türkçe bir dataset bulmakta zorlandım. Dolayısıyla , 3 sınıftan oluşan bu dataseti oluşturdum. Pozitif ve negatif örnekleri aldığın kaynaklar referans kısmında listelenmiştir. Nötr cümleler ise wikipedia datasından alınmıştır. Ek olarak bazı rastgele inputlar nötr olarak eklenmiştir. Örneğin: "Lorem ipsum dolor sit amet.". There are 492.782 labeled sentences. %10 of them used for testing. # References - URL - URL - URL - URL - URL You can reach me via LinkedIn. URL
[ "# Dataset\nThis dataset contains positive , negative and notr sentences from several data sources given in the references. In the most sentiment models , there are only two labels; positive and negative. However , user input can be totally notr sentence. For such cases there were no data I could find. Therefore I created this dataset with 3 class. Positive and negative sentences are listed below. Notr examples are extraced from turkish wiki dump. In addition, added some random text inputs like \"Lorem ipsum dolor sit amet.\".\n\nThere are 492.782 labeled sentences. %10 of them used for testing.", "# Türkçe Duygu Analizi Veriseti\nBu veriseti , farklı kaynaklardan derlenmiş pozitif , negatif ve nötr sınıflardan örnekler içerir. Bir çok verisetinde sadece pozitif ve negatif bulunur. Fakat kullanıcı input'u nötr olabilir. Bu tarz durumlar için türkçe bir dataset bulmakta zorlandım. Dolayısıyla , 3 sınıftan oluşan bu dataseti oluşturdum. Pozitif ve negatif örnekleri aldığın kaynaklar referans kısmında listelenmiştir. Nötr cümleler ise wikipedia datasından alınmıştır. Ek olarak bazı rastgele inputlar nötr olarak eklenmiştir. Örneğin: \"Lorem ipsum dolor sit amet.\".\n\nThere are 492.782 labeled sentences. %10 of them used for testing.", "# References \n- URL\n- URL\n- URL\n- URL\n- URL\n\nYou can reach me via LinkedIn. URL" ]
[ "TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #annotations_creators-crowdsourced #annotations_creators-expert-generated #language_creators-crowdsourced #multilinguality-monolingual #size_categories-unknown #language-Turkish #license-cc-by-sa-4.0 #region-us \n", "# Dataset\nThis dataset contains positive , negative and notr sentences from several data sources given in the references. In the most sentiment models , there are only two labels; positive and negative. However , user input can be totally notr sentence. For such cases there were no data I could find. Therefore I created this dataset with 3 class. Positive and negative sentences are listed below. Notr examples are extraced from turkish wiki dump. In addition, added some random text inputs like \"Lorem ipsum dolor sit amet.\".\n\nThere are 492.782 labeled sentences. %10 of them used for testing.", "# Türkçe Duygu Analizi Veriseti\nBu veriseti , farklı kaynaklardan derlenmiş pozitif , negatif ve nötr sınıflardan örnekler içerir. Bir çok verisetinde sadece pozitif ve negatif bulunur. Fakat kullanıcı input'u nötr olabilir. Bu tarz durumlar için türkçe bir dataset bulmakta zorlandım. Dolayısıyla , 3 sınıftan oluşan bu dataseti oluşturdum. Pozitif ve negatif örnekleri aldığın kaynaklar referans kısmında listelenmiştir. Nötr cümleler ise wikipedia datasından alınmıştır. Ek olarak bazı rastgele inputlar nötr olarak eklenmiştir. Örneğin: \"Lorem ipsum dolor sit amet.\".\n\nThere are 492.782 labeled sentences. %10 of them used for testing.", "# References \n- URL\n- URL\n- URL\n- URL\n- URL\n\nYou can reach me via LinkedIn. URL" ]
17351fd223d575b909d038d67234611c4960d1eb
# Dataset Card for "nostradamus-propheties"

## Dataset Description

### Dataset Summary

The Nostradamus propheties dataset is a set of structured files containing the "Propheties" by Nostradamus, translated into modern English.

The original text consists of 10 "Centuries", every century containing 100 numbered quatrains.

In the dataset, every century is a separate file named `century**.json`. For instance, all the quatrains of Century I are in the file `century01.json`.

The century and the quatrain number are kept for every quatrain. Every quatrain has been split into four separate lines. For example, the second quatrain of Century I is stored in `century01.json` as follows:

```
{
   "century":1,
   "index":2,
   "line1":"The wand in the hand is placed in the middle of the tripod's legs.",
   "line2":"With water he sprinkles both the hem of his garment and his foot.",
   "line3":"A voice, fear: he trembles in his robes.",
   "line4":"Divine splendor; the God sits nearby."
}
```
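A minimal sketch of reading one century file and printing a quatrain follows; it assumes `century01.json` is available locally and holds a JSON array of quatrain objects shaped like the example above (the exact file layout is not confirmed by this card).

```
import json

# Sketch only: assumes century01.json has been downloaded locally and contains
# a JSON array of quatrain objects with the fields shown in the example above.
with open("century01.json", encoding="utf-8") as f:
    quatrains = json.load(f)

# Reassemble the four lines of the second quatrain of Century I.
quatrain = next(q for q in quatrains if q["index"] == 2)
print("\n".join(quatrain[f"line{i}"] for i in range(1, 5)))
```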
wpicard/nostradamus-propheties
[ "task_ids:language-modeling", "annotations_creators:no-annotation", "multilinguality:monolingual", "size_categories:unknown", "language:en", "license:unknown", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": [], "language": ["en"], "license": ["unknown"], "multilinguality": ["monolingual"], "size_categories": ["unknown"], "source_datasets": [], "task_categories": ["sequence-modeling"], "task_ids": ["language-modeling"], "pretty_name": "nostradamus-propheties", "language_bcp47": ["en-US"]}
2022-10-23T03:54:07+00:00
[]
[ "en" ]
TAGS #task_ids-language-modeling #annotations_creators-no-annotation #multilinguality-monolingual #size_categories-unknown #language-English #license-unknown #region-us
# Dataset Card for "nostradamus-propheties" ## Dataset Description ### Dataset Summary The Nostradamus propheties dataset is a set of structured files containing the "Propheties" by Nostradamus, translated in modern English. The original text consists of 10 "Centuries", every century containing 100 numbered quatrains. In the dataset, every century is a separate file named 'century.json'. For instance, all the quatrains of Century I are in the file 'URL'. The century and the quantrain number are kept for every quatrain. Every quatrain has been split in four separate lines. For example, the second quatrain of Century I is stored in 'URL' as follows:
[ "# Dataset Card for \"nostradamus-propheties\"", "## Dataset Description", "### Dataset Summary\n\nThe Nostradamus propheties dataset is a set of structured files containing the \"Propheties\" by Nostradamus, translated in modern English.\n\nThe original text consists of 10 \"Centuries\", every century containing 100 numbered quatrains.\n\nIn the dataset, every century is a separate file named 'century.json'. For instance, all the quatrains of Century I are in the file 'URL'.\n\nThe century and the quantrain number are kept for every quatrain. Every quatrain has been split in four separate lines. For example, the second quatrain of Century I is stored in 'URL' as follows:" ]
[ "TAGS\n#task_ids-language-modeling #annotations_creators-no-annotation #multilinguality-monolingual #size_categories-unknown #language-English #license-unknown #region-us \n", "# Dataset Card for \"nostradamus-propheties\"", "## Dataset Description", "### Dataset Summary\n\nThe Nostradamus propheties dataset is a set of structured files containing the \"Propheties\" by Nostradamus, translated in modern English.\n\nThe original text consists of 10 \"Centuries\", every century containing 100 numbered quatrains.\n\nIn the dataset, every century is a separate file named 'century.json'. For instance, all the quatrains of Century I are in the file 'URL'.\n\nThe century and the quantrain number are kept for every quatrain. Every quatrain has been split in four separate lines. For example, the second quatrain of Century I is stored in 'URL' as follows:" ]
7f9bc116cf52a133d0e0e4c05f3af16319c8f98c
# Dataset Card for cantonese-mandarin-translations

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

This is a machine-translated parallel corpus between Cantonese (a Chinese dialect that is mainly spoken in Guangdong (a province of China), Hong Kong, Macau and parts of Malaysia) and Chinese (written form, in Simplified Chinese).

### Supported Tasks and Leaderboards

N/A

### Languages

- Cantonese (`yue`)
- Simplified Chinese (`zh-CN`)

## Dataset Structure

JSON lines with a `yue` field and a `zh` field for the parallel corpus.

### Data Instances

N/A

### Data Fields

- `yue`: Cantonese corpus
- `zh`: translated Chinese corpus

### Data Splits

No data splitting has been done yet.

## Dataset Creation

The dataset is produced by doing the following:

- Download [HKCancor Cantonese Corpus](https://github.com/fcbond/hkcancor) and [CommonVoice Cantonese (Hong Kong Chinese `yue`) text corpus](https://commonvoice.mozilla.org/en/datasets)
- Extract text corpus and merge datasets
- Run text against [Microsoft's Translator API](https://learn.microsoft.com/en-us/azure/ai-services/translator/language-support) from `yue` to `zh-Hans`

### Curation Rationale

Currently no such corpus exists, and it is hard to find one, so we tried to generate a reasonable batch of samples using machine translation for research purposes.

### Source Data

- [HKCancor](https://github.com/fcbond/hkcancor)
- [CommonVoice 7.0 Chinese (Hong Kong)](https://commonvoice.mozilla.org/en/datasets)

#### Initial Data Collection and Normalization

Normalization scripts will be included soon.

#### Who are the source language producers?

- [HKCancor](https://github.com/fcbond/hkcancor)
- [CommonVoice 7.0 Chinese (Hong Kong)](https://commonvoice.mozilla.org/en/datasets)

### Annotations

#### Annotation process

We run the Cantonese text corpus against Microsoft's Translator API.

#### Who are the annotators?

- [Microsoft's Translator API](https://learn.microsoft.com/en-us/azure/ai-services/translator/language-support)

### Personal and Sensitive Information

N/A

## Considerations for Using the Data

### Social Impact of Dataset

We would like to share this parallel corpus and welcome contributions to preserve the Cantonese dialect.

### Discussion of Biases

N/A

### Other Known Limitations

This parallel corpus is machine-translated, so it is not 100% accurate.
## Additional Information ### Dataset Curators - [Botisan AI](https://botisan.ai) - [Haoran (Simon) Liang](https://github.com/lhr0909) ### Licensing Information [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) ### Citation Information ``` @misc {botisanAiCantoneseMandarinTranslationsDatasets, author = {Liang, H.}, title = {Cantonese Mandarin Translations Dataset}, year = {2021}, url = {https://huggingface.co/datasets/botisan-ai/cantonese-mandarin-translations}, } ``` ### Contributions Thanks to [@lhr0909](https://github.com/lhr0909) for adding this dataset.
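As a usage illustration, a minimal sketch of iterating over the parallel corpus is shown below; it assumes the data has been exported locally as a JSON-lines file (the file name `translations.jsonl` is only a placeholder), while the `yue`/`zh` field names come from the card above.

```
import json

# Sketch only: "translations.jsonl" is a placeholder file name; the card above
# specifies one JSON object per line with a "yue" (Cantonese) and a "zh" (Mandarin) field.
with open("translations.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        print(record["yue"], "->", record["zh"])
```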
botisan-ai/cantonese-mandarin-translations
[ "task_categories:text2text-generation", "task_categories:translation", "annotations_creators:machine-generated", "language_creators:found", "multilinguality:translation", "size_categories:unknown", "source_datasets:original", "language:zh", "license:cc-by-nc-sa-4.0", "conditional-text-generation", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["machine-generated"], "language_creators": ["found"], "language": ["zh"], "license": ["cc-by-nc-sa-4.0"], "multilinguality": ["translation"], "size_categories": ["unknown"], "source_datasets": ["original"], "task_categories": ["text2text-generation", "translation"], "task_ids": [], "pretty_name": "Cantonese - Mandarin Translations", "language_bcp47": ["zh-CN", "zh-HK"], "tags": ["conditional-text-generation"]}
2024-01-13T03:30:12+00:00
[]
[ "zh" ]
TAGS #task_categories-text2text-generation #task_categories-translation #annotations_creators-machine-generated #language_creators-found #multilinguality-translation #size_categories-unknown #source_datasets-original #language-Chinese #license-cc-by-nc-sa-4.0 #conditional-text-generation #region-us
# Dataset Card for cantonese-mandarin-translations ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: ### Dataset Summary This is a machine-translated parallel corpus between Cantonese (a Chinese dialect that is mainly spoken by Guangdong (province of China), Hong Kong, Macau and part of Malaysia) and Chinese (written form, in Simplified Chinese). ### Supported Tasks and Leaderboards N/A ### Languages - Cantonese ('yue') - Simplified Chinese ('zh-CN') ## Dataset Structure JSON lines with 'yue' field and 'zh' field for the parallel corpus. ### Data Instances N/A ### Data Fields - 'yue': Cantonese corpus - 'zh': translated Chinese corpus ### Data Splits No data splitting is done as of yet. ## Dataset Creation The dataset is produced by doing the following: - Download HKCancor Cantonese Corpus and CommonVoice Cantonese (Hong Kong Chinese 'yue') text corpus - Extract text corpus and merge datasets - Run text against Microsoft's Translator API from 'yue' to 'zh-Hans' ### Curation Rationale Currently no such corpus exists, and it is hard to find such a corpus, so we tried to generate a reasonable batch of samples using machine translation for research purposes. ### Source Data - HKCancor - CommonVoice 7.0 Chinese (Hong Kong) #### Initial Data Collection and Normalization Normalization scripts will be included soon. #### Who are the source language producers? - HKCancor - CommonVoice 7.0 Chinese (Hong Kong) ### Annotations #### Annotation process We run the Cantonese text corpus against Microsoft's Translator API. #### Who are the annotators? - Microsoft's Translator API ### Personal and Sensitive Information N/A ## Considerations for Using the Data ### Social Impact of Dataset We would like to share this parallel corpus and welcome contributions to preserve the Cantonese dialect. ### Discussion of Biases N/A ### Other Known Limitations This parallel corpus is machine-translated, it is not 100% accurate. ## Additional Information ### Dataset Curators - Botisan AI - Haoran (Simon) Liang ### Licensing Information CC BY-NC-SA 4.0 ### Contributions Thanks to @lhr0909 for adding this dataset.
[ "# Dataset Card for cantonese-mandarin-translations", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThis is a machine-translated parallel corpus between Cantonese (a Chinese dialect that is mainly spoken by Guangdong (province of China), Hong Kong, Macau and part of Malaysia) and Chinese (written form, in Simplified Chinese).", "### Supported Tasks and Leaderboards\n\nN/A", "### Languages\n\n- Cantonese ('yue')\n- Simplified Chinese ('zh-CN')", "## Dataset Structure\n\nJSON lines with 'yue' field and 'zh' field for the parallel corpus.", "### Data Instances\n\nN/A", "### Data Fields\n\n- 'yue': Cantonese corpus\n- 'zh': translated Chinese corpus", "### Data Splits\n\nNo data splitting is done as of yet.", "## Dataset Creation\n\nThe dataset is produced by doing the following:\n\n- Download HKCancor Cantonese Corpus and CommonVoice Cantonese (Hong Kong Chinese 'yue') text corpus\n- Extract text corpus and merge datasets\n- Run text against Microsoft's Translator API from 'yue' to 'zh-Hans'", "### Curation Rationale\n\nCurrently no such corpus exists, and it is hard to find such a corpus, so we tried to generate a reasonable batch of samples using machine translation for research purposes.", "### Source Data\n\n- HKCancor\n- CommonVoice 7.0 Chinese (Hong Kong)", "#### Initial Data Collection and Normalization\n\nNormalization scripts will be included soon.", "#### Who are the source language producers?\n\n- HKCancor\n- CommonVoice 7.0 Chinese (Hong Kong)", "### Annotations", "#### Annotation process\n\nWe run the Cantonese text corpus against Microsoft's Translator API.", "#### Who are the annotators?\n\n- Microsoft's Translator API", "### Personal and Sensitive Information\n\nN/A", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nWe would like to share this parallel corpus and welcome contributions to preserve the Cantonese dialect.", "### Discussion of Biases\n\nN/A", "### Other Known Limitations\n\nThis parallel corpus is machine-translated, it is not 100% accurate.", "## Additional Information", "### Dataset Curators\n\n- Botisan AI\n- Haoran (Simon) Liang", "### Licensing Information\n\nCC BY-NC-SA 4.0", "### Contributions\n\nThanks to @lhr0909 for adding this dataset." ]
[ "TAGS\n#task_categories-text2text-generation #task_categories-translation #annotations_creators-machine-generated #language_creators-found #multilinguality-translation #size_categories-unknown #source_datasets-original #language-Chinese #license-cc-by-nc-sa-4.0 #conditional-text-generation #region-us \n", "# Dataset Card for cantonese-mandarin-translations", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage:\n- Repository:\n- Paper:\n- Leaderboard:\n- Point of Contact:", "### Dataset Summary\n\nThis is a machine-translated parallel corpus between Cantonese (a Chinese dialect that is mainly spoken by Guangdong (province of China), Hong Kong, Macau and part of Malaysia) and Chinese (written form, in Simplified Chinese).", "### Supported Tasks and Leaderboards\n\nN/A", "### Languages\n\n- Cantonese ('yue')\n- Simplified Chinese ('zh-CN')", "## Dataset Structure\n\nJSON lines with 'yue' field and 'zh' field for the parallel corpus.", "### Data Instances\n\nN/A", "### Data Fields\n\n- 'yue': Cantonese corpus\n- 'zh': translated Chinese corpus", "### Data Splits\n\nNo data splitting is done as of yet.", "## Dataset Creation\n\nThe dataset is produced by doing the following:\n\n- Download HKCancor Cantonese Corpus and CommonVoice Cantonese (Hong Kong Chinese 'yue') text corpus\n- Extract text corpus and merge datasets\n- Run text against Microsoft's Translator API from 'yue' to 'zh-Hans'", "### Curation Rationale\n\nCurrently no such corpus exists, and it is hard to find such a corpus, so we tried to generate a reasonable batch of samples using machine translation for research purposes.", "### Source Data\n\n- HKCancor\n- CommonVoice 7.0 Chinese (Hong Kong)", "#### Initial Data Collection and Normalization\n\nNormalization scripts will be included soon.", "#### Who are the source language producers?\n\n- HKCancor\n- CommonVoice 7.0 Chinese (Hong Kong)", "### Annotations", "#### Annotation process\n\nWe run the Cantonese text corpus against Microsoft's Translator API.", "#### Who are the annotators?\n\n- Microsoft's Translator API", "### Personal and Sensitive Information\n\nN/A", "## Considerations for Using the Data", "### Social Impact of Dataset\n\nWe would like to share this parallel corpus and welcome contributions to preserve the Cantonese dialect.", "### Discussion of Biases\n\nN/A", "### Other Known Limitations\n\nThis parallel corpus is machine-translated, it is not 100% accurate.", "## Additional Information", "### Dataset Curators\n\n- Botisan AI\n- Haoran (Simon) Liang", "### Licensing Information\n\nCC BY-NC-SA 4.0", "### Contributions\n\nThanks to @lhr0909 for adding this dataset." ]
0f52f7a38cc210a873daa7207cb023a15d0f7362
# notebookCDG

This dataset was designed for our EMNLP'21 Findings paper, [HAConvGNN: Hierarchical Attention Based Convolutional Graph Neural Network for Code Documentation Generation in Jupyter Notebooks](https://arxiv.org/abs/2104.01002).

You can directly use dataset_notebook.pkl to run the code from the [GitHub repository](https://github.com/xuyeliu/HAConvGNN).

In the repository, we split the ground-truth documentation into coms.train, coms.val, and coms.test subsets, following an 8:1:1 ratio. ast_nodes.pkl and ast_edges.pkl are the graph input in this dataset, and code.seq is the code-sequence input. You can also split the graph and code-sequence data into subsets based on the id distribution.

Inspired by [Wang et al. 2021](https://dl.acm.org/doi/abs/10.1145/3411763.3451617), we decided to utilize the top-voted and well-documented Kaggle notebooks to construct the notebookCDG dataset.

We collected the top 10% most highly-voted notebooks from the top 20 popular competitions on Kaggle (e.g. Titanic). We checked the data policy of each of the 20 competitions; none of them has copyright issues. We also contacted the Kaggle administrators to make sure our data collection complies with the platform’s policy.

In total, we collected 3,944 notebooks as raw data. After data preprocessing, the final dataset contains 2,476 of the 3,944 notebooks from the raw data. It has 28,625 code–documentation pairs. The overall code-to-markdown ratio is 2.2195.

## Bibliographic Citations

Our work is published in [EMNLP'21 Findings](https://arxiv.org/abs/2104.01002). You can cite:

```
@misc{liu2021haconvgnn,
      title={HAConvGNN: Hierarchical Attention Based Convolutional Graph Neural Network for Code Documentation Generation in Jupyter Notebooks},
      author={Xuye Liu and Dakuo Wang and April Wang and Yufang Hou and Lingfei Wu},
      year={2021},
      eprint={2104.01002},
      archivePrefix={arXiv},
      primaryClass={cs.SE}
}
```
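For reference, a minimal sketch of inspecting the provided pickle file is shown below; the file name comes from this card, but its internal structure is not documented here, so the code only loads the object and peeks at it.

```
import pickle

# Sketch only: dataset_notebook.pkl is the file named in this card; its internal
# structure is not documented here, so we simply load the object and inspect it.
with open("dataset_notebook.pkl", "rb") as f:
    data = pickle.load(f)

print(type(data))
if hasattr(data, "keys"):
    print(list(data.keys())[:10])  # peek at the first few keys if it is dict-like
```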
xuyeliu/notebookCDG
[ "arxiv:2104.01002", "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-12-31T18:13:39+00:00
[ "2104.01002" ]
[]
TAGS #arxiv-2104.01002 #region-us
# notebookCDG This dataset designed for a recent published paper(HAConvGNN: Hierarchical Attention Based Convolutional Graph Neural Network for Code Documentation Generation in Jupyter Notebooks) EMNLP'21 Finding. You can directly use dataset_notebook.pkl to run the code from the github repository In the repository, we split ground truth documentation split into URL, URL, and URL subsets, following a 8:1:1 ratio. ast_nodes.pkl and ast_edges.pkl are the graph input in this dataset. URL is the code sequence input in this dataset. You can also based on the id distribution to split graph and code sequence subsets. Inspired by Wang et al. 2021, we decided to utilize the top-voted and well-documented Kaggle notebooks to construct the notebookCDG dataset We collected the top 10% highly-voted notebooks from the top 20 popular competitions on Kaggle (e.g. Titanic). We checked the data policy of each of the 20 competitions, none of them has copyright issues. We also contacted the Kaggle administrators to make sure our data collection complies with the platform’s policy. In total, we collected 3,944 notebooks as raw data. After data preprocessing, the final dataset contains 2,476 notebooks out of the 3,944 notebooks from the raw data. It has 28,625 code–documentation pairs. The overall code-to-markdown ratio is 2.2195 ## Bibliographic Citations Our work is published at EMNLP'21 Finding. You can cite:
[ "# notebookCDG\n\nThis dataset designed for a recent published paper(HAConvGNN: Hierarchical Attention Based Convolutional Graph Neural Network for Code Documentation Generation in Jupyter Notebooks) EMNLP'21 Finding. \n\nYou can directly use dataset_notebook.pkl to run the code from the github repository\n\nIn the repository, we split ground truth documentation split into URL, URL, and URL subsets, following a 8:1:1 ratio. ast_nodes.pkl and ast_edges.pkl are the graph input in this dataset. URL is the code sequence input in this dataset. You can also based on the id distribution to split graph and code sequence subsets. \n\nInspired by Wang et al. 2021, we decided to utilize the top-voted and well-documented Kaggle notebooks to construct the notebookCDG dataset \n\nWe collected the top 10% highly-voted notebooks from the top 20 popular competitions on Kaggle (e.g. Titanic). We checked the data policy of each of the 20 competitions, none of them has copyright issues. We also contacted the Kaggle administrators to make sure our data collection complies with the platform’s policy. \n\nIn total, we collected 3,944 notebooks as raw data. After data preprocessing, the final dataset contains 2,476 notebooks out of the 3,944 notebooks from the raw data. It has 28,625 code–documentation pairs. The overall code-to-markdown ratio is 2.2195", "## Bibliographic Citations\n\nOur work is published at EMNLP'21 Finding. You can cite:" ]
[ "TAGS\n#arxiv-2104.01002 #region-us \n", "# notebookCDG\n\nThis dataset designed for a recent published paper(HAConvGNN: Hierarchical Attention Based Convolutional Graph Neural Network for Code Documentation Generation in Jupyter Notebooks) EMNLP'21 Finding. \n\nYou can directly use dataset_notebook.pkl to run the code from the github repository\n\nIn the repository, we split ground truth documentation split into URL, URL, and URL subsets, following a 8:1:1 ratio. ast_nodes.pkl and ast_edges.pkl are the graph input in this dataset. URL is the code sequence input in this dataset. You can also based on the id distribution to split graph and code sequence subsets. \n\nInspired by Wang et al. 2021, we decided to utilize the top-voted and well-documented Kaggle notebooks to construct the notebookCDG dataset \n\nWe collected the top 10% highly-voted notebooks from the top 20 popular competitions on Kaggle (e.g. Titanic). We checked the data policy of each of the 20 competitions, none of them has copyright issues. We also contacted the Kaggle administrators to make sure our data collection complies with the platform’s policy. \n\nIn total, we collected 3,944 notebooks as raw data. After data preprocessing, the final dataset contains 2,476 notebooks out of the 3,944 notebooks from the raw data. It has 28,625 code–documentation pairs. The overall code-to-markdown ratio is 2.2195", "## Bibliographic Citations\n\nOur work is published at EMNLP'21 Finding. You can cite:" ]
5eb45e864fc638e587e87b27063afada64e1794a
Meine wunderschönen Haare wehen im Morgenwind (German for "My beautiful hair blows in the morning wind")
yannobla/Sunshine
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-09-07T10:42:47+00:00
[]
[]
TAGS #region-us
Meine wunderschönen Haare wehen im Morgenwind
[]
[ "TAGS\n#region-us \n" ]
6414bae7a39b5f41feab2fd6a1cb773033254c93
## Usage

For testing purposes, you can use the hosted dummy dataset (`dummy_data`) as follows:

```
import datasets

ds = datasets.load_dataset("ydshieh/coco_dataset_script", "2017", data_dir="./dummy_data/")
```

To use the full COCO dataset (2017), you need to download it manually first:

```
wget http://images.cocodataset.org/zips/train2017.zip
wget http://images.cocodataset.org/zips/val2017.zip
wget http://images.cocodataset.org/zips/test2017.zip
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
wget http://images.cocodataset.org/annotations/image_info_test2017.zip
```

Then load the dataset:

```
COCO_DIR = ...(path to the downloaded dataset directory)...
ds = datasets.load_dataset("ydshieh/coco_dataset_script", "2017", data_dir=COCO_DIR)
```
ydshieh/coco_dataset_script
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-02-14T17:32:43+00:00
[]
[]
TAGS #region-us
## Usage For testing purpose, you can use the hosted dummy dataset ('dummy_data') as follows: For using the COCO dataset (2017), you need to download it manually first: Then to load the dataset:
[ "## Usage\n\nFor testing purpose, you can use the hosted dummy dataset ('dummy_data') as follows:\n\n\n\nFor using the COCO dataset (2017), you need to download it manually first:\n\n\nThen to load the dataset:" ]
[ "TAGS\n#region-us \n", "## Usage\n\nFor testing purpose, you can use the hosted dummy dataset ('dummy_data') as follows:\n\n\n\nFor using the COCO dataset (2017), you need to download it manually first:\n\n\nThen to load the dataset:" ]
3673fb0d96829eb005d6d0816ed0be21bbac249f
Thyroid ultrasound images, classified into 5 classes that correspond to the European EU-TIRADS scale, this consists of: EU-TIRADS 1: no nodule EU-TIRADS 2: benign EU-TIRADS 3: low risk (oval, smooth margin, iso / hyperechoic, no high risk features) EU-TIRADS 4: intermediate risk (oval, smooth margin, mildly hypoechoic, no high risk features) EU-TIRADS 5: any high risk features (non-oval, irregular margin, microcalcifications, marked hypoechogenicity) Ultrasound images of the thyroid that were taken from the ultrasound scanners of the FOSCAL/FOSUNAB clinic, as a final master's project for the Polytechnic University of Valencia, in collaboration with doctors Federico Lubinus and Boris Marconi, who together with Yhary Arias have worked on the classification of said ultrasounds that are saved in .DICOM format and then transformed to PNG to make the process lighter. The strategy that was carried out for the collection of images and later their labeling was: for each examination that was carried out on patients with or without a possible diagnosis, only the images without personal or sensitive information were kept, all this on a hard drive. , then a pre-processing of the images was done, their format was changed and finally they were mounted on a web page with a single view to facilitate the classification of the doctors who were in charge of this arduous task. Ultrasounds were classified into 5 classes that correspond to the European EU-TIRADS scale, this consists of: EU-TIRADS 1: no nodule EU-TIRADS 2: benign EU-TIRADS 3: low risk (oval, smooth margin, iso / hyperechoic, no high risk features) EU-TIRADS 4: intermediate risk (oval, smooth margin, mildly hypoechoic, no high risk features) EU-TIRADS 5: any high risk features (non-oval, irregular margin, microcalcifications, marked hypoechogenicity) Risk of malignancy EU-TIRADS 1: n/a EU-TIRADS 2: 0% EU-TIRADS 3: low risk (2-4%) EU-TIRADS 4: intermediate risk (6-17%) EU-TIRADS 5: high risk (26-87%) References 1. Gilles Russ, Steen J. Bonnema, Murat Faik Erdogan, Cosimo Durante, Rose Ngu, Laurence Leenhardt. European Thyroid Association Guidelines for Ultrasound Malignancy Risk Stratification of Thyroid Nodules in Adults: The EU-TIRADS. (2019) European ThyroidJournal. 6 (5): 225. doi:10.1159/000478927 - Pubmed 2. Gilles Russ, Bénédicte Royer, Claude Bigorgne, Agnès Rouxel, Marie Bienvenu-Perrard, Laurence Leenhardt. Prospective evaluation of thyroid imaging reporting and data system on 4550 nodules with and without elastography. (2013) European Journal of Endocrinology. 168 (5): 649. doi:10.1530/EJE-12-0936 - Pubmed 3. Jung Hyun Yoon, Kyunghwa Han, Eun-Kyung Kim, Hee Jung Moon, Jin Young Kwak. Diagnosis and Management of Small Thyroid Nodules: A Comparative Study with Six Guidelines for Thyroid Nodules. (2016) Radiology. 283 (2): 560-569. doi:10.1148/radiol.2016160641 - Pubmed 4. Ting Xu, Ya Wu, Run-Xin Wu, Yu-Zhi Zhang, Jing-Yu Gu, Xin-Hua Ye, Wei Tang, Shu-Hang Xu, Chao Liu, Xiao-Hong Wu. Validation and comparison of three newly-released Thyroid Imaging Reporting and Data Systems for cancer risk determination. (2019). Endocrine. 64 (2): 299. doi:10.1007/s12020-018-1817-8 - Pubmed 5. Ting Xu, Ya Wu, Run-Xin Wu, Yu-Zhi Zhang, Jing-Yu Gu, Xin-Hua Ye, Wei Tang, Shu-Hang Xu, Chao Liu, Xiao-Hong Wu. Validation and comparison of three newly-released Thyroid Imaging Reporting and Data Systems for cancer risk determination. (2019). Endocrine. 64 (2): 299. doi:10.1007/s12020-018-1817-8 - Pubmed 6. 
Grani, Giorgio, Lamartina, Livia, Ascoli, Valeria, Bosco, Daniela, Biffoni, Marco, Giacomelli, Laura, Maranghi, Marianna, Falcone, Rosa, Ramundo, Valeria, Cantisani, Vito, Filetti, Sebastiano, Durante, Cosimo. Reducing the Number of Unnecessary Thyroid Biopsies While Improving Diagnostic Accuracy: Toward the “Right” TIRADS. (2019) The Journal of Clinical Endocrinology & Metabolism. 104 (1): 95. doi:10.1210/jc.2018-01674 - Pubmed 7. Giorgio Grani, Livia Lamartina, Vito Cantisani, Marianna Maranghi, Piernatale Lucia, Cosimo Durante. Interobserver agreement of various thyroid imaging reporting and data systems. (2018) Endocrine Connections. 7 (1): 1. doi:10.1530/EC-17-0336 - Pubmed Taken from: https://radiopaedia.org/articles/european-thyroid-association-tirads *Citation Information* @yharyarias{tirads_tiroides:2022, author = {Yhary Arias, Federico Lubinus, Boris Marconi}, title = {Common Voice: Thyroid Ultrasound Imaging Dataset}, thesistitle = {Sistema para la clasificación y reconocimiento de imágenes de ultrasonido en tiroides, basado en técnicas de aprendizaje profundo para el apoyo en el proceso de diagnóstico según la escala EU-TIRADS}, year = 2022 } Bucaramanga, Santander, 2022
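A minimal loading sketch for images organized by EU-TIRADS class; the folder layout and local path below are assumptions for illustration only, since the card does not document the file structure of the repository.
```python
# Sketch: loading the EU-TIRADS PNGs as an image-classification dataset.
# Assumes one sub-directory per class (eu_tirads_1 ... eu_tirads_5); the
# directory name 'tirads_pngs' is a placeholder, not part of the card.
from datasets import load_dataset

tirads = load_dataset("imagefolder", data_dir="tirads_pngs")

print(tirads["train"].features)      # an 'image' column plus a 'label' ClassLabel
print(tirads["train"][0]["label"])   # integer id for the EU-TIRADS class
```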
yharyarias/tirads_tiroides
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-01-24T01:53:21+00:00
[]
[]
TAGS #region-us
Thyroid ultrasound images, classified into 5 classes that correspond to the European EU-TIRADS scale, this consists of: EU-TIRADS 1: no nodule EU-TIRADS 2: benign EU-TIRADS 3: low risk (oval, smooth margin, iso / hyperechoic, no high risk features) EU-TIRADS 4: intermediate risk (oval, smooth margin, mildly hypoechoic, no high risk features) EU-TIRADS 5: any high risk features (non-oval, irregular margin, microcalcifications, marked hypoechogenicity) Ultrasound images of the thyroid that were taken from the ultrasound scanners of the FOSCAL/FOSUNAB clinic, as a final master's project for the Polytechnic University of Valencia, in collaboration with doctors Federico Lubinus and Boris Marconi, who together with Yhary Arias have worked on the classification of said ultrasounds that are saved in .DICOM format and then transformed to PNG to make the process lighter. The strategy that was carried out for the collection of images and later their labeling was: for each examination that was carried out on patients with or without a possible diagnosis, only the images without personal or sensitive information were kept, all this on a hard drive. , then a pre-processing of the images was done, their format was changed and finally they were mounted on a web page with a single view to facilitate the classification of the doctors who were in charge of this arduous task. Ultrasounds were classified into 5 classes that correspond to the European EU-TIRADS scale, this consists of: EU-TIRADS 1: no nodule EU-TIRADS 2: benign EU-TIRADS 3: low risk (oval, smooth margin, iso / hyperechoic, no high risk features) EU-TIRADS 4: intermediate risk (oval, smooth margin, mildly hypoechoic, no high risk features) EU-TIRADS 5: any high risk features (non-oval, irregular margin, microcalcifications, marked hypoechogenicity) Risk of malignancy EU-TIRADS 1: n/a EU-TIRADS 2: 0% EU-TIRADS 3: low risk (2-4%) EU-TIRADS 4: intermediate risk (6-17%) EU-TIRADS 5: high risk (26-87%) References 1. Gilles Russ, Steen J. Bonnema, Murat Faik Erdogan, Cosimo Durante, Rose Ngu, Laurence Leenhardt. European Thyroid Association Guidelines for Ultrasound Malignancy Risk Stratification of Thyroid Nodules in Adults: The EU-TIRADS. (2019) European ThyroidJournal. 6 (5): 225. doi:10.1159/000478927 - Pubmed 2. Gilles Russ, Bénédicte Royer, Claude Bigorgne, Agnès Rouxel, Marie Bienvenu-Perrard, Laurence Leenhardt. Prospective evaluation of thyroid imaging reporting and data system on 4550 nodules with and without elastography. (2013) European Journal of Endocrinology. 168 (5): 649. doi:10.1530/EJE-12-0936 - Pubmed 3. Jung Hyun Yoon, Kyunghwa Han, Eun-Kyung Kim, Hee Jung Moon, Jin Young Kwak. Diagnosis and Management of Small Thyroid Nodules: A Comparative Study with Six Guidelines for Thyroid Nodules. (2016) Radiology. 283 (2): 560-569. doi:10.1148/radiol.2016160641 - Pubmed 4. Ting Xu, Ya Wu, Run-Xin Wu, Yu-Zhi Zhang, Jing-Yu Gu, Xin-Hua Ye, Wei Tang, Shu-Hang Xu, Chao Liu, Xiao-Hong Wu. Validation and comparison of three newly-released Thyroid Imaging Reporting and Data Systems for cancer risk determination. (2019). Endocrine. 64 (2): 299. doi:10.1007/s12020-018-1817-8 - Pubmed 5. Ting Xu, Ya Wu, Run-Xin Wu, Yu-Zhi Zhang, Jing-Yu Gu, Xin-Hua Ye, Wei Tang, Shu-Hang Xu, Chao Liu, Xiao-Hong Wu. Validation and comparison of three newly-released Thyroid Imaging Reporting and Data Systems for cancer risk determination. (2019). Endocrine. 64 (2): 299. doi:10.1007/s12020-018-1817-8 - Pubmed 6. 
Grani, Giorgio, Lamartina, Livia, Ascoli, Valeria, Bosco, Daniela, Biffoni, Marco, Giacomelli, Laura, Maranghi, Marianna, Falcone, Rosa, Ramundo, Valeria, Cantisani, Vito, Filetti, Sebastiano, Durante, Cosimo. Reducing the Number of Unnecessary Thyroid Biopsies While Improving Diagnostic Accuracy: Toward the “Right” TIRADS. (2019) The Journal of Clinical Endocrinology & Metabolism. 104 (1): 95. doi:10.1210/jc.2018-01674 - Pubmed 7. Giorgio Grani, Livia Lamartina, Vito Cantisani, Marianna Maranghi, Piernatale Lucia, Cosimo Durante. Interobserver agreement of various thyroid imaging reporting and data systems. (2018) Endocrine Connections. 7 (1): 1. doi:10.1530/EC-17-0336 - Pubmed Taken from: URL *Citation Information* @yharyarias{tirads_tiroides:2022, author = {Yhary Arias, Federico Lubinus, Boris Marconi}, title = {Common Voice: Thyroid Ultrasound Imaging Dataset}, thesistitle = {Sistema para la clasificación y reconocimiento de imágenes de ultrasonido en tiroides, basado en técnicas de aprendizaje profundo para el apoyo en el proceso de diagnóstico según la escala EU-TIRADS}, year = 2022 } Bucaramanga, Santander, 2022
[]
[ "TAGS\n#region-us \n" ]
109f542a17db9421961979bb3a30717c46420b67
# Dataset Card for Clean Dutch mC4 ## Table of Contents - [Dataset Card for Clean](#dataset-card-for-mc4) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Preprocessing](#preprocessing) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Original Homepage:** [HF Hub](https://huggingface.co/datasets/allenai/c4) - **Paper:** [ArXiv](https://arxiv.org/abs/1910.10683) ### Dataset Summary A cleaned version (151GB) of the Dutch part (277GB) of the C4 multilingual dataset (mC4). While this dataset is monolingual, it is possible to download `en-nl` interleaved data, see the Dataset Config section below. Based on the [Common Crawl dataset](https://commoncrawl.org). The original version was prepared by [AllenAI](https://allenai.org/), hosted at the address [https://huggingface.co/datasets/allenai/c4](https://huggingface.co/datasets/allenai/c4). ### Preprocessing The Dutch portion of mC4 was cleaned in a similar fashion as the English cleaned C4 version. See [GitLab](https://gitlab.com/yhavinga/c4nlpreproc) for details. In summary, the preprocessing procedure includes: - Removing documents containing words from a selection of the [Dutch and English List of Dirty Naught Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words). - Removing sentences containing: - Less than 3 words. - A word longer than 250 characters. - An end symbol not matching end-of-sentence punctuation. - Strings associated to javascript code (e.g. `{`), lorem ipsum, policy information in Dutch or English. - Removing documents (after sentence filtering): - Containing less than 5 sentences. - Containing less than 500 or more than 50'000 characters. - Not identified as prevalently Dutch by the `LangDetect` package. Using parallel processing with 96 CPU cores on a TPUv3 via Google Cloud to perform the complete clean of all the original Dutch shards of mC4 (1024 of ~220Mb train, 4 of ~24Mb validation) required roughly 10 hours due to the demanding steps of sentence tokenization and language detection. The total size of compressed `.json.gz` files is roughly halved after the procedure. ## Dataset Structure ### Data Instances An example from the dataset: ``` { 'timestamp': '2019-02-22T15:37:25Z', 'url': 'https://ondernemingen.bnpparibasfortis.be/nl/artikel?n=vijf-gouden-tips-voor-succesvol-zaken-doen-met-japan', 'text': 'Japanse bedrijven zijn niet alleen hondstrouw aan hun leveranciers , ze betalen ook nog eens erg stipt. Alleen is het niet zo makkelijk er een voet tussen de deur te krijgen. Met de volgende tips hebt u alvast een streepje voor.\nIn Japan draait alles om vertrouwen. Neem voldoende tijd om een relatie op te bouwen.Aarzel niet om tijdig een lokale vertrouwenspersoon in te schakelen.\nJapan is een erg competitieve markt.Kwaliteit en prijs zijn erg belangrijk, u zult dus het beste van uzelf moeten geven. 
Gelukkig is de beloning groot. Japanse zakenlui zijn loyaal en betalen stipt!\nJapanners houden er eigenzinnige eisen op na. Kom dus niet aanzetten met uw standaardproducten voor de Europese markt. Zo moet een producent van diepvriesfrieten bijvoorbeeld perfect identieke frietjes kunnen leveren in mini- verpakkingen. Het goede nieuws is dat Japanners voor kwaliteit graag diep in hun buidel tasten.\nEn u dacht dat Europa lijdt aan reglementitis? Japanners kennen er ook wat van. Tal van voorschriften zeggen wat je wel en niet mag doen. Gelukkig zijn de regels helder geformuleerd.\nHet gebruik van het Engels is niet echt ingeburgerd in Japan. Betrek een tolk bij uw onderhandelingen en zorg voor correcte vertalingen van handleidingen of softwareprogramma’s.' } ``` ### Data Fields The data contains the following fields: - `url`: url of the source as a string - `text`: text content as a string - `timestamp`: timestamp of extraction as a string ### Data Configs To build mC4, the original authors used [CLD3](https://github.com/google/cld3) to identify over 100 languages. For Dutch, the whole corpus of scraped text was divided in `1032` jsonl files, `1024` for training following the naming style `c4-nl-cleaned.tfrecord-0XXXX-of-01024.json.gz` and 4 for validation following the naming style `c4-nl-cleaned.tfrecord-0000X-of-00004.json.gz`. The full set of pre-processed files takes roughly 208GB of disk space to download with Git LFS. For ease of use under different storage capacities, the following incremental configs are available: (note: files on disk are compressed) | config | train size (docs, words, download + preproc disk space) | validation size | |:-------|--------------------------------------------------------:|----------------:| | micro | 125k docs, 23M words (<1GB) | 16k docs | | tiny | 6M docs, 2B words (6 GB + 15 GB) | 16k docs | | small | 15M docs, 6B words (14 GB + 36 GB) | 16k docs | | medium | 31M docs, 12B words (28 GB + 72 GB) | 32k docs | | large | 47M docs, 19B words (42 GB + 108 GB) | 48k docs | | full | 64M docs, 25B words (58 GB + 148 GB) | 64k docs | For each config above there also exists a config `<name>_en_nl` that interleaves `nl` and `en` examples from the cleaned `en` variant of C4. You can load any config like this: ```python from datasets import load_dataset datasets = load_dataset('yhavinga/mc4_nl_cleaned', 'tiny', streaming=True) print(datasets) ``` This will print ``` DatasetDict({ train: Dataset({ features: ['text', 'timestamp', 'url'], num_rows: 6303893 }) validation: Dataset({ features: ['text', 'timestamp', 'url'], num_rows: 16189 }) }) ``` Since the configs are quite large, you may want to traverse them using the streaming mode available starting from — Datasets v1.9.0: ```python from datasets import load_dataset mc4_nl_full_stream = load_dataset('yhavinga/mc4_nl_cleaned', "full", split='train', streaming=True) print(next(iter(mc4_nl_full_stream))) # Prints the example presented above ``` ## Dataset Creation Refer to the original paper for more considerations regarding the choice of sources and the scraping process for creating `mC4`. ## Considerations for Using the Data ### Social Impact of Dataset With more than 151GB (58GB compressed) of cleaned Dutch text and more than 23B estimated words, this is by far the largest available cleaned corpus for the Dutch language. The second largest dataset available is [OSCAR](https://oscar-corpus.com/), which is only 39GB in size for its deduplicated variant, and contains vulgarity. 
Using this corpus for training language models with adequate computational resources will allow researchers to reach parity with the performances observed for the English language. This can in turn have important repercussions for the development of commercial language technology applications for the Dutch language. ### Discussion of Biases Despite the cleaning procedure aimed at removing vulgarity and profanity, it must be considered that model trained on this scraped corpus will inevitably reflect biases present in blog articles and comments on the Internet. This makes the corpus especially interesting in the context of studying data biases and how to limit their impacts. ## Additional Information ### Licensing Information AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset. ### Citation Information ``` @article{2019t5, author = {Colin Raffel and Noam Shazeer and Adam Roberts and Katherine Lee and Sharan Narang and Michael Matena and Yanqi Zhou and Wei Li and Peter J. Liu}, title = {Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer}, journal = {arXiv e-prints}, year = {2019}, archivePrefix = {arXiv}, eprint = {1910.10683}, } ``` ### Contributions Thanks to [[email protected]](mailto:[email protected]), [@dirkgr](https://github.com/dirkgr) and [@lhoestq](https://github.com/lhoestq) for providing the `cleaned_it_mc4` example that shows how upload a dataset to the Huggingface hub.
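A short streaming sketch for the interleaved variant; the config name `tiny_en_nl` is inferred from the `<name>_en_nl` pattern described in the Data Configs section and should be checked against the repository before use.
```python
# Sketch: stream a few documents from the interleaved Dutch/English config.
# 'tiny_en_nl' follows the '<name>_en_nl' naming pattern described above and
# is an assumption; verify the available configs on the dataset repository.
from datasets import load_dataset

stream = load_dataset("yhavinga/mc4_nl_cleaned", "tiny_en_nl",
                      split="train", streaming=True)

for i, doc in enumerate(stream):
    print(doc["timestamp"], doc["url"])
    if i == 2:  # inspect only the first three documents
        break
```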
yhavinga/mc4_nl_cleaned
[ "task_categories:text-generation", "task_ids:language-modeling", "annotations_creators:no-annotation", "language_creators:found", "multilinguality:monolingual", "multilinguality:en-nl", "size_categories:n<1K", "size_categories:1K<n<10K", "size_categories:10K<n<100K", "size_categories:100K<n<1M", "size_categories:1M<n<10M", "size_categories:10M<n<100M", "size_categories:100M<n<1B", "size_categories:1B<n<10B", "source_datasets:extended", "language:nl", "language:en", "license:odc-by", "arxiv:1910.10683", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["no-annotation"], "language_creators": ["found"], "language": ["nl", "en"], "license": ["odc-by"], "multilinguality": ["monolingual", "en-nl"], "size_categories": ["n<1K", "1K<n<10K", "10K<n<100K", "100K<n<1M", "1M<n<10M", "10M<n<100M", "100M<n<1B", "1B<n<10B"], "source_datasets": ["extended"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "paperswithcode_id": "mc4", "pretty_name": "mC4_nl_cleaned"}
2024-01-02T13:45:07+00:00
[ "1910.10683" ]
[ "nl", "en" ]
TAGS #task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #multilinguality-en-nl #size_categories-n<1K #size_categories-1K<n<10K #size_categories-10K<n<100K #size_categories-100K<n<1M #size_categories-1M<n<10M #size_categories-10M<n<100M #size_categories-100M<n<1B #size_categories-1B<n<10B #source_datasets-extended #language-Dutch #language-English #license-odc-by #arxiv-1910.10683 #region-us
Dataset Card for Clean Dutch mC4 ================================ Table of Contents ----------------- * Dataset Card for Clean + Table of Contents + Dataset Description - Dataset Summary - Preprocessing - Languages + Dataset Structure - Data Instances - Data Fields - Data Splits + Dataset Creation + Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations + Additional Information - Licensing Information - Citation Information - Contributions Dataset Description ------------------- * Original Homepage: HF Hub * Paper: ArXiv ### Dataset Summary A cleaned version (151GB) of the Dutch part (277GB) of the C4 multilingual dataset (mC4). While this dataset is monolingual, it is possible to download 'en-nl' interleaved data, see the Dataset Config section below. Based on the Common Crawl dataset. The original version was prepared by AllenAI, hosted at the address URL ### Preprocessing The Dutch portion of mC4 was cleaned in a similar fashion as the English cleaned C4 version. See GitLab for details. In summary, the preprocessing procedure includes: * Removing documents containing words from a selection of the Dutch and English List of Dirty Naught Obscene and Otherwise Bad Words. * Removing sentences containing: + Less than 3 words. + A word longer than 250 characters. + An end symbol not matching end-of-sentence punctuation. + Strings associated to javascript code (e.g. '{'), lorem ipsum, policy information in Dutch or English. * Removing documents (after sentence filtering): + Containing less than 5 sentences. + Containing less than 500 or more than 50'000 characters. + Not identified as prevalently Dutch by the 'LangDetect' package. Using parallel processing with 96 CPU cores on a TPUv3 via Google Cloud to perform the complete clean of all the original Dutch shards of mC4 (1024 of ~220Mb train, 4 of ~24Mb validation) required roughly 10 hours due to the demanding steps of sentence tokenization and language detection. The total size of compressed '.URL' files is roughly halved after the procedure. Dataset Structure ----------------- ### Data Instances An example from the dataset: ### Data Fields The data contains the following fields: * 'url': url of the source as a string * 'text': text content as a string * 'timestamp': timestamp of extraction as a string ### Data Configs To build mC4, the original authors used CLD3 to identify over 100 languages. For Dutch, the whole corpus of scraped text was divided in '1032' jsonl files, '1024' for training following the naming style 'URL' and 4 for validation following the naming style 'URL'. The full set of pre-processed files takes roughly 208GB of disk space to download with Git LFS. For ease of use under different storage capacities, the following incremental configs are available: (note: files on disk are compressed) For each config above there also exists a config '\_en\_nl' that interleaves 'nl' and 'en' examples from the cleaned 'en' variant of C4. You can load any config like this: This will print Since the configs are quite large, you may want to traverse them using the streaming mode available starting from — Datasets v1.9.0: Dataset Creation ---------------- Refer to the original paper for more considerations regarding the choice of sources and the scraping process for creating 'mC4'. 
Considerations for Using the Data --------------------------------- ### Social Impact of Dataset With more than 151GB (58GB compressed) of cleaned Dutch text and more than 23B estimated words, this is by far the largest available cleaned corpus for the Dutch language. The second largest dataset available is OSCAR, which is only 39GB in size for its deduplicated variant, and contains vulgarity. Using this corpus for training language models with adequate computational resources will allow researchers to reach parity with the performances observed for the English language. This can in turn have important repercussions for the development of commercial language technology applications for the Dutch language. ### Discussion of Biases Despite the cleaning procedure aimed at removing vulgarity and profanity, it must be considered that model trained on this scraped corpus will inevitably reflect biases present in blog articles and comments on the Internet. This makes the corpus especially interesting in the context of studying data biases and how to limit their impacts. Additional Information ---------------------- ### Licensing Information AllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset. ### Contributions Thanks to gabriele.sarti996@URL, @dirkgr and @lhoestq for providing the 'cleaned\_it\_mc4' example that shows how upload a dataset to the Huggingface hub.
[ "### Dataset Summary\n\n\nA cleaned version (151GB) of the Dutch part (277GB) of the C4 multilingual dataset (mC4).\nWhile this dataset is monolingual, it is possible to download 'en-nl' interleaved data, see the Dataset Config section below.\nBased on the Common Crawl dataset.\nThe original version was prepared by AllenAI, hosted at the address URL", "### Preprocessing\n\n\nThe Dutch portion of mC4 was cleaned in a similar fashion as the English cleaned C4 version.\nSee GitLab for details.\n\n\nIn summary, the preprocessing procedure includes:\n\n\n* Removing documents containing words from a selection of the Dutch and English List of Dirty Naught Obscene and Otherwise Bad Words.\n* Removing sentences containing:\n\n\n\t+ Less than 3 words.\n\t+ A word longer than 250 characters.\n\t+ An end symbol not matching end-of-sentence punctuation.\n\t+ Strings associated to javascript code (e.g. '{'), lorem ipsum, policy information in Dutch or English.\n* Removing documents (after sentence filtering):\n\n\n\t+ Containing less than 5 sentences.\n\t+ Containing less than 500 or more than 50'000 characters.\n\t+ Not identified as prevalently Dutch by the 'LangDetect' package.\n\n\nUsing parallel processing with 96 CPU cores on a TPUv3 via Google Cloud to perform the complete clean of all the original Dutch\nshards of mC4 (1024 of ~220Mb train, 4 of ~24Mb validation) required roughly 10 hours due to the demanding steps of sentence\ntokenization and language detection. The total size of compressed '.URL' files is roughly halved after the procedure.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example from the dataset:", "### Data Fields\n\n\nThe data contains the following fields:\n\n\n* 'url': url of the source as a string\n* 'text': text content as a string\n* 'timestamp': timestamp of extraction as a string", "### Data Configs\n\n\nTo build mC4, the original authors used CLD3 to identify over 100 languages.\nFor Dutch, the whole corpus of scraped text was divided in '1032' jsonl files, '1024' for training following\nthe naming style 'URL' and 4 for validation following the\nnaming style 'URL'. 
The full set of pre-processed files takes roughly 208GB of disk space to download with Git LFS.\n\n\nFor ease of use under different storage capacities, the following incremental configs are available: (note: files on disk are compressed)\n\n\n\nFor each config above there also exists a config '\\_en\\_nl' that interleaves 'nl' and 'en' examples from the cleaned\n'en' variant of C4.\n\n\nYou can load any config like this:\n\n\nThis will print\n\n\nSince the configs are quite large, you may want to traverse them using the streaming mode available starting from — Datasets v1.9.0:\n\n\nDataset Creation\n----------------\n\n\nRefer to the original paper for more considerations regarding the choice of sources and the scraping process for creating 'mC4'.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nWith more than 151GB (58GB compressed) of cleaned Dutch text and more than 23B estimated words, this is by far the largest available cleaned corpus for the Dutch language.\nThe second largest dataset available is OSCAR, which is only 39GB in size for its deduplicated variant, and contains vulgarity.\nUsing this corpus for training language models with adequate computational resources will allow researchers to reach parity with the performances observed for the English language.\nThis can in turn have important repercussions for the development of commercial language technology applications for the Dutch language.", "### Discussion of Biases\n\n\nDespite the cleaning procedure aimed at removing vulgarity and profanity, it must be considered that model trained on this scraped corpus will\ninevitably reflect biases present in blog articles and comments on the Internet.\nThis makes the corpus especially interesting in the context of studying data biases and how to limit their impacts.\n\n\nAdditional Information\n----------------------", "### Licensing Information\n\n\nAllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.", "### Contributions\n\n\nThanks to gabriele.sarti996@URL, @dirkgr and @lhoestq for\nproviding the 'cleaned\\_it\\_mc4' example that shows how upload a dataset to the Huggingface hub." ]
[ "TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-no-annotation #language_creators-found #multilinguality-monolingual #multilinguality-en-nl #size_categories-n<1K #size_categories-1K<n<10K #size_categories-10K<n<100K #size_categories-100K<n<1M #size_categories-1M<n<10M #size_categories-10M<n<100M #size_categories-100M<n<1B #size_categories-1B<n<10B #source_datasets-extended #language-Dutch #language-English #license-odc-by #arxiv-1910.10683 #region-us \n", "### Dataset Summary\n\n\nA cleaned version (151GB) of the Dutch part (277GB) of the C4 multilingual dataset (mC4).\nWhile this dataset is monolingual, it is possible to download 'en-nl' interleaved data, see the Dataset Config section below.\nBased on the Common Crawl dataset.\nThe original version was prepared by AllenAI, hosted at the address URL", "### Preprocessing\n\n\nThe Dutch portion of mC4 was cleaned in a similar fashion as the English cleaned C4 version.\nSee GitLab for details.\n\n\nIn summary, the preprocessing procedure includes:\n\n\n* Removing documents containing words from a selection of the Dutch and English List of Dirty Naught Obscene and Otherwise Bad Words.\n* Removing sentences containing:\n\n\n\t+ Less than 3 words.\n\t+ A word longer than 250 characters.\n\t+ An end symbol not matching end-of-sentence punctuation.\n\t+ Strings associated to javascript code (e.g. '{'), lorem ipsum, policy information in Dutch or English.\n* Removing documents (after sentence filtering):\n\n\n\t+ Containing less than 5 sentences.\n\t+ Containing less than 500 or more than 50'000 characters.\n\t+ Not identified as prevalently Dutch by the 'LangDetect' package.\n\n\nUsing parallel processing with 96 CPU cores on a TPUv3 via Google Cloud to perform the complete clean of all the original Dutch\nshards of mC4 (1024 of ~220Mb train, 4 of ~24Mb validation) required roughly 10 hours due to the demanding steps of sentence\ntokenization and language detection. The total size of compressed '.URL' files is roughly halved after the procedure.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nAn example from the dataset:", "### Data Fields\n\n\nThe data contains the following fields:\n\n\n* 'url': url of the source as a string\n* 'text': text content as a string\n* 'timestamp': timestamp of extraction as a string", "### Data Configs\n\n\nTo build mC4, the original authors used CLD3 to identify over 100 languages.\nFor Dutch, the whole corpus of scraped text was divided in '1032' jsonl files, '1024' for training following\nthe naming style 'URL' and 4 for validation following the\nnaming style 'URL'. 
The full set of pre-processed files takes roughly 208GB of disk space to download with Git LFS.\n\n\nFor ease of use under different storage capacities, the following incremental configs are available: (note: files on disk are compressed)\n\n\n\nFor each config above there also exists a config '\\_en\\_nl' that interleaves 'nl' and 'en' examples from the cleaned\n'en' variant of C4.\n\n\nYou can load any config like this:\n\n\nThis will print\n\n\nSince the configs are quite large, you may want to traverse them using the streaming mode available starting from — Datasets v1.9.0:\n\n\nDataset Creation\n----------------\n\n\nRefer to the original paper for more considerations regarding the choice of sources and the scraping process for creating 'mC4'.\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset\n\n\nWith more than 151GB (58GB compressed) of cleaned Dutch text and more than 23B estimated words, this is by far the largest available cleaned corpus for the Dutch language.\nThe second largest dataset available is OSCAR, which is only 39GB in size for its deduplicated variant, and contains vulgarity.\nUsing this corpus for training language models with adequate computational resources will allow researchers to reach parity with the performances observed for the English language.\nThis can in turn have important repercussions for the development of commercial language technology applications for the Dutch language.", "### Discussion of Biases\n\n\nDespite the cleaning procedure aimed at removing vulgarity and profanity, it must be considered that model trained on this scraped corpus will\ninevitably reflect biases present in blog articles and comments on the Internet.\nThis makes the corpus especially interesting in the context of studying data biases and how to limit their impacts.\n\n\nAdditional Information\n----------------------", "### Licensing Information\n\n\nAllenAI are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.", "### Contributions\n\n\nThanks to gabriele.sarti996@URL, @dirkgr and @lhoestq for\nproviding the 'cleaned\\_it\\_mc4' example that shows how upload a dataset to the Huggingface hub." ]
9111d6987c89a76a1a640bfc661ccdb712e9e4cd
https://www.geogebra.org/m/cwcveget https://www.geogebra.org/m/b8dzxk6z https://www.geogebra.org/m/nqanttum https://www.geogebra.org/m/pd3g8a4u https://www.geogebra.org/m/jw8324jz https://www.geogebra.org/m/wjbpvz5q https://www.geogebra.org/m/qm3g3ma6 https://www.geogebra.org/m/sdajgph8 https://www.geogebra.org/m/e3ghhcbf https://www.geogebra.org/m/msne4bfm https://www.geogebra.org/m/nmcv2te5 https://www.geogebra.org/m/hguqx6cn https://www.geogebra.org/m/jnyvpgqu https://www.geogebra.org/m/syctd97g https://www.geogebra.org/m/nq9erdby https://www.geogebra.org/m/au4har8c https://network.aza.org/network/members/profile?UserKey=811de229-7f08-4360-863c-ac04181ba9c0 https://network.aza.org/network/members/profile?UserKey=31b495a0-36f7-4a50-ba3e-d76e3487278c https://network.aza.org/network/members/profile?UserKey=753c0ddd-bded-4b03-8c68-11dacdd1f676 https://network.aza.org/network/members/profile?UserKey=db9d0a25-1615-4e39-b61f-ad68766095b3 https://network.aza.org/network/members/profile?UserKey=59279f52-50cf-4686-9fb0-9ab613211ead https://network.aza.org/network/members/profile?UserKey=67b3ce20-cc3a-420f-8933-10796f301060 https://network.aza.org/network/members/profile?UserKey=f5e610c3-6400-4429-b42b-97eeeeb284a9 https://network.aza.org/network/members/profile?UserKey=ccda0739-f5f5-4ecc-a729-77c9a6825897 https://network.aza.org/network/members/profile?UserKey=3983471f-cf43-4a4a-90d3-148040f92dd9 https://network.aza.org/network/members/profile?UserKey=9f16d7a8-3502-4904-a99a-38362de78973 https://network.aza.org/network/members/profile?UserKey=961981d5-9743-44ac-8525-d4c8b708eb5a https://network.aza.org/network/members/profile?UserKey=178276d7-c64d-408e-af52-96d1ebd549fc
yluisfern/PBU
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2021-04-02T15:39:30+00:00
[]
[]
TAGS #region-us
URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL URL
[]
[ "TAGS\n#region-us \n" ]
06ee53dad2bab38ab0c45f13cd6d3c1c85d640ee
- hoge - fuga
yonesuke/Ising2D
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-01-18T11:50:23+00:00
[]
[]
TAGS #region-us
- hoge - fuga
[]
[ "TAGS\n#region-us \n" ]
3368ab40c719d3fc556a2d11b8c1d32fac9278be
This dataset contains scripts for all episodes of Rick and Morty seasons 1, 2, and 3. Columns: index, season no., episode no., episode name, (character) name, line (dialogue)
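A quick exploration sketch based on the column list above; the file name inside the repository is not stated in the card, so the CSV path below is a placeholder assumption.
```python
# Sketch only: 'dialogues.csv' is a placeholder file name, not confirmed by the
# card; column names follow the list above (name = character, line = dialogue).
import pandas as pd

df = pd.read_csv("dialogues.csv")

# Lines of dialogue per character across seasons 1-3.
lines_per_character = df.groupby("name")["line"].count().sort_values(ascending=False)
print(lines_per_character.head(10))
```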
ysharma/rickandmorty
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-01-02T00:45:54+00:00
[]
[]
TAGS #region-us
This dataset contains scripts for all episodes of Rick and Morty seasons 1, 2, and 3. Columns: index, season no., episode no., episode name, (character) name, line (dialogue)
[]
[ "TAGS\n#region-us \n" ]
86de7d45936fe0885b6783dff6bdd6e6eca8eff0
# Dataset Card for annotated_reference_strings ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://www.github.com/kylase](https://www.github.com/kylase) - **Repository:** [https://www.github.com/kylase](https://www.github.com/kylase) - **Point of Contact:** [Yuan Chuan Kee](https://www.github.com/kylase) ### Dataset Summary The `annotated_reference_strings` dataset comprises millions of the annotated reference strings, i.e. each token of the strings have an associated label such as author, title, year, etc. These strings are synthesized using citation processor on millions of citations obtained from various sources, spanning different scientific domains. ### Supported Tasks This dataset can be used for structure prediction. ### Languages The dataset is composed of reference strings that are in English. ## Dataset Structure ### Data Instances ```json { "source": "pubmed", "lang": "en", "entry_type": "article", "doi_prefix": "pubmed19n0001", "csl_style": "annual-reviews", "content": "<citation-number>8.</citation-number> <author>Mohr W.</author> <year>1977.</year> <title>[Morphology of bone tumors. 2. Morphology of benign bone tumors].</title> <container-title>Aktuelle Probleme in Chirurgie und Orthopadie.</container-title> <volume>5:</volume> <page>29–42</page>" } ``` #### Important Note 1. Each citation is rendered to _at most_ **17** CSL styles. Therefore, there will be near duplicates. 2. All characters (including punctuations) of a segment (**a segment consists of 1 or more token**) are enclosed by tag(s). 1. Only tokens that act as "conjunctions" are not enclosed in tags. These tokens will be labelled as `other`. 3. There will be instances which a segment can be enclosed by more than one tag e.g. `<issued><year>2021</year></issued>`. This depends on how the styles' author(s). ### Data Fields - `source`: Describe the source of the citation. `{pubmed, jstor, crossref}` - `lang`: Describe the language of the citation. `{en}` - `entry_type`: Describe the BibTeX entry type. `{article, book, inbook, misc, techreport, phdthesis, incollection, inproceedings}` - `doi_prefix`: For JSTOR and CrossRef, it is the prefix of the DOI. For PubMed, it is the directory (e.g. `pubmed19nXXXX` where `XXXX` is 4 digits) of which the citation is generated from. - `csl_style`: The CSL style which the citation is rendered as. 
- `content`: The rendered citation of a specific style with each segment enclosed by tags named after the CSL variables ### Data Splits Data splits are not available yet. ## Dataset Creation ### Source Data #### Initial Data Collection and Normalization The citations that are used to generate these reference strings are obtained from 3 main sources: - [PubMed](https://www.nlm.nih.gov/databases/download/pubmed_medline.html) (2019 Baseline) - CrossRef via [Open Academic Graph v2](https://www.microsoft.com/en-us/research/project/open-academic-graph/) - JSTOR Sample Datasets (not available online as of publication date) If the citation is not in BibTeX format, [bibutils](https://sourceforge.net/p/bibutils/home/Bibutils/) is used to convert it to BibTeX. #### Who are the source language producers? The manner which the citations are rendered as reference strings are based on rules/specifications dictated by the publisher. [Citation Style Language](https://citationstyles.org/) (CSL) is an established standard which such specifications are prescribed. Thousands of citation styles are available. ### Annotations #### Annotation process The annotation process involves 2 main interventions: 1. Modification of the styles' CSL specification to inject the CSL variable names as part of the render process 2. Sanitization of the rendered strings using regular expressions to ensure all tokens and characters are enclosed in the tags #### Who are the annotators? The original CSL specification are available on [GitHub](https://github.com/citation-style-language/styles). The modification of the styles and the sanitization process are done by the author of this work. ## Additional Information ### Licensing Information This dataset is licensed under [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/). ### Citation Information This dataset is a product of a Master Project done in the National University of Singapore. If you are using it, please cite the following: ```bibtex @techreport{kee2021, author = {Yuan Chuan Kee}, title = {Synthesis of a large dataset of annotated reference strings for developing citation parsers}, institution = {National University of Singapore}, year = {2021} } ``` ### Contributions Thanks to [@kylase](https://github.com/kylase) for adding this dataset.
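A small parsing sketch for the `content` field described in Data Fields; the regex assumes the flat `<tag>...</tag>` layout shown in the data instance, so nested tags such as `<issued><year>...</year></issued>` and untagged "conjunction" tokens (labelled `other`) would need an extra pass.
```python
# Sketch: convert one annotated reference string into (token, label) pairs.
# The example string is taken from the data instance above.
import re

content = (
    "<citation-number>8.</citation-number> <author>Mohr W.</author> "
    "<year>1977.</year> <volume>5:</volume> <page>29–42</page>"
)

pairs = []
for tag, segment in re.findall(r"<([\w-]+)>(.*?)</\1>", content):
    for token in segment.split():
        pairs.append((token, tag))

print(pairs)
# [('8.', 'citation-number'), ('Mohr', 'author'), ('W.', 'author'),
#  ('1977.', 'year'), ('5:', 'volume'), ('29–42', 'page')]
```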
yuanchuan/annotated_reference_strings
[ "task_categories:token-classification", "task_ids:parsing", "annotations_creators:other", "language_creators:found", "multilinguality:monolingual", "size_categories:10M<n<100M", "source_datasets:original", "language:en", "license:cc-by-4.0", "region:us" ]
2022-03-02T23:29:22+00:00
{"annotations_creators": ["other"], "language_creators": ["found"], "language": ["en"], "license": ["cc-by-4.0"], "multilinguality": ["monolingual"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["parsing"], "pretty_name": "Annotated Reference Strings"}
2022-10-26T13:53:23+00:00
[]
[ "en" ]
TAGS #task_categories-token-classification #task_ids-parsing #annotations_creators-other #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-English #license-cc-by-4.0 #region-us
# Dataset Card for annotated_reference_strings ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: URL - Repository: URL - Point of Contact: Yuan Chuan Kee ### Dataset Summary The 'annotated_reference_strings' dataset comprises millions of the annotated reference strings, i.e. each token of the strings have an associated label such as author, title, year, etc. These strings are synthesized using citation processor on millions of citations obtained from various sources, spanning different scientific domains. ### Supported Tasks This dataset can be used for structure prediction. ### Languages The dataset is composed of reference strings that are in English. ## Dataset Structure ### Data Instances #### Important Note 1. Each citation is rendered to _at most_ 17 CSL styles. Therefore, there will be near duplicates. 2. All characters (including punctuations) of a segment (a segment consists of 1 or more token) are enclosed by tag(s). 1. Only tokens that act as "conjunctions" are not enclosed in tags. These tokens will be labelled as 'other'. 3. There will be instances which a segment can be enclosed by more than one tag e.g. '<issued><year>2021</year></issued>'. This depends on how the styles' author(s). ### Data Fields - 'source': Describe the source of the citation. '{pubmed, jstor, crossref}' - 'lang': Describe the language of the citation. '{en}' - 'entry_type': Describe the BibTeX entry type. '{article, book, inbook, misc, techreport, phdthesis, incollection, inproceedings}' - 'doi_prefix': For JSTOR and CrossRef, it is the prefix of the DOI. For PubMed, it is the directory (e.g. 'pubmed19nXXXX' where 'XXXX' is 4 digits) of which the citation is generated from. - 'csl_style': The CSL style which the citation is rendered as. - 'content': The rendered citation of a specific style with each segment enclosed by tags named after the CSL variables ### Data Splits Data splits are not available yet. ## Dataset Creation ### Source Data #### Initial Data Collection and Normalization The citations that are used to generate these reference strings are obtained from 3 main sources: - PubMed (2019 Baseline) - CrossRef via Open Academic Graph v2 - JSTOR Sample Datasets (not available online as of publication date) If the citation is not in BibTeX format, bibutils is used to convert it to BibTeX. #### Who are the source language producers? The manner which the citations are rendered as reference strings are based on rules/specifications dictated by the publisher. Citation Style Language (CSL) is an established standard which such specifications are prescribed. Thousands of citation styles are available. ### Annotations #### Annotation process The annotation process involves 2 main interventions: 1. Modification of the styles' CSL specification to inject the CSL variable names as part of the render process 2. Sanitization of the rendered strings using regular expressions to ensure all tokens and characters are enclosed in the tags #### Who are the annotators? The original CSL specification are available on GitHub. 
The modification of the styles and the sanitization process are done by the author of this work. ## Additional Information ### Licensing Information This dataset is licensed under Creative Commons Attribution 4.0 International (CC BY 4.0). This dataset is a product of a Master Project done in the National University of Singapore. If you are using it, please cite the following: ### Contributions Thanks to @kylase for adding this dataset.
[ "# Dataset Card for annotated_reference_strings", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Point of Contact: Yuan Chuan Kee", "### Dataset Summary\n\nThe 'annotated_reference_strings' dataset comprises millions of the annotated reference strings, i.e. each token of the strings have an associated label such as author, title, year, etc.\n\nThese strings are synthesized using citation processor on millions of citations obtained from various sources, spanning different scientific domains.", "### Supported Tasks\n\nThis dataset can be used for structure prediction.", "### Languages\n\nThe dataset is composed of reference strings that are in English.", "## Dataset Structure", "### Data Instances", "#### Important Note \n\n1. Each citation is rendered to _at most_ 17 CSL styles. Therefore, there will be near duplicates.\n2. All characters (including punctuations) of a segment (a segment consists of 1 or more token) are enclosed by tag(s). \n 1. Only tokens that act as \"conjunctions\" are not enclosed in tags. These tokens will be labelled as 'other'.\n3. There will be instances which a segment can be enclosed by more than one tag e.g. '<issued><year>2021</year></issued>'. This depends on how the styles' author(s).", "### Data Fields\n\n- 'source': Describe the source of the citation. '{pubmed, jstor, crossref}'\n- 'lang': Describe the language of the citation. '{en}'\n- 'entry_type': Describe the BibTeX entry type. '{article, book, inbook, misc, techreport, phdthesis, incollection, inproceedings}'\n- 'doi_prefix': For JSTOR and CrossRef, it is the prefix of the DOI. For PubMed, it is the directory (e.g. 'pubmed19nXXXX' where 'XXXX' is 4 digits) of which the citation is generated from.\n- 'csl_style': The CSL style which the citation is rendered as.\n- 'content': The rendered citation of a specific style with each segment enclosed by tags named after the CSL variables", "### Data Splits\n\nData splits are not available yet.", "## Dataset Creation", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe citations that are used to generate these reference strings are obtained from 3 main sources:\n\n- PubMed (2019 Baseline)\n- CrossRef via Open Academic Graph v2\n- JSTOR Sample Datasets (not available online as of publication date)\n\nIf the citation is not in BibTeX format, bibutils is used to convert it to BibTeX.", "#### Who are the source language producers?\n\nThe manner which the citations are rendered as reference strings are based on rules/specifications dictated by the publisher.\nCitation Style Language (CSL) is an established standard which such specifications are prescribed. \nThousands of citation styles are available.", "### Annotations", "#### Annotation process\n\nThe annotation process involves 2 main interventions:\n1. Modification of the styles' CSL specification to inject the CSL variable names as part of the render process\n2. 
Sanitization of the rendered strings using regular expressions to ensure all tokens and characters are enclosed in the tags", "#### Who are the annotators?\n\nThe original CSL specification are available on GitHub.\n\nThe modification of the styles and the sanitization process are done by the author of this work.", "## Additional Information", "### Licensing Information\n\nThis dataset is licensed under Creative Commons Attribution 4.0 International (CC BY 4.0).\n\n\n\nThis dataset is a product of a Master Project done in the National University of Singapore. \n\nIf you are using it, please cite the following:", "### Contributions\n\nThanks to @kylase for adding this dataset." ]
[ "TAGS\n#task_categories-token-classification #task_ids-parsing #annotations_creators-other #language_creators-found #multilinguality-monolingual #size_categories-10M<n<100M #source_datasets-original #language-English #license-cc-by-4.0 #region-us \n", "# Dataset Card for annotated_reference_strings", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Point of Contact: Yuan Chuan Kee", "### Dataset Summary\n\nThe 'annotated_reference_strings' dataset comprises millions of the annotated reference strings, i.e. each token of the strings have an associated label such as author, title, year, etc.\n\nThese strings are synthesized using citation processor on millions of citations obtained from various sources, spanning different scientific domains.", "### Supported Tasks\n\nThis dataset can be used for structure prediction.", "### Languages\n\nThe dataset is composed of reference strings that are in English.", "## Dataset Structure", "### Data Instances", "#### Important Note \n\n1. Each citation is rendered to _at most_ 17 CSL styles. Therefore, there will be near duplicates.\n2. All characters (including punctuations) of a segment (a segment consists of 1 or more token) are enclosed by tag(s). \n 1. Only tokens that act as \"conjunctions\" are not enclosed in tags. These tokens will be labelled as 'other'.\n3. There will be instances which a segment can be enclosed by more than one tag e.g. '<issued><year>2021</year></issued>'. This depends on how the styles' author(s).", "### Data Fields\n\n- 'source': Describe the source of the citation. '{pubmed, jstor, crossref}'\n- 'lang': Describe the language of the citation. '{en}'\n- 'entry_type': Describe the BibTeX entry type. '{article, book, inbook, misc, techreport, phdthesis, incollection, inproceedings}'\n- 'doi_prefix': For JSTOR and CrossRef, it is the prefix of the DOI. For PubMed, it is the directory (e.g. 'pubmed19nXXXX' where 'XXXX' is 4 digits) of which the citation is generated from.\n- 'csl_style': The CSL style which the citation is rendered as.\n- 'content': The rendered citation of a specific style with each segment enclosed by tags named after the CSL variables", "### Data Splits\n\nData splits are not available yet.", "## Dataset Creation", "### Source Data", "#### Initial Data Collection and Normalization\n\nThe citations that are used to generate these reference strings are obtained from 3 main sources:\n\n- PubMed (2019 Baseline)\n- CrossRef via Open Academic Graph v2\n- JSTOR Sample Datasets (not available online as of publication date)\n\nIf the citation is not in BibTeX format, bibutils is used to convert it to BibTeX.", "#### Who are the source language producers?\n\nThe manner which the citations are rendered as reference strings are based on rules/specifications dictated by the publisher.\nCitation Style Language (CSL) is an established standard which such specifications are prescribed. 
\nThousands of citation styles are available.", "### Annotations", "#### Annotation process\n\nThe annotation process involves 2 main interventions:\n1. Modification of the styles' CSL specification to inject the CSL variable names as part of the render process\n2. Sanitization of the rendered strings using regular expressions to ensure all tokens and characters are enclosed in the tags", "#### Who are the annotators?\n\nThe original CSL specification are available on GitHub.\n\nThe modification of the styles and the sanitization process are done by the author of this work.", "## Additional Information", "### Licensing Information\n\nThis dataset is licensed under Creative Commons Attribution 4.0 International (CC BY 4.0).\n\n\n\nThis dataset is a product of a Master Project done in the National University of Singapore. \n\nIf you are using it, please cite the following:", "### Contributions\n\nThanks to @kylase for adding this dataset." ]
14ab48911e45af72b8aec9f6eda9906694c3f094
# Italian Female Voice This dataset is an Italian version of [LJSpeech](https://keithito.com/LJ-Speech-Dataset/) that merges all female audio of the same speaker found in the [M-AILABS Speech Dataset](https://www.caito.de/2019/01/the-m-ailabs-speech-dataset/). This dataset contains 8h 23m of one speaker recorded at 16000Hz. This is a valid choice to train an Italian TTS deep model with a female voice.
z-uo/female-LJSpeech-italian
[ "multilinguality:monolingual", "language:it", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["it"], "multilinguality": ["monolingual"], "task_categories": ["tts"], "task_ids": ["tts"]}
2022-10-23T03:56:44+00:00
[]
[ "it" ]
TAGS #multilinguality-monolingual #language-Italian #region-us
# Italian Female Voice This dataset is an Italian version of LJSpeech that merges all female audio of the same speaker found in the M-AILABS Speech Dataset. This dataset contains 8h 23m of one speaker recorded at 16000Hz. This is a valid choice to train an Italian TTS deep model with a female voice.
[ "# Italian Female Voice\nThis dataset is an Italian version of LJSpeech that merges all female audio of the same speaker found in the M-AILABS Speech Dataset.\n\nThis dataset contains 8h 23m of one speaker recorded at 16000Hz. This is a valid choice to train an Italian TTS deep model with a female voice." ]
[ "TAGS\n#multilinguality-monolingual #language-Italian #region-us \n", "# Italian Female Voice\nThis dataset is an Italian version of LJSpeech that merges all female audio of the same speaker found in the M-AILABS Speech Dataset.\n\nThis dataset contains 8h 23m of one speaker recorded at 16000Hz. This is a valid choice to train an Italian TTS deep model with a female voice." ]
ac9f1f8c8831eb367b460ff1c87b991ad1996519
# Italian Male Voice This dataset is an Italian version of [LJSpeech](https://keithito.com/LJ-Speech-Dataset/) that merges all male audio of the same speaker found in the [M-AILABS Speech Dataset](https://www.caito.de/2019/01/the-m-ailabs-speech-dataset/). This dataset contains 31h 45m of one speaker recorded at 16000Hz. This is a valid choice to train an Italian TTS deep model with a male voice.
z-uo/male-LJSpeech-italian
[ "multilinguality:monolingual", "language:it", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["it"], "multilinguality": ["monolingual"], "task_categories": ["tts"], "task_ids": ["tts"]}
2022-10-23T03:57:26+00:00
[]
[ "it" ]
TAGS #multilinguality-monolingual #language-Italian #region-us
# Italian Male Voice This dataset is an Italian version of LJSpeech that merges all male audio of the same speaker found in the M-AILABS Speech Dataset. This dataset contains 31h 45m of one speaker recorded at 16000Hz. This is a valid choice to train an Italian TTS deep model with a male voice.
[ "# Italian Male Voice\nThis dataset is an Italian version of LJSpeech that merges all male audio of the same speaker found in the M-AILABS Speech Dataset.\n\nThis dataset contains 31h 45m of one speaker recorded at 16000Hz. This is a valid choice to train an Italian TTS deep model with a male voice." ]
[ "TAGS\n#multilinguality-monolingual #language-Italian #region-us \n", "# Italian Male Voice\nThis dataset is an Italian version of LJSpeech that merges all male audio of the same speaker found in the M-AILABS Speech Dataset.\n\nThis dataset contains 31h 45m of one speaker recorded at 16000Hz. This is a valid choice to train an Italian TTS deep model with a male voice." ]
d73d22a877588114280072b6639292f9c3a99e5b
# Squad-it This dataset is an adapted version of the original [squad-it](https://github.com/crux82/squad-it) to train HuggingFace models. It contains: - train samples: 87599 - test samples: 10570 This dataset is for question answering and its format is the following: ``` [ { "answers": [ { "answer_start": [1], "text": ["Questo è un testo"] }, ], "context": "Questo è un testo relativo al contesto.", "id": "1", "question": "Questo è un testo?", "title": "train test" } ] ``` It can be used to train many models like T5, Bert, Distilbert...
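A minimal loading sketch for this dataset; the split names `train` and `test` are assumed from the sample counts above.
```python
# Sketch: load z-uo/squad-it from the Hub and inspect one training example.
# Assumes the splits are named 'train' and 'test' as suggested by the card.
from datasets import load_dataset

squad_it = load_dataset("z-uo/squad-it")

example = squad_it["train"][0]
print(example["question"])
print(example["answers"]["text"], example["answers"]["answer_start"])
```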
z-uo/squad-it
[ "task_categories:question-answering", "task_ids:extractive-qa", "multilinguality:monolingual", "size_categories:8k<n<10k", "language:it", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["it"], "multilinguality": ["monolingual"], "size_categories": ["8k<n<10k"], "task_categories": ["question-answering"], "task_ids": ["extractive-qa"]}
2022-10-25T09:01:57+00:00
[]
[ "it" ]
TAGS #task_categories-question-answering #task_ids-extractive-qa #multilinguality-monolingual #size_categories-8k<n<10k #language-Italian #region-us
# Squad-it This dataset is an adapted version of the squad-it dataset, prepared for training HuggingFace models. It contains: - train samples: 87599 - test samples: 10570 This dataset is for question answering and its format is the following: It can be used to train many models such as T5, BERT, DistilBERT...
[ "# Squad-it\nThis dataset is an adapted version of the squad-it dataset, prepared for training HuggingFace models.\n\nIt contains:\n- train samples: 87599\n- test samples: 10570\n\nThis dataset is for question answering and its format is the following:\n\n\nIt can be used to train many models such as T5, BERT, DistilBERT..." ]
[ "TAGS\n#task_categories-question-answering #task_ids-extractive-qa #multilinguality-monolingual #size_categories-8k<n<10k #language-Italian #region-us \n", "# Squad-it\nThis dataset is an adapted version of the squad-it dataset, prepared for training HuggingFace models.\n\nIt contains:\n- train samples: 87599\n- test samples: 10570\n\nThis dataset is for question answering and its format is the following:\n\n\nIt can be used to train many models such as T5, BERT, DistilBERT..." ]
beefaac934f54882041d2840222dbd0b7f48ea34
annotations_creators: - crowdsourced language_creators: - crowdsourced languages: - en multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - tableqa, data2text task_ids: - tableqa
zhoujun/hitab
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-02-08T08:35:57+00:00
[]
[]
TAGS #region-us
annotations_creators: - crowdsourced language_creators: - crowdsourced languages: - en multilinguality: - monolingual size_categories: - 100K<n<1M source_datasets: - original task_categories: - tableqa, data2text task_ids: - tableqa
[]
[ "TAGS\n#region-us \n" ]
b37680e9413ca148de6f60b3c4b9c956a11974c4
# Dataset Card ## Dataset Summary We split [the original xquad dataset](https://github.com/deepmind/xquad) into subsets. We keep the original data format. ## Supported Tasks extractive question answering ## Language Thai ## Dataset Split There are 876/161/153 question-answer pairs from 34/7/7 articles for train/validation/test, respectively.
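As a quick check of the split sizes quoted above, a short sketch; the hub id `zhufy/xquad_split` and the split names are assumptions based on this card.

```python
from datasets import load_dataset

# Sketch: hub id and split names are assumptions; fields follow the original xquad/SQuAD format.
ds = load_dataset("zhufy/xquad_split")
for split in ("train", "validation", "test"):
    print(split, ds[split].num_rows)  # expected 876 / 161 / 153 question-answer pairs
```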
zhufy/xquad_split
[ "region:us" ]
2022-03-02T23:29:22+00:00
{}
2022-02-24T02:29:43+00:00
[]
[]
TAGS #region-us
# Dataset Card ## Dataset Summary We split the original xquad dataset (URL) into subsets. We keep the original data format. ## Supported Tasks extractive question answering ## Language Thai ## Dataset Split There are 876/161/153 question-answer pairs from 34/7/7 articles for train/validation/test, respectively.
[ "# Dataset Card", "## Dataset Summary\n\nWe split the original xquad dataset (URL) into subsets.\nWe keep the original data format.", "## Supported Tasks\nextractive question answering", "## Language\nThai", "## Dataset Split\nThere are 876/161/153 question-answer pairs from 34/7/7 articles for train/validation/test, respectively." ]
[ "TAGS\n#region-us \n", "# Dataset Card", "## Dataset Summary\n\nWe split the original xquad dataset (URL) into subsets.\nWe keep the original data format.", "## Supported Tasks\nextractive question answering", "## Language\nThai", "## Dataset Split\nThere are 876/161/153 question-answer pairs from 34/7/7 articles for train/validation/test, respectively." ]
c574d814c1502e2cdbe22ad61ae0e56013f08a9a
# AutoNLP Dataset for project: traffic_nlp_binary ## Table of Contents - [Dataset Description](#dataset-description) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) ## Dataset Description This dataset has been automatically processed by AutoNLP for project traffic_nlp_binary. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ { "text": "1 train is still delayed in both directions", "target": 1 }, { "text": "maybe there was no train traffic ????. i know the feeling.", "target": 1 } ] ``` ### Dataset Fields The dataset has the following fields (also called "features"): ```json { "target": "ClassLabel(num_classes=2, names=['0', '1'], names_file=None, id=None)", "text": "Value(dtype='string', id=None)" } ``` ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows: | Split name | Num samples | | ------------ | ------------------- | | train | 2195 | | valid | 549 |
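Because `target` is stored as a `ClassLabel`, the integer labels can be mapped back to their names at read time; the sketch below assumes the hub id `zwang199/autonlp-data-traffic_nlp_binary` and the split names listed above.

```python
from datasets import load_dataset

# Sketch only: hub id and split names are assumptions taken from this card.
ds = load_dataset("zwang199/autonlp-data-traffic_nlp_binary")

target = ds["train"].features["target"]     # ClassLabel(num_classes=2, names=['0', '1'])
example = ds["train"][0]
print(example["text"])
print(target.int2str(example["target"]))    # map the integer class back to its name
```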
zwang199/autonlp-data-traffic_nlp_binary
[ "task_categories:text-classification", "language:en", "region:us" ]
2022-03-02T23:29:22+00:00
{"language": ["en"], "task_categories": ["text-classification"]}
2022-10-25T09:02:03+00:00
[]
[ "en" ]
TAGS #task_categories-text-classification #language-English #region-us
AutoNLP Dataset for project: traffic\_nlp\_binary ================================================= Table of Contents ---------------- * Dataset Description + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits Dataset Description ------------------- This dataset has been automatically processed by AutoNLP for project traffic\_nlp\_binary. ### Languages The BCP-47 code for the dataset's language is en. Dataset Structure ----------------- ### Data Instances A sample from this dataset looks as follows: ### Dataset Fields The dataset has the following fields (also called "features"): ### Dataset Splits This dataset is split into a train and validation split. The split sizes are as follows:
[ "### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
[ "TAGS\n#task_categories-text-classification #language-English #region-us \n", "### Languages\n\n\nThe BCP-47 code for the dataset's language is en.\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nA sample from this dataset looks as follows:", "### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):", "### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:" ]
ad25d57e9499f8417e25ac06dd57f6010786aa65
# Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage: [HomePage](https://fancyerii.github.io)** - **Repository: fancyerii** - **Paper: No Paper** - **Leaderboard: No** - **Point of Contact:** ### Dataset Summary 测试数据集 ### Supported Tasks and Leaderboards [More Information Needed] ### Languages 中文 ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@fancyerii](https://github.com/fancyerii) for adding this dataset.
fancyerii/test
[ "task_categories:text-classification", "task_ids:semantic-similarity-classification", "size_categories:10K<n<100K", "region:us" ]
2022-03-03T07:42:22+00:00
{"annotations_creators": [], "language_creators": [], "language": [], "license": [], "multilinguality": [], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["text-classification"], "task_ids": ["semantic-similarity-classification"], "pretty_name": "demo"}
2022-10-25T09:02:14+00:00
[]
[]
TAGS #task_categories-text-classification #task_ids-semantic-similarity-classification #size_categories-10K<n<100K #region-us
# Dataset Card for [Dataset Name] ## Table of Contents - Table of Contents - Dataset Description - Dataset Summary - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information - Contributions ## Dataset Description - Homepage: HomePage - Repository: fancyerii - Paper: No Paper - Leaderboard: No - Point of Contact: ### Dataset Summary 测试数据集 ### Supported Tasks and Leaderboards ### Languages 中文 ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information ### Contributions Thanks to @fancyerii for adding this dataset.
[ "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: HomePage\n- Repository: fancyerii\n- Paper: No Paper\n- Leaderboard: No\n- Point of Contact:", "### Dataset Summary\n\n测试数据集", "### Supported Tasks and Leaderboards", "### Languages\n\n中文", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @fancyerii for adding this dataset." ]
[ "TAGS\n#task_categories-text-classification #task_ids-semantic-similarity-classification #size_categories-10K<n<100K #region-us \n", "# Dataset Card for [Dataset Name]", "## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions", "## Dataset Description\n\n- Homepage: HomePage\n- Repository: fancyerii\n- Paper: No Paper\n- Leaderboard: No\n- Point of Contact:", "### Dataset Summary\n\n测试数据集", "### Supported Tasks and Leaderboards", "### Languages\n\n中文", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information", "### Contributions\n\nThanks to @fancyerii for adding this dataset." ]
67ebcf8c69b45feb3883d695f04227078a6c9da9
# Dataset Card for anime-faces ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://www.kaggle.com/soumikrakshit/anime-faces - **Repository:** https://www.kaggle.com/soumikrakshit/anime-faces - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** https://github.com/Mckinsey666 ### Dataset Summary This is a dataset consisting of 21551 anime faces scraped from www.getchu.com, which are then cropped using the anime face detection algorithm in https://github.com/nagadomi/lbpcascade_animeface. All images are resized to 64 * 64 for the sake of convenience. Please also cite the two sources when using this dataset. Some outliers are still present in the dataset: Bad cropping results Some non-human faces. Feel free to contribute to this dataset by adding images of similar quality or adding image labels. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages [Needs More Information] ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields Has a data folder with png files inside. ### Data Splits Only training set ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information] --- annotations_creators: - found language_creators: - found languages: - unknown licenses: - unknown multilinguality: - unknown pretty_name: anime-faces size_categories: - unknown source_datasets: - original task_categories: - image-classification task_ids: [] ---
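A minimal usage sketch for the images described above; the hub id `huggan/anime-faces`, the single `train` split and the `image` column name are assumptions, since the card only describes a data folder of 64 * 64 PNGs.

```python
from datasets import load_dataset

# Sketch: assumes an image feature decoded to PIL and a single "train" split.
faces = load_dataset("huggan/anime-faces", split="train")

img = faces[0]["image"]   # assumed column name
print(img.size)           # expected (64, 64) per the card
```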
huggan/anime-faces
[ "license:cc0-1.0", "region:us" ]
2022-03-03T13:15:34+00:00
{"license": "cc0-1.0"}
2022-03-22T10:01:22+00:00
[]
[]
TAGS #license-cc0-1.0 #region-us
# Dataset Card for anime-faces ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: URL - Repository: URL - Paper: - Leaderboard: - Point of Contact: URL ### Dataset Summary This is a dataset consisting of 21551 anime faces scraped from URL, which are then cropped using the anime face detection algorithm in URL All images are resized to 64 * 64 for the sake of convenience. Please also cite the two sources when using this dataset. Some outliers are still present in the dataset: Bad cropping results Some non-human faces. Feel free to contribute to this dataset by adding images of similar quality or adding image labels. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields Has a data folder with png files inside. ### Data Splits Only training set ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information --- annotations_creators: - found language_creators: - found languages: - unknown licenses: - unknown multilinguality: - unknown pretty_name: anime-faces size_categories: - unknown source_datasets: - original task_categories: - image-classification task_ids: [] ---
[ "# Dataset Card for anime-faces", "## Table of Contents\r\n- Dataset Description\r\n - Dataset Summary\r\n - Supported Tasks\r\n - Languages\r\n- Dataset Structure\r\n - Data Instances\r\n - Data Fields\r\n - Data Splits\r\n- Dataset Creation\r\n - Curation Rationale\r\n - Source Data\r\n - Annotations\r\n - Personal and Sensitive Information\r\n- Considerations for Using the Data\r\n - Social Impact of Dataset\r\n - Discussion of Biases\r\n - Other Known Limitations\r\n- Additional Information\r\n - Dataset Curators\r\n - Licensing Information\r\n - Citation Information", "## Dataset Description\r\n\r\n- Homepage: URL\r\n- Repository: URL\r\n- Paper: \r\n- Leaderboard: \r\n- Point of Contact: URL", "### Dataset Summary\r\n\r\nThis is a dataset consisting of 21551 anime faces scraped from URL, which are then cropped using the anime face detection algorithm in URL All images are resized to 64 * 64 for the sake of convenience. Please also cite the two sources when using this dataset.\r\n\r\nSome outliers are still present in the dataset:\r\n\r\nBad cropping results\r\nSome non-human faces.\r\nFeel free to contribute to this dataset by adding images of similar quality or adding image labels.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields\r\n\r\nHas a data folder with png files inside.", "### Data Splits\r\n\r\nOnly training set", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n---\r\nannotations_creators:\r\n- found\r\nlanguage_creators:\r\n- found\r\nlanguages:\r\n- unknown\r\nlicenses:\r\n- unknown\r\nmultilinguality:\r\n- unknown\r\npretty_name: anime-faces\r\nsize_categories:\r\n- unknown\r\nsource_datasets:\r\n- original\r\ntask_categories:\r\n- image-classification\r\ntask_ids: []\r\n---" ]
[ "TAGS\n#license-cc0-1.0 #region-us \n", "# Dataset Card for anime-faces", "## Table of Contents\r\n- Dataset Description\r\n - Dataset Summary\r\n - Supported Tasks\r\n - Languages\r\n- Dataset Structure\r\n - Data Instances\r\n - Data Fields\r\n - Data Splits\r\n- Dataset Creation\r\n - Curation Rationale\r\n - Source Data\r\n - Annotations\r\n - Personal and Sensitive Information\r\n- Considerations for Using the Data\r\n - Social Impact of Dataset\r\n - Discussion of Biases\r\n - Other Known Limitations\r\n- Additional Information\r\n - Dataset Curators\r\n - Licensing Information\r\n - Citation Information", "## Dataset Description\r\n\r\n- Homepage: URL\r\n- Repository: URL\r\n- Paper: \r\n- Leaderboard: \r\n- Point of Contact: URL", "### Dataset Summary\r\n\r\nThis is a dataset consisting of 21551 anime faces scraped from URL, which are then cropped using the anime face detection algorithm in URL All images are resized to 64 * 64 for the sake of convenience. Please also cite the two sources when using this dataset.\r\n\r\nSome outliers are still present in the dataset:\r\n\r\nBad cropping results\r\nSome non-human faces.\r\nFeel free to contribute to this dataset by adding images of similar quality or adding image labels.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields\r\n\r\nHas a data folder with png files inside.", "### Data Splits\r\n\r\nOnly training set", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n---\r\nannotations_creators:\r\n- found\r\nlanguage_creators:\r\n- found\r\nlanguages:\r\n- unknown\r\nlicenses:\r\n- unknown\r\nmultilinguality:\r\n- unknown\r\npretty_name: anime-faces\r\nsize_categories:\r\n- unknown\r\nsource_datasets:\r\n- original\r\ntask_categories:\r\n- image-classification\r\ntask_ids: []\r\n---" ]
f0f49db9aeb2fe8e7640ae7ee10da1582ecd9569
# GEM Submission Submission name: This is a test
GEM-submissions/lewtun__this-is-a-test__1646314818
[ "benchmark:gem", "evaluation", "benchmark", "region:us" ]
2022-03-03T13:40:20+00:00
{"benchmark": "gem", "type": "prediction", "submission_name": "This is a test", "tags": ["evaluation", "benchmark"]}
2022-03-03T13:40:29+00:00
[]
[]
TAGS #benchmark-gem #evaluation #benchmark #region-us
# GEM Submission Submission name: This is a test
[ "# GEM Submission\n\nSubmission name: This is a test" ]
[ "TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n", "# GEM Submission\n\nSubmission name: This is a test" ]
2a1eb941a4459be7ac03c51e4c2875d938aee9bf
# GEM Submission Submission name: This is a test
GEM-submissions/lewtun__this-is-a-test__1646316929
[ "benchmark:gem", "evaluation", "benchmark", "region:us" ]
2022-03-03T14:15:31+00:00
{"benchmark": "gem", "type": "prediction", "submission_name": "This is a test", "tags": ["evaluation", "benchmark"]}
2022-03-03T14:15:35+00:00
[]
[]
TAGS #benchmark-gem #evaluation #benchmark #region-us
# GEM Submission Submission name: This is a test
[ "# GEM Submission\n\nSubmission name: This is a test" ]
[ "TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n", "# GEM Submission\n\nSubmission name: This is a test" ]
d8ff10fc5ffd05877bf61ea19f0833565c5a6fd8
# AnanyaSinhalaNERDataset --- annotations_creators: [] language: - si license: - mit --- This is part of the dataset used in the paper: Manamini, S.A.P.M., Ahamed, A.F., Rajapakshe, R.A.E.C., Reemal, G.H.A., Jayasena, S., Dias, G.V. and Ranathunga, S., 2016, April. Ananya-a Named-Entity-Recognition (NER) system for Sinhala language. In 2016 Moratuwa Engineering Research Conference (MERCon) (pp. 30-35). IEEE.
NLPC-UOM/AnanyaSinhalaNERDataset
[ "region:us" ]
2022-03-04T08:32:54+00:00
{}
2022-10-25T09:02:18+00:00
[]
[]
TAGS #region-us
# AnanyaSinhalaNERDataset --- annotations_creators: [] language: - si license: - mit --- This is part of the dataset used in the paper: Manamini, S.A.P.M., Ahamed, A.F., Rajapakshe, R.A.E.C., Reemal, G.H.A., Jayasena, S., Dias, G.V. and Ranathunga, S., 2016, April. Ananya-a Named-Entity-Recognition (NER) system for Sinhala language. In 2016 Moratuwa Engineering Research Conference (MERCon) (pp. 30-35). IEEE.
[ "# AnanyaSinhalaNERDataset\n---\nannotations_creators: []\nlanguage:\n- si\nlicense:\n- mit\n---\nThis is part of the dataset used in the paper: Manamini, S.A.P.M., Ahamed, A.F., Rajapakshe, R.A.E.C., Reemal, G.H.A., Jayasena, S., Dias, G.V. and Ranathunga, S., 2016, April. Ananya-a Named-Entity-Recognition (NER) system for Sinhala language. In 2016 Moratuwa Engineering Research Conference (MERCon) (pp. 30-35). IEEE." ]
[ "TAGS\n#region-us \n", "# AnanyaSinhalaNERDataset\n---\nannotations_creators: []\nlanguage:\n- si\nlicense:\n- mit\n---\nThis is part of the dataset used in the paper: Manamini, S.A.P.M., Ahamed, A.F., Rajapakshe, R.A.E.C., Reemal, G.H.A., Jayasena, S., Dias, G.V. and Ranathunga, S., 2016, April. Ananya-a Named-Entity-Recognition (NER) system for Sinhala language. In 2016 Moratuwa Engineering Research Conference (MERCon) (pp. 30-35). IEEE." ]
10e2ca5f1dc12387e94e13477c4da59e20584b59
# Dataset Card for GFS-Reforecast ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** [Needs More Information] - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Jacob Bieker](mailto:[email protected]) ### Dataset Summary This dataset consists of various sets of historical operational GFS forecasts and analysis files from 2016-2022. The analysis files and forecasts are initialized at 00, 06, 12, and 18 UTC every day and run for multiple hours. Additionally, raw observations are also included, which are the observations used to initialize the analysis and forecasts. The dataset is being expanded over time as more historical data and more observations are processed. The `data/forecasts/GFSv16/` folder holds the historical operational forecasts out to 48 hours from initialization, on all pressure levels, and for all variables that are present in every timestep (so not any accumulated values). The data is all stored as zipped Zarr stores, openable by xarray. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages [Needs More Information] ## Dataset Structure ### Data Instances [Needs More Information] ### Data Fields [Needs More Information] ### Data Splits [Needs More Information] ## Dataset Creation ### Curation Rationale This dataset was constructed to help create a dataset similar to, and expanded from, the one used in the Kiesler 2022 paper, where graph networks were used for weather forecasting. ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information US Government License, no restrictions ### Citation Information @article{gfs, author = {Jacob Bieker}, title = {GFS NWP Weather Dataset}, year = {2022} }
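Since the card states that the forecasts are stored as zipped Zarr stores openable by xarray, a rough reading sketch is shown below; the file name is a placeholder and the variable layout is an assumption.

```python
import xarray as xr
import zarr

# Placeholder path: any of the zipped Zarr stores under data/forecasts/GFSv16/.
store_path = "data/forecasts/GFSv16/example_forecast.zarr.zip"

# zarr.ZipStore lets xarray read the zipped store without unpacking it first.
store = zarr.ZipStore(store_path, mode="r")
ds = xr.open_zarr(store)

print(ds)            # dimensions, coordinates and forecast variables
print(ds.data_vars)  # pressure-level fields present at every timestep
store.close()
```

Reading through `zarr.ZipStore` avoids unpacking the archives; for remote access the same stores can also be opened through `fsspec`, though the exact protocol string depends on how the files are hosted.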
openclimatefix/gfs-reforecast
[ "region:us" ]
2022-03-04T09:08:46+00:00
{}
2023-03-03T17:19:15+00:00
[]
[]
TAGS #region-us
# Dataset Card for GFS-Reforecast ## Table of Contents - Dataset Description - Dataset Summary - Supported Tasks - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Annotations - Personal and Sensitive Information - Considerations for Using the Data - Social Impact of Dataset - Discussion of Biases - Other Known Limitations - Additional Information - Dataset Curators - Licensing Information - Citation Information ## Dataset Description - Homepage: - Repository: - Paper: - Leaderboard: - Point of Contact: Jacob Bieker ### Dataset Summary This dataset consists of various sets of historical operational GFS forecasts, and analysis files from 2016-2022. The analysis files and forecasts are initialized at 00, 06, 12, and 18 UTC every day and ran for multiple hours. Additionally, raw observations are also included, which are the observations that are used to initialize the analysis and forecasts. The dataset is being expanded over time as more historical data is processed, and more observations as well. The 'data/forecasts/GFSv16/' folder holds the historical operational forecasts out to 48 hours from initialization, on all pressure levels, and for all variables that are present in every timestep (so not any accumulated values). The data is all stored as zipped Zarr stores, openable by xarray. ### Supported Tasks and Leaderboards ### Languages ## Dataset Structure ### Data Instances ### Data Fields ### Data Splits ## Dataset Creation ### Curation Rationale This dataset was constructed to help create a similar and expanded dataset to that used in Kiesler 2022 paper, where graph networks were used for weather forecasting. ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information ## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations ## Additional Information ### Dataset Curators ### Licensing Information US Government License, no restrictions @article(gfs, author = {Jacob Bieker} title = {GFS NWP Weather Dataset} year = {2022} }
[ "# Dataset Card for GFS-Reforecast", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact: Jacob Bieker", "### Dataset Summary\n\nThis dataset consists of various sets of historical operational GFS forecasts, and analysis files from 2016-2022. The analysis files and forecasts are initialized at 00, 06, 12, and 18 UTC every day and ran for multiple hours. Additionally, raw observations are also included, which are the observations that are used to initialize the analysis and forecasts. The dataset is being expanded over time as more historical data is processed, and more observations as well.\n\nThe 'data/forecasts/GFSv16/' folder holds the historical operational forecasts out to 48 hours from initialization, on all pressure levels, and for all variables that are present in every timestep (so not any accumulated values). The data is all stored as zipped Zarr stores, openable by xarray.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale\n\nThis dataset was constructed to help create a similar and expanded dataset to that used in Kiesler 2022 paper, where graph networks were used for weather forecasting.", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nUS Government License, no restrictions\n\n\n\n@article(gfs,\nauthor = {Jacob Bieker}\ntitle = {GFS NWP Weather Dataset}\nyear = {2022}\n}" ]
[ "TAGS\n#region-us \n", "# Dataset Card for GFS-Reforecast", "## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information", "## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact: Jacob Bieker", "### Dataset Summary\n\nThis dataset consists of various sets of historical operational GFS forecasts, and analysis files from 2016-2022. The analysis files and forecasts are initialized at 00, 06, 12, and 18 UTC every day and ran for multiple hours. Additionally, raw observations are also included, which are the observations that are used to initialize the analysis and forecasts. The dataset is being expanded over time as more historical data is processed, and more observations as well.\n\nThe 'data/forecasts/GFSv16/' folder holds the historical operational forecasts out to 48 hours from initialization, on all pressure levels, and for all variables that are present in every timestep (so not any accumulated values). The data is all stored as zipped Zarr stores, openable by xarray.", "### Supported Tasks and Leaderboards", "### Languages", "## Dataset Structure", "### Data Instances", "### Data Fields", "### Data Splits", "## Dataset Creation", "### Curation Rationale\n\nThis dataset was constructed to help create a similar and expanded dataset to that used in Kiesler 2022 paper, where graph networks were used for weather forecasting.", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information", "## Considerations for Using the Data", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations", "## Additional Information", "### Dataset Curators", "### Licensing Information\n\nUS Government License, no restrictions\n\n\n\n@article(gfs,\nauthor = {Jacob Bieker}\ntitle = {GFS NWP Weather Dataset}\nyear = {2022}\n}" ]
080f677a026e304c38666d759ef625d621dc8cb9
# Dataset Card for FiNER-139 ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [SEC-BERT](#sec-bert) - [About Us](#about-us) ## Dataset Description - **Homepage:** [FiNER](https://github.com/nlpaueb/finer) - **Repository:** [FiNER](https://github.com/nlpaueb/finer) - **Paper:** [FiNER, Loukas et al. (2022)](https://arxiv.org/abs/2203.06482) - **Point of Contact:** [Manos Fergadiotis](mailto:[email protected]) ### Dataset Summary <div style="text-align: justify"> <strong>FiNER-139</strong> is comprised of 1.1M sentences annotated with <strong>eXtensive Business Reporting Language (XBRL)</strong> tags extracted from annual and quarterly reports of publicly-traded companies in the US. Unlike other entity extraction tasks, like named entity recognition (NER) or contract element extraction, which typically require identifying entities of a small set of common types (e.g., persons, organizations), FiNER-139 uses a much larger label set of <strong>139 entity types</strong>. Another important difference from typical entity extraction is that FiNER focuses on numeric tokens, with the correct tag depending mostly on context, not the token itself. </div> ### Supported Tasks <div style="text-align: justify"> To promote transparency among shareholders and potential investors, publicly traded companies are required to file periodic financial reports annotated with tags from the eXtensive Business Reporting Language (XBRL), an XML-based language, to facilitate the processing of financial information. However, manually tagging reports with XBRL tags is tedious and resource-intensive. We, therefore, introduce <strong>XBRL tagging</strong> as a <strong>new entity extraction task</strong> for the <strong>financial domain</strong> and study how financial reports can be automatically enriched with XBRL tags. To facilitate research towards automated XBRL tagging we release FiNER-139. </div> ### Languages **FiNER-139** is compiled from approximately 10k annual and quarterly **English** reports ## Dataset Structure ### Data Instances This is a "train" split example: ```json { 'id': 40 'tokens': ['In', 'March', '2014', ',', 'the', 'Rialto', 'segment', 'issued', 'an', 'additional', '$', '100', 'million', 'of', 'the', '7.00', '%', 'Senior', 'Notes', ',', 'at', 'a', 'price', 'of', '102.25', '%', 'of', 'their', 'face', 'value', 'in', 'a', 'private', 'placement', '.'] 'ner_tags': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 37, 0, 0, 0, 41, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] } ``` ### Data Fields **id**: ID of the example <br> **tokens**: List of tokens for the specific example. <br> **ner_tags**: List of tags for each token in the example. 
Tags are provided as integer classes.<br> If you want to use the class names you can access them as follows: ```python import datasets finer_train = datasets.load_dataset("nlpaueb/finer-139", split="train") finer_tag_names = finer_train.features["ner_tags"].feature.names ``` **finer_tag_names** contains a list of class names corresponding to the integer classes e.g. ``` 0 -> "O" 1 -> "B-AccrualForEnvironmentalLossContingencies" ``` ### Data Splits | Training | Validation | Test | -------- | ---------- | ------- | 900,384 | 112,494 | 108,378 ## Dataset Creation ### Curation Rationale The dataset was curated by [Loukas et al. (2022)](https://arxiv.org/abs/2203.06482) <br> ### Source Data #### Initial Data Collection and Normalization <div style="text-align: justify"> FiNER-139 is compiled from approximately 10k annual and quarterly English reports (filings) of publicly traded companies downloaded from the [US Securities and Exchange Commission's (SEC)](https://www.sec.gov/) [Electronic Data Gathering, Analysis, and Retrieval (EDGAR)](https://www.sec.gov/edgar.shtml) system. The reports span a 5-year period, from 2016 to 2020. They are annotated with XBRL tags by professional auditors and describe the performance and projections of the companies. XBRL defines approximately 6k entity types from the US-GAAP taxonomy. FiNER-139 is annotated with the 139 most frequent XBRL entity types with at least 1,000 appearances. We used regular expressions to extract the text notes from the Financial Statements Item of each filing, which is the primary source of XBRL tags in annual and quarterly reports. We used the <strong>IOB2</strong> annotation scheme to distinguish tokens at the beginning, inside, or outside of tagged expressions, which leads to 279 possible token labels. </div> ### Annotations #### Annotation process <div style="text-align: justify"> All the examples were annotated by professional auditors as required by the Securities & Exchange Commission (SEC) legislation. Even though the gold XBRL tags come from professional auditors there are still some discrepancies. Consult [Loukas et al. (2022)](https://arxiv.org/abs/2203.06482), (Section 9.4) for more details </div> #### Who are the annotators? Professional auditors ### Personal and Sensitive Information The dataset contains publicly available annual and quarterly reports (filings) ## Additional Information ### Dataset Curators [Loukas et al. (2022)](https://arxiv.org/abs/2203.06482) ### Licensing Information <div style="text-align: justify"> Access to SEC's EDGAR public database is free, allowing research of public companies' financial information and operations by reviewing the filings the companies makes with the SEC. 
</div> ### Citation Information If you use this dataset cite the following ``` @inproceedings{loukas-etal-2022-finer, title = {FiNER: Financial Numeric Entity Recognition for XBRL Tagging}, author = {Loukas, Lefteris and Fergadiotis, Manos and Chalkidis, Ilias and Spyropoulou, Eirini and Malakasiotis, Prodromos and Androutsopoulos, Ion and Paliouras George}, booktitle = {Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022)}, publisher = {Association for Computational Linguistics}, location = {Dublin, Republic of Ireland}, year = {2022}, url = {https://arxiv.org/abs/2203.06482} } ``` ## SEC-BERT <img align="center" src="https://i.ibb.co/0yz81K9/sec-bert-logo.png" alt="SEC-BERT" width="400"/> <div style="text-align: justify"> We also pre-train our own BERT models (<strong>SEC-BERT</strong>) for the financial domain, intended to assist financial NLP research and FinTech applications. <br> <strong>SEC-BERT</strong> consists of the following models: * [**SEC-BERT-BASE**](https://huggingface.co/nlpaueb/sec-bert-base): Same architecture as BERT-BASE trained on financial documents. * [**SEC-BERT-NUM**](https://huggingface.co/nlpaueb/sec-bert-num): Same as SEC-BERT-BASE but we replace every number token with a [NUM] pseudo-token handling all numeric expressions in a uniform manner, disallowing their fragmentation * [**SEC-BERT-SHAPE**](https://huggingface.co/nlpaueb/sec-bert-shape): Same as SEC-BERT-BASE but we replace numbers with pseudo-tokens that represent the number’s shape, so numeric expressions (of known shapes) are no longer fragmented, e.g., '53.2' becomes '[XX.X]' and '40,200.5' becomes '[XX,XXX.X]'. These models were pre-trained on 260,773 10-K filings (annual reports) from 1993-2019, publicly available at [U.S. Securities and Exchange Commission (SEC)](https://www.sec.gov/) </div> ## About Us <div style="text-align: justify"> [**AUEB's Natural Language Processing Group**](http://nlp.cs.aueb.gr) develops algorithms, models, and systems that allow computers to process and generate natural language texts. The group's current research interests include: * question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering, * natural language generation from databases and ontologies, especially Semantic Web ontologies, text classification, including filtering spam and abusive content, * information extraction and opinion mining, including legal text analytics and sentiment analysis, * natural language processing tools for Greek, for example parsers and named-entity recognizers, machine learning in natural language processing, especially deep learning. The group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business. </div> [Manos Fergadiotis](https://manosfer.github.io) on behalf of [AUEB's Natural Language Processing Group](http://nlp.cs.aueb.gr)
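Building on the snippet in the card, the sketch below decodes the integer `ner_tags` of a single instance back into their IOB2 label names (the row index is illustrative only):

```python
import datasets

finer_train = datasets.load_dataset("nlpaueb/finer-139", split="train")
finer_tag_names = finer_train.features["ner_tags"].feature.names  # 279 IOB2 labels, 0 -> "O"

example = finer_train[40]            # illustrative index; any row works
for token, tag_id in zip(example["tokens"], example["ner_tags"]):
    if tag_id != 0:                  # keep only tokens inside an XBRL annotation
        print(token, "->", finer_tag_names[tag_id])
```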
nlpaueb/finer-139
[ "task_ids:named-entity-recognition", "annotations_creators:expert-generated", "language_creators:expert-generated", "multilinguality:monolingual", "size_categories:1M<n<10M", "language:en", "license:cc-by-sa-4.0", "arxiv:2203.06482", "region:us" ]
2022-03-04T10:00:23+00:00
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": [], "task_categories": ["structure-prediction", "named-entity-recognition", "entity-extraction"], "task_ids": ["named-entity-recognition"], "pretty_name": "FiNER-139"}
2022-10-23T04:05:03+00:00
[ "2203.06482" ]
[ "en" ]
TAGS #task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1M<n<10M #language-English #license-cc-by-sa-4.0 #arxiv-2203.06482 #region-us
Dataset Card for FiNER-139 ========================== Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Additional Information + Dataset Curators + Licensing Information + Citation Information * SEC-BERT * About Us Dataset Description ------------------- * Homepage: FiNER * Repository: FiNER * Paper: FiNER, Loukas et al. (2022) * Point of Contact: Manos Fergadiotis ### Dataset Summary **FiNER-139** is comprised of 1.1M sentences annotated with **eXtensive Business Reporting Language (XBRL)** tags extracted from annual and quarterly reports of publicly-traded companies in the US. Unlike other entity extraction tasks, like named entity recognition (NER) or contract element extraction, which typically require identifying entities of a small set of common types (e.g., persons, organizations), FiNER-139 uses a much larger label set of **139 entity types**. Another important difference from typical entity extraction is that FiNER focuses on numeric tokens, with the correct tag depending mostly on context, not the token itself. ### Supported Tasks To promote transparency among shareholders and potential investors, publicly traded companies are required to file periodic financial reports annotated with tags from the eXtensive Business Reporting Language (XBRL), an XML-based language, to facilitate the processing of financial information. However, manually tagging reports with XBRL tags is tedious and resource-intensive. We, therefore, introduce **XBRL tagging** as a **new entity extraction task** for the **financial domain** and study how financial reports can be automatically enriched with XBRL tags. To facilitate research towards automated XBRL tagging we release FiNER-139. ### Languages FiNER-139 is compiled from approximately 10k annual and quarterly English reports Dataset Structure ----------------- ### Data Instances This is a "train" split example: ### Data Fields id: ID of the example tokens: List of tokens for the specific example. ner\_tags: List of tags for each token in the example. Tags are provided as integer classes. If you want to use the class names you can access them as follows: finer\_tag\_names contains a list of class names corresponding to the integer classes e.g. ### Data Splits Training: 900,384, Validation: 112,494, Test: 108,378 Dataset Creation ---------------- ### Curation Rationale The dataset was curated by Loukas et al. (2022) ### Source Data #### Initial Data Collection and Normalization FiNER-139 is compiled from approximately 10k annual and quarterly English reports (filings) of publicly traded companies downloaded from the US Securities and Exchange Commission's (SEC) Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system. The reports span a 5-year period, from 2016 to 2020. They are annotated with XBRL tags by professional auditors and describe the performance and projections of the companies. XBRL defines approximately 6k entity types from the US-GAAP taxonomy. FiNER-139 is annotated with the 139 most frequent XBRL entity types with at least 1,000 appearances. We used regular expressions to extract the text notes from the Financial Statements Item of each filing, which is the primary source of XBRL tags in annual and quarterly reports. 
We used the **IOB2** annotation scheme to distinguish tokens at the beginning, inside, or outside of tagged expressions, which leads to 279 possible token labels. ### Annotations #### Annotation process All the examples were annotated by professional auditors as required by the Securities & Exchange Commission (SEC) legislation. Even though the gold XBRL tags come from professional auditors there are still some discrepancies. Consult Loukas et al. (2022), (Section 9.4) for more details #### Who are the annotators? Professional auditors ### Personal and Sensitive Information The dataset contains publicly available annual and quarterly reports (filings) Additional Information ---------------------- ### Dataset Curators Loukas et al. (2022) ### Licensing Information Access to SEC's EDGAR public database is free, allowing research of public companies' financial information and operations by reviewing the filings the companies makes with the SEC. If you use this dataset cite the following SEC-BERT -------- <img align="center" src="https://i.URL alt="SEC-BERT" width="400"/> We also pre-train our own BERT models (**SEC-BERT**) for the financial domain, intended to assist financial NLP research and FinTech applications. **SEC-BERT** consists of the following models: * SEC-BERT-BASE: Same architecture as BERT-BASE trained on financial documents. * SEC-BERT-NUM: Same as SEC-BERT-BASE but we replace every number token with a [NUM] pseudo-token handling all numeric expressions in a uniform manner, disallowing their fragmentation * SEC-BERT-SHAPE: Same as SEC-BERT-BASE but we replace numbers with pseudo-tokens that represent the number’s shape, so numeric expressions (of known shapes) are no longer fragmented, e.g., '53.2' becomes '[XX.X]' and '40,200.5' becomes '[XX,XXX.X]'. These models were pre-trained on 260,773 10-K filings (annual reports) from 1993-2019, publicly available at U.S. Securities and Exchange Commission (SEC) About Us -------- AUEB's Natural Language Processing Group develops algorithms, models, and systems that allow computers to process and generate natural language texts. The group's current research interests include: * question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering, * natural language generation from databases and ontologies, especially Semantic Web ontologies, text classification, including filtering spam and abusive content, * information extraction and opinion mining, including legal text analytics and sentiment analysis, * natural language processing tools for Greek, for example parsers and named-entity recognizers, machine learning in natural language processing, especially deep learning. The group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business. Manos Fergadiotis on behalf of AUEB's Natural Language Processing Group
[ "### Dataset Summary\n\n\n\n**FiNER-139** is comprised of 1.1M sentences annotated with **eXtensive Business Reporting Language (XBRL)** tags extracted from annual and quarterly reports of publicly-traded companies in the US. \nUnlike other entity extraction tasks, like named entity recognition (NER) or contract element extraction, which typically require identifying entities of a small set of common types (e.g., persons, organizations), FiNER-139 uses a much larger label set of **139 entity types**. \nAnother important difference from typical entity extraction is that FiNER focuses on numeric tokens, with the correct tag depending mostly on context, not the token itself.", "### Supported Tasks\n\n\n\nTo promote transparency among shareholders and potential investors, publicly traded companies are required to file periodic financial reports annotated with tags from the eXtensive Business Reporting Language (XBRL), an XML-based language, to facilitate the processing of financial information. \nHowever, manually tagging reports with XBRL tags is tedious and resource-intensive. \nWe, therefore, introduce **XBRL tagging** as a **new entity extraction task** for the **financial domain** and study how financial reports can be automatically enriched with XBRL tags. \nTo facilitate research towards automated XBRL tagging we release FiNER-139.", "### Languages\n\n\nFiNER-139 is compiled from approximately 10k annual and quarterly English reports\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThis is a \"train\" split example:", "### Data Fields\n\n\nid: ID of the example \n\ntokens: List of tokens for the specific example. \n\nner\\_tags: List of tags for each token in the example. Tags are provided as integer classes. \n\n\n\nIf you want to use the class names you can access them as follows:\n\n\nfiner\\_tag\\_names contains a list of class names corresponding to the integer classes e.g.", "### Data Splits\n\n\nTraining: 900,384, Validation: 112,494, Test: 108,378\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was curated by Loukas et al. (2022)", "### Source Data", "#### Initial Data Collection and Normalization\n\n\n\nFiNER-139 is compiled from approximately 10k annual and quarterly English reports (filings) of publicly traded companies downloaded from the US Securities\nand Exchange Commission's (SEC) Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system.\nThe reports span a 5-year period, from 2016 to 2020. They are annotated with XBRL tags by professional auditors and describe the performance and projections of the companies. XBRL defines approximately 6k entity types from the US-GAAP taxonomy. FiNER-139 is annotated with the 139 most frequent XBRL entity types with at least 1,000 appearances.\nWe used regular expressions to extract the text notes from the Financial Statements Item of each filing, which is the primary source of XBRL tags in annual and quarterly reports. We used the **IOB2** annotation scheme to distinguish tokens at the beginning, inside, or outside of tagged expressions, which leads to 279 possible token labels.", "### Annotations", "#### Annotation process\n\n\n\nAll the examples were annotated by professional auditors as required by the Securities & Exchange Commission (SEC) legislation.\nEven though the gold XBRL tags come from professional auditors there are still some discrepancies. Consult Loukas et al. 
(2022), (Section 9.4) for more details", "#### Who are the annotators?\n\n\nProfessional auditors", "### Personal and Sensitive Information\n\n\nThe dataset contains publicly available annual and quarterly reports (filings)\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nLoukas et al. (2022)", "### Licensing Information\n\n\n\nAccess to SEC's EDGAR public database is free, allowing research of public companies' financial information and operations by reviewing the filings the companies makes with the SEC.\n\nIf you use this dataset cite the following\n\n\nSEC-BERT\n--------\n\n\n<img align=\"center\" src=\"https://i.URL alt=\"SEC-BERT\" width=\"400\"/>\n\n\n\nWe also pre-train our own BERT models (**SEC-BERT**) for the financial domain, intended to assist financial NLP research and FinTech applications. \n\n**SEC-BERT** consists of the following models:\n* SEC-BERT-BASE: Same architecture as BERT-BASE trained on financial documents.\n* SEC-BERT-NUM: Same as SEC-BERT-BASE but we replace every number token with a [NUM] pseudo-token handling all numeric expressions in a uniform manner, disallowing their fragmentation\n* SEC-BERT-SHAPE: Same as SEC-BERT-BASE but we replace numbers with pseudo-tokens that represent the number’s shape, so numeric expressions (of known shapes) are no longer fragmented, e.g., '53.2' becomes '[XX.X]' and '40,200.5' becomes '[XX,XXX.X]'.\n\n\nThese models were pre-trained on 260,773 10-K filings (annual reports) from 1993-2019, publicly available at U.S. Securities and Exchange Commission (SEC)\n\n\n\nAbout Us\n--------\n\n\n\nAUEB's Natural Language Processing Group develops algorithms, models, and systems that allow computers to process and generate natural language texts.\n\n\nThe group's current research interests include:\n\n\n* question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering,\n* natural language generation from databases and ontologies, especially Semantic Web ontologies,\ntext classification, including filtering spam and abusive content,\n* information extraction and opinion mining, including legal text analytics and sentiment analysis,\n* natural language processing tools for Greek, for example parsers and named-entity recognizers,\nmachine learning in natural language processing, especially deep learning.\n\n\nThe group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business.\n\n\n\nManos Fergadiotis on behalf of AUEB's Natural Language Processing Group" ]
[ "TAGS\n#task_ids-named-entity-recognition #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1M<n<10M #language-English #license-cc-by-sa-4.0 #arxiv-2203.06482 #region-us \n", "### Dataset Summary\n\n\n\n**FiNER-139** is comprised of 1.1M sentences annotated with **eXtensive Business Reporting Language (XBRL)** tags extracted from annual and quarterly reports of publicly-traded companies in the US. \nUnlike other entity extraction tasks, like named entity recognition (NER) or contract element extraction, which typically require identifying entities of a small set of common types (e.g., persons, organizations), FiNER-139 uses a much larger label set of **139 entity types**. \nAnother important difference from typical entity extraction is that FiNER focuses on numeric tokens, with the correct tag depending mostly on context, not the token itself.", "### Supported Tasks\n\n\n\nTo promote transparency among shareholders and potential investors, publicly traded companies are required to file periodic financial reports annotated with tags from the eXtensive Business Reporting Language (XBRL), an XML-based language, to facilitate the processing of financial information. \nHowever, manually tagging reports with XBRL tags is tedious and resource-intensive. \nWe, therefore, introduce **XBRL tagging** as a **new entity extraction task** for the **financial domain** and study how financial reports can be automatically enriched with XBRL tags. \nTo facilitate research towards automated XBRL tagging we release FiNER-139.", "### Languages\n\n\nFiNER-139 is compiled from approximately 10k annual and quarterly English reports\n\n\nDataset Structure\n-----------------", "### Data Instances\n\n\nThis is a \"train\" split example:", "### Data Fields\n\n\nid: ID of the example \n\ntokens: List of tokens for the specific example. \n\nner\\_tags: List of tags for each token in the example. Tags are provided as integer classes. \n\n\n\nIf you want to use the class names you can access them as follows:\n\n\nfiner\\_tag\\_names contains a list of class names corresponding to the integer classes e.g.", "### Data Splits\n\n\nTraining: 900,384, Validation: 112,494, Test: 108,378\n\n\nDataset Creation\n----------------", "### Curation Rationale\n\n\nThe dataset was curated by Loukas et al. (2022)", "### Source Data", "#### Initial Data Collection and Normalization\n\n\n\nFiNER-139 is compiled from approximately 10k annual and quarterly English reports (filings) of publicly traded companies downloaded from the US Securities\nand Exchange Commission's (SEC) Electronic Data Gathering, Analysis, and Retrieval (EDGAR) system.\nThe reports span a 5-year period, from 2016 to 2020. They are annotated with XBRL tags by professional auditors and describe the performance and projections of the companies. XBRL defines approximately 6k entity types from the US-GAAP taxonomy. FiNER-139 is annotated with the 139 most frequent XBRL entity types with at least 1,000 appearances.\nWe used regular expressions to extract the text notes from the Financial Statements Item of each filing, which is the primary source of XBRL tags in annual and quarterly reports. 
We used the **IOB2** annotation scheme to distinguish tokens at the beginning, inside, or outside of tagged expressions, which leads to 279 possible token labels.", "### Annotations", "#### Annotation process\n\n\n\nAll the examples were annotated by professional auditors as required by the Securities & Exchange Commission (SEC) legislation.\nEven though the gold XBRL tags come from professional auditors there are still some discrepancies. Consult Loukas et al. (2022), (Section 9.4) for more details", "#### Who are the annotators?\n\n\nProfessional auditors", "### Personal and Sensitive Information\n\n\nThe dataset contains publicly available annual and quarterly reports (filings)\n\n\nAdditional Information\n----------------------", "### Dataset Curators\n\n\nLoukas et al. (2022)", "### Licensing Information\n\n\n\nAccess to SEC's EDGAR public database is free, allowing research of public companies' financial information and operations by reviewing the filings the companies makes with the SEC.\n\nIf you use this dataset cite the following\n\n\nSEC-BERT\n--------\n\n\n<img align=\"center\" src=\"https://i.URL alt=\"SEC-BERT\" width=\"400\"/>\n\n\n\nWe also pre-train our own BERT models (**SEC-BERT**) for the financial domain, intended to assist financial NLP research and FinTech applications. \n\n**SEC-BERT** consists of the following models:\n* SEC-BERT-BASE: Same architecture as BERT-BASE trained on financial documents.\n* SEC-BERT-NUM: Same as SEC-BERT-BASE but we replace every number token with a [NUM] pseudo-token handling all numeric expressions in a uniform manner, disallowing their fragmentation\n* SEC-BERT-SHAPE: Same as SEC-BERT-BASE but we replace numbers with pseudo-tokens that represent the number’s shape, so numeric expressions (of known shapes) are no longer fragmented, e.g., '53.2' becomes '[XX.X]' and '40,200.5' becomes '[XX,XXX.X]'.\n\n\nThese models were pre-trained on 260,773 10-K filings (annual reports) from 1993-2019, publicly available at U.S. Securities and Exchange Commission (SEC)\n\n\n\nAbout Us\n--------\n\n\n\nAUEB's Natural Language Processing Group develops algorithms, models, and systems that allow computers to process and generate natural language texts.\n\n\nThe group's current research interests include:\n\n\n* question answering systems for databases, ontologies, document collections, and the Web, especially biomedical question answering,\n* natural language generation from databases and ontologies, especially Semantic Web ontologies,\ntext classification, including filtering spam and abusive content,\n* information extraction and opinion mining, including legal text analytics and sentiment analysis,\n* natural language processing tools for Greek, for example parsers and named-entity recognizers,\nmachine learning in natural language processing, especially deep learning.\n\n\nThe group is part of the Information Processing Laboratory of the Department of Informatics of the Athens University of Economics and Business.\n\n\n\nManos Fergadiotis on behalf of AUEB's Natural Language Processing Group" ]
9283dd0d667c67679d54ae59bf871e765e81a8d7
# GEM Submission Submission name: SeqPlan
GEM-submissions/ratishsp__seqplan__1646397329
[ "benchmark:gem", "evaluation", "benchmark", "region:us" ]
2022-03-04T12:35:30+00:00
{"benchmark": "gem", "type": "prediction", "submission_name": "SeqPlan", "tags": ["evaluation", "benchmark"]}
2022-03-04T12:35:32+00:00
[]
[]
TAGS #benchmark-gem #evaluation #benchmark #region-us
# GEM Submission Submission name: SeqPlan
[ "# GEM Submission\n\nSubmission name: SeqPlan" ]
[ "TAGS\n#benchmark-gem #evaluation #benchmark #region-us \n", "# GEM Submission\n\nSubmission name: SeqPlan" ]