---
license: cc-by-nc-4.0
task_categories:
- translation
language:
- am
- ar
- ay
- bm
- bbj
- bn
- bg
- ca
- cs
- ku
- da
- de
- el
- en
- et
- ee
- fil
- fi
- fr
- fon
- gu
- ha
- he
- hi
- hu
- ig
- id
- it
- ja
- kk
- km
- ko
- lv
- lt
- lg
- luo
- mk
- mos
- my
- nl
- ne
- or
- pa
- pcm
- fa
- pl
- pt
- mg
- ro
- ru
- es
- sr
- sq
- sw
- sv
- tn
- tr
- tw
- ur
- wo
- yo
- zh
- zu
multilinguality:
- translation
- multilingual
pretty_name: PolyNewsParallel
size_categories:
- 1K
---

# Dataset Card for PolyNewsParallel

## Dataset Description

- **Point of Contact:** [Andreea Iana](https://andreeaiana.github.io/)
- **License:** [CC-BY-NC-4.0](https://creativecommons.org/licenses/by-nc/4.0/)

### Dataset Summary

PolyNewsParallel is a multilingual parallel dataset containing news titles for 833 language pairs. It covers 65 languages and 17 scripts.

### Uses

This dataset can be used for domain adaptation of language models or for machine translation, in both high-resource and low-resource languages.
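For example, the snippet below is a minimal sketch of preparing the data for machine-translation fine-tuning; it assumes the `eng_Latn-ron_Latn` configuration and the `src`/`tgt` fields described under Data Fields below, and repackages them into the nested `translation` format expected by many seq2seq training scripts:

```
from datasets import load_dataset

# Load one language pair; configurations are named <src>-<tgt> (see the heatmap below).
data = load_dataset("aiana94/polynews-parallel", "eng_Latn-ron_Latn", split="train")

def to_translation_format(example):
    # Repackage the flat src/tgt columns into the nested "translation"
    # format used by many machine-translation training scripts.
    return {"translation": {"en": example["src"].strip(), "ro": example["tgt"].strip()}}

mt_data = data.map(to_translation_format, remove_columns=["src", "tgt", "provenance"])
print(mt_data[0]["translation"])
```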
### Languages

The heatmap below shows the available language pairs, as well as the number of articles per language pair.

*PolyNewsParallel: number of texts per language pair.*
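The per-pair counts shown in the heatmap can also be inspected programmatically. A small sketch (which pairs are printed depends on the order of configurations listed on the Hub):

```
from datasets import get_dataset_config_names, load_dataset

# List all available language-pair configurations.
pairs = get_dataset_config_names("aiana94/polynews-parallel")
print(f"{len(pairs)} language pairs available")

# Loading all 833 configurations is slow, so only peek at a few.
for pair in pairs[:3]:
    ds = load_dataset("aiana94/polynews-parallel", pair, split="train")
    print(pair, len(ds))
```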
## Dataset Structure

### Data Instances
```
>>> from datasets import load_dataset
>>> data = load_dataset('aiana94/polynews-parallel', 'eng_Latn-ron_Latn')

# Please specify the language pair as the configuration name.

# A data point looks as follows:

{
"src": "They continue to support the view that this decision will have a lasting negative impact on the rule of law in the country. ",
"tgt": "Ei continuă să creadă că această decizie va avea efecte negative pe termen lung asupra statului de drept în țară. ",
"provenance": "globalvoices"
}

```

### Data Fields

- `src` (string): source news text
- `tgt` (string): target news text
- `provenance` (string): source dataset for the news example

### Data Splits

For all language pairs, there is only the `train` split.


## Dataset Creation

### Curation Rationale

Multiple multilingual, human-translated datasets containing news texts have been released in recent years.
However, these datasets are stored in different formats and on various websites, and many contain numerous near duplicates.
With PolyNewsParallel, we aim to provide an easily accessible, unified, and deduplicated parallel dataset that combines these disparate data sources, and which can be used for domain adaptation of language models or machine translation in both high-resource and low-resource languages.

### Source Data

The source data consists of three multilingual news datasets.

- [GlobalVoices](https://opus.nlpl.eu/GlobalVoices/corpus/version/GlobalVoices) (v2018q4)
- [WMT-News](https://opus.nlpl.eu/WMT-News/corpus/version/WMT-News) (v2019)
- [MAFAND](https://huggingface.co/datasets/masakhane/mafand) (`train` split)

#### Data Collection and Processing

We processed the data using a script that covers the entire processing pipeline. It can be found [here](https://github.com/andreeaiana/nase/script/polynews).

The data processing pipeline consists of:
1. Downloading the WMT-News and GlobalVoices corpora from OPUS.
2. Loading the MAFAND datasets from the Hugging Face Hub (only the `train` splits).
3. Concatenating, per language pair, all news texts from the source datasets.
4. Data cleaning (e.g., removal of exact duplicates, short texts, and texts in other scripts).
5. [MinHash near-deduplication](https://github.com/bigcode-project/bigcode-dataset/blob/main/near_deduplication/minhash_deduplication.py) per language pair.


### Annotations

We augment the original samples with the `provenance` annotation, which specifies the original data source from which a particular example stems.


#### Personal and Sensitive Information

The data is sourced from news outlets and contains mentions of public figures and individuals.


## Considerations for Using the Data

### Social Impact of Dataset
[More Information Needed]


### Discussion of Biases
[More Information Needed]


### Other Known Limitations

Users should keep in mind that the dataset contains short news texts (mostly titles), which might limit the applicability of systems developed on it to other domains.


## Additional Information

### Licensing Information
The dataset is released under the [Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license](https://creativecommons.org/licenses/by-nc/4.0/).

### Citation Information

**BibTeX:**

[More Information Needed]