---
pretty_name: Korpus Malti
language:
- mt
multilinguality:
- monolingual
size_categories:
- 10M<n<100M
annotations_creators:
- no-annotation
language_creators:
- found
source_datasets:
- original
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
license:
- cc-by-nc-sa-4.0
---
# Korpus Malti 🇲🇹
General Corpora for the Maltese Language.
This dataset is composed of texts from various genres/domains written in Maltese.
## Configurations
### Shuffled data
The default configuration (`"shuffled"`) yields the entire corpus from all genres:
```python
import datasets
dataset = datasets.load_dataset("MLRS/korpus_malti")
```
All sentences are combined and shuffled, so the original sentence order is not preserved.
No other annotations are present, so an instance would be of the following form:
```json
{
"text": "Din hija sentenza."
}
```
The training/validation/testing split is what was used to train the [BERTu](https://huggingface.co/MLRS/BERTu) model.
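Once loaded, the object can be inspected like any other 🤗 `datasets` dictionary. A minimal sketch, assuming the splits follow the usual `train`/`validation`/`test` naming convention (print the `DatasetDict` itself to confirm):

```python
import datasets

# Load the default ("shuffled") configuration.
dataset = datasets.load_dataset("MLRS/korpus_malti")

# List the available splits and their sizes.
print(dataset)

# Peek at a single shuffled sentence; "train" is the assumed split name.
print(dataset["train"][0]["text"])
```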
### Domain-split data
All other configurations contain a subset of the data.
For instance, this loads the Wikipedia portion:
```python
import datasets
dataset = datasets.load_dataset("MLRS/korpus_malti", "wiki")
```
For these configurations, the data is not shuffled, so the sentence order within each document is preserved.
An instance from these configurations would take the following form:
```json
{
"text": ["Din hija sentenza.", "U hawn oħra!"],
}
```
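Since each instance here is a list of sentences in document order, a document-level string can be recovered by joining them. A minimal sketch, where joining with a space is an assumed convention rather than something the dataset prescribes:

```python
import datasets

# Load a single-domain configuration, e.g. the Wikipedia portion.
dataset = datasets.load_dataset("MLRS/korpus_malti", "wiki")

# Each instance's "text" field is a list of sentences in document order.
# "train" is the assumed split name; check the DatasetDict to confirm.
sentences = dataset["train"][0]["text"]

# Reconstruct a document-level string (separator choice is up to you).
document = " ".join(sentences)
print(document)
```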
The raw data files contain additional metadata.
Its structure differs from one instance to another, depending on what is available from the source.
This information was typically scraped from the source itself, and only minimal processing is performed on such data.
## Additional Information
### Dataset Curators
The dataset was created by [Albert Gatt](https://albertgatt.github.io), [Kurt Micallef](https://www.um.edu.mt/profile/kurtmicallef), [Marc Tanti](https://www.um.edu.mt/profile/marctanti), [Lonneke van der Plas](https://sites.google.com/site/lonnekenlp/) and [Claudia Borg](https://www.um.edu.mt/profile/claudiaborg).
### Licensing Information
This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa].
Permissions beyond the scope of this license may be available at [https://mlrs.research.um.edu.mt/](https://mlrs.research.um.edu.mt/).
[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa]
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
### Citation Information
This work was first presented in [Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and BERT Models for Maltese](https://arxiv.org/abs/2205.10517).
Cite it as follows:
```bibtex
@inproceedings{BERTu,
title = {Pre-training Data Quality and Quantity for a Low-Resource Language: New Corpus and {BERT} Models for {M}altese},
author = {Micallef, Kurt and
Gatt, Albert and
Tanti, Marc and
van der Plas, Lonneke and
Borg, Claudia},
booktitle = {Proceedings of the 3rd Workshop on Deep Learning for Low-Resource NLP (DeepLo 2022)},
day = {14},
month = {07},
year = {2022},
address = {Seattle, Washington},
publisher = {Association for Computational Linguistics},
}
```