---
language:
- fa
license:
- mit
multilinguality:
- monolingual
task_categories:
- fill-mask
- text-generation
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: naab-raw (raw version of the naab corpus)
---

# naab-raw (raw version of the naab corpus)
_[If you want to join our community to keep up with news, models and datasets from naab, click on [this](https://docs.google.com/forms/d/e/1FAIpQLSe8kevFl_ODCx-zapAuOIAQYr8IvkVVaVHOuhRL9Ha0RVJ6kg/viewform) link.]_

## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Changelog](#changelog)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Contribution Guideline](#contribution-guideline)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Sharif Speech and Language Processing Lab](https://huggingface.co/SLPL)
- **Paper:** [naab: A ready-to-use plug-and-play corpus for Farsi](https://arxiv.org/abs/2208.13486)
- **Point of Contact:** [Sadra Sabouri](mailto:[email protected])

### Dataset Summary

This is the raw (uncleaned) version of the [naab](https://huggingface.co/datasets/SLPL/naab) corpus. You can also customize our [preprocess script](https://github.com/Sharif-SLPL/t5-fa/tree/main/preprocess) to build your own cleaned corpus. This repository is a hub for all Farsi corpora. Feel free to add your corpus following the [contribution guidelines](#contribution-guideline).

You can download the dataset with the code below:
```python
from datasets import load_dataset

dataset = load_dataset("SLPL/naab-raw")
```

If you want to download a specific part of the corpus, set the config name to that corpus name:
```python
from datasets import load_dataset

dataset = load_dataset("SLPL/naab-raw", "CC-fa")
```
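
Since the raw corpus is fairly large, you may prefer to stream it instead of downloading everything up front. The snippet below is a minimal sketch using the `datasets` streaming mode; adapt it to your own setup.
```python
from datasets import load_dataset

# Stream the corpus instead of downloading it entirely.
dataset = load_dataset("SLPL/naab-raw", split="train", streaming=True)

# Peek at the first few paragraphs.
for i, row in enumerate(dataset):
    print(row["text"])
    if i == 2:
        break
```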

### Supported Tasks and Leaderboards

This corpus can be used to train language models with Masked Language Modeling (MLM) or any other self-supervised objective; a minimal preprocessing sketch follows the task list below.

- `language-modeling`
- `masked-language-modeling`
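
Below is a minimal sketch of preparing naab-raw for MLM training with the `transformers` library. The tokenizer checkpoint is only an illustrative choice, not part of naab; substitute any Farsi-capable tokenizer you prefer.
```python
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Load the train split (the only split in naab-raw).
dataset = load_dataset("SLPL/naab-raw", split="train")

# Example tokenizer; any Farsi-capable tokenizer works here.
tokenizer = AutoTokenizer.from_pretrained("HooshvareLab/bert-fa-base-uncased")

def tokenize(batch):
    # Truncate long paragraphs to a fixed maximum length.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# Randomly mask 15% of the tokens, the standard MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)
```
The resulting `tokenized` dataset and `collator` can then be passed to a `Trainer` or your own training loop.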

### Changelog

It is important to log changes on projects that change periodically. Please refer to the [CHANGELOG.md](https://huggingface.co/datasets/SLPL/naab-raw/blob/main/CHANGELOG.md) for more details.

## Dataset Structure

Each row of the dataset looks like the following:
```json
{
  "text": "این یک تست برای نمایش یک پاراگراف در پیکره متنی ناب است."
}
```
+ `text`: the textual paragraph (the example above reads "This is a test to show a paragraph in the naab text corpus.").
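
As a quick sanity check, you can inspect the schema and a single row (assuming the non-streaming load shown above):
```python
from datasets import load_dataset

dataset = load_dataset("SLPL/naab-raw", split="train")

print(dataset.features)    # the schema: a single string field named `text`
print(dataset[0]["text"])  # the first paragraph
```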


### Data Splits

This corpus contains only one split (the `train` split).
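
If your experiments need a held-out set, one option (not part of the official corpus) is to carve it out of the `train` split yourself; the ratio below is purely illustrative.
```python
from datasets import load_dataset

dataset = load_dataset("SLPL/naab-raw", split="train")

# Hold out 1% of the paragraphs as a validation set.
splits = dataset.train_test_split(test_size=0.01, seed=42)
train_ds, valid_ds = splits["train"], splits["test"]
```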

## Dataset Creation

### Curation Rationale

Here are some details about each part of this corpus.

#### CC-fa

The Common Crawl corpus contains petabytes of data collected since 2008. It includes raw web page data, extracted metadata, and text extractions. We use the Farsi part of it here.

#### W2C

W2C stands for Web to Corpus and contains several corpora. We include the Farsi part of it in this corpus.

### Contribution Guideline

To add your dataset, follow the steps below and open a pull request to have it merged into _naab-raw_:

1. Add your dataset to `_CORPUS_URLS` in `naab-raw.py` like:
```python
_CORPUS_URLS = {
    # ...
    "DATASET_NAME": "LINK_TO_A_PUBLIC_DOWNLOADABLE_FILE.txt",
    # ...
}
```
2. Add a log of your changes to the [CHANGELOG.md](https://huggingface.co/datasets/SLPL/naab-raw/blob/main/CHANGELOG.md).
3. Add a short description of your dataset to the [Curation Rationale](#curation-rationale) section under a subsection with your dataset name.


### Personal and Sensitive Information

Since this corpus is essentially a compilation of existing corpora, we take no responsibility for personal information included in it. If you detect any such violation, please let us know and we will do our best to remove it from the corpus as soon as possible.

We tried our best to preserve anonymity while keeping the crucial information. We also shuffled some parts of the corpus so that information carried in possible conversations would not be harmful.

## Additional Information

### Dataset Curators

+ Sadra Sabouri (Sharif University of Technology)
+ Elnaz Rahmati (Sharif University of Technology)

### Licensing Information

This corpus is released under the MIT License.

### Citation Information

```
@article{sabouri2022naab,
  title={naab: A ready-to-use plug-and-play corpus for Farsi},
  author={Sabouri, Sadra and Rahmati, Elnaz and Gooran, Soroush and Sameti, Hossein},
  journal={arXiv preprint arXiv:2208.13486},
  year={2022}
}
```

DOI: [https://doi.org/10.48550/arXiv.2208.13486](https://doi.org/10.48550/arXiv.2208.13486)

### Contributions

Thanks to [@sadrasabouri](https://github.com/sadrasabouri) and [@elnazrahmati](https://github.com/elnazrahmati) for adding this dataset.

### Keywords
+ Farsi
+ Persian
+ raw text
+ پیکره فارسی
+ پیکره متنی
+ آموزش مدل زبانی