sadrasabouri committed 4843b39 (1 parent: e3192f9)

Update README.md

Files changed (1): README.md (+23 -5)
README.md CHANGED
@@ -113,7 +113,7 @@ Provide the sizes of each split. As appropriate, provide any descriptive statist
  | Input Sentences | 225892925 | 11083851 |
  | Average Sentence Length | 61 | 25 |

- Below you can see the histogram of word/paragraph over the two splits of the dataset.
+ Below you can see the log-based histogram of words per paragraph over the two splits of the dataset.

  <div align="center">
  <img src="https://huggingface.co/datasets/SLPL/naab/resolve/main/naab-hist.png">
@@ -125,22 +125,40 @@ Below you can see the histogram of word/paragraph over the two splits of the dat

  Due to the lack of large amounts of text data in lower-resource languages such as Farsi, researchers working on these languages have always found it hard to fine-tune such models. This can lead to a situation in which the opportunity to fine-tune large models lies only in the hands of a few companies or countries, which weakens open science.

- The last biggest cleaned merged textual corpus in Farsi is a 70GB cleaned text corpus from a compilation of 8 big data sets that have been cleaned and can be downloaded directly. Our solution to the discussed issues is called naab. It provides 126GB (including more than 224 million sequences and nearly 15 billion words) as the training corpus and 2.3GB (including nearly 11 million sequences and nearly 300 million words) as the test corpus.
+ The largest previously available cleaned and merged textual corpus in Farsi is a 70GB corpus compiled from 8 large datasets that have been cleaned and can be downloaded directly. Our solution to these issues is called naab. It provides **126GB** (more than **224 million** sequences and nearly **15 billion** words) as the training corpus and **2.3GB** (nearly **11 million** sequences and nearly **300 million** words) as the test corpus.

  ### Source Data
+
+ The textual corpora that we used as our source data are illustrated in the figure below. There are 5 corpora, which are linked in the following sections.
+
  <div align="center">
  <img src="https://huggingface.co/datasets/SLPL/naab/resolve/main/naab-pie.png">
  </div>

- #### Persian NLP
+ #### [Persian NLP](https://github.com/persiannlp/persian-raw-text)
+
+ This corpus includes eight corpora, sorted by volume as follows:

+ - [Common Crawl](https://commoncrawl.org/): 65GB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/commoncrawl_fa_merged.txt))
+ - [MirasText](https://github.com/miras-tech/MirasText): 12GB
+ - [W2C – Web to Corpus](https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0022-6133-9): 1GB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/w2c_merged.txt))
+ - Persian Wikipedia (March 2020 dump): 787MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/fawiki_merged.txt))
+ - [Leipzig Corpora](https://corpora.uni-leipzig.de/): 424MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/LeipzigCorpus.txt))
+ - [VOA corpus](https://jon.dehdari.org/corpora/): 66MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/voa_persian_2003_2008_cleaned.txt))
+ - [Persian poems corpus](https://github.com/amnghd/Persian_poems_corpus): 61MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/poems_merged.txt))
+ - [TEP: Tehran English-Persian parallel corpus](http://opus.nlpl.eu/TEP.php): 33MB ([link](https://storage.googleapis.com/danielk-files/farsi-text/merged_files/TEP_fa.txt))
+
  #### AGP
+ This corpus was formerly a private corpus of ASR Gooyesh Pardaz and has now been published for all users by this project. It contains more than 140 million paragraphs, totaling 23GB after cleaning. The corpus is a mixture of formal and informal paragraphs crawled from different websites and/or social media.

- #### OSCAR-fa
+ #### [OSCAR-fa](https://oscar-corpus.com/)
+ OSCAR (Abadji et al., 2022), the Open Super-large Crawled ALMAnaCH coRpus, is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture. Data is distributed by language in both original and deduplicated form. We used the unshuffled-deduplicated-fa subset of this corpus; after cleaning, about 36GB remained.

  #### Telegram
+ Telegram, a cloud-based instant messaging service, is a widely used application in Iran. Accordingly, we prepared a list of Telegram channels in Farsi covering various topics including sports, daily news, jokes, movies and entertainment, etc. The text extracted from these channels mainly consists of informal data.

- #### LSCP
+ #### [LSCP](https://iasbs.ac.ir/~ansari/lscp/)
+ The Large Scale Colloquial Persian Language Understanding dataset has 120M sentences from 27M casual Persian sentences, with their derivation trees, part-of-speech tags, sentiment polarity, and translations into English, German, Czech, Italian, and Hindi. We only used the Farsi part; after cleaning, 2.3GB of it remained. Since the dataset is colloquial, it may help our corpus contain more informal sentences, although their share compared to the formal paragraphs is small.

  #### Initial Data Collection and Normalization
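Given the size of the corpus (**126GB** of training text), streaming is usually preferable to a full download. Below is a minimal sketch using the 🤗 Datasets library; the hub id `SLPL/naab` comes from this repository, while the default configuration, the `train`/`test` split names, and the `text` field name are assumptions rather than details confirmed by this diff.

```python
# Minimal sketch: stream the naab training split with the 🤗 Datasets library.
# Assumed: default configuration, "train"/"test" split names, and a "text" field.
from datasets import load_dataset

# streaming=True iterates over records without downloading the ~126GB split first.
train_stream = load_dataset("SLPL/naab", split="train", streaming=True)

for i, record in enumerate(train_stream):
    print(record["text"][:80])  # print the first 80 characters of each record
    if i == 4:                  # stop after five records
        break
```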
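The Persian NLP files listed above are plain merged text files served over HTTPS, so they can be fetched directly. The snippet below only illustrates downloading one of the listed files (the Persian Wikipedia merge); it is not the authors' collection pipeline.

```python
# Illustration: download one of the directly linked Persian NLP text files.
# This is not the naab collection pipeline; it only fetches a listed URL.
import urllib.request

URL = ("https://storage.googleapis.com/danielk-files/farsi-text/"
       "merged_files/fawiki_merged.txt")

# Stream the response to disk in 1 MiB chunks to keep memory usage low.
with urllib.request.urlopen(URL) as response, open("fawiki_merged.txt", "wb") as out:
    while True:
        chunk = response.read(1 << 20)
        if not chunk:
            break
        out.write(chunk)
```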
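The OSCAR-fa source described above can also be inspected on its own through 🤗 Datasets. The sketch assumes the public `oscar` dataset exposes an `unshuffled_deduplicated_fa` configuration corresponding to the unshuffled-deduplicated-fa subset; recent versions of that loader may additionally require `trust_remote_code=True`.

```python
# Sketch: stream the Farsi OSCAR subset that naab uses as one of its sources.
# Assumed: the "oscar" hub dataset with the "unshuffled_deduplicated_fa" config.
from datasets import load_dataset

oscar_fa = load_dataset(
    "oscar", "unshuffled_deduplicated_fa", split="train", streaming=True
)

first = next(iter(oscar_fa))    # OSCAR records carry "id" and "text" fields
print(first["text"][:80])
```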