CrawlPT is a generic Portuguese corpus extracted from various web pages.

This version is deduplicated using the MinHash algorithm and Locality-Sensitive Hashing (LSH), following the approach of Lee et al. (2022).
The raw version is also available [here](https://huggingface.co/datasets/eduagarcia/CrawlPT).

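Purely as an illustration of the MinHash + LSH idea (not the actual CrawlPT deduplication code), the sketch below uses the `datasketch` library; the shingle size, number of permutations, and similarity threshold are arbitrary placeholders.

```python
# Illustrative MinHash + LSH near-duplicate filtering (NOT the CrawlPT pipeline;
# shingle size, num_perm and threshold here are arbitrary assumptions).
from datasketch import MinHash, MinHashLSH

def signature(text: str, num_perm: int = 128, ngram: int = 5) -> MinHash:
    """MinHash signature over word n-grams of a document."""
    tokens = text.split()
    m = MinHash(num_perm=num_perm)
    for i in range(max(len(tokens) - ngram + 1, 1)):
        m.update(" ".join(tokens[i:i + ngram]).encode("utf-8"))
    return m

docs = {
    "doc1": "o corpus CrawlPT reúne textos em português extraídos de páginas da web",
    "doc2": "o corpus CrawlPT reúne textos em português extraídos de páginas da web",  # exact copy
    "doc3": "um documento completamente diferente sobre outro assunto qualquer",
}

# Documents whose estimated Jaccard similarity exceeds the threshold share LSH
# buckets; this greedy pass keeps the first occurrence and drops the rest.
lsh = MinHashLSH(threshold=0.8, num_perm=128)
kept = []
for key, text in docs.items():
    sig = signature(text)
    if lsh.query(sig):      # a (near-)duplicate was already indexed
        continue
    lsh.insert(key, sig)
    kept.append(key)

print(kept)  # ['doc1', 'doc3']
```
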
## Dataset Details

The dataset is composed of three corpora:

- **C100-PT**: Portuguese subset of CC-100. CC-100 was created for training the multilingual Transformer XLM-R and contains two terabytes of cleaned data from 2018 snapshots of the [Common Crawl project](https://commoncrawl.org/about/) in 100 languages. We use the Portuguese subset, which contains 49.1 GiB of text.
- **OSCAR-2301-PT**: a curated subset of OSCAR-2301 in the Portuguese language.

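To peek at the data itself, here is a hypothetical sketch that streams a few examples from the raw release linked above with the 🤗 `datasets` library; configuration names are looked up at runtime and the `train` split name is an assumption.

```python
# Hypothetical usage sketch: stream a few examples from the raw CrawlPT release
# without downloading it in full. Config names are discovered at runtime; the
# "train" split name is an assumption.
from datasets import get_dataset_config_names, load_dataset

repo = "eduagarcia/CrawlPT"          # raw version linked above
configs = get_dataset_config_names(repo)
print(configs)

ds = load_dataset(repo, configs[0], split="train", streaming=True)
for example in ds.take(3):
    print(example)
```
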
### Dataset Description
- **Language(s) (NLP):** Brazilian Portuguese (pt-BR)
- **License:** [Creative Commons Attribution 4.0 International Public License](https://creativecommons.org/licenses/by/4.0/deed.en)
- **Repository:** https://github.com/eduagarcia/roberta-legal-portuguese
- **Paper:** [More Information Needed]
## Data Collection and Processing
Raw corpora sizes in terms of billions of tokens and file size in GiB: