### Dataset Summary

The CA-PT Parallel Corpus is a Catalan-Portuguese dataset of **9.892.953** parallel sentences. The dataset was created to support Catalan NLP tasks, e.g., Machine Translation.

### Supported Tasks and Leaderboards
The texts in the dataset are in Catalan and Portuguese.

Two separate txt files are provided, with the sentences sorted in the same order:

- ca-pt_2023_09_01_full.ca: contains 9.892.953 Catalan sentences.
- ca-pt_2023_09_01_full.pt: contains 9.892.953 Portuguese sentences.
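Since the two files are line-aligned, sentence pairs can be recovered by reading them in lockstep. A minimal sketch — the `read_parallel` helper and the tiny `sample.*` files are illustrative only, not part of the release:

```python
def read_parallel(ca_path, pt_path):
    """Yield (Catalan, Portuguese) sentence pairs from two line-aligned files."""
    with open(ca_path, encoding="utf-8") as ca, open(pt_path, encoding="utf-8") as pt:
        for ca_line, pt_line in zip(ca, pt):
            yield ca_line.rstrip("\n"), pt_line.rstrip("\n")

# Tiny demo files standing in for the full corpus downloads
with open("sample.ca", "w", encoding="utf-8") as f:
    f.write("Bon dia.\nGràcies.\n")
with open("sample.pt", "w", encoding="utf-8") as f:
    f.write("Bom dia.\nObrigado.\n")

pairs = list(read_parallel("sample.ca", "sample.pt"))
# pairs == [("Bon dia.", "Bom dia."), ("Gràcies.", "Obrigado.")]
```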
### Data Splits
The dataset is a combination of the following authentic datasets:

All corpora except Europarl were collected from [Opus](https://opus.nlpl.eu/).
The Europarl corpus is a synthetic parallel corpus created from the original Spanish-Catalan corpus by [SoftCatalà](https://github.com/Softcatala/Europarl-catalan).

The remaining **3.733.322** sentences are synthetic parallel data created from a random sampling of the Spanish-Portuguese corpora available on [Opus](https://opus.nlpl.eu/) and translated into Catalan using the PlanTL es-ca model.

### Data preparation

All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
This is done using sentence embeddings calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
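The deduplication and similarity-filtering step can be sketched roughly as follows. This is an illustration, not the project's actual script: `dedup_and_filter` and the toy 2-d embeddings are hypothetical, LaBSE supplies 768-dimensional sentence embeddings in practice, and the real pipeline's processing order may differ:

```python
import numpy as np

def dedup_and_filter(pairs, src_emb, tgt_emb, threshold=0.75):
    """Drop exact-duplicate sentence pairs, then drop pairs whose
    embedding cosine similarity falls below the threshold."""
    # Normalize rows so the dot product equals cosine similarity
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    sims = np.sum(src * tgt, axis=1)  # row-wise cosine similarity

    seen, kept = set(), []
    for pair, sim in zip(pairs, sims):
        if pair in seen or sim < threshold:
            continue
        seen.add(pair)
        kept.append(pair)
    return kept

# Toy 2-d embeddings standing in for LaBSE's 768-d vectors:
# an exact duplicate pair and a clearly misaligned pair
pairs = [("Bon dia.", "Bom dia."), ("Bon dia.", "Bom dia."), ("Adeu.", "Peixe.")]
src_emb = np.array([[1.0, 0.0], [1.0, 0.0], [1.0, 0.0]])
tgt_emb = np.array([[0.9, 0.1], [0.9, 0.1], [0.0, 1.0]])
kept = dedup_and_filter(pairs, src_emb, tgt_emb)
# kept == [("Bon dia.", "Bom dia.")]: the duplicate and the
# low-similarity pair are both removed
```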
The filtered datasets are then concatenated to form a final corpus of **9.892.953** parallel sentences. Before training, the punctuation is normalized using a modified version of the join-single-file.py script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py).

### Personal and Sensitive Information
The dataset contains data of a general domain.

## Additional Information

### Author

Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center.

### Contact information

For further information, please send an email to langtech@bsc.es.

### Copyright

Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023).

### Licensing information

This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.

### Funding

This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en)) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina).